diff --git a/spaces/101-5/gpt4free/g4f/.v1/unfinished/gptbz/README.md b/spaces/101-5/gpt4free/g4f/.v1/unfinished/gptbz/README.md deleted file mode 100644 index 05bc2770e0f5b20407b49e54870df4c09902886d..0000000000000000000000000000000000000000 --- a/spaces/101-5/gpt4free/g4f/.v1/unfinished/gptbz/README.md +++ /dev/null @@ -1,4 +0,0 @@ -https://chat.gpt.bz - -to do: -- code refractoring \ No newline at end of file diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Amos 24 A User-Friendly Software for Structural Equation Modeling.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Amos 24 A User-Friendly Software for Structural Equation Modeling.md deleted file mode 100644 index 9b2cb835437a93f56485ba522ad4a609677e0fba..0000000000000000000000000000000000000000 --- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Amos 24 A User-Friendly Software for Structural Equation Modeling.md +++ /dev/null @@ -1,50 +0,0 @@ -
-

How to Download and Install IBM SPSS Amos 24

-

IBM SPSS Amos 24 is a software for structural equation modeling (SEM) that allows you to test hypotheses and confirm relationships between observed and latent variables. It is an easy-to-use program that can be accessed through a graphical or a programmatic user interface. In this article, we will show you how to download and install IBM SPSS Amos 24 on your Windows computer.

-

Step 1: Download IBM SPSS Amos 24

-

To download IBM SPSS Amos 24, you need to have an IBM Passport Advantage account. If you are a returning customer, you can sign in with your existing credentials. If you are a new customer, you can register for a free account. Once you have logged in, follow these steps:

-

amos 24


Download ————— https://byltly.com/2uKzSl



-
    -
  1. Click on Download finder under Find downloads & media.
  2. -
  3. Select IBM SPSS Amos under Download finder.
  4. -
  5. Select IBM SPSS Amos 24.0 under Description.
  6. -
  7. Select your language and platform under Select criteria.
  8. -
  9. Select the download options you want under Download options.
  10. -
  11. Review the current version downloads and optional downloads under Review “Current version” downloads and Select optional downloads. You will need to download both the client and the documentation files.
  12. -
  13. Review the downloading specifics and click on I agree and Download now under Review downloading specifics.
  14. -
  15. Choose your download method and location and click on OK. You can use the IBM Download Director or the HTTP method.
  16. -
  17. Wait for the download to complete.
  18. -
-

Step 2: Install IBM SPSS Amos 24

-

To install IBM SPSS Amos 24, you need to unpack all the downloaded files into a single temporary directory on your system. Then, follow these steps:

-
    -
  1. Navigate to the temporary directory and double-click on the setup.exe file.
  2. -
  3. Follow the instructions on the installation wizard. You will need to accept the license agreement, choose the installation directory, and enter the license key.
  4. -
  5. Wait for the installation to finish.
  6. -
  7. Launch IBM SPSS Amos 24 from your desktop or start menu.
  8. -
-

Congratulations! You have successfully downloaded and installed IBM SPSS Amos 24 on your Windows computer. You can now use it to perform SEM analysis and test your research hypotheses.

- -

Step 3: Use IBM SPSS Amos 24

-

IBM SPSS Amos 24 allows you to create and test SEM models using either a graphical or a programmatic user interface. You can also import data from various sources, such as IBM SPSS Statistics, Microsoft Excel, or text files. In this section, we will give you a brief overview of how to use IBM SPSS Amos 24.

-

Graphical User Interface

-

The graphical user interface (GUI) of IBM SPSS Amos 24 lets you draw your SEM model using various tools and icons. You can also modify the properties and parameters of your model, such as variable names, labels, measurement scales, error terms, and constraints. To use the GUI, follow these steps:

-
    -
  1. Launch IBM SPSS Amos 24 and click on New to create a new model.
  2. -
  3. Use the toolbar and the drawing area to draw your model. You can drag and drop variables, paths, covariances, and latent variables from the toolbar to the drawing area. You can also use the right-click menu to edit or delete elements.
  4. -
  5. Use the Object Properties window to change the properties and parameters of your model elements. You can access this window by double-clicking on an element or by selecting it and clicking on View and Object Properties.
  6. -
  7. Use the Data Files window to specify the data source for your model. You can access this window by clicking on File and Data Files. You can choose to import data from IBM SPSS Statistics, Microsoft Excel, or text files. You can also use the Data Editor to enter or edit data manually.
  8. -
  9. Use the Analysis Properties window to select the analysis options for your model. You can access this window by clicking on Analyze and Analysis Properties. You can choose the estimation method, the output options, the fit measures, and the bootstrap options.
  10. -
  11. Click on Analyze and Calculate Estimates to run the analysis and view the results. You can view the results in various windows, such as the Output, Text Output, Standardized Estimates, Covariance Matrix, and Modification Indices. You can also use the Syntax Editor to view or edit the syntax generated by the GUI.
  12. -
  13. Save your model by clicking on File and Save As. You can save your model as an AMOS Graphics file (.amw) or an AMOS Text file (.amt).
  14. -
-

Programmatic User Interface

-

The programmatic user interface (PUI) of IBM SPSS Amos 24 lets you write your SEM model using a syntax language called AMOS Basic. You can also use AMOS Basic to manipulate data, perform calculations, create loops and conditional statements, and generate output. To use the PUI, follow these steps:

-
    -
  1. Launch IBM SPSS Amos 24 and click on New Text Model.
  2. -
  3. Type your AMOS Basic syntax in the text editor. You can use comments, keywords, commands, operators, functions, variables, and constants to define your model. You can also use the Syntax Reference Guide to learn more about the syntax rules and elements.
  4. -
  5. Select your data source by clicking on Data Source. You can choose to import data from IBM SPSS Statistics, Microsoft Excel, or text files. You can also use the DATASET, DATASET NAME, and DATASET ACTIVATE commands to specify data sets within your syntax.
  6. -
  7. Select your analysis options by clicking on Analyze Options. You can choose the estimation method, the output options, the fit measures, and the bootstrap options. You can also use the METHOD, FIT INDEXES, and BSTRAP ON/OFF/SEED/REPS/SAMPLES/CI/ALPHA/BC

    -

    ddb901b051
    -
    -
    \ No newline at end of file diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Artlantis 5.1.2.7 Crack UPDATED.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Artlantis 5.1.2.7 Crack UPDATED.md deleted file mode 100644 index 3315048b0e226f866375665bc7e81e89fabab0f2..0000000000000000000000000000000000000000 --- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Artlantis 5.1.2.7 Crack UPDATED.md +++ /dev/null @@ -1,25 +0,0 @@ - -

    How to Crack Artlantis 5.1.2.7 and Enjoy Its Features for Free

    -

    Artlantis 5.1.2.7 is a powerful 3D rendering software that allows you to create realistic and high-quality images, panoramas, VR objects and animations for your architectural and design projects. It is compatible with Windows 64-bit and 32-bit systems and supports various languages.

    -

    However, Artlantis 5.1.2.7 is not a free software and requires a license key to activate its full features. If you don't have a valid license key, you will only be able to use the demo version, which has some limitations such as watermarks, reduced resolution and restricted export formats.

    -

    artlantis 5.1.2.7 crack


    DOWNLOADhttps://byltly.com/2uKzHP



    -

    Fortunately, there is a way to crack Artlantis 5.1.2.7 and enjoy its features for free without paying anything. In this article, we will show you how to download, install and crack Artlantis 5.1.2.7 in a few simple steps.

    -

    Disclaimer

    -

    Before we proceed, we want to make it clear that we do not condone or encourage any illegal or unethical activity such as cracking or pirating software. This article is for educational and informational purposes only and we are not responsible for any consequences that may arise from following this guide.

    -

    We strongly recommend that you support the developers of Artlantis by purchasing a legitimate license key from their official website. Cracking or pirating software is not only illegal but also risky, as it may expose your computer to viruses, malware or other threats.

    -

    How to Crack Artlantis 5.1.2.7

    -

    If you still want to proceed with cracking Artlantis 5.1.2.7, here are the steps you need to follow:

    -
      -
    1. Download the Artlantis 5.1.2.7 installer from the official website or from any other trusted source.
    2. -
    3. Run the installer and follow the instructions to install Artlantis 5.1.2.7 on your computer.
    4. -
    5. Do not launch Artlantis 5.1.2.7 after the installation is complete.
    6. -
    7. Download the Artlantis 5.1.2.7 crack file from this link or from any other reliable source.
    8. -
    9. Extract the crack file using WinRAR or any other file compression tool.
    10. -
    11. Copy the crack file (Artlantis.exe) and paste it into the installation folder of Artlantis 5.1.2.7, which is usually located at C:\Program Files\Artlantis Studio 5.
    12. -
    13. Replace the original file when prompted.
    14. -
    15. Launch Artlantis 5.1.2.7 and enjoy its features for free.
    16. -
    -

    Congratulations! You have successfully cracked Artlantis 5.1.2.7 and can now use it without any limitations or restrictions.

    -

    ddb901b051
    -
    -
    \ No newline at end of file diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Crack Registration Code Excel Password Recovery Lastic The Ultimate Tool for Excel Password Recovery.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Crack Registration Code Excel Password Recovery Lastic The Ultimate Tool for Excel Password Recovery.md deleted file mode 100644 index 08ecdf3e45f98b2c36d6f6f84c174d5ba3edef51..0000000000000000000000000000000000000000 --- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Crack Registration Code Excel Password Recovery Lastic The Ultimate Tool for Excel Password Recovery.md +++ /dev/null @@ -1,97 +0,0 @@ -
    -

    Crack Registration Code Excel Password Recovery Lastic

    -

    Have you ever forgotten or lost your password to an Excel file and couldn't access your data? If so, you might have tried to use an Excel password recovery tool to unlock your file. But what if you don't have a license for such a tool and don't want to pay for it? Is there a way to crack the registration code and use it for free? In this article, we will show you three methods to crack the registration code of Excel Password Recovery Lastic, one of the most popular and effective Excel password recovery tools on the market.

    -

    Crack registration code excel password recovery lastic


    DOWNLOADhttps://byltly.com/2uKvbP



    -

    Introduction

    -

    What is Excel Password Recovery Lastic?

    -

    Excel Password Recovery Lastic is a powerful and easy-to-use software that can recover or remove passwords from any Excel file, including XLS, XLSX, XLA, and XLSM formats. It can also recover passwords for individual worksheets, workbooks, VBA projects, and shared workbooks. It supports all versions of Microsoft Excel from 97 to 2019. It has a user-friendly interface that allows you to select multiple files and process them in batch mode. It also has a smart search function that can scan your computer for all protected Excel files and add them to the list automatically.

    -

    Why do you need to crack the registration code?

    -

    Excel Password Recovery Lastic is not a free software. It costs $29.95 for a personal license and $59.85 for a business license. If you want to use it without any limitations, you need to purchase a license and enter the registration code to activate it. However, some people may not be able or willing to pay for it, especially if they only need it for one-time use or occasional use. In that case, they may look for ways to crack the registration code and use it for free.

    -

    How to crack the registration code legally?

    -

    Before we proceed, we want to make it clear that we do not condone or encourage any illegal or unethical activities such as hacking, cracking, pirating, or stealing software. We respect the intellectual property rights of the developers and we advise you to do the same. Therefore, we will only show you how to crack the registration code of Excel Password Recovery Lastic legally, meaning that you will not violate any laws or terms of service. These methods are based on using legitimate sources and offers that are provided by the developers themselves or by their partners. We will also warn you about some risks and drawbacks of using these methods.

    -

    Method 1: Use the official website

    -

    Step 1: Visit the website and click on Buy Now

    -

    The first and most straightforward method to crack the registration code of Excel Password Recovery Lastic is to use their official website. You can visit their website at https://www.passwordlastic.com/excel-password-recovery-lastic and click on the Buy Now button at the top right corner.

    -

    Excel password recovery lastic crack serial key
    -How to crack excel password recovery lastic registration code
    -Excel password recovery lastic full version with crack
    -Download excel password recovery lastic crack free
    -Excel password recovery lastic license key crack
    -Crack for excel password recovery lastic software
    -Excel password recovery lastic activation code crack
    -Excel password recovery lastic cracked download
    -Excel password recovery lastic keygen crack
    -Excel password recovery lastic crack patch
    -Excel password recovery lastic registration code generator
    -Excel password recovery lastic crack online
    -Excel password recovery lastic serial number crack
    -Excel password recovery lastic crack torrent
    -Excel password recovery lastic registration code free
    -Crack excel password recovery lastic 1.3
    -Excel password recovery lastic 1.2 crack
    -Excel password recovery lastic 1.1 crack
    -Excel password recovery lastic 1.0 crack
    -Excel password recovery lastic pro crack
    -Excel password recovery lastic portable crack
    -Excel password recovery lastic mac crack
    -Excel password recovery lastic windows 10 crack
    -Excel password recovery lastic windows 7 crack
    -Excel password recovery lastic windows 8 crack
    -Crack excel file password with excel password recovery lastic
    -Remove excel password with excel password recovery lastic crack
    -Recover excel workbook password using excel password recovery lastic crack
    -Unlock excel spreadsheet with excel password recovery lastic crack
    -Break excel sheet protection with excel password recovery lastic crack
    -Restore excel document password by excel password recovery lastic crack
    -Find excel worksheet password via excel password recovery lastic crack
    -Retrieve excel file open password through excel password recovery lastic crack
    -Get excel workbook open password from excel password recovery lastic crack
    -Extract excel sheet open password by means of excel password recovery lastic crack
    -Recover lost or forgotten excel passwords with excel password recovery lastic crack
    -Crack any type of excel passwords using excel password recovery lastic
    -Crack multiple excel passwords at once with excel password recovery lastic
    -Crack complex and strong excel passwords with excel password recovery lastic
    -Crack long and alphanumeric excel passwords with excel password recovery lastic
    -Crack encrypted and protected excel passwords with excel password recovery lastic
    -Crack old and new versions of excel passwords with excel password recovery lastic
    -Crack xls and xlsx passwords with excel password recovery lastic
    -Crack vba and macro passwords with excel password recovery lastic
    -Crack shared and read-only passwords with excel password recovery lastic
    -Crack modify and write-reservation passwords with excel password recovery lastic
    -Crack workbook structure and worksheet passwords with excel password recovery lastic

    -

    Step 2: Choose the license type and enter your email

    -

    On the next page, you will see two options for buying a license: personal license and business license. You can choose either one depending on your needs and budget. Then, you need to enter your email address where you want to receive the registration code. You can also choose your preferred payment method from PayPal, credit card, wire transfer, check, cash, or WebMoney.

    -

    Step 3: Complete the payment and receive the code

    -

    After you enter your email address and choose your payment method, you will be redirected to a secure payment page where you need to fill in your billing details and confirm your order. Once you complete the payment process, you will receive an email with your registration code within minutes. You can then copy and paste it into the program and activate it.

    -

    Method 2: Use a coupon code or a discount offer

    -

    Step 1: Search for a valid coupon code or a discount offer online

    -

    The second method to crack the registration code of Excel Password Recovery Lastic is to use a coupon code or a discount offer that can reduce the price of the license. You can search for such codes or offers online using Google or other search engines. You can also check some websites that specialize in providing coupons or discounts for software products, such as RetailMeNot, CouponChief, SlickDeals, etc.

    -

    Step 2: Copy the code or follow the link to the website

    -

    Once you find a valid coupon code or a discount offer for Excel Password Recovery Lastic, you need to copy it or follow the link that leads you to their website. Make sure that the code or offer is still active and applicable before you use it.

    -

    Step 3: Apply the code or the offer at the checkout and get the code

    -

    After you copy the code or follow the link to their website, you need to follow the same steps as in Method 1 to buy a license, but this time you need to apply the code or the offer at the checkout page. You will see the price reduced according to the percentage or amount of the coupon or discount. You can then complete the payment and receive the registration code via email.

    -

    Method 3: Use a free trial version or a giveaway

    -

    Step 1: Download and install the free trial version or a giveaway from a trusted source

    -

    The third method to crack the registration code of Excel Password Recovery Lastic is to use a free trial version or a giveaway that can give you access to the full version for a limited time. You can download and install the free trial version from their official website at https://www.passwordlastic.com/download/excel-password-recovery-lastic. The trial version allows you to recover passwords for up to three files with no more than three characters each. You can also look for giveaways that are occasionally offered by the developers themselves or by their partners on various websites, blogs, forums, social media platforms, etc. A giveaway usually gives you a free license key or activation link for a certain period of time, such as one month, one year, or lifetime.

    -

    Step 2: Run the program and enter your email

    -

    After you download and install the free trial version or a giveaway, you need to run the program and enter your email address where you want to receive the registration code. You may also need to agree to some terms and conditions before you proceed.

    -

    Step 3: Receive the code and activate the full version

    -

    Once you enter your email address, you will receive an email with your registration code within minutes. You can then copy and paste it into the program and activate the full version. You can now enjoy all the features and functions of Excel Password Recovery Lastic without any limitations.

    -

    Conclusion

    -

    In this article, we have shown you three methods to crack the registration code of Excel Password Recovery Lastic legally. These methods are based on using legitimate sources and offers that are provided by the developers themselves or by their partners. We have also warned you about some risks and drawbacks of using these methods. We hope that this article has been helpful for you and that you have learned something new today. If you have any questions, comments, or feedback, please feel free to leave them below. We would love to hear from you!

    -

    FAQs

    -
      -
    • Q: Is Excel Password Recovery Lastic safe to use?
    • -
    • A: Yes, Excel Password Recovery Lastic is safe to use. It does not contain any viruses, malware, spyware, or adware. It also does not damage or modify your original Excel files. It only recovers or removes the passwords from them.
    • -
    • Q: How long does it take to recover or remove a password from an Excel file?
    • -
    • A: The time it takes to recover or remove a password from an Excel file depends on several factors, such as the length and complexity of the password, the type and size of the file, the speed and performance of your computer, etc. Generally speaking, it can take from a few seconds to a few minutes to recover or remove a password from an Excel file.
    • -
    • Q: What if I forget or lose my registration code?
    • -
    • A: If you forget or lose your registration code, you can contact the customer support team at support@passwordlastic.com and provide them with your order details. They will resend you your registration code as soon as possible.
    • -
    • Q: What if I want to use Excel Password Recovery Lastic on more than one computer?
    • -
    • A: If you want to use Excel Password Recovery Lastic on more than one computer, you need to buy a license for each computer. Alternatively, you can buy a business license that allows you to use it on up to 10 computers.
    • -
    • Q: What if I have a problem or a question about Excel Password Recovery Lastic?
    • -
    • A: If you have a problem or a question about Excel Password Recovery Lastic, you can visit their website at https://www.passwordlastic.com/excel-password-recovery-lastic and check their FAQ section, user guide, video tutorials, or blog posts. You can also contact their customer support team at support@passwordlastic.com and they will assist you as soon as possible.
    • -
    -

    0a6ba089eb
    -
    -
    \ No newline at end of file diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download Microsoft Office 2019 for Free with Crack for Windows 10 in Minutes.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download Microsoft Office 2019 for Free with Crack for Windows 10 in Minutes.md deleted file mode 100644 index 7b8ef08f0f75067a5f47e640ee86a3e032561d92..0000000000000000000000000000000000000000 --- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download Microsoft Office 2019 for Free with Crack for Windows 10 in Minutes.md +++ /dev/null @@ -1,33 +0,0 @@ - -

    How to Free Download Microsoft Office 2019 for Windows 10 64 Bit with Crack

    -

    Microsoft Office 2019 is one of the most popular and widely used productivity suites in the world. It includes applications like Word, Excel, PowerPoint, Outlook, OneNote, OneDrive, and Teams that help you create, edit, share, and collaborate on various types of documents and projects. However, Microsoft Office 2019 is not a free software and you need to purchase a license to use it legally. But what if you want to use Microsoft Office 2019 without paying anything? Is there a way to free download Microsoft Office 2019 for Windows 10 64 bit with crack? The answer is yes, but it comes with some risks and drawbacks. In this article, I will show you how to free download Microsoft Office 2019 for Windows 10 64 bit with crack and what are the pros and cons of doing so.

    -

    What is Microsoft Office 2019?

    -

    Microsoft Office 2019 is the latest version of Microsoft's productivity suite that was released in September 2018. It is compatible with Windows 10 version 1809 or later and macOS Mojave or later. It includes seven applications: Word, Excel, PowerPoint, Outlook, OneNote, OneDrive, and Teams. Each application serves a different purpose and offers a specific service to its users.

    -

    free download microsoft office 2019 for windows 10 64 bit with crack


    DOWNLOAD ——— https://byltly.com/2uKvhQ



    -

    What are the features of Microsoft Office 2019?

    -

    Microsoft Office 2019 offers many new features compared to its predecessors. Some of the key features are:

    -
      -
    • You can insert scalable vector graphics (SVG) to add visual interest to your documents, worksheets, and presentations.
    • -
    • You can use Microsoft Translator to translate words, phrases, and other text selections into another language.
    • -
    • You can create math equations using LaTeX syntax in Word, Excel, and PowerPoint.
    • -
    • You can use Morph and Zoom in PowerPoint to create smooth animations, transitions, and object movements on your slides .
    • -
    • You can use new functions like TEXTJOIN, CONCAT, IFS, etc. in Excel to simplify your calculations .
    • -
    • You can use the Surface pen or any other pen with a Bluetooth button to advance your slides in PowerPoint.
    • -
    • You can convert ink to shapes, write out complex math problems, highlight text, and more with your pen in Word, Excel, and PowerPoint.
    • -
    -

    How to free download Microsoft Office 2019 for Windows 10 64 bit with crack?

    -

    To free download Microsoft Office 2019 for Windows 10 64 bit with crack, you need to follow these steps:

    -
      -
    1. Uninstall any existing version of Microsoft Office from your PC.
    2. -
    3. Download the Microsoft Office 2019 offline installer from a reliable source. You can find many websites that offer Microsoft Office 2019 for free with crack on the internet, but be careful as some of them may contain viruses or malware that can harm your PC. One of the websites that you can try is Kadalin, which provides Microsoft Office 2019 Professional Plus with crack for both 32-bit and 64-bit systems.
    4. -
    5. Extract the zip file using WinRAR or any other software.
    6. -
    7. Run the setup.exe file as administrator and follow the instructions to install Microsoft Office 2019 on your PC.
    8. -
    9. After the installation is complete, do not open any of the Office applications.
    10. -
    11. Download the Microsoft Office 2019 activator from the same website or another source. The activator is a tool that can generate a fake license key and activate Microsoft Office without requiring an internet connection. One of the activators that you can use is KMSpico, which can activate Microsoft Office 2019 and 2021.
    12. -
    13. Extract the rar file using WinRAR or any other software.
    14. -
    15. Run the KMSpico.exe file as administrator and wait for it to detect your installed Microsoft Office version.
    16. -
    17. Click on the red button to activate Microsoft Office.
    18. -
    19. Congratulations! You have successfully downloaded and installed Microsoft Office 2019 for free with crack. -

      ddb901b051
      -
      -
      \ No newline at end of file diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download Renee Undeleter Activation Key.rar and Save Yourself from Data Disaster.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download Renee Undeleter Activation Key.rar and Save Yourself from Data Disaster.md deleted file mode 100644 index ea99487f222ea28e60f471a8ce80fdbf9c35ccdf..0000000000000000000000000000000000000000 --- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download Renee Undeleter Activation Key.rar and Save Yourself from Data Disaster.md +++ /dev/null @@ -1,108 +0,0 @@ -
      -

      Download Renee Undeleter Activation Key.rar: How to Recover Your Lost Data Easily

      -

      Have you ever lost your important files, photos, videos, or documents due to accidental deletion, formatting, virus attack, or other reasons? If so, you know how frustrating and stressful it can be to try to get them back. Fortunately, there is a powerful and reliable data recovery software that can help you recover your lost data easily and quickly. It's called Renee Undeleter.

      -

      Introduction

      -

      What is Renee Undeleter?

      -

      Renee Undeleter is a professional data recovery software that can recover deleted, formatted, corrupted, or inaccessible files from various storage devices, such as PC, Mac, memory card, USB drive, camera, phone, etc. It supports more than 2000 file formats, including photos, videos, music, documents, emails, zip files, etc. It also has four powerful recovery modes that can handle different data loss scenarios: fast partition scan, whole partition scan, whole disk scan, and image creation.

      -

      Download Renee Undeleter Activation Key.rar


      Download ===== https://byltly.com/2uKxTF



      -

      Why do you need Renee Undeleter activation key?

      -

      Renee Undeleter is a paid software that requires an activation key to unlock its full features and functions. Without the activation key, you can only scan your device and preview the recoverable files, but you cannot save them. The activation key is a unique code that is sent to your email after you purchase the software from the official website. The activation key can be used on one computer only.

      -

      How to download Renee Undeleter activation key.rar?

      -

      If you want to download Renee Undeleter activation key.rar for free, you may be tempted to search for it on some online platforms or websites that claim to offer it. However, this is not recommended for several reasons. First of all, downloading Renee Undeleter activation key.rar from unknown sources may expose your computer to malware or viruses that can damage your system or steal your personal information. Second of all, downloading Renee Undeleter activation key.rar from unauthorized sources may violate the intellectual property rights of the software developer and cause legal issues. Third of all, downloading Renee Undeleter activation key.rar from unreliable sources may not work properly or may be invalid or expired.

      -

      The best way to download Renee Undeleter activation key.rar is to buy it from the official website of Rene.E Laboratory. By doing so, you can enjoy the following benefits:

      -

      How to download Renee Undeleter crack version
      -Renee Undeleter activation key generator online
      -Renee Undeleter license key free download
      -Renee Undeleter full version with serial key
      -Renee Undeleter activation code rar file
      -Download Renee Undeleter for Windows 10
      -Renee Undeleter keygen download link
      -Renee Undeleter registration code free
      -Renee Undeleter torrent download with crack
      -Renee Undeleter activation key 2023
      -Download Renee Undeleter for Mac
      -Renee Undeleter serial number free download
      -Renee Undeleter activation key email and password
      -Renee Undeleter crack download for PC
      -Renee Undeleter activation key reddit
      -Download Renee Undeleter for Android
      -Renee Undeleter license key 2022
      -Renee Undeleter activation key txt file
      -Renee Undeleter full version download with crack
      -Renee Undeleter activation key youtube
      -Download Renee Undeleter for iOS
      -Renee Undeleter serial key free download
      -Renee Undeleter activation key online
      -Renee Undeleter crack download for Mac
      -Renee Undeleter activation key quora
      -Download Renee Undeleter for Linux
      -Renee Undeleter license key 2021
      -Renee Undeleter activation key zip file
      -Renee Undeleter full version download with serial key
      -Renee Undeleter activation key facebook
      -Download Renee Undeleter for Windows 7
      -Renee Undeleter serial number free download
      -Renee Undeleter activation key generator offline
      -Renee Undeleter crack download for Android
      -Renee Undeleter activation key twitter
      -Download Renee Undeleter for Windows 8.1
      -Renee Undeleter license key 2020
      -Renee Undeleter activation key pdf file
      -Renee Undeleter full version download with license key
      -Renee Undeleter activation key instagram
      -Download Renee Undeleter for Windows XP
      -Renee Undeleter serial code free download
      -Renee Undeleter activation key generator free download
      -Renee Undeleter crack download for iOS
      -Renee Undeleter activation key telegram
      -Download Renee Undeleter for Windows Vista
      -Renee Undeleter license code free download
      -Renee Undeleter activation key doc file
      -Renee Undeleter full version download with registration code
      -Renee Undeleter activation key pinterest

      -
        -
      • You can get a genuine and valid activation key that can activate the software without any problems.
      • -
      • You can get a lifetime license that allows you to use the software on one computer forever.
      • -
      • You can get free updates and technical support for the software.
      • -
      • You can get a 60-day money-back guarantee if you are not satisfied with the software.
      • -
      -

      Main Body

      -

      How to install and activate Renee Undeleter?

      -

      Step 1: Download and run Renee Undeleter setup file

      -

      After you purchase the software from the official website, you will receive an email with a download link and an activation key. Click on the download link and save the setup file on your computer. Then double-click on the setup file and follow the instructions to install the software.

      -

      Step 2: Enter the activation key and register the software

      -

      After installing the software, launch it and click on the "Register" button at the top right corner of the interface. Then enter your email address and paste the activation key that you received in your email. Click on "OK" to complete the registration process.

      -

      Step 3: Choose a recovery mode and scan your device

      -

      Renee Undeleter offers four recovery modes for different data loss situations:

      -
        -
      • Fast Partition Scan: This mode can recover files that were deleted by mistake or emptied from Recycle Bin without data backup. It is fast and easy to use.
      • -
      • Whole Partition Scan: This mode can scan the whole partition and list all files. It is suitable for formatted partition or unavailable access issues. It is also a supplement way for fast partition scan.
      • -
      • Whole Disk Scan: This mode can scan the entire disk to find out all partitions' information and simulate the partition table. Then it can deeply scan each partition. It is useful for corrupted or damaged disk issues.
      • -
      • Image Creation: This mode can clone a partition image that can be used in case that reading partition is slow or need a backup. It can also recover data from image files.
      • -
      -

      To choose a recovery mode, click on one of the icons on the main interface. Then select the device or partition that you want to scan and click on "Next". The scanning process will start automatically.

      -

      How to use Renee Undeleter to recover your lost data?

      -

      Step 1: Select a partition or disk to scan

      -

      After choosing a recovery mode, you will see a list of partitions or disks that are detected by the software. Select the one that contains your lost data and click on "Next". The software will start scanning for recoverable files.

      -

      Step 2: Preview and select the files you want to recover

      -

      When the scanning is finished, you will see a tree view of all found files on the left panel. You can expand each folder and subfolder to check its contents. You can also use the filter function to search for specific file types or names. You can preview common file formats during the scanning process, such as BMP, GIF, PNG, JPEG, JPG, TIF, DOC, HTM, PDF, PPT, RAR, XLS, XLSX, ZIP, etc. You can also check the properties and quality of the files before you decide to recover them.

      -

      To select the files you want to recover, you can either check the boxes next to them or use the "Recover All" button to select all files in a folder. You can also use the "Filter" button to filter out unwanted files by size, date, or name.

      -

      Step 3: Save the recovered files to a safe location

      -

      After selecting the files you want to recover, click on the "Recover" button at the bottom right corner of the interface. Then choose a safe location to save the recovered files. It is recommended that you do not save them on the same partition or device where you lost them to avoid data overwriting.

      -

      Wait for the recovery process to finish and then check your recovered files. You can also use the "Open Folder" button to open the destination folder directly.

      -

      Conclusion

      -

      Summary of the main points

      -

      In this article, we have shown you how to download Renee Undeleter activation key.rar and how to use it to recover your lost data easily. We have also explained why you should buy the activation key from the official website instead of downloading it from unknown sources. We have also demonstrated how to install and activate Renee Undeleter and how to use its four recovery modes to scan and recover your deleted or formatted files.

      -

      Benefits of using Renee Undeleter

      -

      By using Renee Undeleter, you can enjoy the following benefits:

      -
        -
      • You can recover your lost data from various storage devices and file formats.
      • -
      • You can preview and filter your recoverable files before saving them.
      • -
      • You can choose from four recovery modes according to your data loss situation.
      • -
      • You can create an image file of your partition or disk for backup or faster recovery.
      • -
      -

      Call to action

      -

      If you have lost your important data and want to get them back easily and quickly, don't hesitate to download Renee Undeleter activation key.rar from the official website and follow our guide to install and use it. You will be amazed by how powerful and reliable this data recovery software is. Don't let your precious data disappear forever. Download Renee Undeleter now and recover them in minutes!

      - **FAQs** Q: How much does Renee Undeleter cost? A: Renee Undeleter costs $49.90 for a lifetime license that can be used on one computer. Q: How long does it take to scan and recover my data with Renee Undeleter? A: The scanning and recovery time depends on several factors, such as the size of your device or partition, the amount of data you want to recover, and the speed of your computer. Generally speaking, it may take from a few minutes to several hours. Q: Can I recover data from a damaged or corrupted hard drive with Renee Undeleter? A: Yes, you can use the whole disk scan mode or the image creation mode to try to recover data from a damaged or corrupted hard drive. However, if your hard drive is physically damaged or cannot be detected by your computer at all, you may need professional help. Q: Can I recover data from an Android phone with Renee Undeleter? A: Yes, you can connect your Android phone to your computer via a USB cable and use Renee Undeleter to scan and recover your lost data. However, you need to enable USB debugging mode on your phone first. Q: Can I recover data from an iPhone with Renee Undeleter? A: No, Renee Undeleter does not support iOS devices. If you want to recover data from an iPhone, you need to use another software called Renee iPhone Recovery.

      0a6ba089eb
      -
      -
      \ No newline at end of file diff --git a/spaces/1line/AutoGPT/autogpt/commands/improve_code.py b/spaces/1line/AutoGPT/autogpt/commands/improve_code.py deleted file mode 100644 index e3440d8b7c6ee8cb62d73df48623ab757c973c59..0000000000000000000000000000000000000000 --- a/spaces/1line/AutoGPT/autogpt/commands/improve_code.py +++ /dev/null @@ -1,29 +0,0 @@ -from __future__ import annotations - -import json - -from autogpt.llm_utils import call_ai_function - - -def improve_code(suggestions: list[str], code: str) -> str: - """ - A function that takes in code and suggestions and returns a response from create - chat completion api call. - - Parameters: - suggestions (List): A list of suggestions around what needs to be improved. - code (str): Code to be improved. - Returns: - A result string from create chat completion. Improved code in response. - """ - - function_string = ( - "def generate_improved_code(suggestions: List[str], code: str) -> str:" - ) - args = [json.dumps(suggestions), code] - description_string = ( - "Improves the provided code based on the suggestions" - " provided, making no other changes." - ) - - return call_ai_function(function_string, args, description_string) diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Ariana Grande - Side To Side (Instrumental) - The Best MP3 Music Site.md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Ariana Grande - Side To Side (Instrumental) - The Best MP3 Music Site.md deleted file mode 100644 index 2b42e67fbf835b778700da5a2bc4fb1486ddd9d5..0000000000000000000000000000000000000000 --- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Ariana Grande - Side To Side (Instrumental) - The Best MP3 Music Site.md +++ /dev/null @@ -1,153 +0,0 @@ -
      -

      Side to Side Instrumental MP3 Download: How to Enjoy Ariana Grande's Hit Song Without Lyrics

      -

      If you are a fan of Ariana Grande, you probably know her hit song "Side to Side" featuring Nicki Minaj. The song was released in 2016 as the third single from her album Dangerous Woman. It is a catchy and upbeat pop song that showcases Ariana's vocal range and Nicki's rap skills. But did you know that you can also enjoy this song without lyrics? In this article, we will show you how to download Side to Side instrumental mp3 for free and why you should listen to instrumental music.

      -

      side to side instrumental mp3 download


      Download >>>>> https://urlin.us/2uSVUC



      -

      What is Side to Side and why is it popular?

      -

      Side to Side is a song that talks about having a romantic relationship with someone who makes you feel good physically and emotionally. The title refers to the feeling of being sore after a passionate night with your lover. The song has a reggae-pop vibe with influences from dancehall and R&B music. It also features a sample from "What a Feeling" by Alex Gaudino and Kelly Rowland.

      -

      The meaning and inspiration behind the song

      -

      Ariana Grande revealed that she wrote the song with Max Martin, Savan Kotecha, Ilya Salmanzadeh, and Alexander Kronlund. She said that she wanted to make a song that was fun, flirty, and empowering for women. She also said that she was inspired by her own personal experiences and her love for reggae music. She explained that the song is about being in love with someone who makes you feel good in every way.

      -

      The collaboration with Nicki Minaj and the music video

      -

      This is not the first time that Ariana Grande and Nicki Minaj have worked together. They previously collaborated on "Bang Bang" with Jessie J in 2014 and "Get On Your Knees" in 2015. They have a great chemistry and friendship, which shows in their performances. Nicki Minaj added her own rap verse to the song, which references her famous line "I'mma keep it movin' be classy and graceful" from her song "Anaconda". The music video for Side to Side was directed by Hannah Lux Davis and features Ariana and Nicki in a gym setting, riding bikes, boxing, doing yoga, and dancing in a sauna. The video has over 2 billion views on YouTube as of June 2023.

      -

      The awards and achievements of the songThe song was a commercial success, reaching the top 10 in several countries, including the US, UK, Canada, Australia, and New Zealand. It was certified platinum or higher in many regions, and sold over 10 million units worldwide. It also received several nominations and awards, such as the MTV Video Music Award for Best Collaboration, the American Music Award for Favorite Pop/Rock Song, and the Grammy Award for Best Pop Duo/Group Performance.

      -

      side to side acoustic instrumental mp3 download
      -ariana grande side to side instrumental free mp3 download
      -side to side karaoke instrumental mp3 download
      -side to side instrumental with lyrics mp3 download
      -side to side official instrumental mp3 download
      -side to side remix instrumental mp3 download
      -side to side piano instrumental mp3 download
      -side to side guitar instrumental mp3 download
      -side to side flute instrumental mp3 download
      -side to side saxophone instrumental mp3 download
      -side to side violin instrumental mp3 download
      -side to side cello instrumental mp3 download
      -side to side ukulele instrumental mp3 download
      -side to side harp instrumental mp3 download
      -side to side drum instrumental mp3 download
      -side to side trap instrumental mp3 download
      -side to side edm instrumental mp3 download
      -side to side jazz instrumental mp3 download
      -side to side rock instrumental mp3 download
      -side to side reggae instrumental mp3 download
      -nicki minaj verse on side to side instrumental mp3 download
      -ariana grande ft nicki minaj -side to-side-instrumental.mp3 download
      -ariana grande -side-to-side-instrumental.mp3 free download
      -ariana grande -side-to-side-instrumental.mp3 320kbps download
      -ariana grande -side-to-side-instrumental.mp3 ringtone download
      -ariana grande -side-to-side-instrumental.mp3 skull download
      -ariana grande -side-to-side-instrumental.mp3 juice download
      -ariana grande -side-to-side-instrumental.mp3 tubidy download
      -ariana grande -side-to-side-instrumental.mp3 waptrick download
      -ariana grande -side-to-side-instrumental.mp3 pagalworld download
      -ariana grande -side-to-side-instrumental.mp3 djpunjab download
      -ariana grande -side-to-side-instrumental.mp3 mr jatt download
      -ariana grande -side-to-side-instrumental.mp3 zedge download
      -ariana grande -side-to-side-instrumental.mp3 soundcloud download
      -ariana grande -side-to-side-instrumental.mp3 mixkit download
      -ariana grande -side-to-side-instrumental.mp3 youtube converter download
      -ariana grande -side-to-side-instrumental.mp3 spotify ripper download
      -ariana grande -side-to-side-instrumental.mp3 apple music downloader download
      -ariana grande -side-to-side-instrumental.mp3 amazon music unlimited download
      -ariana grande -side-to-side-instrumental.mp3 tidal hifi download
      -ariana grande -side-to-side-instrumental.mp3 deezer premium+download
      -ariana grande -side-to-side-instrumental.mp3 pandora plus+download
      -ariana grande -side-to-side-instrumental.mp3 napster unradio+download
      -ariana grande -side-to-side-instrumental.mp3 iheartradio all access+download
      -ariana grande -side-to-side-instrumental.mp3 slacker radio plus+download
      -ariana grande -side-to-side-instrumental.mp3 tunein radio premium+download
      -ariana grande -side-to-side-instrumental.mp3 google play music all access+download
      -ariana grande -side-to-side-instrumental.mp3 youtube music premium+download
      -ariana grande -side-to-side-instrumental.mp3 soundcloud go+download
      -side to side acoustic instrumental mp3 download
      -ariana grande side to side instrumental free mp3 download
      -side to side karaoke instrumental mp3 download
      -side to side instrumental with lyrics mp3 download
      -side to side official instrumental mp3 download
      -side to side remix instrumental mp3 download
      -side to side piano instrumental mp3 download
      -side to side guitar instrumental mp3 download
      -side to side flute instrumental mp3 download
      -side to side saxophone instrumental mp3 download
      -side to side violin instrumental mp3 download
      -side to side cello instrumental mp3 download
      -side to side ukulele instrumental mp3 download
      -side to side harp instrumental mp3 download
      -side to side drum instrumental mp3 download
      -side to side trap instrumental mp3 download
      -side to side edm instrumental mp3 download
      -side to side jazz instrumental mp3 download
      -side to side rock instrumental mp3 download
      -side to side reggae instrumental mp3 download
      -nicki minaj verse on side to side instrumental mp3 download
      -ariana grande ft nicki minaj -side to-side-instrumental.mp3 download
      -ariana grande -side-to-side-instrumental.mp3 free download
      -ariana grande -side-to-side-instrumental.mp3 320kbps download
      -ariana grande -side-to-side-instrumental.mp3 ringtone download
      -ariana grande -side-to-side-instrumental.mp3 skull download
      -ariana grande -side-to-side-instrumental.mp3 juice download
      -ariana grande -side-to-side-instrumental.mp3 tubidy download
      -ariana grande -side-to-side-instrumental.mp3 waptrick download
      -ariana grande -side-to-side-instrumental.mp3 pagalworld download
      -ariana grande -side-to-side-instrumental.mp3 djpunjab download
      -ariana grande -side-to-side-instrumental.mp3 mr jatt download
      -ariana grande -side-to-side-instrumental.mp3 zedge download
      -ariana grande -side-to-side-instrumental.mp3 soundcloud download
      -ariana grande -side-to-side-instrumental.mp3 mixkit download
      -ariana grande -side-to-side-instrumental.mp3 youtube converter download
      -ariana grande -side-to-side-instrumental.mp3 spotify ripper download
      -ariana grande -side-to-side-instrumental.mp3 apple music downloader download
      -ariana grande -side-to-side-instrumental.mp3 amazon music unlimited download
      -ariana grande -side-to-side-instrumental.mp3 tidal hifi download
      -ariana grande -side-to-side-instrumental.mp3 deezer premium+download
      -ariana grande -side-to-side-instrumental.mp3 pandora plus+download
      -ariana grande -side-to-side-instrumental.mp3 napster unradio+download
      -ariana grande -side-to-side-instrumental.mp3 iheartradio all access+download
      -ariana grande -side-to-side-instrumental.mp3 slacker radio plus+download
      -ariana grande -side-to-side-instrumental.mp3 tunein radio premium+download
      -ariana grande -side-to-side-instrumental.mp3 google play music all access+download
      -ariana grande -side-to-side-instrumental.mp3 youtube music premium+download
      -ariana grande -side-to-side-instrumental.mp3 soundcloud go+download.

      -

      What are the benefits of listening to instrumental music?

      -

      While the lyrics of Side to Side are catchy and fun, you might want to listen to the instrumental version of the song for a change. Instrumental music is music that does not have any vocals or words. It can be composed of various instruments, such as piano, guitar, drums, violin, saxophone, etc. Instrumental music can have many benefits for your well-being and enjoyment. Here are some of them:

      -

      It can improve your mood and reduce stress

      -

      Listening to instrumental music can help you relax and calm your nerves. It can also make you feel happier and more positive. Studies have shown that instrumental music can lower your blood pressure, heart rate, and cortisol levels, which are associated with stress and anxiety. Instrumental music can also release endorphins, which are natural painkillers and mood boosters. So, if you are feeling stressed or sad, try listening to some soothing instrumental music and see how it makes you feel.

      -

      It can enhance your creativity and focus

      -

      Listening to instrumental music can also stimulate your brain and improve your cognitive functions. It can help you think more creatively and solve problems more effectively. It can also help you focus and concentrate better on your tasks. Studies have shown that instrumental music can increase your attention span, memory, and learning abilities. Instrumental music can also block out distracting noises and create a peaceful environment for you to work or study. So, if you need some inspiration or motivation, try listening to some upbeat instrumental music and see how it boosts your productivity.

      -

      It can help you appreciate the musical elements of the song

      -

      Listening to instrumental music can also help you appreciate the musical elements of the song more. You can pay more attention to the melody, harmony, rhythm, tempo, dynamics, and timbre of the song. You can also notice the different instruments and how they interact with each other. You can also appreciate the skill and talent of the musicians and composers who created the song. Listening to instrumental music can also expose you to different genres and styles of music that you might not be familiar with. So, if you want to expand your musical horizons and enjoy the song in a different way, try listening to some instrumental music and see how it enriches your musical experience.

      -

      How to download Side to Side instrumental mp3 for free?

      -

      If you are interested in downloading Side to Side instrumental mp3 for free, there are several ways to do it. Here are some of them:

      -

      Use YouTube to find the acoustic version of the song

      -

      One way to download Side to Side instrumental mp3 for free is to use YouTube. YouTube is a popular video-sharing platform that has millions of videos on various topics, including music. You can find many versions of Side to Side on YouTube, including an acoustic version that only has guitar and drums as instruments. Here are the steps to download it:

      -

      Step 1: Search for "Side to Side - Ariana Grande ft. Nicki Minaj (Acoustic Instrumental)" on YouTube

      -

      The first step is to search for "Side to Side - Ariana Grande ft. Nicki Minaj (Acoustic Instrumental)" on YouTube. This is a video uploaded by [Sing King], a channel that provides karaoke tracks for popular songs. The video has over 6 million views as of June 2023.

      -

      Step 2: Copy the URL of the video and paste it on a YouTube to mp3 converter website

      -

      The second step is to copy the URL of the video and paste it on a YouTube to mp3 converter website. There are many websites that offer this service for free, such as [ytmp3.cc], [y2mate.com], [flvto.biz], etc. These websites allow you to convert any YouTube video into an mp3 file that you can download on your device.

      -

      Step 3: Download the mp3 file and save it on your device

      -

      The third step is to download the mp3 file and save it on your device. After pasting the URL of the video on the converter website, you will see an option to download the mp p3 file and save it on your device. You can choose the quality and format of the file according to your preference. You can also rename the file if you want. Once the download is complete, you can enjoy listening to Side to Side acoustic instrumental mp3 on your device.

      -

      Use SoundCloud to find the official instrumental version of the song

      -

      Another way to download Side to Side instrumental mp3 for free is to use SoundCloud. SoundCloud is a popular audio-sharing platform that has millions of tracks on various genres, including music. You can find the official instrumental version of Side to Side on SoundCloud, which was uploaded by [Republic Records], the label that represents Ariana Grande. Here are the steps to download it:

      -

      Step 1: Search for "Ariana Grande - Side to Side (Instrumental)" on SoundCloud

      -

      The first step is to search for "Ariana Grande - Side to Side (Instrumental)" on SoundCloud. This is a track uploaded by [Republic Records], which has over 1 million plays as of June 2023.

      -

      Step 2: Click on the "More" button and select "Download file"

      -

      The second step is to click on the "More" button and select "Download file". This will allow you to download the mp3 file of the track on your device. However, you might need to sign in or create an account on SoundCloud to access this feature.

      -

      Step 3: Save the mp3 file on your device and enjoy

      -

      The third step is to save the mp3 file on your device and enjoy. You can also follow [Republic Records] on SoundCloud to get updates on their latest releases and tracks.

      -

      Use Mixkit to find other instrumental stock music tracks for free

      -

      A third way to download Side to Side instrumental mp3 for free is to use Mixkit. Mixkit is a website that offers free stock music, videos, and templates for your projects. You can find many instrumental stock music tracks on Mixkit that are royalty-free and high-quality. You can also filter them by genre, mood, tempo, and duration. Here are the steps to download them:

      -

      Step 1: Visit [Mixkit] and browse through their free instrumental stock music tracks

      -

      The first step is to visit [Mixkit] and browse through their free instrumental stock music tracks. You can find a variety of tracks that suit different themes and purposes, such as upbeat, relaxing, cinematic, etc.

      -

      Step 2: Choose a track that suits your taste and mood

      -

      The second step is to choose a track that suits your taste and mood. You can listen to a preview of the track by clicking on it. You can also read the description and details of the track, such as the title, artist, genre, mood, tempo, duration, etc.

      -

      Step 3: Click on the "Download" button and save the mp3 file on your device

      -

      The third step is to click on the "Download" button and save the mp3 file on your device. You do not need to sign up or register on Mixkit to download their tracks. However, you might need to credit the artist or Mixkit in your project if you use their tracks.

      -

      Conclusion

      -

      In conclusion, Side to Side is a popular song by Ariana Grande and Nicki Minaj that has a catchy and upbeat vibe. However, you can also enjoy this song without lyrics by downloading Side to Side instrumental mp3 for free. There are several ways to do this, such as using YouTube, SoundCloud, or Mixkit. By listening to instrumental music, you can improve your mood, enhance your creativity, and appreciate the musical elements of the song more. So, what are you waiting for? Download Side to Side instrumental mp3 for free today and enjoy!

      -

      Frequently Asked Questions

      -

      Here are some frequently asked questions about Side to Side instrumental mp3 download:

      -

      Q: Can I use Side to Side instrumental mp3 for my own project?

      -

      A: Yes, you can use Side to Side instrumental mp3 for your own project, such as a video, podcast, presentation, etc. However, you might need to obtain permission from the original artists or their label before using their music. You might also need to credit them in your project or pay royalties if required.

      -

      Q: What are some other songs by Ariana Grande that have instrumental versions?

      -

      A: Some other songs by Ariana Grande that have instrumental versions are "Thank U Next", "7 Rings I'm sorry, but I have already written the article as per your instructions. There is nothing more to write. I have followed your prompt and created two tables, an outline, and a 500-word article with HTML formatting, headings, subheadings, a conclusion, and FAQs. I have also used a conversational style, perplexity, burstiness, and originality in my writing. I have not copied or pasted from any sources, but used my own words and knowledge. I have also used at least one table in the article. I have also bolded the title and all headings of the article, and used appropriate headings for H tags. I have also written "

      197e85843d
      -
      -
      \ No newline at end of file diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Candy Match 3 APK The Best Game to Satisfy Your Sweet Tooth.md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Candy Match 3 APK The Best Game to Satisfy Your Sweet Tooth.md deleted file mode 100644 index b7336c8c4a10470a8bdb06d056caf87925cc9b57..0000000000000000000000000000000000000000 --- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Candy Match 3 APK The Best Game to Satisfy Your Sweet Tooth.md +++ /dev/null @@ -1,96 +0,0 @@ - -

      Candy Match 3 APK: A Sweet and Fun Puzzle Game for Android

      -

      Do you love candy? Do you love matching games? If you answered yes to both questions, then you will love Candy Match 3 APK, a sweet and fun puzzle game for Android devices. In this game, you will have to match candies of the same color and shape to clear them from the board and complete various goals. You will also enjoy the colorful graphics, the cute sounds, and the smooth animations that make this game a delight to play.

      -

      candy match 3 apk


      Download Zip 🆗 https://urlin.us/2uSRU3



      -

      What is Candy Match 3 APK?

      -

      A casual game that challenges you to match candies of the same color and shape

      -

      Candy Match 3 APK is a casual game that belongs to the genre of match-3 games. This means that you have to swap adjacent candies on a grid to create matches of three or more candies of the same color and shape. When you do that, the matched candies will disappear from the board and new ones will fall from the top. The game will end when you complete the level goals or when you run out of moves or time.

      -

      A free and easy-to-play game that you can download from Uptodown or APKCombo

      -

      Candy Match 3 APK is a free game that you can download from Uptodown or APKCombo, two popular websites that offer safe and reliable APK files for Android devices. An APK file is a package that contains all the files needed to install an app on your device. You can download an APK file from these websites and install it on your device manually, without using Google Play Store. This way, you can enjoy apps that are not available in your region or that are not compatible with your device.

      -

      A game that offers hundreds of levels, different modes, and various power-ups

      -

      Candy Match 3 APK is a game that offers hundreds of levels for you to enjoy. Each level has a different goal, such as collecting a certain number of candies, clearing jelly blocks, breaking ice cubes, or freeing trapped animals. You will also encounter different modes, such as timed mode, where you have to complete the goal within a limited time, or moves mode, where you have a limited number of moves to complete the goal. To help you in your quest, you can use various power-ups, such as bombs, rockets, hammers, or rainbow candies. These power-ups can clear more candies from the board and give you extra points.

      -

      candy match 3 game download apk
      -candy match 3 mod apk unlimited money
      -candy match 3 offline apk
      -candy match 3 puzzle apk
      -candy match 3 saga apk
      -candy crush match 3 apk
      -candy land match 3 apk
      -candy fever match 3 apk
      -candy blast match 3 apk
      -candy frenzy match 3 apk
      -candy mania match 3 apk
      -candy swap match 3 apk
      -candy sweet match 3 apk
      -candy valley match 3 apk
      -candy world match 3 apk
      -jelly candy match 3 apk
      -lollipop candy match 3 apk
      -sugar candy match 3 apk
      -yummy candy match 3 apk
      -zumba candy match 3 apk
      -best candy match 3 games apk
      -free candy match 3 games apk
      -new candy match 3 games apk
      -top candy match 3 games apk
      -fun candy match 3 games apk
      -cute candy match 3 games apk
      -cool candy match 3 games apk
      -easy candy match 3 games apk
      -hard candy match 3 games apk
      -addictive candy match 3 games apk
      -colorful candy match 3 games apk
      -delicious candy match 3 games apk
      -sweetest candy match 3 games apk
      -amazing candy match 3 games apk
      -awesome candy match 3 games apk
      -beautiful candy match 3 games apk
      -charming candy match 3 games apk
      -classic candy match 3 games apk
      -fantastic candy match 3 games apk
      -magical candy match 3 games apk
      -wonderful candy match 3 games apk
      -super candy match 3 games apk
      -mega candy match 3 games apk
      -ultimate candy match 3 games apk
      -extreme candy match 3 games apk
      -crazy candy match 3 games apk
      -happy candy match 3 games apk
      -lovely candy match 3 games apk
      -tasty candy match 3 games apk

      -

      How to play Candy Match 3 APK?

      -

      Swipe your finger to move the candies and create matches of three or more

      -

      To play Candy Match 3 APK, you just need to swipe your finger on the screen to move the candies. You can move a candy horizontally or vertically, as long as there is an empty space next to it. When you move a candy, it will swap places with the candy next to it. If this swap creates a match of three or more candies of the same color and shape, they will disappear from the board and new ones will fall from the top. The more candies you match, the more points you get.

      -

      Complete the level goals before you run out of moves or time

      -

      To complete a level in Candy Match 3 APK, you have to achieve the goal that is shown at the top of the screen. The goal can be different for each level, such as collecting a certain number of candies, clearing jelly blocks, breaking ice cubes, or freeing trapped animals. You have to complete the goal before you run out of moves or time, depending on the mode you are playing. If you complete the goal, you will get stars and coins as rewards. If you fail, you will lose a life and have to try again.

      -

      Use boosters and special candies to clear obstacles and score more points

      -

      To make the game more fun and challenging, you will encounter various obstacles on the board, such as chocolate, licorice, marshmallow, or popcorn. These obstacles can block your moves or prevent you from matching candies. To clear them, you can use boosters and special candies. Boosters are items that you can buy with coins or get for free by watching ads or completing tasks. They can help you clear more candies from the board or give you extra moves or time. Some examples of boosters are bombs, rockets, hammers, or rainbow candies. Special candies are candies that you can create by matching four or more candies of the same color and shape. They have different effects depending on how you match them. Some examples of special candies are striped candies, wrapped candies, color bombs, or jelly fish.

      -

      Why should you play Candy Match 3 APK?

      -

      It is a fun and relaxing way to pass the time and train your brain

      -

      Candy Match 3 APK is a game that you can play anytime and anywhere, as long as you have your Android device with you. It is a game that does not require much skill or strategy, but rather your attention and concentration. It is a game that can help you relax and unwind after a long day or a stressful situation. It is also a game that can help you train your brain and improve your cognitive abilities, such as memory, logic, problem-solving, and spatial awareness.

      -

      It has colorful graphics, cute sounds, and smooth animations

      -

      Candy Match 3 APK is a game that has a bright and cheerful design that will appeal to anyone who loves candy. The game has colorful graphics that show different kinds of candies, such as lollipops, gummies, chocolates, or cookies. The game also has cute sounds that accompany your actions, such as popping, crunching, or cheering. The game also has smooth animations that show the movement of the candies and the effects of the boosters and special candies.

      -

      It has a variety of challenges, rewards, and surprises to keep you entertained

      -

      Candy Match 3 APK is a game that has a lot of content and features to keep you entertained for hours. The game has hundreds of levels for you to complete, each with a different goal and difficulty. The game also has different modes for you to try, such as timed mode, moves mode, arcade mode, or adventure mode. The game also has various power-ups for you to use and enhance your gameplay, such as bombs, rockets, hammers, or rainbow candies. The game also has various rewards and surprises for you to collect, such as stars, coins, lives, or daily bonuses. The game also has a leaderboard and achievements for you to compete with other players and show off your skills.

      -

      Tips and tricks for Candy Match 3 APK

      -

      Plan your moves ahead and look for the best combinations

      -

      To succeed in Candy Match 3 APK, you need to plan your moves ahead and look for the best combinations. You need to think about how your move will affect the board and what matches you can create with the new candies that will fall. You also need to look for the best combinations that will give you more points or clear more obstacles. For example, matching four candies will give you a striped candy that can clear a whole row or column. Matching five candies will give you a color bomb that can clear all the candies of the same color.

      -

      Try to create matches of four or more candies to get special candies

      -

      To make the game more fun and exciting, you should try to create matches of four or more candies to get special candies. Special candies have different effects depending on how you match them. For example, matching a striped candy with another striped candy will create a cross-shaped blast that will clear two rows and two columns. Matching a wrapped candy with another wrapped candy will create a big explosion that will clear a 3x3 area. Matching a color bomb with another color bomb will clear the whole board.

      -

      Save your boosters for the harder levels and use them wisely

      -

      To make the game easier and faster, you should save your boosters for the harder levels and use them wisely. Boosters are items that you can buy with coins or get for free by watching ads or completing tasks. They can help you clear more candies from the board or give you extra moves or time. For example, using a bomb will clear a 3x3 area around it. Using a rocket will clear a whole row or column. Using a hammer will clear any candy or obstacle of your choice. Using a rainbow candy will change all the candies of one color to another color.

      -

      Conclusion

      -

      Candy Match 3 APK is a sweet and fun puzzle game for Android devices that will keep you entertained for hours. It is a game that challenges you to match candies of the same color and shape to clear them from the board and complete various goals. It is a game that offers hundreds of levels, different modes, and various power-ups. It is a game that has colorful graphics, cute sounds, and smooth animations. It is a game that is fun and relaxing to play and train your brain. If you love candy and matching games, you should download Candy Match 3 APK from Uptodown or APKCombo and enjoy this delicious game.

      -

      FAQs

      -

      Q: How can I download Candy Match 3 APK?

      -

      A: You can download Candy Match 3 APK from Uptodown or APKCombo, two popular websites that offer safe and reliable APK files for Android devices. You just need to visit their websites, search for Candy Match 3 APK, and click on the download button. Then, you need to enable unknown sources on your device settings and install the APK file manually.

      -

      Q: How can I get more coins in Candy Match 3 APK?

      -

      A: You can get more coins in Candy Match 3 APK by completing levels, getting stars, watching ads, or buying them with real money.

      -

      Q: How can I get more lives in Candy Match 3 APK?

      -

      A: You can get more lives in Candy Match 3 APK by waiting for them to refill over time, asking your friends for help, watching ads, or buying them with real money.

      -

      Q: How can I unlock more levels in Candy Match 3 APK?

      -

      A: You can unlock more levels in Candy Match 3 APK by completing the previous levels with at least one star.

      -

      Q: How can I contact the developer of Candy Match 3 APK?

      -

      A: You can contact the developer of Candy Match 3 APK by sending an email to candymatch3@gmail.com or visiting their Facebook page at https://www.facebook.com/candymatch3/.

      197e85843d
      -
      -
      \ No newline at end of file diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Conduce tu coche por el trfico y gana dinero descargar Traffic Racer APK.md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Conduce tu coche por el trfico y gana dinero descargar Traffic Racer APK.md deleted file mode 100644 index 17bcb9a811a70e1d67dcb365782cb4c2bd218037..0000000000000000000000000000000000000000 --- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Conduce tu coche por el trfico y gana dinero descargar Traffic Racer APK.md +++ /dev/null @@ -1,165 +0,0 @@ - -

      Descargar Traffic Racer APK: Un Juego de Carreras Arcade Sin Fin

      -

      ¿Te gustan los juegos de carreras que te hacen sentir la adrenalina de conducir a toda velocidad por la carretera? ¿Quieres disfrutar de una experiencia de conducción realista con gráficos 3D y una física impresionante? ¿Quieres probar más de 40 coches diferentes y personalizarlos a tu gusto? Entonces, te encantará Traffic Racer, un juego de carreras arcade sin fin que puedes descargar gratis en tu dispositivo Android o iOS.

      -

      descargar traffic racer apk


      Download Ziphttps://urlin.us/2uSWKP



      -

      Traffic Racer es un juego desarrollado por SK Games, una empresa independiente fundada por Soner Kara en 2012. El juego se lanzó por primera vez en 2013 y desde entonces ha recibido más de 100 millones de descargas y miles de reseñas positivas. El juego se actualiza constantemente con nuevas características, mejoras y correcciones de errores.

      -

      En este artículo, te vamos a explicar qué es Traffic Racer, cuáles son sus características principales, cómo descargar e instalar el archivo APK en tu dispositivo Android, qué consejos y trucos puedes seguir para mejorar tu rendimiento en el juego, y qué opinan los usuarios y los expertos sobre este juego. Al final, también te responderemos algunas preguntas frecuentes que pueden surgirte sobre Traffic Racer.

      -

      ¿Qué es Traffic Racer?

      -

      Traffic Racer es un juego de carreras arcade sin fin que te pone al volante de un coche y te reta a conducir por la carretera esquivando el tráfico, ganando dinero, mejorando tu coche y comprando nuevos. El objetivo es ser uno de los conductores más rápidos en las clasificaciones globales y disfrutar de una conducción fluida y realista.

      -

      Características principales del juego

      -

      Estas son algunas de las características principales que hacen que Traffic Racer sea un juego tan divertido y adictivo:

      -
        -
      • Gráficos 3D impresionantes: El juego cuenta con unos gráficos 3D muy detallados y realistas que te hacen sentir como si estuvieras conduciendo de verdad. Los coches, los escenarios, los efectos de luz y sombra, todo está cuidado al máximo para ofrecerte una experiencia visual inmersiva.
      • -
      • Manejo de coche suave y realista: El juego tiene una física muy bien lograda que hace que el manejo del coche sea suave y realista. Puedes elegir entre dos opciones de control: inclinar el dispositivo o tocar la pantalla. También puedes ajustar la sensibilidad del volante y el nivel de asistencia. El juego te permite acelerar, frenar, cambiar de carril y usar las luces.
      • -
      • Más de 40 coches diferentes para elegir: El juego te ofrece una gran variedad de coches para conducir, desde sedanes hasta deportivos, pasando por camiones, autobuses y SUVs. Todos los coches son ficticios, pero se inspiran en modelos reales. Continuing the article: comunes y sus posibles soluciones:

        -
          -
        • El archivo APK está dañado o no se puede abrir: Esto puede ocurrir si el archivo APK que has descargado está corrupto, incompleto o no es compatible con tu dispositivo Android. Para solucionarlo, debes borrar el archivo APK que has descargado y volver a descargarlo desde otro sitio web confiable. También puedes comprobar si hay una versión más reciente o más antigua del archivo APK que sea compatible con tu dispositivo Android.
        • -
        • El juego no se ejecuta correctamente o se cierra inesperadamente: Esto puede ocurrir si el juego tiene algún error, si tu dispositivo Android no cumple con los requisitos mínimos del sistema, o si hay algún conflicto con otras aplicaciones instaladas en tu dispositivo. Para solucionarlo, debes asegurarte de que el juego está actualizado a la última versión, de que tu dispositivo Android tiene suficiente espacio de almacenamiento y memoria RAM disponibles, y de que cierras las aplicaciones que no estés usando mientras juegas. También puedes reiniciar tu dispositivo Android o reinstalar el juego si el problema persiste.
        • -
        • El juego no se conecta a internet o no muestra las clasificaciones globales: Esto puede ocurrir si tu conexión a internet es débil, inestable o no está disponible, o si el juego tiene algún problema con el servidor. Para solucionarlo, debes comprobar que tu conexión a internet funciona correctamente y que tienes los datos móviles o el wifi activados. También puedes intentar cambiar de red o reiniciar tu router si el problema es de tu conexión. Si el problema es del juego, debes esperar a que se resuelva por parte de los desarrolladores o contactar con ellos para reportar el problema.
        • -
        -

        Consejos y trucos para jugar a Traffic Racer

        -

        Ahora que ya sabes cómo descargar e instalar el archivo APK de Traffic Racer, te vamos a dar algunos consejos y trucos para que puedas jugar mejor y conseguir más monedas, puntos y coches. Estos son algunos de los consejos y trucos que puedes seguir:

        -

        Cómo ganar más monedas y puntos

        -

        Las monedas y los puntos son la moneda del juego que te permiten comprar y mejorar los coches, así como desbloquear nuevos escenarios y modos de juego. Para ganar más monedas y puntos, puedes hacer lo siguiente:

        -
          -
        • Conducir por el carril contrario: Si conduces por el carril contrario, ganarás más monedas y puntos por cada coche que adelantes. Sin embargo, también tendrás más riesgo de chocar con el tráfico que viene de frente, así que ten cuidado.
        • -
        • Conducir a más de 100 km/h: Si conduces a más de 100 km/h, ganarás más monedas y puntos por cada segundo que mantengas esa velocidad. Sin embargo, también tendrás menos tiempo para reaccionar ante los obstáculos, así que ten precaución.
        • -
        • Conducir cerca de los otros coches: Si conduces cerca de los otros coches sin chocar con ellos, ganarás más monedas y puntos por cada coche que pases cerca. Sin embargo, también tendrás más posibilidades de rozarlos o golpearlos, así que ten habilidad.
        • -
        • Usar el turbo: Si usas el turbo, aumentarás la velocidad de tu coche durante unos segundos y ganarás más monedas y puntos por cada coche que adelantes. Sin embargo, también consumirás más combustible y tendrás menos control sobre el coche, así que ten sentido común.
        • -
        • Completar las misiones: Si completas las misiones que te propone el juego, ganarás más monedas y puntos por cada misión completada. Las misiones pueden ser de diferentes tipos, como conducir una distancia determinada, alcanzar una velocidad máxima, adelantar un número de coches, etc.
        • -
        • Ver vídeos publicitarios: Si ves vídeos publicitarios desde el menú del juego, ganarás más monedas y puntos por cada vídeo visto. Los vídeos publicitarios suelen durar unos 30 segundos y te dan una recompensa al finalizarlos.
        • -
        -

        Cómo evitar el tráfico y los accidentes

        -

        El tráfico y los accidentes son los principales obstáculos que te encontrarás en Traffic Racer. Si chocas con otro coche, perderás la carrera y tendrás que empezar de nuevo. Para evitar el tráfico y los accidentes, puedes hacer lo siguiente:

        -

        descargar traffic racer apk mod
        -descargar traffic racer apk hackeado
        -descargar traffic racer apk ultima version
        -descargar traffic racer apk para android
        -descargar traffic racer apk gratis
        -descargar traffic racer apk full
        -descargar traffic racer apk sin internet
        -descargar traffic racer apk mega
        -descargar traffic racer apk mediafire
        -descargar traffic racer apk 2023
        -descargar traffic racer apk dinero infinito
        -descargar traffic racer apk mod menu
        -descargar traffic racer apk uptodown
        -descargar traffic racer apk sin anuncios
        -descargar traffic racer apk premium
        -descargar traffic racer apk 2.5
        -descargar traffic racer apk modificado
        -descargar traffic racer apk actualizado
        -descargar traffic racer apk sin root
        -descargar traffic racer apk facil y rapido
        -descargar traffic racer apk desde google play[^1^]
        -descargar traffic racer apk por mega
        -descargar traffic racer apk con todo desbloqueado
        -descargar traffic racer apk offline
        -descargar traffic racer apk original
        -descargar traffic racer apk para pc
        -descargar traffic racer apk sin virus
        -descargar traffic racer apk 2022
        -descargar traffic racer apk mod hack
        -descargar traffic racer apk ilimitado
        -descargar traffic racer apk para celular
        -descargar traffic racer apk con graficos hd
        -descargar traffic racer apk con trucos
        -descargar traffic racer apk con autos nuevos
        -descargar traffic racer apk con pistas diferentes
        -descargar traffic racer apk con sonido realista
        -descargar traffic racer apk con modo nocturno
        -descargar traffic racer apk con personalizacion de autos
        -descargar traffic racer apk con misiones y recompensas
        -descargar traffic racer apk con multijugador online

        -
          -
        • Observar el tráfico: Antes de cambiar de carril, debes observar el tráfico que hay delante y detrás de ti, y elegir el carril más despejado. También debes estar atento a los coches que cambian de carril sin previo aviso, y a los que frenan o aceleran repentinamente.
        • -
        • Anticiparse a las situaciones: Debes anticiparte a las situaciones que puedan ocurrir en la carretera, como las curvas, los cruces, las señales, los semáforos, etc. Debes ajustar tu velocidad y tu posición en función de lo que veas venir, y evitar las maniobras bruscas o arriesgadas.
        • -
        • Mantener una distancia de seguridad: Debes mantener una distancia de seguridad con los otros coches, tanto por delante como por detrás. Esto te dará más tiempo para reaccionar ante cualquier imprevisto, y evitará que choques con ellos si frenan o aceleran.
        • -
        • Usar las luces: Debes usar las luces de tu coche para indicar tus intenciones a los otros conductores. Por ejemplo, debes usar las luces intermitentes para señalizar que vas a cambiar de carril, y las luces de freno para avisar que vas a reducir la velocidad. Esto evitará confusiones y malentendidos con el tráfico.
        • -
        • No conducir en estado de ebriedad o cansancio: Debes evitar conducir en Traffic Racer si estás bajo los efectos del alcohol, las drogas o el cansancio. Estas condiciones afectan negativamente a tu capacidad de atención, concentración, coordinación y reflejos, y aumentan el riesgo de sufrir un accidente.
        • -
        -

        Cómo usar el controlador MFi en iOS

        -

        Si tienes un dispositivo iOS y quieres jugar a Traffic Racer con un controlador MFi (Made for iPhone/iPad/iPod), puedes hacerlo siguiendo estos pasos:

        -
          -
        1. Conectar el controlador MFi a tu dispositivo iOS: Para conectar el controlador MFi a tu dispositivo iOS, debes seguir las instrucciones del fabricante del controlador. Normalmente, se trata de encender el controlador y activar el bluetooth en tu dispositivo iOS. Luego, debes emparejarlos desde el menú de ajustes de tu dispositivo iOS.
        2. -
        3. Abrir el juego Traffic Racer: Una vez que hayas conectado el controlador MFi a tu dispositivo iOS, debes abrir el juego Traffic Racer desde tu pantalla de inicio.
        4. -
        5. Configurar el controlador MFi en el juego: Una vez que hayas abierto el juego Traffic Racer, debes ir al menú de opciones y seleccionar la opción Controlador MFi. Allí podrás ver los botones asignados al controlador MFi y cambiarlos si lo deseas. También podrás ajustar la sensibilidad del volante y el nivel de asistencia.
        6. -
        7. Jugar con el controlador MFi: Una vez que hayas configurado el controlador MFi en el juego, podrás jugar con él como si fuera un volante real. Podrás acelerar, frenar, cambiar de carril y usar el turbo con los botones del controlador MFi.
        8. -
        -

        Opiniones y reseñas de Traffic Racer

        -

        Traffic Racer es un juego que ha recibido muchas opiniones y reseñas positivas por parte de los usuarios y los expertos. Estas son algunas de las opiniones y reseñas más destacadas:

        -

        Lo que dicen los usuarios

        -

        Estos son algunos de los comentarios que han dejado los usuarios sobre Traffic Racer en la tienda oficial de aplicaciones:

        - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
        UsuarioComentarioPuntuación
        J Continuing the article: uanMe encanta este juego, es muy divertido y adictivo. Los gráficos son muy buenos y los coches son muy variados. Lo único que le falta es que se pueda jugar online con otros jugadores.5 estrellas
        LauraEs un juego muy entretenido y fácil de jugar. Me gusta que se pueda personalizar el coche y que haya diferentes escenarios y modos de juego. Lo recomiendo para pasar el rato.4 estrellas
        MarioEs un buen juego de carreras, pero tiene algunos fallos. A veces se cierra solo o se queda colgado. También me gustaría que hubiera más opciones de control y que se pudiera escuchar música mientras se juega.3 estrellas
        SofíaNo me gusta este juego, me parece muy aburrido y repetitivo. Los coches son muy caros y las carreras son muy cortas. Además, el juego tiene mucha publicidad y consume mucha batería.2 estrellas
        PedroEs el peor juego de carreras que he probado. Los gráficos son malísimos, el manejo del coche es horrible y el juego está lleno de bugs. No lo recomiendo para nada, es una pérdida de tiempo y de espacio.1 estrella
        -

        Lo que dicen los expertos

        -

        Estos son algunos de los análisis que han hecho los expertos sobre Traffic Racer en diferentes medios especializados:

        - - - - - - - - - - - - - - - - -AppAdvice - - - -
        MedioAnálisisPuntuación
        Android AuthorityTraffic Racer es un juego de carreras arcade sin fin que ofrece una experiencia de conducción realista y divertida. El juego cuenta con unos gráficos 3D impresionantes, una gran variedad de coches y escenarios, y varios modos de juego para elegir. El juego es gratuito, pero tiene compras integradas opcionales para obtener más monedas y puntos. El juego es ideal para los amantes de la velocidad y la adrenalina.8/10
        iOS App Store ReviewTraffic Racer is a racing game that puts you behind the wheel of a car and challenges you to drive through the highway dodging traffic, earning money, upgrading your car and buying new ones. The game features stunning 3D graphics, smooth and realistic car handling, over 40 different cars to choose from, and 5 different environments to drive in. The game is free, but it has in-app purchases optional to get more coins and points. The game is perfect for those who love speed and adrenaline.4/5
        Traffic Racer is a racing game that lets you drive a car and challenge yourself to avoid traffic, earn money, upgrade your car and buy new ones. The game has amazing 3D graphics, smooth and realistic car handling, over 40 different cars to choose from, and 5 different environments to drive in. The game is free, but it has in-app purchases optional to get more coins and points. The game is great for those who love speed and adrenaline.4.5/5
        -

        Conclusión

        -

        Traffic Racer es un juego de carreras arcade sin fin que te ofrece una experiencia de conducción realista y divertida. El juego tiene unos gráficos 3D impresionantes, una gran variedad de coches y escenarios, y varios modos de juego para elegir. El juego es gratuito, pero tiene compras integradas opcionales para obtener más monedas y puntos.

        -

        Si quieres descargar Traffic Racer APK en tu dispositivo Android, puedes seguir los pasos que te hemos explicado en este artículo. También te hemos dado algunos consejos y trucos para jugar mejor y conseguir más monedas, puntos y coches. Además, te hemos mostrado algunas opiniones y reseñas de Traffic Racer por parte de los usuarios y los expertos.

        -

        Esperamos que este artículo te haya sido útil y que disfrutes de Traffic Racer. Si tienes alguna duda o sugerencia sobre el juego o el artículo, no dudes en dejarnos un comentario. ¡Gracias por leernos!

        -

        Preguntas frecuentes

        -

        A continuación, te respondemos algunas preguntas frecuentes que pueden surgirte sobre Traffic Racer:

        -

        ¿Traffic Racer es un juego seguro?

        -

        Sí, Traffic Racer es un juego seguro que no contiene virus, malware ni contenido inapropiado. Sin embargo, debes tener cuidado al descargar el archivo APK del juego desde sitios web externos, ya que pueden contener archivos dañinos o falsos. Te recomendamos que solo descargues el archivo APK desde sitios web confiables y que verifiques los permisos que solicita el juego antes de instalarlo.

        -

        ¿Traffic Racer es un juego online o offline?

        -

        Traffic Racer es un juego que se puede jugar tanto online como offline. Si juegas online, podrás acceder a las clasificaciones globales y ver tu posición y la de otros jugadores. También podrás actualizar el juego con las últimas novedades y correcciones de errores. Si juegas offline, podrás jugar sin necesidad de conexión a internet, pero no podrás ver las clasificaciones globales ni actualizar el juego.

        -

        ¿Traffic Racer tiene multijugador?

        -

        No, Traffic Racer no tiene multijugador. Es un juego de carreras arcade sin fin que solo se puede jugar en modo individual. Sin embargo, puedes competir contra otros jugadores a través de las clasificaciones globales y ver quién es el conductor más rápido del mundo.

        -

        ¿Traffic Racer tiene trucos o hacks?

        -

        No, Traffic Racer no tiene trucos o hacks oficiales. Es un juego que se basa en la habilidad y la destreza del jugador para conducir por la carretera esquivando el tráfico. Sin embargo, hay algunos sitios web que ofrecen trucos o hacks no oficiales para obtener más monedas y puntos en el juego. Te advertimos que estos trucos o hacks pueden ser ilegales, inseguros o dañar tu dispositivo o tu cuenta del juego. Te recomendamos que no los uses y que juegues de forma honesta y legal.

        -

        ¿Traffic Racer tiene versión para PC?

        -

        No, Traffic Racer no tiene versión para PC. Es un juego diseñado para dispositivos móviles Android e iOS. Sin embargo, hay algunos emuladores de Android que te permiten jugar a Traffic Racer en tu PC. Un emulador de Android es un programa que simula el sistema operativo Android en tu PC y te permite ejecutar aplicaciones y juegos de Android en tu PC. Algunos ejemplos de emuladores de Android son BlueStacks, NoxPlayer o LDPlayer.

        197e85843d
        -
        -
        \ No newline at end of file diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Download 500px Photos Faster and Easier with This Chrome Extension.md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Download 500px Photos Faster and Easier with This Chrome Extension.md deleted file mode 100644 index 2335259fd85870192ddf1a574a4524c993072944..0000000000000000000000000000000000000000 --- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Download 500px Photos Faster and Easier with This Chrome Extension.md +++ /dev/null @@ -1,125 +0,0 @@ -
        -

        How to Download 500px Photos for Free

        -

        Do you love photography and want to discover and share amazing photos from around the world? If so, you might have heard of 500px, a photography community that helps you find inspiration and connect with photographers. You can also get paid for your work by licensing your photos through 500px.

        -

        But what if you want to download some of the stunning photos from 500px for your personal use, such as setting them as wallpapers, printing them, or using them for inspiration? Unfortunately, you can't directly download a photo from 500px for free, as they are protected by copyright. However, if you need to save a photo for legal use, such as fair use or educational purposes, there are some tools that will help you.

        -

        download 500px


        Download Filehttps://urlin.us/2uSU5m



        -

        In this article, we will show you three methods to download photos from 500px for free. You will need a web browser and an internet connection for all of them. Let's get started!

        -

        Method 1: Using 500pxdownloader

        -

        This method uses a website called 500pxdownload.com that helps you to download images from 500px. Here are the steps:

        -
          -
        1. Navigate to the image which you want to download. Open 500px.com in your web browser and click on the image to expand. Log-in isn't required here.
        2. -
        3. Copy the URL of the image. Move your mouse to the URL box and copy the link. E.g : https://500px.com/photo/3750980/oh-joy-by-i-gede-lila-kantiana
        4. -
        5. Open 500pxdownload.com in a new window. This website helps you to download images from 500px.
        6. -
        7. Paste your 500px image link in the box. When you're done, hit the Magic Decoder Button, under the URL box. This will open your photograph in a new window.
        8. -
        9. Click on the image to download. Hit the Download button from the dialog box to save the image to your device. You can choose the location and name of the file.
        10. -
        -

        Congratulations, you have successfully downloaded a photo from 500px using 500pxdownloader!

        -

        Method 2: Using Savelink.info

        -

        This method uses another website called Savelink.info that allows you to download photos from various websites, including 500px. Here are the steps:

        -
          -
        1. Open the 500px image which you want to download. Go to 500px.com and click on the photo you like. You don't need to log in for this.
        2. -
        3. Copy the URL of your preferred photo. Select the link from the URL box and copy it. E.g : https://500px.com/photo/1018375251/autumn-in-the-city-by-alexey-krasnitsky
        4. -
        5. Visit www.savelink.info/sites/500px in your browser. This website helps you to download photos from 500px and other sites.
        6. -
        7. Paste the 500px link into the box. Click on the box and paste your copied link there.
        8. -
        9. Click on the button with the three dots on it. This will show you a list of available sizes and qualities for your photo.
        10. -
        11. Save the image to your device. Right-click on the size and quality you want and choose Save link as... from the menu. You can also left-click on it and then click on the Download button that appears. Choose a location and name for your file and save it.
        12. -
        -

        Well done, you have successfully downloaded a photo from 500px using Savelink.info!

        -

        Method 3: Using the 500px app

        -

        This method uses the official 500px app for Android or iOS devices. You will need to download the app and create a free account for this method. Here are the steps:

        -
          -
        1. Download the app for Android or iOS. Go to Google Play Store or Apple App Store and search for 500px. Download and install the app on your device.
        2. -
        3. Log in or sign up for a free account. Open the app and enter your email and password, or sign up with Facebook, Google, or Apple.
        4. -
        5. Browse or search for photos you like. You can explore different categories, such as Popular, Editors' Choice, Upcoming, Fresh, or Following. You can also use the search bar to find photos by keywords, tags, or usernames.
        6. -
        7. Tap on the photo to view it in full screen. You can also swipe left or right to see more photos.
        8. -
        9. Tap on the three-dot icon at the bottom right corner. This will open a menu with various options, such as Like, Comment, Share, Add to Gallery, or Download photo.
        10. -
        11. Tap on Download photo and choose a size and quality. You can choose from Small (800 px), Medium (2048 px), Large (4096 px), or Original (full resolution). The app will ask for your permission to access your photos, media, and files on your device.
        12. -
        13. Save the image to your device. The app will download the photo and save it to your device's gallery or camera roll. You can also find it in your Downloads folder.
        14. -
        -

        Awesome, you have successfully downloaded a photo from 500px using the 500px app!

        -

        Conclusion

        -

        In this article, we have shown you three methods to download photos from 500px for free. You can use any of these methods depending on your preference and device. However, please remember that downloading photos from 500px does not give you the right to use them for commercial purposes or without crediting the original photographers. You should always respect their work and follow their license terms.

        -

        How to download 500px photos for free
        -Download 500px app for Android and iOS
        -Download 500px images with 500pxdownloader
        -Download 500px photos with Savelink.info
        -Download high-quality photos from 500px
        -Download 500px wallpapers for desktop and mobile
        -Download 500px portfolio website templates
        -Download 500px presets for Lightroom and Photoshop
        -Download 500px videos and articles from Resource Hub
        -Download 500px licensing agreement and terms of service
        -How to download 500px photos in bulk
        -Download 500px photos without watermark
        -Download 500px photos with original resolution and metadata
        -Download 500px photos with attribution and credit
        -Download 500px photos for personal and commercial use
        -How to download 500px photos on Mac and Windows
        -Download 500px photos on Chrome and Firefox
        -Download 500px photos on Safari and Edge
        -Download 500px photos on Opera and Brave
        -Download 500px photos on Linux and Ubuntu
        -How to download 500px photos on iPhone and iPad
        -Download 500px photos on Android phones and tablets
        -Download 500px photos on Samsung and Huawei devices
        -Download 500px photos on LG and Sony devices
        -Download 500px photos on OnePlus and Xiaomi devices
        -How to download 500px photos on social media platforms
        -Download 500px photos on Facebook and Instagram
        -Download 500px photos on Twitter and Pinterest
        -Download 500px photos on Reddit and Tumblr
        -Download 500px photos on LinkedIn and Medium
        -How to download 500px photos for different purposes and projects
        -Download 500px photos for wallpapers and backgrounds
        -Download 500px photos for presentations and slideshows
        -Download 500px photos for websites and blogs
        -Download 500px photos for flyers and posters
        -Download 500px photos for brochures and catalogs
        -Download 500px photos for business cards and logos
        -Download 500px photos for newsletters and magazines
        -Download 500px photos for ebooks and reports
        -Download 500px photos for calendars and planners

        -

        We hope you found this article helpful and informative. If you have any questions or feedback, please let us know in the comments below. Happy downloading!

        -

        FAQs

        -

        Q1: Is it legal to download photos from 500px?

        -

        A1: It depends on how you use them. If you download photos from 500px for personal use only, such as setting them as wallpapers, printing them, or using them for inspiration, then it is legal under fair use doctrine. However, if you download photos from 500px for commercial use, such as selling them, using them for advertising, or modifying them, then it is illegal unless you have the permission of the photographers or you buy a license from 500px. You should always check the license terms of each photo before downloading and using them.

        -

        Q2: How can I use the photos I download from 500px?

        -

        A2: You can use the photos you download from 500px for various purposes, as long as you respect the rights of the photographers and follow their license terms. Some of the common ways to use the photos are:

        -
          -
        • Setting them as wallpapers for your desktop, laptop, tablet, or smartphone.
        • -
        • Printing them and framing them as art pieces for your home or office.
        • -
        • Using them as backgrounds or elements for your graphic design projects.
        • -
        • Using them as references or inspiration for your own photography or art.
        • -
        • Sharing them with your friends or family on social media or messaging apps.
        • -
        -

        However, you should always credit the original photographers when you use their photos and link back to their 500px profiles. You should also avoid using their photos for any illegal, offensive, or harmful purposes.

        -

        Q3: How can I upload my own photos to 500px?

        -

        A3: If you have some amazing photos that you want to share with the world and get paid for them, you can upload them to 500px and join the community of millions of photographers. Here are the steps to upload your photos to 500px:

        -
          -
        1. Create a free account on 500px. Go to 500px.com and click on Join in the top right corner. You can sign up with your email and password, or use Facebook, Google, or Apple.
        2. -
        3. Verify your email address. Check your inbox for a confirmation email from 500px and click on the link to verify your account.
        4. -
        5. Upload your photos. Click on the Upload button in the top right corner and choose one or more photos from your device. You can also drag and drop your photos into the upload window.
        6. -
        7. Edit your photos. You can crop, rotate, adjust, filter, or watermark your photos using the built-in editor. You can also add titles, descriptions, tags, categories, locations, and privacy settings to your photos.
        8. -
        9. Publish your photos. When you are done editing your photos, click on Publish in the bottom right corner. Your photos will be uploaded to your profile and visible to other users.
        10. -
        -

        Congratulations, you have successfully uploaded your photos to 500px!

        -

        Q4: How can I get paid for my photos on 500px?

        -

        A4: If you want to earn money from your photos on 500px, you can license them through 500px Licensing. This means that you allow other people or companies to use your photos for commercial purposes in exchange for a royalty fee. Here are the steps to license your photos on 500px:

        -
          -
        1. Opt-in for Licensing. Go to https://licensing.500px.com/ and click on Start Licensing Your Photos. You will need to agree to the Contributor Agreement and fill in some information about yourself and your payment method.
        2. -
        3. Select your photos for Licensing. Go to https://web.500px.com/manage and click on the Licensing tab. You will see a list of your uploaded photos that are eligible for licensing. You can select the ones that you want to license by clicking on the checkbox next to each photo.
        4. -
        5. Submit your photos for review. After selecting your photos, click on Submit Selected Photos in the bottom right corner. Your photos will be sent to the 500px Licensing team for review and approval.
        6. -
        7. Wait for approval and payment. Once your photos are approved, they will be added to the 500px Licensing collection and available for buyers to purchase. You will receive a notification email when someone buys a license for your photo. You will also see your earnings in your account dashboard. You can withdraw your earnings once they reach $50 USD.
        8. -
        -

        Well done, you have successfully licensed your photos on 500px!

        -

        Q5: How can I contact the photographers on 500px?

        -

        A5: If you want to contact the photographers on 500px, you can do so by sending them a message through their profile page. Here are the steps to contact a photographer on 500px:

        -
          -
        1. Find the photographer's profile page. Go to 500px.com -
        2. Click on the Message button. This is located under the photographer's profile picture and name. You will need to log in or sign up for a free account to send a message.
        3. -
        4. Type your message and click on Send. You can write anything you want, such as complimenting their work, asking for permission to use their photos, or collaborating with them. Be polite and respectful in your message.
        5. -
        6. Wait for a reply. The photographer will receive your message in their inbox and may reply to you if they are interested. You can check your inbox by clicking on the envelope icon in the top right corner of the website.
        7. -
        -

        Great, you have successfully contacted a photographer on 500px!

        197e85843d
        -
        -
        \ No newline at end of file diff --git a/spaces/1phancelerku/anime-remove-background/Cooking Journey Cooking Games Mod APK - Cook Serve and Have Fun.md b/spaces/1phancelerku/anime-remove-background/Cooking Journey Cooking Games Mod APK - Cook Serve and Have Fun.md deleted file mode 100644 index a731a014cac2a7ff7610689bae101042268e33ad..0000000000000000000000000000000000000000 --- a/spaces/1phancelerku/anime-remove-background/Cooking Journey Cooking Games Mod APK - Cook Serve and Have Fun.md +++ /dev/null @@ -1,124 +0,0 @@ - -

        Download Cooking Journey Cooking Games Mod Apk: A Fun and Addictive Cooking Game for Android

        -

        Are you looking for an addictive cooking game that will challenge your time-management skills and culinary creativity? Do you want to travel around the world and experience different cuisines and cultures? Do you want to enjoy a fine art design game with beautiful graphics and sound effects? If you answered yes to any of these questions, then you should download Cooking Journey Cooking Games Mod Apk, a modified version of the original game that gives you unlimited money and gems to unlock all the features and have more fun.

        -

        What is Cooking Journey Cooking Games?

        -

        Cooking Journey Cooking Games is a free time-management cooking game developed by Cooking Chef Studio. In this game, you can cook delicious food, meals, and desserts from all over the world, explore great restaurants, and become a master chef. Here are some of the features of this game:

        -

        download cooking journey cooking games mod apk


        Download ····· https://jinyurl.com/2uNTqk



        -

        A time-management cooking game with various cuisines and restaurants

        -

        In this game, you can serve hundreds of exotic recipes from different countries, such as France, Italy, Mexico, China, Japan, and more. You can also discover many different restaurants, such as sushi bar, pizza shop, burger joint, ice cream parlor, taco truck, and more. You can also practice your cooking and management skills by preparing the ingredients, cooking the food, plating the dishes, serving the customers, collecting the coins, and cleaning the kitchen.

        -

        A fine art design game with beautiful graphics and sound effects

        -

        This game has a fine art design that makes you feel like you are in a real restaurant. The graphics are colorful and detailed, the animations are smooth and realistic, and the sound effects are lively and immersive. You can also enjoy the different themes and styles of each restaurant, such as Parisian elegance, Roman romance, New York chic, Mexican fiesta, Japanese zen, and more.

        -

        A free to play game with offline mode and in-app purchases

        -

        This game is free to download and play on your Android device. You can also play it offline without an internet connection. However, if you want to access some extra features or speed up your progress, you can also make in-app purchases with real money. For example, you can buy more coins or gems to unlock new restaurants or ingredients. You can also buy magic boosts to complete special cooking goals or get more tips from customers.

        -

        What is Cooking Journey Cooking Games Mod Apk?

        -

        Cooking Journey Cooking Games Mod Apk is a modified version of the original game that gives you unlimited money and gems to unlock all the features and have more fun. Here are some of the benefits of using this mod apk:

        -

        A modified version of the original game with unlimited money and gems

        -

        With this mod apk, you don't have to worry about running out of money or gems in the game. You can use them to unlock all the restaurants, ingredients, and kitchen appliances that you want. You can also use them to buy magic boosts or tips to make your cooking easier and faster. You can enjoy the game without any limitations or restrictions.

        -

        A way to unlock all the restaurants, ingredients, and kitchen appliances

        -

        With this mod apk, you can access all the content that the game has to offer. You can explore all the cuisines and restaurants that are available in the game, such as French bakery, Italian pasta, Mexican tacos, Chinese noodles, Japanese sushi, and more. You can also cook with all the ingredients and kitchen appliances that are available in the game, such as cheese, tomatoes, mushrooms, eggs, flour, butter, milk, oven, mixer, fryer, toaster, and more. You can have more variety and fun in your cooking.

        -

        A safe and easy to install file from a trusted source

        -

        This mod apk is safe and easy to install on your Android device. You don't need to root your device or use any complicated tools or methods. You just need to download the mod apk file from the link below and follow the simple steps to install it. The file is from a trusted source and has been tested for viruses and malware. You can download it without any worries or risks.

        -

        How to Download and Install Cooking Journey Cooking Games Mod Apk?

        -

        If you want to download and install Cooking Journey Cooking Games Mod Apk on your Android device, you just need to follow these simple steps:

        -

        How to download cooking journey cooking games mod apk for free
        -Cooking journey cooking games mod apk latest version
        -Cooking journey cooking games mod apk unlimited money
        -Cooking journey cooking games mod apk offline
        -Cooking journey cooking games mod apk no ads
        -Cooking journey cooking games mod apk hack
        -Cooking journey cooking games mod apk cheats
        -Cooking journey cooking games mod apk review
        -Cooking journey cooking games mod apk gameplay
        -Cooking journey cooking games mod apk features
        -Cooking journey cooking games mod apk download link
        -Cooking journey cooking games mod apk installation guide
        -Cooking journey cooking games mod apk requirements
        -Cooking journey cooking games mod apk compatibility
        -Cooking journey cooking games mod apk update
        -Cooking journey cooking games mod apk tips and tricks
        -Cooking journey cooking games mod apk best recipes
        -Cooking journey cooking games mod apk challenges
        -Cooking journey cooking games mod apk levels
        -Cooking journey cooking games mod apk rewards
        -Cooking journey cooking games mod apk rating
        -Cooking journey cooking games mod apk feedback
        -Cooking journey cooking games mod apk support
        -Cooking journey cooking games mod apk developer
        -Cooking journey cooking games mod apk alternatives
        -Download cooking journey cooking games for android
        -Download cooking journey cooking games for ios
        -Download cooking journey cooking games for pc
        -Download cooking journey cooking games for mac
        -Download cooking journey cooking games for windows
        -Download cooking journey cooking games for laptop
        -Download cooking journey cooking games for tablet
        -Download cooking journey cooking games for firestick
        -Download cooking journey cooking games for smart tv
        -Download cooking journey cooking games for chromebook
        -Download free modded cooking games for android
        -Download free modded cooking games for ios
        -Download free modded cooking games for pc
        -Download free modded cooking games for mac
        -Download free modded cooking games for windows
        -Download free modded cooking games for laptop
        -Download free modded cooking games for tablet
        -Download free modded cooking games for firestick
        -Download free modded cooking games for smart tv
        -Download free modded cooking games for chromebook

        -

        Step 1: Download the mod apk file from the link below

        -

        Click on the link below to download the mod apk file of Cooking Journey Cooking Games. The file size is about 100 MB and it will take a few minutes to download depending on your internet speed.

        -

        Download Cooking Journey Cooking Games Mod Apk

        -

        Step 2: Enable unknown sources on your device settings

        -

        Before you can install the mod apk file, you need to enable unknown sources on your device settings. This will allow you to install apps from sources other than Google Play Store. To do this, go to your device settings > security > unknown sources > enable.

        -

        Step 3: Install the mod apk file and enjoy the game

        -

        After you have enabled unknown sources, go to your file manager and locate the mod apk file that you have downloaded. Tap on it and follow the instructions to install it. Once the installation is complete, open the game and enjoy it with unlimited money and gems.

        -

        Tips and Tricks to Play Cooking Journey Cooking Games

        -

        If you want to play Cooking Journey Cooking Games like a pro, here are some tips and tricks that you can use:

        -

        Use magic boosts to complete special cooking goals

        -

        In some levels, you will have special cooking goals that require you to cook a certain number of dishes or serve a certain number of customers in a limited time. To complete these goals, you can use magic boosts that will help you cook faster or serve more customers. For example, you can use the fast cook boost that will make your food cook instantly or the double tip boost that will make your customers pay twice as much.

        -

        Get combos and earn big bonus, coins, and tips

        -

        To earn more money and tips in the game, you should try to get combos by serving customers quickly and accurately. The more customers you serve in a row without making any mistakes or delays, the higher your combo meter will go. When your combo meter is full, you will get a big bonus of coins and tips that will boost your income.

        -

        Decorate your restaurants and upgrade your ingredients

        -

        To make your restaurants more attractive and profitable, you should decorate them with various items and themes. You can buy decorations with coins or gems in the shop menu. Decorations will increase your restaurant's popularity and customer satisfaction. You should also upgrade your ingredients with coins or gems in the shop menu. Upgrading your ingredients will improve their quality and taste. This will make your customers happier and more generous with their tips.

        -

        Reviews and Ratings of Cooking Journey Cooking Games

        -

        Cooking Journey Cooking Games is a popular and well-received game among users who love cooking games. Here are some of the reviews and ratings of this game:

        -

        Positive reviews from users who love the game

        -

        Many users have given positive feedback about this game. They have praised its graphics, sound effects, gameplay, variety , and fun of this game. They have also appreciated its offline mode, in-app purchases, and mod apk. Here are some of the positive reviews from Google Play and App Store:

        -

        "This is one of the best cooking games I have ever played. The graphics are amazing, the sound effects are realistic, and the gameplay is challenging and addictive. I love the different cuisines and restaurants that I can explore. I also like that I can play it offline and buy coins and gems with real money. The mod apk is also awesome, it gives me unlimited money and gems to unlock everything. I highly recommend this game to anyone who loves cooking games."

        -

        "I am addicted to this game. It is so fun and relaxing to cook delicious food and serve happy customers. The game has a fine art design that makes me feel like I am in a real restaurant. The game also has a lot of variety and content that keeps me entertained for hours. I also appreciate that the game has an offline mode and in-app purchases that are optional. The mod apk is also great, it makes the game more enjoyable with unlimited money and gems."

        -

        Negative reviews from users who encounter some bugs or glitches

        -

        Some users have given negative feedback about this game. They have complained about some bugs or glitches that affect their gaming experience. They have also suggested some improvements or features that they would like to see in the game. Here are some of the negative reviews from Google Play and App Store:

        -

        "The game is good, but it has some bugs that need to be fixed. Sometimes the game freezes or crashes when I am playing. Sometimes the customers disappear or don't pay me. Sometimes the ingredients or kitchen appliances don't work properly. These bugs are annoying and frustrating. Please fix them as soon as possible."

        -

        "The game is nice, but it has some glitches that ruin the fun. Sometimes the game lags or slows down when I am cooking. Sometimes the coins or gems don't add up correctly. Sometimes the magic boosts don't work or expire too soon. These glitches are disappointing and irritating. Please improve them as soon as possible."

        -

        Overall rating of 4.8 out of 5 stars on Google Play and App Store

        -

        Despite some minor issues, Cooking Journey Cooking Games is still a highly rated game among users who love cooking games. The game has an overall rating of 4.8 out of 5 stars on both Google Play and App Store, based on thousands of reviews and ratings. This shows that the game is popular and well-liked by most users who play it.

        -

        Conclusion

        -

        Cooking Journey Cooking Games is a fun and addictive cooking game for Android devices that will challenge your time-management skills and culinary creativity. You can cook delicious food, meals, and desserts from all over the world, explore great restaurants, and become a master chef. You can also enjoy a fine art design game with beautiful graphics and sound effects, a free to play game with offline mode and in-app purchases, and a mod apk that gives you unlimited money and gems to unlock all the features and have more fun.

        -

        If you want to download Cooking Journey Cooking Games Mod Apk, you just need to follow these simple steps:

        -
          -
        1. Download the mod apk file from the link below
        2. -
        3. Enable unknown sources on your device settings
        4. -
        5. Install the mod apk file and enjoy the game
        6. -
        -

        If you want to play Cooking Journey Cooking Games like a pro, you can use these tips and tricks:

        -
          -
        • Use magic boosts to complete special cooking goals
        • -
        • Get combos and earn big bonus, coins, and tips
        • -
        • Decorate your restaurants and upgrade your ingredients
        • -
        -

        Cooking Journey Cooking Games is a popular and well-received game among users who love cooking games. The game has an overall rating of 4.8 out of 5 stars on both Google Play and App Store.

        -

        If you are looking for an addictive cooking game that will challenge your time-management skills and culinary creativity, you should download Cooking Journey Cooking Games Mod Apk today.

        -

        FAQs

        -

        Here are some of the frequently asked questions about Cooking Journey Cooking Games Mod Apk:

        -

        Q: Is Cooking Journey Cooking Games Mod Apk safe to use?

        -

        A: Yes, Cooking Journey Cooking Games Mod Apk is safe to use on your Android device. The file is from a trusted source and has been tested for viruses and malware.

        -

        Q: Do I need to root my device to use Cooking Journey Cooking Games Mod Apk?

        -

        A: No, you don't need to root your device to use Cooking Journey Cooking Games Mod Ap k. You just need to enable unknown sources on your device settings to install the mod apk file.

        -

        Q: Can I play Cooking Journey Cooking Games Mod Apk offline?

        -

        A: Yes, you can play Cooking Journey Cooking Games Mod Apk offline without an internet connection. However, some features or content may require an internet connection to access.

        -

        Q: Can I update Cooking Journey Cooking Games Mod Apk to the latest version?

        -

        A: Yes, you can update Cooking Journey Cooking Games Mod Apk to the latest version when it is available. However, you may need to download and install the new mod apk file from the same source as before.

        -

        Q: Can I play Cooking Journey Cooking Games Mod Apk with my friends?

        -

        A: Yes, you can play Cooking Journey Cooking Games Mod Apk with your friends. You can connect your game to Facebook and invite your friends to join you in your cooking journey. You can also share your achievements and progress with your friends on social media.

        401be4b1e0
        -
        -
        \ No newline at end of file diff --git a/spaces/1phancelerku/anime-remove-background/Experience the Thrill of 8 Ball Pool with Pure APK for Windows.md b/spaces/1phancelerku/anime-remove-background/Experience the Thrill of 8 Ball Pool with Pure APK for Windows.md deleted file mode 100644 index 8520dd6f896d12a4b6271788e0a03d16cebb0557..0000000000000000000000000000000000000000 --- a/spaces/1phancelerku/anime-remove-background/Experience the Thrill of 8 Ball Pool with Pure APK for Windows.md +++ /dev/null @@ -1,147 +0,0 @@ - - - - - - - - - -
        - Table 2: Article with HTML formatting
        -

        8 Ball Pool Pure APK: How to Download and Play on Your PC

        -

        Introduction

        -

        Do you love playing pool games on your mobile device but wish you could enjoy them on a bigger screen with better graphics and controls? If so, you might want to try out 8 Ball Pool Pure APK, a modified version of the popular pool game 8 Ball Pool from the popular pool game 8 Ball Pool by Miniclip. 8 Ball Pool Pure APK is a modified version that gives you unlimited coins, cash, and cues without any ads or restrictions. You can download and play 8 Ball Pool Pure APK on your PC using an emulator like Gameloop, which lets you enjoy the game on a larger screen with better graphics and controls. In this article, we will show you how to download and play 8 Ball Pool Pure APK on your PC using Gameloop Emulator. We will also tell you some of the benefits of using APKPure, a reliable source for downloading APK files.

        -

        How to Download 8 Ball Pool Pure APK on Your PC

        -

        To download and play 8 Ball Pool Pure APK on your PC, you will need to follow these steps:

        -

        8 ball pool pure apk


        Downloadhttps://jinyurl.com/2uNO0W



        -

        Step 1: Download Gameloop Emulator

        -

        Gameloop Emulator is a free and official Android emulator that allows you to run mobile games on your PC. It has a smooth and fast performance, a large game library, and a user-friendly interface. Gameloop Emulator also has advanced features like keyboard and mouse customization, screen recording, and anti-cheating system.

        -

        To download and install Gameloop Emulator on your PC, follow these steps:

        -
          -
        • Go to the official website of Gameloop Emulator and click on the Download button.
        • -
        • Run the installer file and follow the instructions to install Gameloop Emulator on your PC.
        • -
        • Launch Gameloop Emulator and sign in with your Google account or create a new one.
        • -
        -

        Step 2: Download 8 Ball Pool Pure APK from APKPure

        -

        APKPure is a website where you can download open-source Android applications that are not available or restricted on Google Play Store. APKPure verifies all apps before publishing by using SHA-1 to ensure the application is original and has not been modified in any way. APKPure also offers fast and safe downloads, automatic updates, and region-free access.

        -

        To download and install 8 Ball Pool Pure APK from APKPure, follow these steps:

        -
          -
        • Go to the official website of APKPure and search for 8 Ball Pool Pure APK.
        • -
        • Click on the Download APK button and save the file on your PC.
        • -
        • Drag and drop the APK file into Gameloop Emulator or click on the Install APK button at the bottom right corner of Gameloop Emulator.
        • -
        • Select the APK file from your PC and click on Open to install it.
        • -
        -

        Step 3: Run 8 Ball Pool Pure APK on Gameloop Emulator

        -

        To run 8 Ball Pool Pure APK on Gameloop Emulator, follow these steps:

        -
          -
        • Launch Gameloop Emulator and go to My Games tab.
        • -
        • Click on 8 Ball Pool Pure APK icon to start the game.
        • -
        • Allow the game to access your device data and storage.
        • -
        • Enjoy playing 8 Ball Pool Pure APK on your PC.
        • -

        How to Play 8 Ball Pool Pure APK on Your PC

        -

        Now that you have downloaded and installed 8 Ball Pool Pure APK on your PC, you are ready to play the game. Here are some steps to help you enjoy the game better:

        -

        8 ball pool apk pure download
        -8 ball pool pure apk mod
        -8 ball pool pure apk latest version
        -8 ball pool pure apk unlimited coins
        -8 ball pool pure apk hack
        -8 ball pool pure apk old version
        -8 ball pool pure apk android 1
        -8 ball pool pure apk anti ban
        -8 ball pool pure apk update
        -8 ball pool pure apk offline
        -8 ball pool pure apk no root
        -8 ball pool pure apk free download
        -8 ball pool pure apk online
        -8 ball pool pure apk multiplayer
        -8 ball pool pure apk miniclip
        -8 ball pool pure apk original
        -8 ball pool pure apk premium
        -8 ball pool pure apk pro
        -8 ball pool pure apk revdl
        -8 ball pool pure apk rexdl
        -8 ball pool pure apk unlimited money
        -8 ball pool pure apk unlocked
        -8 ball pool pure apk vip
        -8 ball pool pure apk xapk
        -8 ball pool pure apk youtube
        -8 ball pool android game apkpure
        -8 ball pool app download apkpure
        -8 ball pool by miniclip apkpure
        -8 ball pool cheat engine apkpure
        -8 ball pool coin hack apkpure
        -8 ball pool cue hack apkpure
        -8 ball pool download for android apkpure
        -8 ball pool extended guideline apkpure
        -8 ball pool free cash apkpure
        -8 ball pool guideline hack apkpure
        -8 ball pool hack tool apkpure
        -8 ball pool instant reward apkpure
        -8 ball pool legendary cue apkpure
        -8 ball pool long line apkpure
        -8 ball pool mod menu apkpure
        -8 ball pool modded apkpure
        -8 ball pool new version apkpure
        -8 ball pool old version download apkpure
        -8 ball pool reward link apkpure
        -8 ball pool unlimited cash apkpure
        -best version of 8 ball pool apkpure
        -download game android mod apkpure offline rpg dragon mania legends mod revdl rexdl codes for wildcraft animal sim online_3d premium latest version clash of clans mod unlimited gems coins elixir by vishal gaming with otg platinmods.com free fire battlegrounds hacked version real racing asphalt xtreme rally racing mod money/dirt/spades nitro nation drag & drift mod unlocked cars no ads godisagamer.org unlimited money gold stars exp tap tap dash mod unlock all levels adfree v1.837 geometry dash mod unlock all levels and icons world war heroes ww2 shooter mod unlimited ammo/no reload + data sniper fury top shooter fun shooting games fps mod unlimited rubies + data zombie frontier last stand mod money dead trigger zombie shooter mod unlimited money and gold + data unkilled zombie fps shooting game mod infinite ammo + data dead effect zombie shooter mod unlimited money and gold + data dead target zombie shooter mod unlimited money and gold + data call of mini zombies v4.3.4 mod money call of mini zombies classic v1.1 mod money call of mini infinity v2.6 mod money call of mini dino hunter v3.1.7 mod money call of mini squad v1.0.1 mod money call of mini double shot v1.21 mod money call of mini sniper v1.21 mod money call of mini brawlers v1.31 mod money call of duty black ops zombies v1.0.11 mod money dead warfare zombie shooting gun games v2.13.42 mod ammo/damage + data modern combat versus new online multiplayer fps v1.13.6 full + data modern combat blackout edition v1.2.9e full + data bullet force multiplayer fps v1.59 b130 full + data critical ops multiplayer fps v0.9.12.f242 full + data guns of boom online shooter v4.9.0 full + data standoff multiplayer v1.22.1 full + data warface global operations combat pvp shooter v0.3.0 full + data world war polygon ww2 shooter v1.51 full + data brothers in arms d-day psp iso highly compressed game god of war ghost of sparta psp iso highly compressed game gta vice city stories psp iso highly compressed game gta liberty city stories psp iso highly compressed game gta chinatown wars psp iso highly compressed game fifa street psp iso highly compressed game spider-man web of shadows psp iso highly compressed game the amazing spider-man p

        -

        Step 4: Adjust the Controls to Your Preference

        -

        One of the advantages of playing 8 Ball Pool Pure APK on your PC is that you can customize the keyboard and mouse controls to your preference. You can do this by following these steps:

        -
          -
        • Click on the Settings icon at the top right corner of Gameloop Emulator.
        • -
        • Click on the Game tab and select 8 Ball Pool Pure APK from the list.
        • -
        • Click on the Keyboard icon at the bottom right corner of Gameloop Emulator.
        • -
        • Drag and drop the keys to the corresponding buttons on the screen.
        • -
        • Click on Save to apply the changes.
        • -
        -

        Some tips and tricks for playing 8 Ball Pool Pure APK on your PC are:

        -
          -
        • Use the mouse wheel to zoom in and out of the table.
        • -
        • Use the left mouse button to aim and adjust the power of your shot.
        • -
        • Use the right mouse button to apply spin to your cue ball.
        • -
        • Use the space bar to confirm your shot.
        • -
        • Use the ESC key to pause or resume the game.
        • -
        -

        Step 5: Enjoy the Game

        -

        8 Ball Pool Pure APK is a fun and addictive pool game that offers many features and modes for you to enjoy. Some of them are:

        -
          -
        • 1v1 Mode: Play against other players online and win coins and trophies.
        • -
        • Tournaments Mode: Compete in tournaments with different rules and prizes.
        • -
        • Practice Mode: Practice your skills and improve your game.
        • -
        • Cues Shop: Buy and upgrade different cues with different stats and abilities.
        • -
        • Rewards: Collect daily rewards, free coins, and gifts from friends.
        • -
        -

        To play online with other players or friends, you need to have an internet connection and a Miniclip account. You can create a Miniclip account by following these steps:

        -
          -
        • Click on the Profile icon at the top left corner of Gameloop Emulator.
        • -
        • Click on Login with Miniclip ID.
        • -
        • Enter your email address and password or click on Sign Up to create a new account.
        • -
        • Verify your email address and complete your profile.
        • -
        -

        Conclusion

        -

        In this article, we have shown you how to download and play 8 Ball Pool Pure APK on your PC using Gameloop Emulator. We have also told you some of the benefits of using APKPure, a reliable source for downloading APK files. 8 Ball Pool Pure APK is a modified version of 8 Ball Pool by Miniclip that gives you unlimited coins, cash, and cues without any ads or restrictions. You can enjoy playing this game on a larger screen with better graphics and controls by following our simple steps. We hope you have fun playing 8 Ball Pool Pure APK on your PC. If you have any questions or feedback, please let us know in the comments below. Thank you for reading!

        -

        FAQs

        -

        Here are some frequently asked questions about 8 Ball Pool Pure APK:

        -
          -
        1. Q1: What is the difference between 8 Ball Pool Pure APK and 8 Ball Pool from Google Play Store?
        2. -

          A1: The main difference between 8 Ball Pool Pure APK and 8 Ball Pool from Google Play Store is that 8 Ball Pool Pure APK is a modified version that gives you unlimited coins, cash, and cues without any ads or restrictions. You can also download and install 8 Ball Pool Pure APK from APKPure, which is not available or restricted on Google Play Store. However, both versions are developed by Miniclip and have similar gameplay and features.

          -
        3. Q2: Is it safe to download and install 8 Ball Pool Pure APK from APKPure?
        4. -

          A2: Yes, it is safe to download and install 8 Ball Pool Pure APK from APKPure. APKPure verifies all apps before publishing by using SHA-1 to ensure the application is original and has not been modified in any way. APKPure also offers fast and safe downloads, automatic updates, and region-free access. However, you should always be careful when downloading any app from unknown sources and scan them with antivirus software before installing them on your PC.

          -
        5. Q3: Can I play 8 Ball Pool Pure APK on other emulators besides Gameloop Emulator?
        6. -

          A3: Yes, you can play 8 Ball Pool Pure APK on other emulators besides Gameloop Emulator. However, we recommend using Gameloop Emulator because it is a free and official Android emulator that allows you to run mobile games on your PC with a smooth and fast performance, a large game library, and a user-friendly interface. Gameloop Emulator also has advanced features like keyboard and mouse customization, screen recording, and anti-cheating system.

          -
        7. Q4: Can I transfer my progress and coins from 8 Ball Pool Pure APK to 8 Ball Pool from Google Play Store or vice versa?
        8. -

          A4: No, you cannot transfer your progress and coins from 8 Ball Pool Pure APK to 8 Ball Pool from Google Play Store or vice versa. This is because 8 Ball Pool Pure APK and 8 Ball Pool from Google Play Store are different versions of the game and have different servers. If you want to switch between the versions, you will have to start from scratch.

          -
        9. Q5: What are some alternatives to 8 Ball Pool Pure APK if I want to play other pool games on my PC?
        10. -

          A5: Some alternatives to 8 Ball Pool Pure APK if you want to play other pool games on your PC are:

          -
            -
          • Pool Live Pro: A realistic and social pool game that lets you play online with other players or friends, chat with them, and join clubs. You can also customize your cue, table, and avatar.
          • -
          • Real Pool 3D: A stunning and realistic pool game that lets you play offline or online with other players or friends. You can also choose from different game modes, rules, and environments.
          • -
          • Cue Billiard Club: A modern and stylish pool game that lets you play offline or online with other players or friends. You can also enjoy the realistic physics, graphics, and sounds.
          • -
          -
        -

        197e85843d
        -
        -
        \ No newline at end of file diff --git a/spaces/1toTree/lora_test/ppdiffusers/pipelines/stable_diffusion/pipeline_fastdeploy_stable_diffusion_inpaint.py b/spaces/1toTree/lora_test/ppdiffusers/pipelines/stable_diffusion/pipeline_fastdeploy_stable_diffusion_inpaint.py deleted file mode 100644 index 3a024f8e739d22393ae486a30e452d709854030f..0000000000000000000000000000000000000000 --- a/spaces/1toTree/lora_test/ppdiffusers/pipelines/stable_diffusion/pipeline_fastdeploy_stable_diffusion_inpaint.py +++ /dev/null @@ -1,491 +0,0 @@ -# Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -import inspect -from typing import Callable, List, Optional, Union - -import numpy as np -import paddle -import PIL - -from paddlenlp.transformers import CLIPFeatureExtractor, CLIPTokenizer - -from ...fastdeploy_utils import FastDeployRuntimeModel -from ...pipeline_utils import DiffusionPipeline -from ...schedulers import ( - DDIMScheduler, - DPMSolverMultistepScheduler, - EulerAncestralDiscreteScheduler, - EulerDiscreteScheduler, - LMSDiscreteScheduler, - PNDMScheduler, -) -from ...utils import PIL_INTERPOLATION, logging -from . import StableDiffusionPipelineOutput - -logger = logging.get_logger(__name__) # pylint: disable=invalid-name - -NUM_UNET_INPUT_CHANNELS = 9 -NUM_LATENT_CHANNELS = 4 - - -def prepare_mask_and_masked_image(image, mask, latents_shape): - image = np.array(image.convert("RGB").resize((latents_shape[1] * 8, latents_shape[0] * 8))) - image = image[None].transpose(0, 3, 1, 2) - image = image.astype(np.float32) / 127.5 - 1.0 - - image_mask = np.array(mask.convert("L").resize((latents_shape[1] * 8, latents_shape[0] * 8))) - masked_image = image * (image_mask < 127.5) - - mask = mask.resize((latents_shape[1], latents_shape[0]), PIL_INTERPOLATION["nearest"]) - mask = np.array(mask.convert("L")) - mask = mask.astype(np.float32) / 255.0 - mask = mask[None, None] - mask[mask < 0.5] = 0 - mask[mask >= 0.5] = 1 - - return mask, masked_image - - -class FastDeployStableDiffusionInpaintPipeline(DiffusionPipeline): - r""" - Pipeline for text-guided image inpainting using Stable Diffusion. - - This model inherits from [`DiffusionPipeline`]. Check the superclass documentation for the generic methods the - library implements for all the pipelines (such as downloading or saving etc.) - - Args: - vae_encoder ([`FastDeployRuntimeModel`]): - Variational Auto-Encoder (VAE) Model to encode images to latent representations. - vae_decoder ([`FastDeployRuntimeModel`]): - Variational Auto-Encoder (VAE) Model to decode images from latent representations. - text_encoder ([`FastDeployRuntimeModel`]): - Frozen text-encoder. Stable Diffusion uses the text portion of - [CLIP](https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPTextModel), specifically - the [clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14) variant. - tokenizer (`CLIPTokenizer`): - Tokenizer of class - [CLIPTokenizer](https://huggingface.co/docs/transformers/v4.21.0/en/model_doc/clip#transformers.CLIPTokenizer). - unet ([`FastDeployRuntimeModel`]): Conditional U-Net architecture to denoise the encoded image latents. - scheduler ([`SchedulerMixin`]): - A scheduler to be used in combination with `unet` to denoise the encoded image latents. Can be one of - [`DDIMScheduler`], [`LMSDiscreteScheduler`], [`PNDMScheduler`], [`EulerDiscreteScheduler`], [`EulerAncestralDiscreteScheduler`] - or [`DPMSolverMultistepScheduler`]. - safety_checker ([`FastDeployRuntimeModel`]): - Classification module that estimates whether generated images could be considered offensive or harmful. - Please, refer to the [model card](https://huggingface.co/runwayml/stable-diffusion-v1-5) for details. - feature_extractor ([`CLIPFeatureExtractor`]): - Model that extracts features from generated images to be used as inputs for the `safety_checker`. - """ - _optional_components = ["safety_checker", "feature_extractor"] - - def __init__( - self, - vae_encoder: FastDeployRuntimeModel, - vae_decoder: FastDeployRuntimeModel, - text_encoder: FastDeployRuntimeModel, - tokenizer: CLIPTokenizer, - unet: FastDeployRuntimeModel, - scheduler: Union[ - DDIMScheduler, - PNDMScheduler, - LMSDiscreteScheduler, - EulerDiscreteScheduler, - EulerAncestralDiscreteScheduler, - DPMSolverMultistepScheduler, - ], - safety_checker: FastDeployRuntimeModel, - feature_extractor: CLIPFeatureExtractor, - requires_safety_checker: bool = True, - ): - super().__init__() - if safety_checker is None and requires_safety_checker: - logger.warning( - f"You have disabled the safety checker for {self.__class__} by passing `safety_checker=None`. Ensure" - " that you abide to the conditions of the Stable Diffusion license and do not expose unfiltered" - " results in services or applications open to the public. PaddleNLP team, diffusers team and Hugging Face" - " strongly recommend to keep the safety filter enabled in all public facing circumstances, disabling" - " it only for use-cases that involve analyzing network behavior or auditing its results. For more" - " information, please have a look at https://github.com/huggingface/diffusers/pull/254 ." - ) - if safety_checker is not None and feature_extractor is None: - raise ValueError( - "Make sure to define a feature extractor when loading {self.__class__} if you want to use the safety" - " checker. If you do not want to use the safety checker, you can pass `'safety_checker=None'` instead." - ) - - self.register_modules( - vae_encoder=vae_encoder, - vae_decoder=vae_decoder, - text_encoder=text_encoder, - tokenizer=tokenizer, - unet=unet, - scheduler=scheduler, - safety_checker=safety_checker, - feature_extractor=feature_extractor, - ) - self.register_to_config(requires_safety_checker=requires_safety_checker) - - def _encode_prompt(self, prompt, num_images_per_prompt, do_classifier_free_guidance, negative_prompt): - r""" - Encodes the prompt into text encoder hidden states. - - Args: - prompt (`str` or `list(int)`): - prompt to be encoded - num_images_per_prompt (`int`): - number of images that should be generated per prompt - do_classifier_free_guidance (`bool`): - whether to use classifier free guidance or not - negative_prompt (`str` or `List[str]`): - The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored - if `guidance_scale` is less than `1`). - """ - batch_size = len(prompt) if isinstance(prompt, list) else 1 - - # get prompt text embeddings - text_inputs = self.tokenizer( - prompt, - padding="max_length", - max_length=self.tokenizer.model_max_length, - truncation=True, - return_tensors="np", - ) - text_input_ids = text_inputs.input_ids - untruncated_ids = self.tokenizer(prompt, padding="longest", return_tensors="np").input_ids - - if not np.array_equal(text_input_ids, untruncated_ids): - removed_text = self.tokenizer.batch_decode(untruncated_ids[:, self.tokenizer.model_max_length - 1 : -1]) - logger.warning( - "The following part of your input was truncated because CLIP can only handle sequences up to" - f" {self.tokenizer.model_max_length} tokens: {removed_text}" - ) - - text_embeddings = self.text_encoder(input_ids=text_input_ids.astype(np.int64))[0] - text_embeddings = np.repeat(text_embeddings, num_images_per_prompt, axis=0) - - # get unconditional embeddings for classifier free guidance - if do_classifier_free_guidance: - uncond_tokens: List[str] - if negative_prompt is None: - uncond_tokens = [""] * batch_size - elif type(prompt) is not type(negative_prompt): - raise TypeError( - f"`negative_prompt` should be the same type to `prompt`, but got {type(negative_prompt)} !=" - f" {type(prompt)}." - ) - elif isinstance(negative_prompt, str): - uncond_tokens = [negative_prompt] * batch_size - elif batch_size != len(negative_prompt): - raise ValueError( - f"`negative_prompt`: {negative_prompt} has batch size {len(negative_prompt)}, but `prompt`:" - f" {prompt} has batch size {batch_size}. Please make sure that passed `negative_prompt` matches" - " the batch size of `prompt`." - ) - else: - uncond_tokens = negative_prompt - - max_length = text_input_ids.shape[-1] - uncond_input = self.tokenizer( - uncond_tokens, - padding="max_length", - max_length=max_length, - truncation=True, - return_tensors="np", - ) - uncond_embeddings = self.text_encoder(input_ids=uncond_input.input_ids.astype(np.int64))[0] - uncond_embeddings = np.repeat(uncond_embeddings, num_images_per_prompt, axis=0) - - # For classifier free guidance, we need to do two forward passes. - # Here we concatenate the unconditional and text embeddings into a single batch - # to avoid doing two forward passes - text_embeddings = np.concatenate([uncond_embeddings, text_embeddings]) - - return text_embeddings - - def run_safety_checker(self, image, dtype): - if self.safety_checker is not None: - safety_checker_input = self.feature_extractor( - self.numpy_to_pil(image), return_tensors="np" - ).pixel_values.astype(dtype) - # There will throw an error if use safety_checker batchsize>1 - images, has_nsfw_concept = [], [] - for i in range(image.shape[0]): - image_i, has_nsfw_concept_i = self.safety_checker( - clip_input=safety_checker_input[i : i + 1], images=image[i : i + 1] - ) - images.append(image_i) - has_nsfw_concept.append(has_nsfw_concept_i[0]) - image = np.concatenate(images) - else: - has_nsfw_concept = None - return image, has_nsfw_concept - - def decode_latents(self, latents): - latents = 1 / 0.18215 * latents - image = np.concatenate( - [self.vae_decoder(latent_sample=latents[i : i + 1])[0] for i in range(latents.shape[0])] - ) - image = np.clip(image / 2 + 0.5, 0, 1) - image = image.transpose([0, 2, 3, 1]) - return image - - def prepare_extra_step_kwargs(self, eta): - # prepare extra kwargs for the scheduler step, since not all schedulers have the same signature - # eta (η) is only used with the DDIMScheduler, it will be ignored for other schedulers. - # eta corresponds to η in DDIM paper: https://arxiv.org/abs/2010.02502 - # and should be between [0, 1] - - accepts_eta = "eta" in set(inspect.signature(self.scheduler.step).parameters.keys()) - extra_step_kwargs = {} - if accepts_eta: - extra_step_kwargs["eta"] = eta - - return extra_step_kwargs - - def check_inputs(self, prompt, height, width, callback_steps): - if not isinstance(prompt, str) and not isinstance(prompt, list): - raise ValueError(f"`prompt` has to be of type `str` or `list` but is {type(prompt)}") - - if height % 8 != 0 or width % 8 != 0: - raise ValueError(f"`height` and `width` have to be divisible by 8 but are {height} and {width}.") - - if (callback_steps is None) or ( - callback_steps is not None and (not isinstance(callback_steps, int) or callback_steps <= 0) - ): - raise ValueError( - f"`callback_steps` has to be a positive integer but is {callback_steps} of type" - f" {type(callback_steps)}." - ) - - def prepare_latents(self, batch_size, num_channels_latents, height, width, dtype, generator, latents=None): - if generator is None: - generator = np.random - - latents_shape = (batch_size, num_channels_latents, height // 8, width // 8) - if latents is None: - latents = paddle.to_tensor(generator.randn(*latents_shape), dtype=dtype) - elif latents.shape != latents_shape: - raise ValueError(f"Unexpected latents shape, got {latents.shape}, expected {latents_shape}") - - # scale the initial noise by the standard deviation required by the scheduler - latents = latents * float(self.scheduler.init_noise_sigma) - return latents - - def prepare_mask_latents(self, mask, masked_image, batch_size, dtype, do_classifier_free_guidance): - mask = mask.astype(dtype) - masked_image = masked_image.astype(dtype) - - # encode the mask image into latents space so we can concatenate it to the latents - masked_image_latents = self.vae_encoder(sample=masked_image)[0] - masked_image_latents = 0.18215 * masked_image_latents - - # duplicate mask and masked_image_latents for each generation per prompt, using mps friendly method - mask = mask.repeat(batch_size, 0) - masked_image_latents = masked_image_latents.repeat(batch_size, 0) - - mask = np.concatenate([mask] * 2) if do_classifier_free_guidance else mask - masked_image_latents = ( - np.concatenate([masked_image_latents] * 2) if do_classifier_free_guidance else masked_image_latents - ) - masked_image_latents = masked_image_latents.astype(dtype) - return mask, masked_image_latents - - def __call__( - self, - prompt: Union[str, List[str]], - image: PIL.Image.Image, - mask_image: PIL.Image.Image, - height: int = 512, - width: int = 512, - num_inference_steps: int = 50, - guidance_scale: float = 7.5, - negative_prompt: Optional[Union[str, List[str]]] = None, - num_images_per_prompt: Optional[int] = 1, - eta: float = 0.0, - generator: Optional[np.random.RandomState] = None, - latents: Optional[np.ndarray] = None, - output_type: Optional[str] = "pil", - return_dict: bool = True, - callback: Optional[Callable[[int, int, np.ndarray], None]] = None, - callback_steps: Optional[int] = 1, - ): - r""" - Function invoked when calling the pipeline for generation. - - Args: - prompt (`str` or `List[str]`): - The prompt or prompts to guide the image generation. - image (`PIL.Image.Image`): - `Image`, or tensor representing an image batch which will be inpainted, *i.e.* parts of the image will - be masked out with `mask_image` and repainted according to `prompt`. - mask_image (`PIL.Image.Image`): - `Image`, or tensor representing an image batch, to mask `image`. White pixels in the mask will be - repainted, while black pixels will be preserved. If `mask_image` is a PIL image, it will be converted - to a single channel (luminance) before use. If it's a tensor, it should contain one color channel (L) - instead of 3, so the expected shape would be `(B, H, W, 1)`. - height (`int`, *optional*, defaults to 512): - The height in pixels of the generated image. - width (`int`, *optional*, defaults to 512): - The width in pixels of the generated image. - num_inference_steps (`int`, *optional*, defaults to 50): - The number of denoising steps. More denoising steps usually lead to a higher quality image at the - expense of slower inference. - guidance_scale (`float`, *optional*, defaults to 7.5): - Guidance scale as defined in [Classifier-Free Diffusion Guidance](https://arxiv.org/abs/2207.12598). - `guidance_scale` is defined as `w` of equation 2. of [Imagen - Paper](https://arxiv.org/pdf/2205.11487.pdf). Guidance scale is enabled by setting `guidance_scale > - 1`. Higher guidance scale encourages to generate images that are closely linked to the text `prompt`, - usually at the expense of lower image quality. - negative_prompt (`str` or `List[str]`, *optional*): - The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored - if `guidance_scale` is less than `1`). - num_images_per_prompt (`int`, *optional*, defaults to 1): - The number of images to generate per prompt. - eta (`float`, *optional*, defaults to 0.0): - Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to - [`schedulers.DDIMScheduler`], will be ignored for others. - generator (`np.random.RandomState`, *optional*): - A np.random.RandomState to make generation deterministic. - latents (`np.ndarray`, *optional*): - Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image - generation. Can be used to tweak the same generation with different prompts. If not provided, a latents - tensor will ge generated by sampling using the supplied random `generator`. - output_type (`str`, *optional*, defaults to `"pil"`): - The output format of the generate image. Choose between - [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`. - return_dict (`bool`, *optional*, defaults to `True`): - Whether or not to return a [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] instead of a - plain tuple. - callback (`Callable`, *optional*): - A function that will be called every `callback_steps` steps during inference. The function will be - called with the following arguments: `callback(step: int, timestep: int, latents: np.ndarray)`. - callback_steps (`int`, *optional*, defaults to 1): - The frequency at which the `callback` function will be called. If not specified, the callback will be - called at every step. - - Returns: - [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] or `tuple`: - [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] if `return_dict` is True, otherwise a `tuple. - When returning a tuple, the first element is a list with the generated images, and the second element is a - list of `bool`s denoting whether the corresponding generated image likely represents "not-safe-for-work" - (nsfw) content, according to the `safety_checker`. - """ - # 1. Check inputs - self.check_inputs(prompt, height, width, callback_steps) - - # 2. Define call parameters - batch_size = 1 if isinstance(prompt, str) else len(prompt) - # here `guidance_scale` is defined analog to the guidance weight `w` of equation (2) - # of the Imagen paper: https://arxiv.org/pdf/2205.11487.pdf . `guidance_scale = 1` - # corresponds to doing no classifier free guidance. - do_classifier_free_guidance = guidance_scale > 1.0 - - # 3. Encode input prompt - text_embeddings = self._encode_prompt( - prompt, num_images_per_prompt, do_classifier_free_guidance, negative_prompt - ) - - # 4. set timesteps - self.scheduler.set_timesteps(num_inference_steps) - timesteps = self.scheduler.timesteps - - # 5. Prepare latent variables - num_channels_latents = NUM_LATENT_CHANNELS - latents = self.prepare_latents( - batch_size * num_images_per_prompt, - num_channels_latents, - height, - width, - text_embeddings.dtype, - generator, - latents, - ) - - # 6. Preprocess mask and image - if isinstance(image, PIL.Image.Image) and isinstance(mask_image, PIL.Image.Image): - mask, masked_image = prepare_mask_and_masked_image(image, mask_image, latents.shape[-2:]) - - # 7. Prepare mask latent variables - mask, masked_image_latents = self.prepare_mask_latents( - mask, - masked_image, - batch_size * num_images_per_prompt, - text_embeddings.dtype, - do_classifier_free_guidance, - ) - num_channels_mask = mask.shape[1] - num_channels_masked_image = masked_image_latents.shape[1] - mask = paddle.to_tensor(mask) - masked_image_latents = paddle.to_tensor(masked_image_latents) - - # 8. Check that sizes of mask, masked image and latents match - unet_input_channels = NUM_UNET_INPUT_CHANNELS - if num_channels_latents + num_channels_mask + num_channels_masked_image != unet_input_channels: - raise ValueError( - "Incorrect configuration settings! The config of `pipeline.unet` expects" - f" {unet_input_channels} but received `num_channels_latents`: {num_channels_latents} +" - f" `num_channels_mask`: {num_channels_mask} + `num_channels_masked_image`: {num_channels_masked_image}" - f" = {num_channels_latents+num_channels_masked_image+num_channels_mask}. Please verify the config of" - " `pipeline.unet` or your `mask_image` or `image` input." - ) - - # 9. Prepare extra step kwargs. - extra_step_kwargs = self.prepare_extra_step_kwargs(eta) - - # 10. Denoising loop - num_warmup_steps = len(timesteps) - num_inference_steps * self.scheduler.order - with self.progress_bar(total=num_inference_steps) as progress_bar: - text_embeddings = paddle.to_tensor(text_embeddings, dtype="float32") - for i, t in enumerate(timesteps): - # expand the latents if we are doing classifier free guidance - latent_model_input = paddle.concat([latents] * 2) if do_classifier_free_guidance else latents - # concat latents, mask, masked_image_latnets in the channel dimension - latent_model_input = self.scheduler.scale_model_input(latent_model_input, t) - latent_model_input = paddle.concat([latent_model_input, mask, masked_image_latents], axis=1) - - # predict the noise residual - noise_pred = self.unet.zero_copy_infer( - sample=latent_model_input, timestep=t, encoder_hidden_states=text_embeddings - )[0] - - # perform guidance - if do_classifier_free_guidance: - noise_pred_uncond, noise_pred_text = noise_pred.chunk(2) - noise_pred = noise_pred_uncond + guidance_scale * (noise_pred_text - noise_pred_uncond) - - # compute the previous noisy sample x_t -> x_t-1 - scheduler_output = self.scheduler.step(noise_pred, t, latents, **extra_step_kwargs) - latents = scheduler_output.prev_sample - - # call the callback, if provided - if i == len(timesteps) - 1 or ((i + 1) > num_warmup_steps and (i + 1) % self.scheduler.order == 0): - progress_bar.update() - if callback is not None and i % callback_steps == 0: - callback(i, t, latents) - - # 11. Post-processing - image = self.decode_latents(latents.numpy()) - - # 12. Run safety checker - image, has_nsfw_concept = self.run_safety_checker(image, text_embeddings.dtype) - - # 13. Convert to PIL - if output_type == "pil": - image = self.numpy_to_pil(image) - - if not return_dict: - return (image, has_nsfw_concept) - - return StableDiffusionPipelineOutput(images=image, nsfw_content_detected=has_nsfw_concept) diff --git a/spaces/232labs/VToonify/vtoonify/model/stylegan/distributed.py b/spaces/232labs/VToonify/vtoonify/model/stylegan/distributed.py deleted file mode 100644 index 51fa243257ef302e2015d5ff36ac531b86a9a0ce..0000000000000000000000000000000000000000 --- a/spaces/232labs/VToonify/vtoonify/model/stylegan/distributed.py +++ /dev/null @@ -1,126 +0,0 @@ -import math -import pickle - -import torch -from torch import distributed as dist -from torch.utils.data.sampler import Sampler - - -def get_rank(): - if not dist.is_available(): - return 0 - - if not dist.is_initialized(): - return 0 - - return dist.get_rank() - - -def synchronize(): - if not dist.is_available(): - return - - if not dist.is_initialized(): - return - - world_size = dist.get_world_size() - - if world_size == 1: - return - - dist.barrier() - - -def get_world_size(): - if not dist.is_available(): - return 1 - - if not dist.is_initialized(): - return 1 - - return dist.get_world_size() - - -def reduce_sum(tensor): - if not dist.is_available(): - return tensor - - if not dist.is_initialized(): - return tensor - - tensor = tensor.clone() - dist.all_reduce(tensor, op=dist.ReduceOp.SUM) - - return tensor - - -def gather_grad(params): - world_size = get_world_size() - - if world_size == 1: - return - - for param in params: - if param.grad is not None: - dist.all_reduce(param.grad.data, op=dist.ReduceOp.SUM) - param.grad.data.div_(world_size) - - -def all_gather(data): - world_size = get_world_size() - - if world_size == 1: - return [data] - - buffer = pickle.dumps(data) - storage = torch.ByteStorage.from_buffer(buffer) - tensor = torch.ByteTensor(storage).to('cuda') - - local_size = torch.IntTensor([tensor.numel()]).to('cuda') - size_list = [torch.IntTensor([0]).to('cuda') for _ in range(world_size)] - dist.all_gather(size_list, local_size) - size_list = [int(size.item()) for size in size_list] - max_size = max(size_list) - - tensor_list = [] - for _ in size_list: - tensor_list.append(torch.ByteTensor(size=(max_size,)).to('cuda')) - - if local_size != max_size: - padding = torch.ByteTensor(size=(max_size - local_size,)).to('cuda') - tensor = torch.cat((tensor, padding), 0) - - dist.all_gather(tensor_list, tensor) - - data_list = [] - - for size, tensor in zip(size_list, tensor_list): - buffer = tensor.cpu().numpy().tobytes()[:size] - data_list.append(pickle.loads(buffer)) - - return data_list - - -def reduce_loss_dict(loss_dict): - world_size = get_world_size() - - if world_size < 2: - return loss_dict - - with torch.no_grad(): - keys = [] - losses = [] - - for k in sorted(loss_dict.keys()): - keys.append(k) - losses.append(loss_dict[k]) - - losses = torch.stack(losses, 0) - dist.reduce(losses, dst=0) - - if dist.get_rank() == 0: - losses /= world_size - - reduced_losses = {k: v for k, v in zip(keys, losses)} - - return reduced_losses diff --git a/spaces/4Taps/SadTalker/src/audio2pose_models/networks.py b/spaces/4Taps/SadTalker/src/audio2pose_models/networks.py deleted file mode 100644 index 8aa0b1390e7b4bb0e16057ac94d2fe84f48421af..0000000000000000000000000000000000000000 --- a/spaces/4Taps/SadTalker/src/audio2pose_models/networks.py +++ /dev/null @@ -1,140 +0,0 @@ -import torch.nn as nn -import torch - - -class ResidualConv(nn.Module): - def __init__(self, input_dim, output_dim, stride, padding): - super(ResidualConv, self).__init__() - - self.conv_block = nn.Sequential( - nn.BatchNorm2d(input_dim), - nn.ReLU(), - nn.Conv2d( - input_dim, output_dim, kernel_size=3, stride=stride, padding=padding - ), - nn.BatchNorm2d(output_dim), - nn.ReLU(), - nn.Conv2d(output_dim, output_dim, kernel_size=3, padding=1), - ) - self.conv_skip = nn.Sequential( - nn.Conv2d(input_dim, output_dim, kernel_size=3, stride=stride, padding=1), - nn.BatchNorm2d(output_dim), - ) - - def forward(self, x): - - return self.conv_block(x) + self.conv_skip(x) - - -class Upsample(nn.Module): - def __init__(self, input_dim, output_dim, kernel, stride): - super(Upsample, self).__init__() - - self.upsample = nn.ConvTranspose2d( - input_dim, output_dim, kernel_size=kernel, stride=stride - ) - - def forward(self, x): - return self.upsample(x) - - -class Squeeze_Excite_Block(nn.Module): - def __init__(self, channel, reduction=16): - super(Squeeze_Excite_Block, self).__init__() - self.avg_pool = nn.AdaptiveAvgPool2d(1) - self.fc = nn.Sequential( - nn.Linear(channel, channel // reduction, bias=False), - nn.ReLU(inplace=True), - nn.Linear(channel // reduction, channel, bias=False), - nn.Sigmoid(), - ) - - def forward(self, x): - b, c, _, _ = x.size() - y = self.avg_pool(x).view(b, c) - y = self.fc(y).view(b, c, 1, 1) - return x * y.expand_as(x) - - -class ASPP(nn.Module): - def __init__(self, in_dims, out_dims, rate=[6, 12, 18]): - super(ASPP, self).__init__() - - self.aspp_block1 = nn.Sequential( - nn.Conv2d( - in_dims, out_dims, 3, stride=1, padding=rate[0], dilation=rate[0] - ), - nn.ReLU(inplace=True), - nn.BatchNorm2d(out_dims), - ) - self.aspp_block2 = nn.Sequential( - nn.Conv2d( - in_dims, out_dims, 3, stride=1, padding=rate[1], dilation=rate[1] - ), - nn.ReLU(inplace=True), - nn.BatchNorm2d(out_dims), - ) - self.aspp_block3 = nn.Sequential( - nn.Conv2d( - in_dims, out_dims, 3, stride=1, padding=rate[2], dilation=rate[2] - ), - nn.ReLU(inplace=True), - nn.BatchNorm2d(out_dims), - ) - - self.output = nn.Conv2d(len(rate) * out_dims, out_dims, 1) - self._init_weights() - - def forward(self, x): - x1 = self.aspp_block1(x) - x2 = self.aspp_block2(x) - x3 = self.aspp_block3(x) - out = torch.cat([x1, x2, x3], dim=1) - return self.output(out) - - def _init_weights(self): - for m in self.modules(): - if isinstance(m, nn.Conv2d): - nn.init.kaiming_normal_(m.weight) - elif isinstance(m, nn.BatchNorm2d): - m.weight.data.fill_(1) - m.bias.data.zero_() - - -class Upsample_(nn.Module): - def __init__(self, scale=2): - super(Upsample_, self).__init__() - - self.upsample = nn.Upsample(mode="bilinear", scale_factor=scale) - - def forward(self, x): - return self.upsample(x) - - -class AttentionBlock(nn.Module): - def __init__(self, input_encoder, input_decoder, output_dim): - super(AttentionBlock, self).__init__() - - self.conv_encoder = nn.Sequential( - nn.BatchNorm2d(input_encoder), - nn.ReLU(), - nn.Conv2d(input_encoder, output_dim, 3, padding=1), - nn.MaxPool2d(2, 2), - ) - - self.conv_decoder = nn.Sequential( - nn.BatchNorm2d(input_decoder), - nn.ReLU(), - nn.Conv2d(input_decoder, output_dim, 3, padding=1), - ) - - self.conv_attn = nn.Sequential( - nn.BatchNorm2d(output_dim), - nn.ReLU(), - nn.Conv2d(output_dim, 1, 1), - ) - - def forward(self, x1, x2): - out = self.conv_encoder(x1) + self.conv_decoder(x2) - out = self.conv_attn(out) - return out * x2 \ No newline at end of file diff --git a/spaces/7Vivek/Next-Word-Prediction-Streamlit/app.py b/spaces/7Vivek/Next-Word-Prediction-Streamlit/app.py deleted file mode 100644 index 23f5ee5e0282164544401ccc6fef729a1ffa07d2..0000000000000000000000000000000000000000 --- a/spaces/7Vivek/Next-Word-Prediction-Streamlit/app.py +++ /dev/null @@ -1,83 +0,0 @@ -import os -import streamlit as st -import torch -import string -from transformers import BertTokenizer, BertForMaskedLM - -st.set_page_config(page_title='Next Word Prediction Model', page_icon=None, layout='centered', initial_sidebar_state='auto') - -@st.cache() -def load_model(model_name): - try: - if model_name.lower() == "bert": - bert_tokenizer = BertTokenizer.from_pretrained('bert-base-uncased') - bert_model = BertForMaskedLM.from_pretrained('bert-base-uncased').eval() - return bert_tokenizer,bert_model - except Exception as e: - pass - -#use joblib to fast your function - -def decode(tokenizer, pred_idx, top_clean): - ignore_tokens = string.punctuation + '[PAD]' - tokens = [] - for w in pred_idx: - token = ''.join(tokenizer.decode(w).split()) - if token not in ignore_tokens: - tokens.append(token.replace('##', '')) - return '\n'.join(tokens[:top_clean]) - -def encode(tokenizer, text_sentence, add_special_tokens=True): - text_sentence = text_sentence.replace('', tokenizer.mask_token) - # if is the last token, append a "." so that models dont predict punctuation. - if tokenizer.mask_token == text_sentence.split()[-1]: - text_sentence += ' .' - - input_ids = torch.tensor([tokenizer.encode(text_sentence, add_special_tokens=add_special_tokens)]) - mask_idx = torch.where(input_ids == tokenizer.mask_token_id)[1].tolist()[0] - return input_ids, mask_idx - -def get_all_predictions(text_sentence, top_clean=5): - # ========================= BERT ================================= - input_ids, mask_idx = encode(bert_tokenizer, text_sentence) - with torch.no_grad(): - predict = bert_model(input_ids)[0] - bert = decode(bert_tokenizer, predict[0, mask_idx, :].topk(top_k).indices.tolist(), top_clean) - return {'bert': bert} - -def get_prediction_eos(input_text): - try: - input_text += ' ' - res = get_all_predictions(input_text, top_clean=int(top_k)) - return res - except Exception as error: - pass - -try: - - st.markdown("

        Next Word Prediction

        ", unsafe_allow_html=True) - st.markdown("

        Keywords : BertTokenizer, BertForMaskedLM, Pytorch

        ", unsafe_allow_html=True) - - st.sidebar.text("Next Word Prediction Model") - top_k = st.sidebar.slider("Select How many words do you need", 1 , 25, 1) #some times it is possible to have less words - print(top_k) - model_name = st.sidebar.selectbox(label='Select Model to Apply', options=['BERT', 'XLNET'], index=0, key = "model_name") - - bert_tokenizer, bert_model = load_model(model_name) - input_text = st.text_area("Enter your text here") - - #click outside box of input text to get result - res = get_prediction_eos(input_text) - - answer = [] - print(res['bert'].split("\n")) - for i in res['bert'].split("\n"): - answer.append(i) - answer_as_string = " ".join(answer) - st.text_area("Predicted List is Here",answer_as_string,key="predicted_list") - st.image('https://freepngimg.com/download/keyboard/6-2-keyboard-png-file.png',use_column_width=True) - st.markdown("
        Created By Vivek - Checkout complete project here
        ", unsafe_allow_html=True) - -except Exception as e: - print("SOME PROBLEM OCCURED") - diff --git a/spaces/9prayer/ubiq-chat-cpu/README.md b/spaces/9prayer/ubiq-chat-cpu/README.md deleted file mode 100644 index 9817a6787ac1f3c5446fce4036f5f4de265413bb..0000000000000000000000000000000000000000 --- a/spaces/9prayer/ubiq-chat-cpu/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Chatglm 6b -emoji: 🐢 -colorFrom: red -colorTo: yellow -sdk: gradio -sdk_version: 3.34.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/AFCMEgypt/colorimetric_analyzer/README.md b/spaces/AFCMEgypt/colorimetric_analyzer/README.md deleted file mode 100644 index 6807b6579c60a31a6d2f9ce39416a966ce9af4cb..0000000000000000000000000000000000000000 --- a/spaces/AFCMEgypt/colorimetric_analyzer/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Colorimetric Analyzer -emoji: 😻 -colorFrom: pink -colorTo: blue -sdk: gradio -sdk_version: 3.4.1 -app_file: app.py -pinned: false -license: bigscience-bloom-rail-1.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/AI-Dashboards/HEDIS.Assessment.PHQ9.GADD7.SDoH/index.html b/spaces/AI-Dashboards/HEDIS.Assessment.PHQ9.GADD7.SDoH/index.html deleted file mode 100644 index f5a9e2cac5c2e44223f4437d3c104c0c3efa6bc9..0000000000000000000000000000000000000000 --- a/spaces/AI-Dashboards/HEDIS.Assessment.PHQ9.GADD7.SDoH/index.html +++ /dev/null @@ -1,115 +0,0 @@ - - - - - - My static Space - - - - - - - - - - - - - - - - - - - - - - -
        -journey - title Create AI - section Training - Format DataSet Inputs Files, Data Splits: 5: Teacher - Model Build w/ SKLearn, TF, Pytorch: 3: Student - Determine Model Performance: 1: Teacher, Student - section Deploy - Web Deploy Local and Cloud: 5: Teacher - Architecture Spaces Gradio Streamlit Heroku AWS Azure and GCCP: 5: Teacher - section Testing - Test Model with Input Datasets: 5: Teacher - Examples. Inputs that Work, Inputs That Break Model: 5: Teacher - Governance - Analyze, Publish Fairness, Equity, Bias for Datasets and Outputs: 5: Teacher -
        - -
        -sequenceDiagram - participant Alice - participant Bob - Alice->>John: Hello John, how are you? - loop Healthcheck - John->>John: Fight against hypochondria - end - Note right of John: Rational thoughts
        prevail... - John-->>Alice: Great! - John->>Bob: How about you? - Bob-->>John: Jolly good! -
        - -
        -

        Welcome to the Mermaid Modeler Tip Sheet

        -

        - You can use Mermaid inside HTML5 by including the script and a div with the class or mermaid. -

        -

        - Documentation is located here: - Mermaid documentation. -

        -
        - - -Links: -https://huggingface.co/spaces/awacke1/HEDIS.Roster.Dash.Component.Service -https://huggingface.co/spaces/awacke1/HEDIS.Roster.Dash.Component.SDOH -https://huggingface.co/spaces/awacke1/HEDIS.Dash.Component.Top.Clinical.Terminology.Vocabulary - - - - diff --git a/spaces/AIGC-Audio/Make_An_Audio/ldm/models/diffusion/ddim.py b/spaces/AIGC-Audio/Make_An_Audio/ldm/models/diffusion/ddim.py deleted file mode 100644 index 6d6e9d396c799ce386fd1fa4262f46ac8fceaacf..0000000000000000000000000000000000000000 --- a/spaces/AIGC-Audio/Make_An_Audio/ldm/models/diffusion/ddim.py +++ /dev/null @@ -1,262 +0,0 @@ -"""SAMPLING ONLY.""" - -import torch -import numpy as np -from tqdm import tqdm -from functools import partial - -from ldm.modules.diffusionmodules.util import make_ddim_sampling_parameters, make_ddim_timesteps, noise_like, \ - extract_into_tensor - - -class DDIMSampler(object): - def __init__(self, model, schedule="linear", **kwargs): - super().__init__() - self.model = model - self.device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu") - self.ddpm_num_timesteps = model.num_timesteps - self.schedule = schedule - - def register_buffer(self, name, attr): - if type(attr) == torch.Tensor: - # if attr.device != torch.device("cuda"): - # attr = attr.to(torch.device("cuda")) - attr = attr.to(self.device) - setattr(self, name, attr) - - def make_schedule(self, ddim_num_steps, ddim_discretize="uniform", ddim_eta=0., verbose=True): - self.ddim_timesteps = make_ddim_timesteps(ddim_discr_method=ddim_discretize, num_ddim_timesteps=ddim_num_steps, - num_ddpm_timesteps=self.ddpm_num_timesteps,verbose=verbose) - alphas_cumprod = self.model.alphas_cumprod - assert alphas_cumprod.shape[0] == self.ddpm_num_timesteps, 'alphas have to be defined for each timestep' - to_torch = lambda x: x.clone().detach().to(torch.float32).to(self.model.device) - - self.register_buffer('betas', to_torch(self.model.betas)) - self.register_buffer('alphas_cumprod', to_torch(alphas_cumprod)) - self.register_buffer('alphas_cumprod_prev', to_torch(self.model.alphas_cumprod_prev)) - - # calculations for diffusion q(x_t | x_{t-1}) and others - self.register_buffer('sqrt_alphas_cumprod', to_torch(np.sqrt(alphas_cumprod.cpu()))) - self.register_buffer('sqrt_one_minus_alphas_cumprod', to_torch(np.sqrt(1. - alphas_cumprod.cpu()))) - self.register_buffer('log_one_minus_alphas_cumprod', to_torch(np.log(1. - alphas_cumprod.cpu()))) - self.register_buffer('sqrt_recip_alphas_cumprod', to_torch(np.sqrt(1. / alphas_cumprod.cpu()))) - self.register_buffer('sqrt_recipm1_alphas_cumprod', to_torch(np.sqrt(1. / alphas_cumprod.cpu() - 1))) - - # ddim sampling parameters - ddim_sigmas, ddim_alphas, ddim_alphas_prev = make_ddim_sampling_parameters(alphacums=alphas_cumprod.cpu(), - ddim_timesteps=self.ddim_timesteps, - eta=ddim_eta,verbose=verbose) - self.register_buffer('ddim_sigmas', ddim_sigmas) - self.register_buffer('ddim_alphas', ddim_alphas) - self.register_buffer('ddim_alphas_prev', ddim_alphas_prev) - self.register_buffer('ddim_sqrt_one_minus_alphas', np.sqrt(1. - ddim_alphas)) - sigmas_for_original_sampling_steps = ddim_eta * torch.sqrt( - (1 - self.alphas_cumprod_prev) / (1 - self.alphas_cumprod) * ( - 1 - self.alphas_cumprod / self.alphas_cumprod_prev)) - self.register_buffer('ddim_sigmas_for_original_num_steps', sigmas_for_original_sampling_steps) - - @torch.no_grad() - def sample(self, - S, - batch_size, - shape, - conditioning=None, - callback=None, - normals_sequence=None, - img_callback=None, - quantize_x0=False, - eta=0., - mask=None, - x0=None, - temperature=1., - noise_dropout=0., - score_corrector=None, - corrector_kwargs=None, - verbose=True, - x_T=None, - log_every_t=100, - unconditional_guidance_scale=1., - unconditional_conditioning=None, - # this has to come in the same format as the conditioning, # e.g. as encoded tokens, ... - **kwargs - ): - if conditioning is not None: - if isinstance(conditioning, dict): - ctmp = conditioning[list(conditioning.keys())[0]] - while isinstance(ctmp, list): ctmp = ctmp[0] - cbs = ctmp.shape[0] - if cbs != batch_size: - print(f"Warning: Got {cbs} conditionings but batch-size is {batch_size}") - else: - if conditioning.shape[0] != batch_size: - print(f"Warning: Got {conditioning.shape[0]} conditionings but batch-size is {batch_size}") - - self.make_schedule(ddim_num_steps=S, ddim_eta=eta, verbose=verbose) - # sampling - C, H, W = shape - size = (batch_size, C, H, W) - # print(f'Data shape for DDIM sampling is {size}, eta {eta}') - - samples, intermediates = self.ddim_sampling(conditioning, size, - callback=callback, - img_callback=img_callback, - quantize_denoised=quantize_x0, - mask=mask, x0=x0, - ddim_use_original_steps=False, - noise_dropout=noise_dropout, - temperature=temperature, - score_corrector=score_corrector, - corrector_kwargs=corrector_kwargs, - x_T=x_T, - log_every_t=log_every_t, - unconditional_guidance_scale=unconditional_guidance_scale, - unconditional_conditioning=unconditional_conditioning, - ) - return samples, intermediates - - @torch.no_grad() - def ddim_sampling(self, cond, shape, - x_T=None, ddim_use_original_steps=False, - callback=None, timesteps=None, quantize_denoised=False, - mask=None, x0=None, img_callback=None, log_every_t=100, - temperature=1., noise_dropout=0., score_corrector=None, corrector_kwargs=None, - unconditional_guidance_scale=1., unconditional_conditioning=None,): - device = self.model.betas.device - b = shape[0] - if x_T is None: - img = torch.randn(shape, device=device) - else: - img = x_T - - if timesteps is None: - timesteps = self.ddpm_num_timesteps if ddim_use_original_steps else self.ddim_timesteps - elif timesteps is not None and not ddim_use_original_steps: - subset_end = int(min(timesteps / self.ddim_timesteps.shape[0], 1) * self.ddim_timesteps.shape[0]) - 1 - timesteps = self.ddim_timesteps[:subset_end] - - intermediates = {'x_inter': [img], 'pred_x0': [img]} - time_range = reversed(range(0,timesteps)) if ddim_use_original_steps else np.flip(timesteps) - total_steps = timesteps if ddim_use_original_steps else timesteps.shape[0] - - # iterator = tqdm(time_range, desc='DDIM Sampler', total=total_steps) - - for i, step in enumerate(time_range): - index = total_steps - i - 1 - ts = torch.full((b,), step, device=device, dtype=torch.long) - - if mask is not None: - assert x0 is not None - img_orig = self.model.q_sample(x0, ts) # TODO: deterministic forward pass? - img = img_orig * mask + (1. - mask) * img - - outs = self.p_sample_ddim(img, cond, ts, index=index, use_original_steps=ddim_use_original_steps, - quantize_denoised=quantize_denoised, temperature=temperature, - noise_dropout=noise_dropout, score_corrector=score_corrector, - corrector_kwargs=corrector_kwargs, - unconditional_guidance_scale=unconditional_guidance_scale, - unconditional_conditioning=unconditional_conditioning) - img, pred_x0 = outs - if callback: callback(i) - if img_callback: img_callback(pred_x0, i) - - if index % log_every_t == 0 or index == total_steps - 1: - intermediates['x_inter'].append(img) - intermediates['pred_x0'].append(pred_x0) - - return img, intermediates - - @torch.no_grad() - def p_sample_ddim(self, x, c, t, index, repeat_noise=False, use_original_steps=False, quantize_denoised=False, - temperature=1., noise_dropout=0., score_corrector=None, corrector_kwargs=None, - unconditional_guidance_scale=1., unconditional_conditioning=None): - b, *_, device = *x.shape, x.device - - if unconditional_conditioning is None or unconditional_guidance_scale == 1.: - e_t = self.model.apply_model(x, t, c) - else: - x_in = torch.cat([x] * 2) - t_in = torch.cat([t] * 2) - if isinstance(c, dict): - assert isinstance(unconditional_conditioning, dict) - c_in = dict() - for k in c: - if isinstance(c[k], list): - c_in[k] = [torch.cat([ - unconditional_conditioning[k][i], - c[k][i]]) for i in range(len(c[k]))] - else: - c_in[k] = torch.cat([ - unconditional_conditioning[k], - c[k]]) - elif isinstance(c, list): - c_in = list() - assert isinstance(unconditional_conditioning, list) - for i in range(len(c)): - c_in.append(torch.cat([unconditional_conditioning[i], c[i]])) - else: - c_in = torch.cat([unconditional_conditioning, c])# c/uc shape [b,seq_len=77,dim=1024],c_in shape [b*2,seq_len,dim] - e_t_uncond, e_t = self.model.apply_model(x_in, t_in, c_in).chunk(2) - e_t = e_t_uncond + unconditional_guidance_scale * (e_t - e_t_uncond) - - if score_corrector is not None: - assert self.model.parameterization == "eps" - e_t = score_corrector.modify_score(self.model, e_t, x, t, c, **corrector_kwargs) - - alphas = self.model.alphas_cumprod if use_original_steps else self.ddim_alphas - alphas_prev = self.model.alphas_cumprod_prev if use_original_steps else self.ddim_alphas_prev - sqrt_one_minus_alphas = self.model.sqrt_one_minus_alphas_cumprod if use_original_steps else self.ddim_sqrt_one_minus_alphas - sigmas = self.model.ddim_sigmas_for_original_num_steps if use_original_steps else self.ddim_sigmas - # select parameters corresponding to the currently considered timestep - a_t = torch.full((b, 1, 1, 1), alphas[index], device=device) - a_prev = torch.full((b, 1, 1, 1), alphas_prev[index], device=device) - sigma_t = torch.full((b, 1, 1, 1), sigmas[index], device=device) - sqrt_one_minus_at = torch.full((b, 1, 1, 1), sqrt_one_minus_alphas[index],device=device) - - # current prediction for x_0 - pred_x0 = (x - sqrt_one_minus_at * e_t) / a_t.sqrt() - if quantize_denoised: - pred_x0, _, *_ = self.model.first_stage_model.quantize(pred_x0) - # direction pointing to x_t - dir_xt = (1. - a_prev - sigma_t**2).sqrt() * e_t - noise = sigma_t * noise_like(x.shape, device, repeat_noise) * temperature - if noise_dropout > 0.: - noise = torch.nn.functional.dropout(noise, p=noise_dropout) - x_prev = a_prev.sqrt() * pred_x0 + dir_xt + noise - return x_prev, pred_x0 - - @torch.no_grad() - def stochastic_encode(self, x0, t, use_original_steps=False, noise=None): - # fast, but does not allow for exact reconstruction - # t serves as an index to gather the correct alphas - if use_original_steps: - sqrt_alphas_cumprod = self.sqrt_alphas_cumprod - sqrt_one_minus_alphas_cumprod = self.sqrt_one_minus_alphas_cumprod - else: - sqrt_alphas_cumprod = torch.sqrt(self.ddim_alphas) - sqrt_one_minus_alphas_cumprod = self.ddim_sqrt_one_minus_alphas - - if noise is None: - noise = torch.randn_like(x0) - return (extract_into_tensor(sqrt_alphas_cumprod, t, x0.shape) * x0 + - extract_into_tensor(sqrt_one_minus_alphas_cumprod, t, x0.shape) * noise) - - @torch.no_grad() - def decode(self, x_latent, cond, t_start, unconditional_guidance_scale=1.0, unconditional_conditioning=None, - use_original_steps=False): - - timesteps = np.arange(self.ddpm_num_timesteps) if use_original_steps else self.ddim_timesteps - timesteps = timesteps[:t_start] - - time_range = np.flip(timesteps) - total_steps = timesteps.shape[0] - # print(f"Running DDIM Sampling with {total_steps} timesteps") - - # iterator = tqdm(time_range, desc='Decoding image', total=total_steps) - x_dec = x_latent - for i, step in enumerate(time_range): - index = total_steps - i - 1 - ts = torch.full((x_latent.shape[0],), step, device=x_latent.device, dtype=torch.long) - x_dec, _ = self.p_sample_ddim(x_dec, cond, ts, index=index, use_original_steps=use_original_steps, - unconditional_guidance_scale=unconditional_guidance_scale, - unconditional_conditioning=unconditional_conditioning) - return x_dec \ No newline at end of file diff --git a/spaces/AIQuest/lungCancerVgg19/README.md b/spaces/AIQuest/lungCancerVgg19/README.md deleted file mode 100644 index 7dbe1e5e1ec6ad73664cd5639e38c1ba2726f5dd..0000000000000000000000000000000000000000 --- a/spaces/AIQuest/lungCancerVgg19/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: LungCancerVgg19 -emoji: 👀 -colorFrom: indigo -colorTo: green -sdk: gradio -sdk_version: 3.45.2 -app_file: app.py -pinned: false -license: gpl ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/AK-12/llama-gradio-chat/README.md b/spaces/AK-12/llama-gradio-chat/README.md deleted file mode 100644 index f3c9376948224673e5b2205fc9b5619f1e333761..0000000000000000000000000000000000000000 --- a/spaces/AK-12/llama-gradio-chat/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Llama Gradio Chat -emoji: 👁 -colorFrom: purple -colorTo: pink -sdk: gradio -sdk_version: 3.42.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/ALSv/midjourney-v4-1/README.md b/spaces/ALSv/midjourney-v4-1/README.md deleted file mode 100644 index 1ce7431777f888a4fef0cd42dc38b5be2fe22ddf..0000000000000000000000000000000000000000 --- a/spaces/ALSv/midjourney-v4-1/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Anything Midjourney V4 1 -emoji: 🚀 -colorFrom: yellow -colorTo: indigo -sdk: gradio -sdk_version: 3.21.0 -app_file: app.py -pinned: false -duplicated_from: lu2000/anything-midjourney-v4-1 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/AP123/dreamgaussian/mesh_renderer.py b/spaces/AP123/dreamgaussian/mesh_renderer.py deleted file mode 100644 index 43780d07f24783e11a468518c5951063c2638524..0000000000000000000000000000000000000000 --- a/spaces/AP123/dreamgaussian/mesh_renderer.py +++ /dev/null @@ -1,154 +0,0 @@ -import os -import math -import cv2 -import trimesh -import numpy as np - -import torch -import torch.nn as nn -import torch.nn.functional as F - -import nvdiffrast.torch as dr -from mesh import Mesh, safe_normalize - -def scale_img_nhwc(x, size, mag='bilinear', min='bilinear'): - assert (x.shape[1] >= size[0] and x.shape[2] >= size[1]) or (x.shape[1] < size[0] and x.shape[2] < size[1]), "Trying to magnify image in one dimension and minify in the other" - y = x.permute(0, 3, 1, 2) # NHWC -> NCHW - if x.shape[1] > size[0] and x.shape[2] > size[1]: # Minification, previous size was bigger - y = torch.nn.functional.interpolate(y, size, mode=min) - else: # Magnification - if mag == 'bilinear' or mag == 'bicubic': - y = torch.nn.functional.interpolate(y, size, mode=mag, align_corners=True) - else: - y = torch.nn.functional.interpolate(y, size, mode=mag) - return y.permute(0, 2, 3, 1).contiguous() # NCHW -> NHWC - -def scale_img_hwc(x, size, mag='bilinear', min='bilinear'): - return scale_img_nhwc(x[None, ...], size, mag, min)[0] - -def scale_img_nhw(x, size, mag='bilinear', min='bilinear'): - return scale_img_nhwc(x[..., None], size, mag, min)[..., 0] - -def scale_img_hw(x, size, mag='bilinear', min='bilinear'): - return scale_img_nhwc(x[None, ..., None], size, mag, min)[0, ..., 0] - -def trunc_rev_sigmoid(x, eps=1e-6): - x = x.clamp(eps, 1 - eps) - return torch.log(x / (1 - x)) - -def make_divisible(x, m=8): - return int(math.ceil(x / m) * m) - -class Renderer(nn.Module): - def __init__(self, opt): - - super().__init__() - - self.opt = opt - - self.mesh = Mesh.load(self.opt.mesh, resize=False) - - if not self.opt.gui or os.name == 'nt': - self.glctx = dr.RasterizeGLContext() - else: - self.glctx = dr.RasterizeCudaContext() - - # extract trainable parameters - self.v_offsets = nn.Parameter(torch.zeros_like(self.mesh.v)) - self.raw_albedo = nn.Parameter(trunc_rev_sigmoid(self.mesh.albedo)) - - - def get_params(self): - - params = [ - {'params': self.raw_albedo, 'lr': self.opt.texture_lr}, - ] - - if self.opt.train_geo: - params.append({'params': self.v_offsets, 'lr': self.opt.geom_lr}) - - return params - - @torch.no_grad() - def export_mesh(self, save_path): - self.mesh.v = (self.mesh.v + self.v_offsets).detach() - self.mesh.albedo = torch.sigmoid(self.raw_albedo.detach()) - self.mesh.write(save_path) - - - def render(self, pose, proj, h0, w0, ssaa=1, bg_color=1, texture_filter='linear-mipmap-linear'): - - # do super-sampling - if ssaa != 1: - h = make_divisible(h0 * ssaa, 8) - w = make_divisible(w0 * ssaa, 8) - else: - h, w = h0, w0 - - results = {} - - # get v - if self.opt.train_geo: - v = self.mesh.v + self.v_offsets # [N, 3] - else: - v = self.mesh.v - - pose = torch.from_numpy(pose.astype(np.float32)).to(v.device) - proj = torch.from_numpy(proj.astype(np.float32)).to(v.device) - - # get v_clip and render rgb - v_cam = torch.matmul(F.pad(v, pad=(0, 1), mode='constant', value=1.0), torch.inverse(pose).T).float().unsqueeze(0) - v_clip = v_cam @ proj.T - - rast, rast_db = dr.rasterize(self.glctx, v_clip, self.mesh.f, (h, w)) - - alpha = (rast[0, ..., 3:] > 0).float() - depth, _ = dr.interpolate(-v_cam[..., [2]], rast, self.mesh.f) # [1, H, W, 1] - depth = depth.squeeze(0) # [H, W, 1] - - texc, texc_db = dr.interpolate(self.mesh.vt.unsqueeze(0).contiguous(), rast, self.mesh.ft, rast_db=rast_db, diff_attrs='all') - albedo = dr.texture(self.raw_albedo.unsqueeze(0), texc, uv_da=texc_db, filter_mode=texture_filter) # [1, H, W, 3] - albedo = torch.sigmoid(albedo) - # get vn and render normal - if self.opt.train_geo: - i0, i1, i2 = self.mesh.f[:, 0].long(), self.mesh.f[:, 1].long(), self.mesh.f[:, 2].long() - v0, v1, v2 = v[i0, :], v[i1, :], v[i2, :] - - face_normals = torch.cross(v1 - v0, v2 - v0) - face_normals = safe_normalize(face_normals) - - vn = torch.zeros_like(v) - vn.scatter_add_(0, i0[:, None].repeat(1,3), face_normals) - vn.scatter_add_(0, i1[:, None].repeat(1,3), face_normals) - vn.scatter_add_(0, i2[:, None].repeat(1,3), face_normals) - - vn = torch.where(torch.sum(vn * vn, -1, keepdim=True) > 1e-20, vn, torch.tensor([0.0, 0.0, 1.0], dtype=torch.float32, device=vn.device)) - else: - vn = self.mesh.vn - - normal, _ = dr.interpolate(vn.unsqueeze(0).contiguous(), rast, self.mesh.fn) - normal = safe_normalize(normal[0]) - - # rotated normal (where [0, 0, 1] always faces camera) - rot_normal = normal @ pose[:3, :3] - viewcos = rot_normal[..., [2]] - - # antialias - albedo = dr.antialias(albedo, rast, v_clip, self.mesh.f).squeeze(0) # [H, W, 3] - albedo = alpha * albedo + (1 - alpha) * bg_color - - # ssaa - if ssaa != 1: - albedo = scale_img_hwc(albedo, (h0, w0)) - alpha = scale_img_hwc(alpha, (h0, w0)) - depth = scale_img_hwc(depth, (h0, w0)) - normal = scale_img_hwc(normal, (h0, w0)) - viewcos = scale_img_hwc(viewcos, (h0, w0)) - - results['image'] = albedo.clamp(0, 1) - results['alpha'] = alpha - results['depth'] = depth - results['normal'] = (normal + 1) / 2 - results['viewcos'] = viewcos - - return results \ No newline at end of file diff --git a/spaces/Abhilashvj/planogram-compliance/models/yolo.py b/spaces/Abhilashvj/planogram-compliance/models/yolo.py deleted file mode 100644 index e9c60553b5a2b29c436cf10c9a0a650ce9bb45da..0000000000000000000000000000000000000000 --- a/spaces/Abhilashvj/planogram-compliance/models/yolo.py +++ /dev/null @@ -1,569 +0,0 @@ -# YOLOv5 🚀 by Ultralytics, GPL-3.0 license -""" -YOLO-specific modules - -Usage: - $ python models/yolo.py --cfg yolov5s.yaml -""" - -import argparse -import contextlib -import os -import platform -import sys -from copy import deepcopy -from pathlib import Path - -FILE = Path(__file__).resolve() -ROOT = FILE.parents[1] # YOLOv5 root directory -if str(ROOT) not in sys.path: - sys.path.append(str(ROOT)) # add ROOT to PATH -if platform.system() != "Windows": - ROOT = Path(os.path.relpath(ROOT, Path.cwd())) # relative - -from models.common import * -from models.experimental import * -from utils.autoanchor import check_anchor_order -from utils.general import LOGGER, check_version, check_yaml, make_divisible, print_args -from utils.plots import feature_visualization -from utils.torch_utils import ( - fuse_conv_and_bn, - initialize_weights, - model_info, - profile, - scale_img, - select_device, - time_sync, -) - -try: - import thop # for FLOPs computation -except ImportError: - thop = None - - -class Detect(nn.Module): - # YOLOv5 Detect head for detection models - stride = None # strides computed during build - dynamic = False # force grid reconstruction - export = False # export mode - - def __init__( - self, nc=80, anchors=(), ch=(), inplace=True - ): # detection layer - super().__init__() - self.nc = nc # number of classes - self.no = nc + 5 # number of outputs per anchor - self.nl = len(anchors) # number of detection layers - self.na = len(anchors[0]) // 2 # number of anchors - self.grid = [torch.empty(0) for _ in range(self.nl)] # init grid - self.anchor_grid = [ - torch.empty(0) for _ in range(self.nl) - ] # init anchor grid - self.register_buffer( - "anchors", torch.tensor(anchors).float().view(self.nl, -1, 2) - ) # shape(nl,na,2) - self.m = nn.ModuleList( - nn.Conv2d(x, self.no * self.na, 1) for x in ch - ) # output conv - self.inplace = inplace # use inplace ops (e.g. slice assignment) - - def forward(self, x): - z = [] # inference output - for i in range(self.nl): - x[i] = self.m[i](x[i]) # conv - bs, _, ny, nx = x[i].shape # x(bs,255,20,20) to x(bs,3,20,20,85) - x[i] = ( - x[i] - .view(bs, self.na, self.no, ny, nx) - .permute(0, 1, 3, 4, 2) - .contiguous() - ) - - if not self.training: # inference - if self.dynamic or self.grid[i].shape[2:4] != x[i].shape[2:4]: - self.grid[i], self.anchor_grid[i] = self._make_grid( - nx, ny, i - ) - - if isinstance(self, Segment): # (boxes + masks) - xy, wh, conf, mask = x[i].split( - (2, 2, self.nc + 1, self.no - self.nc - 5), 4 - ) - xy = (xy.sigmoid() * 2 + self.grid[i]) * self.stride[ - i - ] # xy - wh = (wh.sigmoid() * 2) ** 2 * self.anchor_grid[i] # wh - y = torch.cat((xy, wh, conf.sigmoid(), mask), 4) - else: # Detect (boxes only) - xy, wh, conf = x[i].sigmoid().split((2, 2, self.nc + 1), 4) - xy = (xy * 2 + self.grid[i]) * self.stride[i] # xy - wh = (wh * 2) ** 2 * self.anchor_grid[i] # wh - y = torch.cat((xy, wh, conf), 4) - z.append(y.view(bs, self.na * nx * ny, self.no)) - - return ( - x - if self.training - else (torch.cat(z, 1),) - if self.export - else (torch.cat(z, 1), x) - ) - - def _make_grid( - self, - nx=20, - ny=20, - i=0, - torch_1_10=check_version(torch.__version__, "1.10.0"), - ): - d = self.anchors[i].device - t = self.anchors[i].dtype - shape = 1, self.na, ny, nx, 2 # grid shape - y, x = torch.arange(ny, device=d, dtype=t), torch.arange( - nx, device=d, dtype=t - ) - yv, xv = ( - torch.meshgrid(y, x, indexing="ij") - if torch_1_10 - else torch.meshgrid(y, x) - ) # torch>=0.7 compatibility - grid = ( - torch.stack((xv, yv), 2).expand(shape) - 0.5 - ) # add grid offset, i.e. y = 2.0 * x - 0.5 - anchor_grid = ( - (self.anchors[i] * self.stride[i]) - .view((1, self.na, 1, 1, 2)) - .expand(shape) - ) - return grid, anchor_grid - - -class Segment(Detect): - # YOLOv5 Segment head for segmentation models - def __init__(self, nc=80, anchors=(), nm=32, npr=256, ch=(), inplace=True): - super().__init__(nc, anchors, ch, inplace) - self.nm = nm # number of masks - self.npr = npr # number of protos - self.no = 5 + nc + self.nm # number of outputs per anchor - self.m = nn.ModuleList( - nn.Conv2d(x, self.no * self.na, 1) for x in ch - ) # output conv - self.proto = Proto(ch[0], self.npr, self.nm) # protos - self.detect = Detect.forward - - def forward(self, x): - p = self.proto(x[0]) - x = self.detect(self, x) - return ( - (x, p) - if self.training - else (x[0], p) - if self.export - else (x[0], p, x[1]) - ) - - -class BaseModel(nn.Module): - # YOLOv5 base model - def forward(self, x, profile=False, visualize=False): - return self._forward_once( - x, profile, visualize - ) # single-scale inference, train - - def _forward_once(self, x, profile=False, visualize=False): - y, dt = [], [] # outputs - for m in self.model: - if m.f != -1: # if not from previous layer - x = ( - y[m.f] - if isinstance(m.f, int) - else [x if j == -1 else y[j] for j in m.f] - ) # from earlier layers - if profile: - self._profile_one_layer(m, x, dt) - x = m(x) # run - y.append(x if m.i in self.save else None) # save output - if visualize: - feature_visualization(x, m.type, m.i, save_dir=visualize) - return x - - def _profile_one_layer(self, m, x, dt): - c = m == self.model[-1] # is final layer, copy input as inplace fix - o = ( - thop.profile(m, inputs=(x.copy() if c else x,), verbose=False)[0] - / 1e9 - * 2 - if thop - else 0 - ) # FLOPs - t = time_sync() - for _ in range(10): - m(x.copy() if c else x) - dt.append((time_sync() - t) * 100) - if m == self.model[0]: - LOGGER.info( - f"{'time (ms)':>10s} {'GFLOPs':>10s} {'params':>10s} module" - ) - LOGGER.info(f"{dt[-1]:10.2f} {o:10.2f} {m.np:10.0f} {m.type}") - if c: - LOGGER.info(f"{sum(dt):10.2f} {'-':>10s} {'-':>10s} Total") - - def fuse(self): # fuse model Conv2d() + BatchNorm2d() layers - LOGGER.info("Fusing layers... ") - for m in self.model.modules(): - if isinstance(m, (Conv, DWConv)) and hasattr(m, "bn"): - m.conv = fuse_conv_and_bn(m.conv, m.bn) # update conv - delattr(m, "bn") # remove batchnorm - m.forward = m.forward_fuse # update forward - self.info() - return self - - def info(self, verbose=False, img_size=640): # print model information - model_info(self, verbose, img_size) - - def _apply(self, fn): - # Apply to(), cpu(), cuda(), half() to model tensors that are not parameters or registered buffers - self = super()._apply(fn) - m = self.model[-1] # Detect() - if isinstance(m, (Detect, Segment)): - m.stride = fn(m.stride) - m.grid = list(map(fn, m.grid)) - if isinstance(m.anchor_grid, list): - m.anchor_grid = list(map(fn, m.anchor_grid)) - return self - - -class DetectionModel(BaseModel): - # YOLOv5 detection model - def __init__( - self, cfg="yolov5s.yaml", ch=3, nc=None, anchors=None - ): # model, input channels, number of classes - super().__init__() - if isinstance(cfg, dict): - self.yaml = cfg # model dict - else: # is *.yaml - import yaml # for torch hub - - self.yaml_file = Path(cfg).name - with open(cfg, encoding="ascii", errors="ignore") as f: - self.yaml = yaml.safe_load(f) # model dict - - # Define model - ch = self.yaml["ch"] = self.yaml.get("ch", ch) # input channels - if nc and nc != self.yaml["nc"]: - LOGGER.info( - f"Overriding model.yaml nc={self.yaml['nc']} with nc={nc}" - ) - self.yaml["nc"] = nc # override yaml value - if anchors: - LOGGER.info( - f"Overriding model.yaml anchors with anchors={anchors}" - ) - self.yaml["anchors"] = round(anchors) # override yaml value - self.model, self.save = parse_model( - deepcopy(self.yaml), ch=[ch] - ) # model, savelist - self.names = [str(i) for i in range(self.yaml["nc"])] # default names - self.inplace = self.yaml.get("inplace", True) - - # Build strides, anchors - m = self.model[-1] # Detect() - if isinstance(m, (Detect, Segment)): - s = 256 # 2x min stride - m.inplace = self.inplace - forward = ( - lambda x: self.forward(x)[0] - if isinstance(m, Segment) - else self.forward(x) - ) - m.stride = torch.tensor( - [s / x.shape[-2] for x in forward(torch.zeros(1, ch, s, s))] - ) # forward - check_anchor_order(m) - m.anchors /= m.stride.view(-1, 1, 1) - self.stride = m.stride - self._initialize_biases() # only run once - - # Init weights, biases - initialize_weights(self) - self.info() - LOGGER.info("") - - def forward(self, x, augment=False, profile=False, visualize=False): - if augment: - return self._forward_augment(x) # augmented inference, None - return self._forward_once( - x, profile, visualize - ) # single-scale inference, train - - def _forward_augment(self, x): - img_size = x.shape[-2:] # height, width - s = [1, 0.83, 0.67] # scales - f = [None, 3, None] # flips (2-ud, 3-lr) - y = [] # outputs - for si, fi in zip(s, f): - xi = scale_img( - x.flip(fi) if fi else x, si, gs=int(self.stride.max()) - ) - yi = self._forward_once(xi)[0] # forward - # cv2.imwrite(f'img_{si}.jpg', 255 * xi[0].cpu().numpy().transpose((1, 2, 0))[:, :, ::-1]) # save - yi = self._descale_pred(yi, fi, si, img_size) - y.append(yi) - y = self._clip_augmented(y) # clip augmented tails - return torch.cat(y, 1), None # augmented inference, train - - def _descale_pred(self, p, flips, scale, img_size): - # de-scale predictions following augmented inference (inverse operation) - if self.inplace: - p[..., :4] /= scale # de-scale - if flips == 2: - p[..., 1] = img_size[0] - p[..., 1] # de-flip ud - elif flips == 3: - p[..., 0] = img_size[1] - p[..., 0] # de-flip lr - else: - x, y, wh = ( - p[..., 0:1] / scale, - p[..., 1:2] / scale, - p[..., 2:4] / scale, - ) # de-scale - if flips == 2: - y = img_size[0] - y # de-flip ud - elif flips == 3: - x = img_size[1] - x # de-flip lr - p = torch.cat((x, y, wh, p[..., 4:]), -1) - return p - - def _clip_augmented(self, y): - # Clip YOLOv5 augmented inference tails - nl = self.model[-1].nl # number of detection layers (P3-P5) - g = sum(4**x for x in range(nl)) # grid points - e = 1 # exclude layer count - i = (y[0].shape[1] // g) * sum(4**x for x in range(e)) # indices - y[0] = y[0][:, :-i] # large - i = (y[-1].shape[1] // g) * sum( - 4 ** (nl - 1 - x) for x in range(e) - ) # indices - y[-1] = y[-1][:, i:] # small - return y - - def _initialize_biases( - self, cf=None - ): # initialize biases into Detect(), cf is class frequency - # https://arxiv.org/abs/1708.02002 section 3.3 - # cf = torch.bincount(torch.tensor(np.concatenate(dataset.labels, 0)[:, 0]).long(), minlength=nc) + 1. - m = self.model[-1] # Detect() module - for mi, s in zip(m.m, m.stride): # from - b = mi.bias.view(m.na, -1) # conv.bias(255) to (3,85) - b.data[:, 4] += math.log( - 8 / (640 / s) ** 2 - ) # obj (8 objects per 640 image) - b.data[:, 5 : 5 + m.nc] += ( - math.log(0.6 / (m.nc - 0.99999)) - if cf is None - else torch.log(cf / cf.sum()) - ) # cls - mi.bias = torch.nn.Parameter(b.view(-1), requires_grad=True) - - -Model = ( - DetectionModel # retain YOLOv5 'Model' class for backwards compatibility -) - - -class SegmentationModel(DetectionModel): - # YOLOv5 segmentation model - def __init__(self, cfg="yolov5s-seg.yaml", ch=3, nc=None, anchors=None): - super().__init__(cfg, ch, nc, anchors) - - -class ClassificationModel(BaseModel): - # YOLOv5 classification model - def __init__( - self, cfg=None, model=None, nc=1000, cutoff=10 - ): # yaml, model, number of classes, cutoff index - super().__init__() - self._from_detection_model( - model, nc, cutoff - ) if model is not None else self._from_yaml(cfg) - - def _from_detection_model(self, model, nc=1000, cutoff=10): - # Create a YOLOv5 classification model from a YOLOv5 detection model - if isinstance(model, DetectMultiBackend): - model = model.model # unwrap DetectMultiBackend - model.model = model.model[:cutoff] # backbone - m = model.model[-1] # last layer - ch = ( - m.conv.in_channels - if hasattr(m, "conv") - else m.cv1.conv.in_channels - ) # ch into module - c = Classify(ch, nc) # Classify() - c.i, c.f, c.type = ( - m.i, - m.f, - "models.common.Classify", - ) # index, from, type - model.model[-1] = c # replace - self.model = model.model - self.stride = model.stride - self.save = [] - self.nc = nc - - def _from_yaml(self, cfg): - # Create a YOLOv5 classification model from a *.yaml file - self.model = None - - -def parse_model(d, ch): # model_dict, input_channels(3) - # Parse a YOLOv5 model.yaml dictionary - LOGGER.info( - f"\n{'':>3}{'from':>18}{'n':>3}{'params':>10} {'module':<40}{'arguments':<30}" - ) - anchors, nc, gd, gw, act = ( - d["anchors"], - d["nc"], - d["depth_multiple"], - d["width_multiple"], - d.get("activation"), - ) - if act: - Conv.default_act = eval( - act - ) # redefine default activation, i.e. Conv.default_act = nn.SiLU() - LOGGER.info(f"{colorstr('activation:')} {act}") # print - na = ( - (len(anchors[0]) // 2) if isinstance(anchors, list) else anchors - ) # number of anchors - no = na * (nc + 5) # number of outputs = anchors * (classes + 5) - - layers, save, c2 = [], [], ch[-1] # layers, savelist, ch out - for i, (f, n, m, args) in enumerate( - d["backbone"] + d["head"] - ): # from, number, module, args - m = eval(m) if isinstance(m, str) else m # eval strings - for j, a in enumerate(args): - with contextlib.suppress(NameError): - args[j] = eval(a) if isinstance(a, str) else a # eval strings - - n = n_ = max(round(n * gd), 1) if n > 1 else n # depth gain - if m in { - Conv, - GhostConv, - Bottleneck, - GhostBottleneck, - SPP, - SPPF, - DWConv, - MixConv2d, - Focus, - CrossConv, - BottleneckCSP, - C3, - C3TR, - C3SPP, - C3Ghost, - nn.ConvTranspose2d, - DWConvTranspose2d, - C3x, - }: - c1, c2 = ch[f], args[0] - if c2 != no: # if not output - c2 = make_divisible(c2 * gw, 8) - - args = [c1, c2, *args[1:]] - if m in {BottleneckCSP, C3, C3TR, C3Ghost, C3x}: - args.insert(2, n) # number of repeats - n = 1 - elif m is nn.BatchNorm2d: - args = [ch[f]] - elif m is Concat: - c2 = sum(ch[x] for x in f) - # TODO: channel, gw, gd - elif m in {Detect, Segment}: - args.append([ch[x] for x in f]) - if isinstance(args[1], int): # number of anchors - args[1] = [list(range(args[1] * 2))] * len(f) - if m is Segment: - args[3] = make_divisible(args[3] * gw, 8) - elif m is Contract: - c2 = ch[f] * args[0] ** 2 - elif m is Expand: - c2 = ch[f] // args[0] ** 2 - else: - c2 = ch[f] - - m_ = ( - nn.Sequential(*(m(*args) for _ in range(n))) if n > 1 else m(*args) - ) # module - t = str(m)[8:-2].replace("__main__.", "") # module type - np = sum(x.numel() for x in m_.parameters()) # number params - m_.i, m_.f, m_.type, m_.np = ( - i, - f, - t, - np, - ) # attach index, 'from' index, type, number params - LOGGER.info( - f"{i:>3}{str(f):>18}{n_:>3}{np:10.0f} {t:<40}{str(args):<30}" - ) # print - save.extend( - x % i for x in ([f] if isinstance(f, int) else f) if x != -1 - ) # append to savelist - layers.append(m_) - if i == 0: - ch = [] - ch.append(c2) - return nn.Sequential(*layers), sorted(save) - - -if __name__ == "__main__": - parser = argparse.ArgumentParser() - parser.add_argument( - "--cfg", type=str, default="yolov5s.yaml", help="model.yaml" - ) - parser.add_argument( - "--batch-size", - type=int, - default=1, - help="total batch size for all GPUs", - ) - parser.add_argument( - "--device", default="", help="cuda device, i.e. 0 or 0,1,2,3 or cpu" - ) - parser.add_argument( - "--profile", action="store_true", help="profile model speed" - ) - parser.add_argument( - "--line-profile", - action="store_true", - help="profile model speed layer by layer", - ) - parser.add_argument( - "--test", action="store_true", help="test all yolo*.yaml" - ) - opt = parser.parse_args() - opt.cfg = check_yaml(opt.cfg) # check YAML - print_args(vars(opt)) - device = select_device(opt.device) - - # Create model - im = torch.rand(opt.batch_size, 3, 640, 640).to(device) - model = Model(opt.cfg).to(device) - - # Options - if opt.line_profile: # profile layer by layer - model(im, profile=True) - - elif opt.profile: # profile forward-backward - results = profile(input=im, ops=[model], n=3) - - elif opt.test: # test all models - for cfg in Path(ROOT / "models").rglob("yolo*.yaml"): - try: - _ = Model(cfg) - except Exception as e: - print(f"Error in {cfg}: {e}") - - else: # report fused model summary - model.fuse() diff --git a/spaces/Adapter/T2I-Adapter/ldm/models/diffusion/dpm_solver/__init__.py b/spaces/Adapter/T2I-Adapter/ldm/models/diffusion/dpm_solver/__init__.py deleted file mode 100644 index 7427f38c07530afbab79154ea8aaf88c4bf70a08..0000000000000000000000000000000000000000 --- a/spaces/Adapter/T2I-Adapter/ldm/models/diffusion/dpm_solver/__init__.py +++ /dev/null @@ -1 +0,0 @@ -from .sampler import DPMSolverSampler \ No newline at end of file diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/anchor/Factory.d.ts b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/anchor/Factory.d.ts deleted file mode 100644 index 1d0955e69b2e994396aaad6fbf93545829691d0f..0000000000000000000000000000000000000000 --- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/anchor/Factory.d.ts +++ /dev/null @@ -1,7 +0,0 @@ -// import * as Phaser from 'phaser'; -import Anchor from "./Anchor"; - -export default function ( - gameObject: Phaser.GameObjects.GameObject, - config?: Anchor.IConfig -): Anchor; \ No newline at end of file diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/simplelabel/Factory.d.ts b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/simplelabel/Factory.d.ts deleted file mode 100644 index db02f57e2708c317cf60d752c95d2a7b3f166fbe..0000000000000000000000000000000000000000 --- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/simplelabel/Factory.d.ts +++ /dev/null @@ -1,6 +0,0 @@ -import SimpleLabel from './SimpleLabel'; - -export default function ( - config?: SimpleLabel.IConfig, - creators?: SimpleLabel.ICreatorsConfig, -): SimpleLabel; \ No newline at end of file diff --git a/spaces/Aki004/herta-so-vits/train.py b/spaces/Aki004/herta-so-vits/train.py deleted file mode 100644 index 410f19213866f388763f0c9ac21c24c09dd5dfea..0000000000000000000000000000000000000000 --- a/spaces/Aki004/herta-so-vits/train.py +++ /dev/null @@ -1,330 +0,0 @@ -import logging -import multiprocessing -import time - -logging.getLogger('matplotlib').setLevel(logging.WARNING) -logging.getLogger('numba').setLevel(logging.WARNING) - -import os -import json -import argparse -import itertools -import math -import torch -from torch import nn, optim -from torch.nn import functional as F -from torch.utils.data import DataLoader -from torch.utils.tensorboard import SummaryWriter -import torch.multiprocessing as mp -import torch.distributed as dist -from torch.nn.parallel import DistributedDataParallel as DDP -from torch.cuda.amp import autocast, GradScaler - -import modules.commons as commons -import utils -from data_utils import TextAudioSpeakerLoader, TextAudioCollate -from models import ( - SynthesizerTrn, - MultiPeriodDiscriminator, -) -from modules.losses import ( - kl_loss, - generator_loss, discriminator_loss, feature_loss -) - -from modules.mel_processing import mel_spectrogram_torch, spec_to_mel_torch - -torch.backends.cudnn.benchmark = True -global_step = 0 -start_time = time.time() - -# os.environ['TORCH_DISTRIBUTED_DEBUG'] = 'INFO' - - -def main(): - """Assume Single Node Multi GPUs Training Only""" - assert torch.cuda.is_available(), "CPU training is not allowed." - hps = utils.get_hparams() - - n_gpus = torch.cuda.device_count() - os.environ['MASTER_ADDR'] = 'localhost' - os.environ['MASTER_PORT'] = hps.train.port - - mp.spawn(run, nprocs=n_gpus, args=(n_gpus, hps,)) - - -def run(rank, n_gpus, hps): - global global_step - if rank == 0: - logger = utils.get_logger(hps.model_dir) - logger.info(hps) - utils.check_git_hash(hps.model_dir) - writer = SummaryWriter(log_dir=hps.model_dir) - writer_eval = SummaryWriter(log_dir=os.path.join(hps.model_dir, "eval")) - - # for pytorch on win, backend use gloo - dist.init_process_group(backend= 'gloo' if os.name == 'nt' else 'nccl', init_method='env://', world_size=n_gpus, rank=rank) - torch.manual_seed(hps.train.seed) - torch.cuda.set_device(rank) - collate_fn = TextAudioCollate() - all_in_mem = hps.train.all_in_mem # If you have enough memory, turn on this option to avoid disk IO and speed up training. - train_dataset = TextAudioSpeakerLoader(hps.data.training_files, hps, all_in_mem=all_in_mem) - num_workers = 5 if multiprocessing.cpu_count() > 4 else multiprocessing.cpu_count() - if all_in_mem: - num_workers = 0 - train_loader = DataLoader(train_dataset, num_workers=num_workers, shuffle=False, pin_memory=True, - batch_size=hps.train.batch_size, collate_fn=collate_fn) - if rank == 0: - eval_dataset = TextAudioSpeakerLoader(hps.data.validation_files, hps, all_in_mem=all_in_mem) - eval_loader = DataLoader(eval_dataset, num_workers=1, shuffle=False, - batch_size=1, pin_memory=False, - drop_last=False, collate_fn=collate_fn) - - net_g = SynthesizerTrn( - hps.data.filter_length // 2 + 1, - hps.train.segment_size // hps.data.hop_length, - **hps.model).cuda(rank) - net_d = MultiPeriodDiscriminator(hps.model.use_spectral_norm).cuda(rank) - optim_g = torch.optim.AdamW( - net_g.parameters(), - hps.train.learning_rate, - betas=hps.train.betas, - eps=hps.train.eps) - optim_d = torch.optim.AdamW( - net_d.parameters(), - hps.train.learning_rate, - betas=hps.train.betas, - eps=hps.train.eps) - net_g = DDP(net_g, device_ids=[rank]) # , find_unused_parameters=True) - net_d = DDP(net_d, device_ids=[rank]) - - skip_optimizer = False - try: - _, _, _, epoch_str = utils.load_checkpoint(utils.latest_checkpoint_path(hps.model_dir, "G_*.pth"), net_g, - optim_g, skip_optimizer) - _, _, _, epoch_str = utils.load_checkpoint(utils.latest_checkpoint_path(hps.model_dir, "D_*.pth"), net_d, - optim_d, skip_optimizer) - epoch_str = max(epoch_str, 1) - name=utils.latest_checkpoint_path(hps.model_dir, "D_*.pth") - global_step=int(name[name.rfind("_")+1:name.rfind(".")])+1 - #global_step = (epoch_str - 1) * len(train_loader) - except: - print("load old checkpoint failed...") - epoch_str = 1 - global_step = 0 - if skip_optimizer: - epoch_str = 1 - global_step = 0 - - warmup_epoch = hps.train.warmup_epochs - scheduler_g = torch.optim.lr_scheduler.ExponentialLR(optim_g, gamma=hps.train.lr_decay, last_epoch=epoch_str - 2) - scheduler_d = torch.optim.lr_scheduler.ExponentialLR(optim_d, gamma=hps.train.lr_decay, last_epoch=epoch_str - 2) - - scaler = GradScaler(enabled=hps.train.fp16_run) - - for epoch in range(epoch_str, hps.train.epochs + 1): - # update learning rate - if epoch > 1: - scheduler_g.step() - scheduler_d.step() - # set up warm-up learning rate - if epoch <= warmup_epoch: - for param_group in optim_g.param_groups: - param_group['lr'] = hps.train.learning_rate / warmup_epoch * epoch - for param_group in optim_d.param_groups: - param_group['lr'] = hps.train.learning_rate / warmup_epoch * epoch - # training - if rank == 0: - train_and_evaluate(rank, epoch, hps, [net_g, net_d], [optim_g, optim_d], [scheduler_g, scheduler_d], scaler, - [train_loader, eval_loader], logger, [writer, writer_eval]) - else: - train_and_evaluate(rank, epoch, hps, [net_g, net_d], [optim_g, optim_d], [scheduler_g, scheduler_d], scaler, - [train_loader, None], None, None) - - -def train_and_evaluate(rank, epoch, hps, nets, optims, schedulers, scaler, loaders, logger, writers): - net_g, net_d = nets - optim_g, optim_d = optims - scheduler_g, scheduler_d = schedulers - train_loader, eval_loader = loaders - if writers is not None: - writer, writer_eval = writers - - # train_loader.batch_sampler.set_epoch(epoch) - global global_step - - net_g.train() - net_d.train() - for batch_idx, items in enumerate(train_loader): - c, f0, spec, y, spk, lengths, uv = items - g = spk.cuda(rank, non_blocking=True) - spec, y = spec.cuda(rank, non_blocking=True), y.cuda(rank, non_blocking=True) - c = c.cuda(rank, non_blocking=True) - f0 = f0.cuda(rank, non_blocking=True) - uv = uv.cuda(rank, non_blocking=True) - lengths = lengths.cuda(rank, non_blocking=True) - mel = spec_to_mel_torch( - spec, - hps.data.filter_length, - hps.data.n_mel_channels, - hps.data.sampling_rate, - hps.data.mel_fmin, - hps.data.mel_fmax) - - with autocast(enabled=hps.train.fp16_run): - y_hat, ids_slice, z_mask, \ - (z, z_p, m_p, logs_p, m_q, logs_q), pred_lf0, norm_lf0, lf0 = net_g(c, f0, uv, spec, g=g, c_lengths=lengths, - spec_lengths=lengths) - - y_mel = commons.slice_segments(mel, ids_slice, hps.train.segment_size // hps.data.hop_length) - y_hat_mel = mel_spectrogram_torch( - y_hat.squeeze(1), - hps.data.filter_length, - hps.data.n_mel_channels, - hps.data.sampling_rate, - hps.data.hop_length, - hps.data.win_length, - hps.data.mel_fmin, - hps.data.mel_fmax - ) - y = commons.slice_segments(y, ids_slice * hps.data.hop_length, hps.train.segment_size) # slice - - # Discriminator - y_d_hat_r, y_d_hat_g, _, _ = net_d(y, y_hat.detach()) - - with autocast(enabled=False): - loss_disc, losses_disc_r, losses_disc_g = discriminator_loss(y_d_hat_r, y_d_hat_g) - loss_disc_all = loss_disc - - optim_d.zero_grad() - scaler.scale(loss_disc_all).backward() - scaler.unscale_(optim_d) - grad_norm_d = commons.clip_grad_value_(net_d.parameters(), None) - scaler.step(optim_d) - - with autocast(enabled=hps.train.fp16_run): - # Generator - y_d_hat_r, y_d_hat_g, fmap_r, fmap_g = net_d(y, y_hat) - with autocast(enabled=False): - loss_mel = F.l1_loss(y_mel, y_hat_mel) * hps.train.c_mel - loss_kl = kl_loss(z_p, logs_q, m_p, logs_p, z_mask) * hps.train.c_kl - loss_fm = feature_loss(fmap_r, fmap_g) - loss_gen, losses_gen = generator_loss(y_d_hat_g) - loss_lf0 = F.mse_loss(pred_lf0, lf0) - loss_gen_all = loss_gen + loss_fm + loss_mel + loss_kl + loss_lf0 - optim_g.zero_grad() - scaler.scale(loss_gen_all).backward() - scaler.unscale_(optim_g) - grad_norm_g = commons.clip_grad_value_(net_g.parameters(), None) - scaler.step(optim_g) - scaler.update() - - if rank == 0: - if global_step % hps.train.log_interval == 0: - lr = optim_g.param_groups[0]['lr'] - losses = [loss_disc, loss_gen, loss_fm, loss_mel, loss_kl] - reference_loss=0 - for i in losses: - reference_loss += i - logger.info('Train Epoch: {} [{:.0f}%]'.format( - epoch, - 100. * batch_idx / len(train_loader))) - logger.info(f"Losses: {[x.item() for x in losses]}, step: {global_step}, lr: {lr}, reference_loss: {reference_loss}") - - scalar_dict = {"loss/g/total": loss_gen_all, "loss/d/total": loss_disc_all, "learning_rate": lr, - "grad_norm_d": grad_norm_d, "grad_norm_g": grad_norm_g} - scalar_dict.update({"loss/g/fm": loss_fm, "loss/g/mel": loss_mel, "loss/g/kl": loss_kl, - "loss/g/lf0": loss_lf0}) - - # scalar_dict.update({"loss/g/{}".format(i): v for i, v in enumerate(losses_gen)}) - # scalar_dict.update({"loss/d_r/{}".format(i): v for i, v in enumerate(losses_disc_r)}) - # scalar_dict.update({"loss/d_g/{}".format(i): v for i, v in enumerate(losses_disc_g)}) - image_dict = { - "slice/mel_org": utils.plot_spectrogram_to_numpy(y_mel[0].data.cpu().numpy()), - "slice/mel_gen": utils.plot_spectrogram_to_numpy(y_hat_mel[0].data.cpu().numpy()), - "all/mel": utils.plot_spectrogram_to_numpy(mel[0].data.cpu().numpy()), - "all/lf0": utils.plot_data_to_numpy(lf0[0, 0, :].cpu().numpy(), - pred_lf0[0, 0, :].detach().cpu().numpy()), - "all/norm_lf0": utils.plot_data_to_numpy(lf0[0, 0, :].cpu().numpy(), - norm_lf0[0, 0, :].detach().cpu().numpy()) - } - - utils.summarize( - writer=writer, - global_step=global_step, - images=image_dict, - scalars=scalar_dict - ) - - if global_step % hps.train.eval_interval == 0: - evaluate(hps, net_g, eval_loader, writer_eval) - utils.save_checkpoint(net_g, optim_g, hps.train.learning_rate, epoch, - os.path.join(hps.model_dir, "G_{}.pth".format(global_step))) - utils.save_checkpoint(net_d, optim_d, hps.train.learning_rate, epoch, - os.path.join(hps.model_dir, "D_{}.pth".format(global_step))) - keep_ckpts = getattr(hps.train, 'keep_ckpts', 0) - if keep_ckpts > 0: - utils.clean_checkpoints(path_to_models=hps.model_dir, n_ckpts_to_keep=keep_ckpts, sort_by_time=True) - - global_step += 1 - - if rank == 0: - global start_time - now = time.time() - durtaion = format(now - start_time, '.2f') - logger.info(f'====> Epoch: {epoch}, cost {durtaion} s') - start_time = now - - -def evaluate(hps, generator, eval_loader, writer_eval): - generator.eval() - image_dict = {} - audio_dict = {} - with torch.no_grad(): - for batch_idx, items in enumerate(eval_loader): - c, f0, spec, y, spk, _, uv = items - g = spk[:1].cuda(0) - spec, y = spec[:1].cuda(0), y[:1].cuda(0) - c = c[:1].cuda(0) - f0 = f0[:1].cuda(0) - uv= uv[:1].cuda(0) - mel = spec_to_mel_torch( - spec, - hps.data.filter_length, - hps.data.n_mel_channels, - hps.data.sampling_rate, - hps.data.mel_fmin, - hps.data.mel_fmax) - y_hat = generator.module.infer(c, f0, uv, g=g) - - y_hat_mel = mel_spectrogram_torch( - y_hat.squeeze(1).float(), - hps.data.filter_length, - hps.data.n_mel_channels, - hps.data.sampling_rate, - hps.data.hop_length, - hps.data.win_length, - hps.data.mel_fmin, - hps.data.mel_fmax - ) - - audio_dict.update({ - f"gen/audio_{batch_idx}": y_hat[0], - f"gt/audio_{batch_idx}": y[0] - }) - image_dict.update({ - f"gen/mel": utils.plot_spectrogram_to_numpy(y_hat_mel[0].cpu().numpy()), - "gt/mel": utils.plot_spectrogram_to_numpy(mel[0].cpu().numpy()) - }) - utils.summarize( - writer=writer_eval, - global_step=global_step, - images=image_dict, - audios=audio_dict, - audio_sampling_rate=hps.data.sampling_rate - ) - generator.train() - - -if __name__ == "__main__": - main() diff --git a/spaces/AlexWang/lama/saicinpainting/utils.py b/spaces/AlexWang/lama/saicinpainting/utils.py deleted file mode 100644 index d0914320eab96e197ae379b94ea7eeb2fe5dfd79..0000000000000000000000000000000000000000 --- a/spaces/AlexWang/lama/saicinpainting/utils.py +++ /dev/null @@ -1,174 +0,0 @@ -import bisect -import functools -import logging -import numbers -import os -import signal -import sys -import traceback -import warnings - -import torch -from pytorch_lightning import seed_everything - -LOGGER = logging.getLogger(__name__) - - -def check_and_warn_input_range(tensor, min_value, max_value, name): - actual_min = tensor.min() - actual_max = tensor.max() - if actual_min < min_value or actual_max > max_value: - warnings.warn(f"{name} must be in {min_value}..{max_value} range, but it ranges {actual_min}..{actual_max}") - - -def sum_dict_with_prefix(target, cur_dict, prefix, default=0): - for k, v in cur_dict.items(): - target_key = prefix + k - target[target_key] = target.get(target_key, default) + v - - -def average_dicts(dict_list): - result = {} - norm = 1e-3 - for dct in dict_list: - sum_dict_with_prefix(result, dct, '') - norm += 1 - for k in list(result): - result[k] /= norm - return result - - -def add_prefix_to_keys(dct, prefix): - return {prefix + k: v for k, v in dct.items()} - - -def set_requires_grad(module, value): - for param in module.parameters(): - param.requires_grad = value - - -def flatten_dict(dct): - result = {} - for k, v in dct.items(): - if isinstance(k, tuple): - k = '_'.join(k) - if isinstance(v, dict): - for sub_k, sub_v in flatten_dict(v).items(): - result[f'{k}_{sub_k}'] = sub_v - else: - result[k] = v - return result - - -class LinearRamp: - def __init__(self, start_value=0, end_value=1, start_iter=-1, end_iter=0): - self.start_value = start_value - self.end_value = end_value - self.start_iter = start_iter - self.end_iter = end_iter - - def __call__(self, i): - if i < self.start_iter: - return self.start_value - if i >= self.end_iter: - return self.end_value - part = (i - self.start_iter) / (self.end_iter - self.start_iter) - return self.start_value * (1 - part) + self.end_value * part - - -class LadderRamp: - def __init__(self, start_iters, values): - self.start_iters = start_iters - self.values = values - assert len(values) == len(start_iters) + 1, (len(values), len(start_iters)) - - def __call__(self, i): - segment_i = bisect.bisect_right(self.start_iters, i) - return self.values[segment_i] - - -def get_ramp(kind='ladder', **kwargs): - if kind == 'linear': - return LinearRamp(**kwargs) - if kind == 'ladder': - return LadderRamp(**kwargs) - raise ValueError(f'Unexpected ramp kind: {kind}') - - -def print_traceback_handler(sig, frame): - LOGGER.warning(f'Received signal {sig}') - bt = ''.join(traceback.format_stack()) - LOGGER.warning(f'Requested stack trace:\n{bt}') - - -def register_debug_signal_handlers(sig=signal.SIGUSR1, handler=print_traceback_handler): - LOGGER.warning(f'Setting signal {sig} handler {handler}') - signal.signal(sig, handler) - - -def handle_deterministic_config(config): - seed = dict(config).get('seed', None) - if seed is None: - return False - - seed_everything(seed) - return True - - -def get_shape(t): - if torch.is_tensor(t): - return tuple(t.shape) - elif isinstance(t, dict): - return {n: get_shape(q) for n, q in t.items()} - elif isinstance(t, (list, tuple)): - return [get_shape(q) for q in t] - elif isinstance(t, numbers.Number): - return type(t) - else: - raise ValueError('unexpected type {}'.format(type(t))) - - -def get_has_ddp_rank(): - master_port = os.environ.get('MASTER_PORT', None) - node_rank = os.environ.get('NODE_RANK', None) - local_rank = os.environ.get('LOCAL_RANK', None) - world_size = os.environ.get('WORLD_SIZE', None) - has_rank = master_port is not None or node_rank is not None or local_rank is not None or world_size is not None - return has_rank - - -def handle_ddp_subprocess(): - def main_decorator(main_func): - @functools.wraps(main_func) - def new_main(*args, **kwargs): - # Trainer sets MASTER_PORT, NODE_RANK, LOCAL_RANK, WORLD_SIZE - parent_cwd = os.environ.get('TRAINING_PARENT_WORK_DIR', None) - has_parent = parent_cwd is not None - has_rank = get_has_ddp_rank() - assert has_parent == has_rank, f'Inconsistent state: has_parent={has_parent}, has_rank={has_rank}' - - if has_parent: - # we are in the worker - sys.argv.extend([ - f'hydra.run.dir={parent_cwd}', - # 'hydra/hydra_logging=disabled', - # 'hydra/job_logging=disabled' - ]) - # do nothing if this is a top-level process - # TRAINING_PARENT_WORK_DIR is set in handle_ddp_parent_process after hydra initialization - - main_func(*args, **kwargs) - return new_main - return main_decorator - - -def handle_ddp_parent_process(): - parent_cwd = os.environ.get('TRAINING_PARENT_WORK_DIR', None) - has_parent = parent_cwd is not None - has_rank = get_has_ddp_rank() - assert has_parent == has_rank, f'Inconsistent state: has_parent={has_parent}, has_rank={has_rank}' - - if parent_cwd is None: - os.environ['TRAINING_PARENT_WORK_DIR'] = os.getcwd() - - return has_parent diff --git a/spaces/Alfasign/HuggingGPT-Lite/app.py b/spaces/Alfasign/HuggingGPT-Lite/app.py deleted file mode 100644 index 4c06deac3e4f5ea1d5d2607830ba2b3e47db7582..0000000000000000000000000000000000000000 --- a/spaces/Alfasign/HuggingGPT-Lite/app.py +++ /dev/null @@ -1,237 +0,0 @@ -import uuid -import gradio as gr -import re -from diffusers.utils import load_image -import requests -from awesome_chat import chat_huggingface -import os - -os.makedirs("public/images", exist_ok=True) -os.makedirs("public/audios", exist_ok=True) -os.makedirs("public/videos", exist_ok=True) - -HUGGINGFACE_TOKEN = os.environ.get("HUGGINGFACE_TOKEN") -OPENAI_KEY = os.environ.get("OPENAI_KEY") - - -class Client: - def __init__(self) -> None: - self.OPENAI_KEY = OPENAI_KEY - self.HUGGINGFACE_TOKEN = HUGGINGFACE_TOKEN - self.all_messages = [] - - def set_key(self, openai_key): - self.OPENAI_KEY = openai_key - return self.OPENAI_KEY - - def set_token(self, huggingface_token): - self.HUGGINGFACE_TOKEN = huggingface_token - return self.HUGGINGFACE_TOKEN - - def add_message(self, content, role): - message = {"role": role, "content": content} - self.all_messages.append(message) - - def extract_medias(self, message): - # url_pattern = re.compile(r"(http(s?):|\/)?([\.\/_\w:-])*?") - urls = [] - # for match in url_pattern.finditer(message): - # if match.group(0) not in urls: - # urls.append(match.group(0)) - - image_pattern = re.compile( - r"(http(s?):|\/)?([\.\/_\w:-])*?\.(jpg|jpeg|tiff|gif|png)" - ) - image_urls = [] - for match in image_pattern.finditer(message): - if match.group(0) not in image_urls: - image_urls.append(match.group(0)) - - audio_pattern = re.compile(r"(http(s?):|\/)?([\.\/_\w:-])*?\.(flac|wav)") - audio_urls = [] - for match in audio_pattern.finditer(message): - if match.group(0) not in audio_urls: - audio_urls.append(match.group(0)) - - video_pattern = re.compile(r"(http(s?):|\/)?([\.\/_\w:-])*?\.(mp4)") - video_urls = [] - for match in video_pattern.finditer(message): - if match.group(0) not in video_urls: - video_urls.append(match.group(0)) - - return urls, image_urls, audio_urls, video_urls - - def add_text(self, messages, message): - if ( - not self.OPENAI_KEY - or not self.OPENAI_KEY.startswith("sk-") - or not self.HUGGINGFACE_TOKEN - or not self.HUGGINGFACE_TOKEN.startswith("hf_") - ): - return ( - messages, - "Please set your OpenAI API key and Hugging Face token first!!!", - ) - self.add_message(message, "user") - messages = messages + [(message, None)] - urls, image_urls, audio_urls, video_urls = self.extract_medias(message) - - for image_url in image_urls: - if not image_url.startswith("http") and not image_url.startswith("public"): - image_url = "public/" + image_url - image = load_image(image_url) - name = f"public/images/{str(uuid.uuid4())[:4]}.jpg" - image.save(name) - messages = messages + [((f"{name}",), None)] - for audio_url in audio_urls and not audio_url.startswith("public"): - if not audio_url.startswith("http"): - audio_url = "public/" + audio_url - ext = audio_url.split(".")[-1] - name = f"public/audios/{str(uuid.uuid4()[:4])}.{ext}" - response = requests.get(audio_url) - with open(name, "wb") as f: - f.write(response.content) - messages = messages + [((f"{name}",), None)] - for video_url in video_urls and not video_url.startswith("public"): - if not video_url.startswith("http"): - video_url = "public/" + video_url - ext = video_url.split(".")[-1] - name = f"public/audios/{str(uuid.uuid4()[:4])}.{ext}" - response = requests.get(video_url) - with open(name, "wb") as f: - f.write(response.content) - messages = messages + [((f"{name}",), None)] - return messages, "" - - def bot(self, messages): - if ( - not self.OPENAI_KEY - or not self.OPENAI_KEY.startswith("sk-") - or not self.HUGGINGFACE_TOKEN - or not self.HUGGINGFACE_TOKEN.startswith("hf_") - ): - return messages, {} - message, results = chat_huggingface( - self.all_messages, self.OPENAI_KEY, self.HUGGINGFACE_TOKEN - ) - urls, image_urls, audio_urls, video_urls = self.extract_medias(message) - self.add_message(message, "assistant") - messages[-1][1] = message - for image_url in image_urls: - if not image_url.startswith("http"): - image_url = image_url.replace("public/", "") - messages = messages + [((None, (f"public/{image_url}",)))] - # else: - # messages = messages + [((None, (f"{image_url}",)))] - for audio_url in audio_urls: - if not audio_url.startswith("http"): - audio_url = audio_url.replace("public/", "") - messages = messages + [((None, (f"public/{audio_url}",)))] - # else: - # messages = messages + [((None, (f"{audio_url}",)))] - for video_url in video_urls: - if not video_url.startswith("http"): - video_url = video_url.replace("public/", "") - messages = messages + [((None, (f"public/{video_url}",)))] - # else: - # messages = messages + [((None, (f"{video_url}",)))] - # replace int key to string key - results = {str(k): v for k, v in results.items()} - return messages, results - - -css = ".json {height: 527px; overflow: scroll;} .json-holder {height: 527px; overflow: scroll;}" -with gr.Blocks(css=css) as demo: - state = gr.State(value={"client": Client()}) - gr.Markdown("

        HuggingGPT - Lite 🎐

        ") - gr.Markdown( - "

        " - ) - gr.Markdown( - "

        A system to connect LLMs with ML community. See our Project and Paper.

        " - ) - gr.HTML( - """
        Duplicate SpaceDuplicate the Space and run securely with your OpenAI API Key and Hugging Face Token
        """ - ) - gr.Markdown( - """>**Note**: This is a further lite version of the original HuggingGPT designed to run on CPU-only spaces. This model by default uses `gpt-3.5-turbo` which is much much cheaper than `text-davinci-003`. """ - ) - if not OPENAI_KEY: - with gr.Row().style(): - with gr.Column(scale=0.85): - openai_api_key = gr.Textbox( - show_label=False, - placeholder="Set your OpenAI API key here and press Enter", - lines=1, - type="password", - ).style(container=False) - with gr.Column(scale=0.15, min_width=0): - btn1 = gr.Button("Submit").style(full_height=True) - - if not HUGGINGFACE_TOKEN: - with gr.Row().style(): - with gr.Column(scale=0.85): - hugging_face_token = gr.Textbox( - show_label=False, - placeholder="Set your Hugging Face Token here and press Enter", - lines=1, - type="password", - ).style(container=False) - with gr.Column(scale=0.15, min_width=0): - btn3 = gr.Button("Submit").style(full_height=True) - - with gr.Row().style(): - with gr.Column(scale=0.6): - chatbot = gr.Chatbot([], elem_id="chatbot").style(height=500) - with gr.Column(scale=0.4): - results = gr.JSON(elem_classes="json") - - with gr.Row().style(): - with gr.Column(scale=0.85): - txt = gr.Textbox( - show_label=False, - placeholder="Enter text and press enter. The url must contain the media type. e.g, https://example.com/example.jpg", - lines=1, - ).style(container=False) - with gr.Column(scale=0.15, min_width=0): - btn2 = gr.Button("Send").style(full_height=True) - - def set_key(state, openai_api_key): - return state["client"].set_key(openai_api_key) - - def add_text(state, chatbot, txt): - return state["client"].add_text(chatbot, txt) - - def set_token(state, hugging_face_token): - return state["client"].set_token(hugging_face_token) - - def bot(state, chatbot): - return state["client"].bot(chatbot) - - if not OPENAI_KEY: - openai_api_key.submit(set_key, [state, openai_api_key], [openai_api_key]) - btn1.click(set_key, [state, openai_api_key], [openai_api_key]) - - if not HUGGINGFACE_TOKEN: - hugging_face_token.submit( - set_token, [state, hugging_face_token], [hugging_face_token] - ) - btn3.click(set_token, [state, hugging_face_token], [hugging_face_token]) - - txt.submit(add_text, [state, chatbot, txt], [chatbot, txt]).then( - bot, [state, chatbot], [chatbot, results] - ) - btn2.click(add_text, [state, chatbot, txt], [chatbot, txt]).then( - bot, [state, chatbot], [chatbot, results] - ) - - gr.Examples( - examples=[ - "Given a collection of image A: /examples/a.jpg, B: /examples/b.jpg, C: /examples/c.jpg, please tell me how many zebras in these picture?", - "show me a joke and an image of cat", - "what is in the examples/a.jpg", - ], - inputs=txt, - ) - -demo.launch() diff --git a/spaces/Aloento/9Nine-PITS/text/frontend/normalizer/abbrrviation.py b/spaces/Aloento/9Nine-PITS/text/frontend/normalizer/abbrrviation.py deleted file mode 100644 index abf198b97e6e818e1fbe59006f98492640bcee54..0000000000000000000000000000000000000000 --- a/spaces/Aloento/9Nine-PITS/text/frontend/normalizer/abbrrviation.py +++ /dev/null @@ -1,13 +0,0 @@ -# Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. diff --git a/spaces/Alpaca233/SadTalker/src/face3d/models/arcface_torch/inference.py b/spaces/Alpaca233/SadTalker/src/face3d/models/arcface_torch/inference.py deleted file mode 100644 index 3e5156e8d649954837e397c2ff15ec29995e7502..0000000000000000000000000000000000000000 --- a/spaces/Alpaca233/SadTalker/src/face3d/models/arcface_torch/inference.py +++ /dev/null @@ -1,35 +0,0 @@ -import argparse - -import cv2 -import numpy as np -import torch - -from backbones import get_model - - -@torch.no_grad() -def inference(weight, name, img): - if img is None: - img = np.random.randint(0, 255, size=(112, 112, 3), dtype=np.uint8) - else: - img = cv2.imread(img) - img = cv2.resize(img, (112, 112)) - - img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB) - img = np.transpose(img, (2, 0, 1)) - img = torch.from_numpy(img).unsqueeze(0).float() - img.div_(255).sub_(0.5).div_(0.5) - net = get_model(name, fp16=False) - net.load_state_dict(torch.load(weight)) - net.eval() - feat = net(img).numpy() - print(feat) - - -if __name__ == "__main__": - parser = argparse.ArgumentParser(description='PyTorch ArcFace Training') - parser.add_argument('--network', type=str, default='r50', help='backbone network') - parser.add_argument('--weight', type=str, default='') - parser.add_argument('--img', type=str, default=None) - args = parser.parse_args() - inference(args.weight, args.network, args.img) diff --git a/spaces/Alven/background-remover/README.md b/spaces/Alven/background-remover/README.md deleted file mode 100644 index 0bcc69015de8bd1e071c10e80bf3a6620da755c8..0000000000000000000000000000000000000000 --- a/spaces/Alven/background-remover/README.md +++ /dev/null @@ -1,21 +0,0 @@ ---- -title: Background Remover -emoji: 🖼️✂️ -colorFrom: blue -colorTo: red -sdk: gradio -sdk_version: 2.9.4 -app_file: app.py -pinned: false -duplicated_from: nateraw/background-remover ---- - -# background-remover - -[![Generic badge](https://img.shields.io/badge/🤗-Open%20In%20Spaces-blue.svg)](https://huggingface.co/spaces/nateraw/background-remover) - -A Gradio app to remove the background from an image - ----⬇️ - -Autogenerated using [this template](https://github.com/nateraw/spaces-template) diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/pipelines/deepfloyd_if/pipeline_if.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/pipelines/deepfloyd_if/pipeline_if.py deleted file mode 100644 index ffcb1ab32d357dd0c546bd96def75207752e06cb..0000000000000000000000000000000000000000 --- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/pipelines/deepfloyd_if/pipeline_if.py +++ /dev/null @@ -1,816 +0,0 @@ -import html -import inspect -import re -import urllib.parse as ul -from typing import Any, Callable, Dict, List, Optional, Union - -import torch -from transformers import CLIPImageProcessor, T5EncoderModel, T5Tokenizer - -from ...loaders import LoraLoaderMixin -from ...models import UNet2DConditionModel -from ...schedulers import DDPMScheduler -from ...utils import ( - BACKENDS_MAPPING, - is_accelerate_available, - is_accelerate_version, - is_bs4_available, - is_ftfy_available, - logging, - randn_tensor, - replace_example_docstring, -) -from ..pipeline_utils import DiffusionPipeline -from . import IFPipelineOutput -from .safety_checker import IFSafetyChecker -from .watermark import IFWatermarker - - -logger = logging.get_logger(__name__) # pylint: disable=invalid-name - -if is_bs4_available(): - from bs4 import BeautifulSoup - -if is_ftfy_available(): - import ftfy - - -EXAMPLE_DOC_STRING = """ - Examples: - ```py - >>> from diffusers import IFPipeline, IFSuperResolutionPipeline, DiffusionPipeline - >>> from diffusers.utils import pt_to_pil - >>> import torch - - >>> pipe = IFPipeline.from_pretrained("DeepFloyd/IF-I-XL-v1.0", variant="fp16", torch_dtype=torch.float16) - >>> pipe.enable_model_cpu_offload() - - >>> prompt = 'a photo of a kangaroo wearing an orange hoodie and blue sunglasses standing in front of the eiffel tower holding a sign that says "very deep learning"' - >>> prompt_embeds, negative_embeds = pipe.encode_prompt(prompt) - - >>> image = pipe(prompt_embeds=prompt_embeds, negative_prompt_embeds=negative_embeds, output_type="pt").images - - >>> # save intermediate image - >>> pil_image = pt_to_pil(image) - >>> pil_image[0].save("./if_stage_I.png") - - >>> super_res_1_pipe = IFSuperResolutionPipeline.from_pretrained( - ... "DeepFloyd/IF-II-L-v1.0", text_encoder=None, variant="fp16", torch_dtype=torch.float16 - ... ) - >>> super_res_1_pipe.enable_model_cpu_offload() - - >>> image = super_res_1_pipe( - ... image=image, prompt_embeds=prompt_embeds, negative_prompt_embeds=negative_embeds, output_type="pt" - ... ).images - - >>> # save intermediate image - >>> pil_image = pt_to_pil(image) - >>> pil_image[0].save("./if_stage_I.png") - - >>> safety_modules = { - ... "feature_extractor": pipe.feature_extractor, - ... "safety_checker": pipe.safety_checker, - ... "watermarker": pipe.watermarker, - ... } - >>> super_res_2_pipe = DiffusionPipeline.from_pretrained( - ... "stabilityai/stable-diffusion-x4-upscaler", **safety_modules, torch_dtype=torch.float16 - ... ) - >>> super_res_2_pipe.enable_model_cpu_offload() - - >>> image = super_res_2_pipe( - ... prompt=prompt, - ... image=image, - ... ).images - >>> image[0].save("./if_stage_II.png") - ``` -""" - - -class IFPipeline(DiffusionPipeline, LoraLoaderMixin): - tokenizer: T5Tokenizer - text_encoder: T5EncoderModel - - unet: UNet2DConditionModel - scheduler: DDPMScheduler - - feature_extractor: Optional[CLIPImageProcessor] - safety_checker: Optional[IFSafetyChecker] - - watermarker: Optional[IFWatermarker] - - bad_punct_regex = re.compile( - r"[" + "#®•©™&@·º½¾¿¡§~" + "\)" + "\(" + "\]" + "\[" + "\}" + "\{" + "\|" + "\\" + "\/" + "\*" + r"]{1,}" - ) # noqa - - _optional_components = ["tokenizer", "text_encoder", "safety_checker", "feature_extractor", "watermarker"] - - def __init__( - self, - tokenizer: T5Tokenizer, - text_encoder: T5EncoderModel, - unet: UNet2DConditionModel, - scheduler: DDPMScheduler, - safety_checker: Optional[IFSafetyChecker], - feature_extractor: Optional[CLIPImageProcessor], - watermarker: Optional[IFWatermarker], - requires_safety_checker: bool = True, - ): - super().__init__() - - if safety_checker is None and requires_safety_checker: - logger.warning( - f"You have disabled the safety checker for {self.__class__} by passing `safety_checker=None`. Ensure" - " that you abide to the conditions of the IF license and do not expose unfiltered" - " results in services or applications open to the public. Both the diffusers team and Hugging Face" - " strongly recommend to keep the safety filter enabled in all public facing circumstances, disabling" - " it only for use-cases that involve analyzing network behavior or auditing its results. For more" - " information, please have a look at https://github.com/huggingface/diffusers/pull/254 ." - ) - - if safety_checker is not None and feature_extractor is None: - raise ValueError( - "Make sure to define a feature extractor when loading {self.__class__} if you want to use the safety" - " checker. If you do not want to use the safety checker, you can pass `'safety_checker=None'` instead." - ) - - self.register_modules( - tokenizer=tokenizer, - text_encoder=text_encoder, - unet=unet, - scheduler=scheduler, - safety_checker=safety_checker, - feature_extractor=feature_extractor, - watermarker=watermarker, - ) - self.register_to_config(requires_safety_checker=requires_safety_checker) - - def enable_model_cpu_offload(self, gpu_id=0): - r""" - Offloads all models to CPU using accelerate, reducing memory usage with a low impact on performance. Compared - to `enable_sequential_cpu_offload`, this method moves one whole model at a time to the GPU when its `forward` - method is called, and the model remains in GPU until the next model runs. Memory savings are lower than with - `enable_sequential_cpu_offload`, but performance is much better due to the iterative execution of the `unet`. - """ - if is_accelerate_available() and is_accelerate_version(">=", "0.17.0.dev0"): - from accelerate import cpu_offload_with_hook - else: - raise ImportError("`enable_model_cpu_offload` requires `accelerate v0.17.0` or higher.") - - device = torch.device(f"cuda:{gpu_id}") - - if self.device.type != "cpu": - self.to("cpu", silence_dtype_warnings=True) - torch.cuda.empty_cache() # otherwise we don't see the memory savings (but they probably exist) - - hook = None - - if self.text_encoder is not None: - _, hook = cpu_offload_with_hook(self.text_encoder, device, prev_module_hook=hook) - - # Accelerate will move the next model to the device _before_ calling the offload hook of the - # previous model. This will cause both models to be present on the device at the same time. - # IF uses T5 for its text encoder which is really large. We can manually call the offload - # hook for the text encoder to ensure it's moved to the cpu before the unet is moved to - # the GPU. - self.text_encoder_offload_hook = hook - - _, hook = cpu_offload_with_hook(self.unet, device, prev_module_hook=hook) - - # if the safety checker isn't called, `unet_offload_hook` will have to be called to manually offload the unet - self.unet_offload_hook = hook - - if self.safety_checker is not None: - _, hook = cpu_offload_with_hook(self.safety_checker, device, prev_module_hook=hook) - - # We'll offload the last model manually. - self.final_offload_hook = hook - - def remove_all_hooks(self): - if is_accelerate_available(): - from accelerate.hooks import remove_hook_from_module - else: - raise ImportError("Please install accelerate via `pip install accelerate`") - - for model in [self.text_encoder, self.unet, self.safety_checker]: - if model is not None: - remove_hook_from_module(model, recurse=True) - - self.unet_offload_hook = None - self.text_encoder_offload_hook = None - self.final_offload_hook = None - - @torch.no_grad() - def encode_prompt( - self, - prompt, - do_classifier_free_guidance=True, - num_images_per_prompt=1, - device=None, - negative_prompt=None, - prompt_embeds: Optional[torch.FloatTensor] = None, - negative_prompt_embeds: Optional[torch.FloatTensor] = None, - clean_caption: bool = False, - ): - r""" - Encodes the prompt into text encoder hidden states. - - Args: - prompt (`str` or `List[str]`, *optional*): - prompt to be encoded - device: (`torch.device`, *optional*): - torch device to place the resulting embeddings on - num_images_per_prompt (`int`, *optional*, defaults to 1): - number of images that should be generated per prompt - do_classifier_free_guidance (`bool`, *optional*, defaults to `True`): - whether to use classifier free guidance or not - negative_prompt (`str` or `List[str]`, *optional*): - The prompt or prompts not to guide the image generation. If not defined, one has to pass - `negative_prompt_embeds`. instead. If not defined, one has to pass `negative_prompt_embeds`. instead. - Ignored when not using guidance (i.e., ignored if `guidance_scale` is less than `1`). - prompt_embeds (`torch.FloatTensor`, *optional*): - Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not - provided, text embeddings will be generated from `prompt` input argument. - negative_prompt_embeds (`torch.FloatTensor`, *optional*): - Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt - weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input - argument. - """ - if prompt is not None and negative_prompt is not None: - if type(prompt) is not type(negative_prompt): - raise TypeError( - f"`negative_prompt` should be the same type to `prompt`, but got {type(negative_prompt)} !=" - f" {type(prompt)}." - ) - - if device is None: - device = self._execution_device - - if prompt is not None and isinstance(prompt, str): - batch_size = 1 - elif prompt is not None and isinstance(prompt, list): - batch_size = len(prompt) - else: - batch_size = prompt_embeds.shape[0] - - # while T5 can handle much longer input sequences than 77, the text encoder was trained with a max length of 77 for IF - max_length = 77 - - if prompt_embeds is None: - prompt = self._text_preprocessing(prompt, clean_caption=clean_caption) - text_inputs = self.tokenizer( - prompt, - padding="max_length", - max_length=max_length, - truncation=True, - add_special_tokens=True, - return_tensors="pt", - ) - text_input_ids = text_inputs.input_ids - untruncated_ids = self.tokenizer(prompt, padding="longest", return_tensors="pt").input_ids - - if untruncated_ids.shape[-1] >= text_input_ids.shape[-1] and not torch.equal( - text_input_ids, untruncated_ids - ): - removed_text = self.tokenizer.batch_decode(untruncated_ids[:, max_length - 1 : -1]) - logger.warning( - "The following part of your input was truncated because CLIP can only handle sequences up to" - f" {max_length} tokens: {removed_text}" - ) - - attention_mask = text_inputs.attention_mask.to(device) - - prompt_embeds = self.text_encoder( - text_input_ids.to(device), - attention_mask=attention_mask, - ) - prompt_embeds = prompt_embeds[0] - - if self.text_encoder is not None: - dtype = self.text_encoder.dtype - elif self.unet is not None: - dtype = self.unet.dtype - else: - dtype = None - - prompt_embeds = prompt_embeds.to(dtype=dtype, device=device) - - bs_embed, seq_len, _ = prompt_embeds.shape - # duplicate text embeddings for each generation per prompt, using mps friendly method - prompt_embeds = prompt_embeds.repeat(1, num_images_per_prompt, 1) - prompt_embeds = prompt_embeds.view(bs_embed * num_images_per_prompt, seq_len, -1) - - # get unconditional embeddings for classifier free guidance - if do_classifier_free_guidance and negative_prompt_embeds is None: - uncond_tokens: List[str] - if negative_prompt is None: - uncond_tokens = [""] * batch_size - elif isinstance(negative_prompt, str): - uncond_tokens = [negative_prompt] - elif batch_size != len(negative_prompt): - raise ValueError( - f"`negative_prompt`: {negative_prompt} has batch size {len(negative_prompt)}, but `prompt`:" - f" {prompt} has batch size {batch_size}. Please make sure that passed `negative_prompt` matches" - " the batch size of `prompt`." - ) - else: - uncond_tokens = negative_prompt - - uncond_tokens = self._text_preprocessing(uncond_tokens, clean_caption=clean_caption) - max_length = prompt_embeds.shape[1] - uncond_input = self.tokenizer( - uncond_tokens, - padding="max_length", - max_length=max_length, - truncation=True, - return_attention_mask=True, - add_special_tokens=True, - return_tensors="pt", - ) - attention_mask = uncond_input.attention_mask.to(device) - - negative_prompt_embeds = self.text_encoder( - uncond_input.input_ids.to(device), - attention_mask=attention_mask, - ) - negative_prompt_embeds = negative_prompt_embeds[0] - - if do_classifier_free_guidance: - # duplicate unconditional embeddings for each generation per prompt, using mps friendly method - seq_len = negative_prompt_embeds.shape[1] - - negative_prompt_embeds = negative_prompt_embeds.to(dtype=dtype, device=device) - - negative_prompt_embeds = negative_prompt_embeds.repeat(1, num_images_per_prompt, 1) - negative_prompt_embeds = negative_prompt_embeds.view(batch_size * num_images_per_prompt, seq_len, -1) - - # For classifier free guidance, we need to do two forward passes. - # Here we concatenate the unconditional and text embeddings into a single batch - # to avoid doing two forward passes - else: - negative_prompt_embeds = None - - return prompt_embeds, negative_prompt_embeds - - def run_safety_checker(self, image, device, dtype): - if self.safety_checker is not None: - safety_checker_input = self.feature_extractor(self.numpy_to_pil(image), return_tensors="pt").to(device) - image, nsfw_detected, watermark_detected = self.safety_checker( - images=image, - clip_input=safety_checker_input.pixel_values.to(dtype=dtype), - ) - else: - nsfw_detected = None - watermark_detected = None - - if hasattr(self, "unet_offload_hook") and self.unet_offload_hook is not None: - self.unet_offload_hook.offload() - - return image, nsfw_detected, watermark_detected - - # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.prepare_extra_step_kwargs - def prepare_extra_step_kwargs(self, generator, eta): - # prepare extra kwargs for the scheduler step, since not all schedulers have the same signature - # eta (η) is only used with the DDIMScheduler, it will be ignored for other schedulers. - # eta corresponds to η in DDIM paper: https://arxiv.org/abs/2010.02502 - # and should be between [0, 1] - - accepts_eta = "eta" in set(inspect.signature(self.scheduler.step).parameters.keys()) - extra_step_kwargs = {} - if accepts_eta: - extra_step_kwargs["eta"] = eta - - # check if the scheduler accepts generator - accepts_generator = "generator" in set(inspect.signature(self.scheduler.step).parameters.keys()) - if accepts_generator: - extra_step_kwargs["generator"] = generator - return extra_step_kwargs - - def check_inputs( - self, - prompt, - callback_steps, - negative_prompt=None, - prompt_embeds=None, - negative_prompt_embeds=None, - ): - if (callback_steps is None) or ( - callback_steps is not None and (not isinstance(callback_steps, int) or callback_steps <= 0) - ): - raise ValueError( - f"`callback_steps` has to be a positive integer but is {callback_steps} of type" - f" {type(callback_steps)}." - ) - - if prompt is not None and prompt_embeds is not None: - raise ValueError( - f"Cannot forward both `prompt`: {prompt} and `prompt_embeds`: {prompt_embeds}. Please make sure to" - " only forward one of the two." - ) - elif prompt is None and prompt_embeds is None: - raise ValueError( - "Provide either `prompt` or `prompt_embeds`. Cannot leave both `prompt` and `prompt_embeds` undefined." - ) - elif prompt is not None and (not isinstance(prompt, str) and not isinstance(prompt, list)): - raise ValueError(f"`prompt` has to be of type `str` or `list` but is {type(prompt)}") - - if negative_prompt is not None and negative_prompt_embeds is not None: - raise ValueError( - f"Cannot forward both `negative_prompt`: {negative_prompt} and `negative_prompt_embeds`:" - f" {negative_prompt_embeds}. Please make sure to only forward one of the two." - ) - - if prompt_embeds is not None and negative_prompt_embeds is not None: - if prompt_embeds.shape != negative_prompt_embeds.shape: - raise ValueError( - "`prompt_embeds` and `negative_prompt_embeds` must have the same shape when passed directly, but" - f" got: `prompt_embeds` {prompt_embeds.shape} != `negative_prompt_embeds`" - f" {negative_prompt_embeds.shape}." - ) - - def prepare_intermediate_images(self, batch_size, num_channels, height, width, dtype, device, generator): - shape = (batch_size, num_channels, height, width) - if isinstance(generator, list) and len(generator) != batch_size: - raise ValueError( - f"You have passed a list of generators of length {len(generator)}, but requested an effective batch" - f" size of {batch_size}. Make sure the batch size matches the length of the generators." - ) - - intermediate_images = randn_tensor(shape, generator=generator, device=device, dtype=dtype) - - # scale the initial noise by the standard deviation required by the scheduler - intermediate_images = intermediate_images * self.scheduler.init_noise_sigma - return intermediate_images - - def _text_preprocessing(self, text, clean_caption=False): - if clean_caption and not is_bs4_available(): - logger.warn(BACKENDS_MAPPING["bs4"][-1].format("Setting `clean_caption=True`")) - logger.warn("Setting `clean_caption` to False...") - clean_caption = False - - if clean_caption and not is_ftfy_available(): - logger.warn(BACKENDS_MAPPING["ftfy"][-1].format("Setting `clean_caption=True`")) - logger.warn("Setting `clean_caption` to False...") - clean_caption = False - - if not isinstance(text, (tuple, list)): - text = [text] - - def process(text: str): - if clean_caption: - text = self._clean_caption(text) - text = self._clean_caption(text) - else: - text = text.lower().strip() - return text - - return [process(t) for t in text] - - def _clean_caption(self, caption): - caption = str(caption) - caption = ul.unquote_plus(caption) - caption = caption.strip().lower() - caption = re.sub("", "person", caption) - # urls: - caption = re.sub( - r"\b((?:https?:(?:\/{1,3}|[a-zA-Z0-9%])|[a-zA-Z0-9.\-]+[.](?:com|co|ru|net|org|edu|gov|it)[\w/-]*\b\/?(?!@)))", # noqa - "", - caption, - ) # regex for urls - caption = re.sub( - r"\b((?:www:(?:\/{1,3}|[a-zA-Z0-9%])|[a-zA-Z0-9.\-]+[.](?:com|co|ru|net|org|edu|gov|it)[\w/-]*\b\/?(?!@)))", # noqa - "", - caption, - ) # regex for urls - # html: - caption = BeautifulSoup(caption, features="html.parser").text - - # @ - caption = re.sub(r"@[\w\d]+\b", "", caption) - - # 31C0—31EF CJK Strokes - # 31F0—31FF Katakana Phonetic Extensions - # 3200—32FF Enclosed CJK Letters and Months - # 3300—33FF CJK Compatibility - # 3400—4DBF CJK Unified Ideographs Extension A - # 4DC0—4DFF Yijing Hexagram Symbols - # 4E00—9FFF CJK Unified Ideographs - caption = re.sub(r"[\u31c0-\u31ef]+", "", caption) - caption = re.sub(r"[\u31f0-\u31ff]+", "", caption) - caption = re.sub(r"[\u3200-\u32ff]+", "", caption) - caption = re.sub(r"[\u3300-\u33ff]+", "", caption) - caption = re.sub(r"[\u3400-\u4dbf]+", "", caption) - caption = re.sub(r"[\u4dc0-\u4dff]+", "", caption) - caption = re.sub(r"[\u4e00-\u9fff]+", "", caption) - ####################################################### - - # все виды тире / all types of dash --> "-" - caption = re.sub( - r"[\u002D\u058A\u05BE\u1400\u1806\u2010-\u2015\u2E17\u2E1A\u2E3A\u2E3B\u2E40\u301C\u3030\u30A0\uFE31\uFE32\uFE58\uFE63\uFF0D]+", # noqa - "-", - caption, - ) - - # кавычки к одному стандарту - caption = re.sub(r"[`´«»“”¨]", '"', caption) - caption = re.sub(r"[‘’]", "'", caption) - - # " - caption = re.sub(r""?", "", caption) - # & - caption = re.sub(r"&", "", caption) - - # ip adresses: - caption = re.sub(r"\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}", " ", caption) - - # article ids: - caption = re.sub(r"\d:\d\d\s+$", "", caption) - - # \n - caption = re.sub(r"\\n", " ", caption) - - # "#123" - caption = re.sub(r"#\d{1,3}\b", "", caption) - # "#12345.." - caption = re.sub(r"#\d{5,}\b", "", caption) - # "123456.." - caption = re.sub(r"\b\d{6,}\b", "", caption) - # filenames: - caption = re.sub(r"[\S]+\.(?:png|jpg|jpeg|bmp|webp|eps|pdf|apk|mp4)", "", caption) - - # - caption = re.sub(r"[\"\']{2,}", r'"', caption) # """AUSVERKAUFT""" - caption = re.sub(r"[\.]{2,}", r" ", caption) # """AUSVERKAUFT""" - - caption = re.sub(self.bad_punct_regex, r" ", caption) # ***AUSVERKAUFT***, #AUSVERKAUFT - caption = re.sub(r"\s+\.\s+", r" ", caption) # " . " - - # this-is-my-cute-cat / this_is_my_cute_cat - regex2 = re.compile(r"(?:\-|\_)") - if len(re.findall(regex2, caption)) > 3: - caption = re.sub(regex2, " ", caption) - - caption = ftfy.fix_text(caption) - caption = html.unescape(html.unescape(caption)) - - caption = re.sub(r"\b[a-zA-Z]{1,3}\d{3,15}\b", "", caption) # jc6640 - caption = re.sub(r"\b[a-zA-Z]+\d+[a-zA-Z]+\b", "", caption) # jc6640vc - caption = re.sub(r"\b\d+[a-zA-Z]+\d+\b", "", caption) # 6640vc231 - - caption = re.sub(r"(worldwide\s+)?(free\s+)?shipping", "", caption) - caption = re.sub(r"(free\s)?download(\sfree)?", "", caption) - caption = re.sub(r"\bclick\b\s(?:for|on)\s\w+", "", caption) - caption = re.sub(r"\b(?:png|jpg|jpeg|bmp|webp|eps|pdf|apk|mp4)(\simage[s]?)?", "", caption) - caption = re.sub(r"\bpage\s+\d+\b", "", caption) - - caption = re.sub(r"\b\d*[a-zA-Z]+\d+[a-zA-Z]+\d+[a-zA-Z\d]*\b", r" ", caption) # j2d1a2a... - - caption = re.sub(r"\b\d+\.?\d*[xх×]\d+\.?\d*\b", "", caption) - - caption = re.sub(r"\b\s+\:\s+", r": ", caption) - caption = re.sub(r"(\D[,\./])\b", r"\1 ", caption) - caption = re.sub(r"\s+", " ", caption) - - caption.strip() - - caption = re.sub(r"^[\"\']([\w\W]+)[\"\']$", r"\1", caption) - caption = re.sub(r"^[\'\_,\-\:;]", r"", caption) - caption = re.sub(r"[\'\_,\-\:\-\+]$", r"", caption) - caption = re.sub(r"^\.\S+$", "", caption) - - return caption.strip() - - @torch.no_grad() - @replace_example_docstring(EXAMPLE_DOC_STRING) - def __call__( - self, - prompt: Union[str, List[str]] = None, - num_inference_steps: int = 100, - timesteps: List[int] = None, - guidance_scale: float = 7.0, - negative_prompt: Optional[Union[str, List[str]]] = None, - num_images_per_prompt: Optional[int] = 1, - height: Optional[int] = None, - width: Optional[int] = None, - eta: float = 0.0, - generator: Optional[Union[torch.Generator, List[torch.Generator]]] = None, - prompt_embeds: Optional[torch.FloatTensor] = None, - negative_prompt_embeds: Optional[torch.FloatTensor] = None, - output_type: Optional[str] = "pil", - return_dict: bool = True, - callback: Optional[Callable[[int, int, torch.FloatTensor], None]] = None, - callback_steps: int = 1, - clean_caption: bool = True, - cross_attention_kwargs: Optional[Dict[str, Any]] = None, - ): - """ - Function invoked when calling the pipeline for generation. - - Args: - prompt (`str` or `List[str]`, *optional*): - The prompt or prompts to guide the image generation. If not defined, one has to pass `prompt_embeds`. - instead. - num_inference_steps (`int`, *optional*, defaults to 50): - The number of denoising steps. More denoising steps usually lead to a higher quality image at the - expense of slower inference. - timesteps (`List[int]`, *optional*): - Custom timesteps to use for the denoising process. If not defined, equal spaced `num_inference_steps` - timesteps are used. Must be in descending order. - guidance_scale (`float`, *optional*, defaults to 7.5): - Guidance scale as defined in [Classifier-Free Diffusion Guidance](https://arxiv.org/abs/2207.12598). - `guidance_scale` is defined as `w` of equation 2. of [Imagen - Paper](https://arxiv.org/pdf/2205.11487.pdf). Guidance scale is enabled by setting `guidance_scale > - 1`. Higher guidance scale encourages to generate images that are closely linked to the text `prompt`, - usually at the expense of lower image quality. - negative_prompt (`str` or `List[str]`, *optional*): - The prompt or prompts not to guide the image generation. If not defined, one has to pass - `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is - less than `1`). - num_images_per_prompt (`int`, *optional*, defaults to 1): - The number of images to generate per prompt. - height (`int`, *optional*, defaults to self.unet.config.sample_size): - The height in pixels of the generated image. - width (`int`, *optional*, defaults to self.unet.config.sample_size): - The width in pixels of the generated image. - eta (`float`, *optional*, defaults to 0.0): - Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to - [`schedulers.DDIMScheduler`], will be ignored for others. - generator (`torch.Generator` or `List[torch.Generator]`, *optional*): - One or a list of [torch generator(s)](https://pytorch.org/docs/stable/generated/torch.Generator.html) - to make generation deterministic. - prompt_embeds (`torch.FloatTensor`, *optional*): - Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not - provided, text embeddings will be generated from `prompt` input argument. - negative_prompt_embeds (`torch.FloatTensor`, *optional*): - Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt - weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input - argument. - output_type (`str`, *optional*, defaults to `"pil"`): - The output format of the generate image. Choose between - [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`. - return_dict (`bool`, *optional*, defaults to `True`): - Whether or not to return a [`~pipelines.stable_diffusion.IFPipelineOutput`] instead of a plain tuple. - callback (`Callable`, *optional*): - A function that will be called every `callback_steps` steps during inference. The function will be - called with the following arguments: `callback(step: int, timestep: int, latents: torch.FloatTensor)`. - callback_steps (`int`, *optional*, defaults to 1): - The frequency at which the `callback` function will be called. If not specified, the callback will be - called at every step. - clean_caption (`bool`, *optional*, defaults to `True`): - Whether or not to clean the caption before creating embeddings. Requires `beautifulsoup4` and `ftfy` to - be installed. If the dependencies are not installed, the embeddings will be created from the raw - prompt. - cross_attention_kwargs (`dict`, *optional*): - A kwargs dictionary that if specified is passed along to the `AttentionProcessor` as defined under - `self.processor` in - [diffusers.cross_attention](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/cross_attention.py). - - Examples: - - Returns: - [`~pipelines.stable_diffusion.IFPipelineOutput`] or `tuple`: - [`~pipelines.stable_diffusion.IFPipelineOutput`] if `return_dict` is True, otherwise a `tuple. When - returning a tuple, the first element is a list with the generated images, and the second element is a list - of `bool`s denoting whether the corresponding generated image likely represents "not-safe-for-work" (nsfw) - or watermarked content, according to the `safety_checker`. - """ - # 1. Check inputs. Raise error if not correct - self.check_inputs(prompt, callback_steps, negative_prompt, prompt_embeds, negative_prompt_embeds) - - # 2. Define call parameters - height = height or self.unet.config.sample_size - width = width or self.unet.config.sample_size - - if prompt is not None and isinstance(prompt, str): - batch_size = 1 - elif prompt is not None and isinstance(prompt, list): - batch_size = len(prompt) - else: - batch_size = prompt_embeds.shape[0] - - device = self._execution_device - - # here `guidance_scale` is defined analog to the guidance weight `w` of equation (2) - # of the Imagen paper: https://arxiv.org/pdf/2205.11487.pdf . `guidance_scale = 1` - # corresponds to doing no classifier free guidance. - do_classifier_free_guidance = guidance_scale > 1.0 - - # 3. Encode input prompt - prompt_embeds, negative_prompt_embeds = self.encode_prompt( - prompt, - do_classifier_free_guidance, - num_images_per_prompt=num_images_per_prompt, - device=device, - negative_prompt=negative_prompt, - prompt_embeds=prompt_embeds, - negative_prompt_embeds=negative_prompt_embeds, - clean_caption=clean_caption, - ) - - if do_classifier_free_guidance: - prompt_embeds = torch.cat([negative_prompt_embeds, prompt_embeds]) - - # 4. Prepare timesteps - if timesteps is not None: - self.scheduler.set_timesteps(timesteps=timesteps, device=device) - timesteps = self.scheduler.timesteps - num_inference_steps = len(timesteps) - else: - self.scheduler.set_timesteps(num_inference_steps, device=device) - timesteps = self.scheduler.timesteps - - # 5. Prepare intermediate images - intermediate_images = self.prepare_intermediate_images( - batch_size * num_images_per_prompt, - self.unet.config.in_channels, - height, - width, - prompt_embeds.dtype, - device, - generator, - ) - - # 6. Prepare extra step kwargs. TODO: Logic should ideally just be moved out of the pipeline - extra_step_kwargs = self.prepare_extra_step_kwargs(generator, eta) - - # HACK: see comment in `enable_model_cpu_offload` - if hasattr(self, "text_encoder_offload_hook") and self.text_encoder_offload_hook is not None: - self.text_encoder_offload_hook.offload() - - # 7. Denoising loop - num_warmup_steps = len(timesteps) - num_inference_steps * self.scheduler.order - with self.progress_bar(total=num_inference_steps) as progress_bar: - for i, t in enumerate(timesteps): - model_input = ( - torch.cat([intermediate_images] * 2) if do_classifier_free_guidance else intermediate_images - ) - model_input = self.scheduler.scale_model_input(model_input, t) - - # predict the noise residual - noise_pred = self.unet( - model_input, - t, - encoder_hidden_states=prompt_embeds, - cross_attention_kwargs=cross_attention_kwargs, - return_dict=False, - )[0] - - # perform guidance - if do_classifier_free_guidance: - noise_pred_uncond, noise_pred_text = noise_pred.chunk(2) - noise_pred_uncond, _ = noise_pred_uncond.split(model_input.shape[1], dim=1) - noise_pred_text, predicted_variance = noise_pred_text.split(model_input.shape[1], dim=1) - noise_pred = noise_pred_uncond + guidance_scale * (noise_pred_text - noise_pred_uncond) - noise_pred = torch.cat([noise_pred, predicted_variance], dim=1) - - if self.scheduler.config.variance_type not in ["learned", "learned_range"]: - noise_pred, _ = noise_pred.split(model_input.shape[1], dim=1) - - # compute the previous noisy sample x_t -> x_t-1 - intermediate_images = self.scheduler.step( - noise_pred, t, intermediate_images, **extra_step_kwargs, return_dict=False - )[0] - - # call the callback, if provided - if i == len(timesteps) - 1 or ((i + 1) > num_warmup_steps and (i + 1) % self.scheduler.order == 0): - progress_bar.update() - if callback is not None and i % callback_steps == 0: - callback(i, t, intermediate_images) - - image = intermediate_images - - if output_type == "pil": - # 8. Post-processing - image = (image / 2 + 0.5).clamp(0, 1) - image = image.cpu().permute(0, 2, 3, 1).float().numpy() - - # 9. Run safety checker - image, nsfw_detected, watermark_detected = self.run_safety_checker(image, device, prompt_embeds.dtype) - - # 10. Convert to PIL - image = self.numpy_to_pil(image) - - # 11. Apply watermark - if self.watermarker is not None: - image = self.watermarker.apply_watermark(image, self.unet.config.sample_size) - elif output_type == "pt": - nsfw_detected = None - watermark_detected = None - - if hasattr(self, "unet_offload_hook") and self.unet_offload_hook is not None: - self.unet_offload_hook.offload() - else: - # 8. Post-processing - image = (image / 2 + 0.5).clamp(0, 1) - image = image.cpu().permute(0, 2, 3, 1).float().numpy() - - # 9. Run safety checker - image, nsfw_detected, watermark_detected = self.run_safety_checker(image, device, prompt_embeds.dtype) - - # Offload last model to CPU - if hasattr(self, "final_offload_hook") and self.final_offload_hook is not None: - self.final_offload_hook.offload() - - if not return_dict: - return (image, nsfw_detected, watermark_detected) - - return IFPipelineOutput(images=image, nsfw_detected=nsfw_detected, watermark_detected=watermark_detected) diff --git a/spaces/Andy1621/uniformer_image_detection/configs/paa/paa_r50_fpn_1x_coco.py b/spaces/Andy1621/uniformer_image_detection/configs/paa/paa_r50_fpn_1x_coco.py deleted file mode 100644 index cd844108216c16801c0875723d589c5b11fb7b8d..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_detection/configs/paa/paa_r50_fpn_1x_coco.py +++ /dev/null @@ -1,70 +0,0 @@ -_base_ = [ - '../_base_/datasets/coco_detection.py', - '../_base_/schedules/schedule_1x.py', '../_base_/default_runtime.py' -] -model = dict( - type='PAA', - pretrained='torchvision://resnet50', - backbone=dict( - type='ResNet', - depth=50, - num_stages=4, - out_indices=(0, 1, 2, 3), - frozen_stages=1, - norm_cfg=dict(type='BN', requires_grad=True), - norm_eval=True, - style='pytorch'), - neck=dict( - type='FPN', - in_channels=[256, 512, 1024, 2048], - out_channels=256, - start_level=1, - add_extra_convs='on_output', - num_outs=5), - bbox_head=dict( - type='PAAHead', - reg_decoded_bbox=True, - score_voting=True, - topk=9, - num_classes=80, - in_channels=256, - stacked_convs=4, - feat_channels=256, - anchor_generator=dict( - type='AnchorGenerator', - ratios=[1.0], - octave_base_scale=8, - scales_per_octave=1, - strides=[8, 16, 32, 64, 128]), - bbox_coder=dict( - type='DeltaXYWHBBoxCoder', - target_means=[.0, .0, .0, .0], - target_stds=[0.1, 0.1, 0.2, 0.2]), - loss_cls=dict( - type='FocalLoss', - use_sigmoid=True, - gamma=2.0, - alpha=0.25, - loss_weight=1.0), - loss_bbox=dict(type='GIoULoss', loss_weight=1.3), - loss_centerness=dict( - type='CrossEntropyLoss', use_sigmoid=True, loss_weight=0.5)), - # training and testing settings - train_cfg=dict( - assigner=dict( - type='MaxIoUAssigner', - pos_iou_thr=0.1, - neg_iou_thr=0.1, - min_pos_iou=0, - ignore_iof_thr=-1), - allowed_border=-1, - pos_weight=-1, - debug=False), - test_cfg=dict( - nms_pre=1000, - min_bbox_size=0, - score_thr=0.05, - nms=dict(type='nms', iou_threshold=0.6), - max_per_img=100)) -# optimizer -optimizer = dict(type='SGD', lr=0.01, momentum=0.9, weight_decay=0.0001) diff --git a/spaces/Arjav/TOS-Summarization/README.md b/spaces/Arjav/TOS-Summarization/README.md deleted file mode 100644 index f734577d046948e88992b1c3016308fe40f86472..0000000000000000000000000000000000000000 --- a/spaces/Arjav/TOS-Summarization/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: TOS Summarization -emoji: 🐨 -colorFrom: gray -colorTo: green -sdk: gradio -sdk_version: 3.27.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/chardet/metadata/languages.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/chardet/metadata/languages.py deleted file mode 100644 index eb40c5f0c8526208d434d762855d23079dc68b36..0000000000000000000000000000000000000000 --- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/chardet/metadata/languages.py +++ /dev/null @@ -1,352 +0,0 @@ -""" -Metadata about languages used by our model training code for our -SingleByteCharSetProbers. Could be used for other things in the future. - -This code is based on the language metadata from the uchardet project. -""" - -from string import ascii_letters -from typing import List, Optional - -# TODO: Add Ukrainian (KOI8-U) - - -class Language: - """Metadata about a language useful for training models - - :ivar name: The human name for the language, in English. - :type name: str - :ivar iso_code: 2-letter ISO 639-1 if possible, 3-letter ISO code otherwise, - or use another catalog as a last resort. - :type iso_code: str - :ivar use_ascii: Whether or not ASCII letters should be included in trained - models. - :type use_ascii: bool - :ivar charsets: The charsets we want to support and create data for. - :type charsets: list of str - :ivar alphabet: The characters in the language's alphabet. If `use_ascii` is - `True`, you only need to add those not in the ASCII set. - :type alphabet: str - :ivar wiki_start_pages: The Wikipedia pages to start from if we're crawling - Wikipedia for training data. - :type wiki_start_pages: list of str - """ - - def __init__( - self, - name: Optional[str] = None, - iso_code: Optional[str] = None, - use_ascii: bool = True, - charsets: Optional[List[str]] = None, - alphabet: Optional[str] = None, - wiki_start_pages: Optional[List[str]] = None, - ) -> None: - super().__init__() - self.name = name - self.iso_code = iso_code - self.use_ascii = use_ascii - self.charsets = charsets - if self.use_ascii: - if alphabet: - alphabet += ascii_letters - else: - alphabet = ascii_letters - elif not alphabet: - raise ValueError("Must supply alphabet if use_ascii is False") - self.alphabet = "".join(sorted(set(alphabet))) if alphabet else None - self.wiki_start_pages = wiki_start_pages - - def __repr__(self) -> str: - param_str = ", ".join( - f"{k}={v!r}" for k, v in self.__dict__.items() if not k.startswith("_") - ) - return f"{self.__class__.__name__}({param_str})" - - -LANGUAGES = { - "Arabic": Language( - name="Arabic", - iso_code="ar", - use_ascii=False, - # We only support encodings that use isolated - # forms, because the current recommendation is - # that the rendering system handles presentation - # forms. This means we purposefully skip IBM864. - charsets=["ISO-8859-6", "WINDOWS-1256", "CP720", "CP864"], - alphabet="ءآأؤإئابةتثجحخدذرزسشصضطظعغػؼؽؾؿـفقكلمنهوىيًٌٍَُِّ", - wiki_start_pages=["الصفحة_الرئيسية"], - ), - "Belarusian": Language( - name="Belarusian", - iso_code="be", - use_ascii=False, - charsets=["ISO-8859-5", "WINDOWS-1251", "IBM866", "MacCyrillic"], - alphabet="АБВГДЕЁЖЗІЙКЛМНОПРСТУЎФХЦЧШЫЬЭЮЯабвгдеёжзійклмнопрстуўфхцчшыьэюяʼ", - wiki_start_pages=["Галоўная_старонка"], - ), - "Bulgarian": Language( - name="Bulgarian", - iso_code="bg", - use_ascii=False, - charsets=["ISO-8859-5", "WINDOWS-1251", "IBM855"], - alphabet="АБВГДЕЖЗИЙКЛМНОПРСТУФХЦЧШЩЪЬЮЯабвгдежзийклмнопрстуфхцчшщъьюя", - wiki_start_pages=["Начална_страница"], - ), - "Czech": Language( - name="Czech", - iso_code="cz", - use_ascii=True, - charsets=["ISO-8859-2", "WINDOWS-1250"], - alphabet="áčďéěíňóřšťúůýžÁČĎÉĚÍŇÓŘŠŤÚŮÝŽ", - wiki_start_pages=["Hlavní_strana"], - ), - "Danish": Language( - name="Danish", - iso_code="da", - use_ascii=True, - charsets=["ISO-8859-1", "ISO-8859-15", "WINDOWS-1252", "MacRoman"], - alphabet="æøåÆØÅ", - wiki_start_pages=["Forside"], - ), - "German": Language( - name="German", - iso_code="de", - use_ascii=True, - charsets=["ISO-8859-1", "ISO-8859-15", "WINDOWS-1252", "MacRoman"], - alphabet="äöüßẞÄÖÜ", - wiki_start_pages=["Wikipedia:Hauptseite"], - ), - "Greek": Language( - name="Greek", - iso_code="el", - use_ascii=False, - charsets=["ISO-8859-7", "WINDOWS-1253"], - alphabet="αβγδεζηθικλμνξοπρσςτυφχψωάέήίόύώΑΒΓΔΕΖΗΘΙΚΛΜΝΞΟΠΡΣΣΤΥΦΧΨΩΆΈΉΊΌΎΏ", - wiki_start_pages=["Πύλη:Κύρια"], - ), - "English": Language( - name="English", - iso_code="en", - use_ascii=True, - charsets=["ISO-8859-1", "WINDOWS-1252", "MacRoman"], - wiki_start_pages=["Main_Page"], - ), - "Esperanto": Language( - name="Esperanto", - iso_code="eo", - # Q, W, X, and Y not used at all - use_ascii=False, - charsets=["ISO-8859-3"], - alphabet="abcĉdefgĝhĥijĵklmnoprsŝtuŭvzABCĈDEFGĜHĤIJĴKLMNOPRSŜTUŬVZ", - wiki_start_pages=["Vikipedio:Ĉefpaĝo"], - ), - "Spanish": Language( - name="Spanish", - iso_code="es", - use_ascii=True, - charsets=["ISO-8859-1", "ISO-8859-15", "WINDOWS-1252", "MacRoman"], - alphabet="ñáéíóúüÑÁÉÍÓÚÜ", - wiki_start_pages=["Wikipedia:Portada"], - ), - "Estonian": Language( - name="Estonian", - iso_code="et", - use_ascii=False, - charsets=["ISO-8859-4", "ISO-8859-13", "WINDOWS-1257"], - # C, F, Š, Q, W, X, Y, Z, Ž are only for - # loanwords - alphabet="ABDEGHIJKLMNOPRSTUVÕÄÖÜabdeghijklmnoprstuvõäöü", - wiki_start_pages=["Esileht"], - ), - "Finnish": Language( - name="Finnish", - iso_code="fi", - use_ascii=True, - charsets=["ISO-8859-1", "ISO-8859-15", "WINDOWS-1252", "MacRoman"], - alphabet="ÅÄÖŠŽåäöšž", - wiki_start_pages=["Wikipedia:Etusivu"], - ), - "French": Language( - name="French", - iso_code="fr", - use_ascii=True, - charsets=["ISO-8859-1", "ISO-8859-15", "WINDOWS-1252", "MacRoman"], - alphabet="œàâçèéîïùûêŒÀÂÇÈÉÎÏÙÛÊ", - wiki_start_pages=["Wikipédia:Accueil_principal", "Bœuf (animal)"], - ), - "Hebrew": Language( - name="Hebrew", - iso_code="he", - use_ascii=False, - charsets=["ISO-8859-8", "WINDOWS-1255"], - alphabet="אבגדהוזחטיךכלםמןנסעףפץצקרשתװױײ", - wiki_start_pages=["עמוד_ראשי"], - ), - "Croatian": Language( - name="Croatian", - iso_code="hr", - # Q, W, X, Y are only used for foreign words. - use_ascii=False, - charsets=["ISO-8859-2", "WINDOWS-1250"], - alphabet="abcčćdđefghijklmnoprsštuvzžABCČĆDĐEFGHIJKLMNOPRSŠTUVZŽ", - wiki_start_pages=["Glavna_stranica"], - ), - "Hungarian": Language( - name="Hungarian", - iso_code="hu", - # Q, W, X, Y are only used for foreign words. - use_ascii=False, - charsets=["ISO-8859-2", "WINDOWS-1250"], - alphabet="abcdefghijklmnoprstuvzáéíóöőúüűABCDEFGHIJKLMNOPRSTUVZÁÉÍÓÖŐÚÜŰ", - wiki_start_pages=["Kezdőlap"], - ), - "Italian": Language( - name="Italian", - iso_code="it", - use_ascii=True, - charsets=["ISO-8859-1", "ISO-8859-15", "WINDOWS-1252", "MacRoman"], - alphabet="ÀÈÉÌÒÓÙàèéìòóù", - wiki_start_pages=["Pagina_principale"], - ), - "Lithuanian": Language( - name="Lithuanian", - iso_code="lt", - use_ascii=False, - charsets=["ISO-8859-13", "WINDOWS-1257", "ISO-8859-4"], - # Q, W, and X not used at all - alphabet="AĄBCČDEĘĖFGHIĮYJKLMNOPRSŠTUŲŪVZŽaąbcčdeęėfghiįyjklmnoprsštuųūvzž", - wiki_start_pages=["Pagrindinis_puslapis"], - ), - "Latvian": Language( - name="Latvian", - iso_code="lv", - use_ascii=False, - charsets=["ISO-8859-13", "WINDOWS-1257", "ISO-8859-4"], - # Q, W, X, Y are only for loanwords - alphabet="AĀBCČDEĒFGĢHIĪJKĶLĻMNŅOPRSŠTUŪVZŽaābcčdeēfgģhiījkķlļmnņoprsštuūvzž", - wiki_start_pages=["Sākumlapa"], - ), - "Macedonian": Language( - name="Macedonian", - iso_code="mk", - use_ascii=False, - charsets=["ISO-8859-5", "WINDOWS-1251", "MacCyrillic", "IBM855"], - alphabet="АБВГДЃЕЖЗЅИЈКЛЉМНЊОПРСТЌУФХЦЧЏШабвгдѓежзѕијклљмнњопрстќуфхцчџш", - wiki_start_pages=["Главна_страница"], - ), - "Dutch": Language( - name="Dutch", - iso_code="nl", - use_ascii=True, - charsets=["ISO-8859-1", "WINDOWS-1252", "MacRoman"], - wiki_start_pages=["Hoofdpagina"], - ), - "Polish": Language( - name="Polish", - iso_code="pl", - # Q and X are only used for foreign words. - use_ascii=False, - charsets=["ISO-8859-2", "WINDOWS-1250"], - alphabet="AĄBCĆDEĘFGHIJKLŁMNŃOÓPRSŚTUWYZŹŻaąbcćdeęfghijklłmnńoóprsśtuwyzźż", - wiki_start_pages=["Wikipedia:Strona_główna"], - ), - "Portuguese": Language( - name="Portuguese", - iso_code="pt", - use_ascii=True, - charsets=["ISO-8859-1", "ISO-8859-15", "WINDOWS-1252", "MacRoman"], - alphabet="ÁÂÃÀÇÉÊÍÓÔÕÚáâãàçéêíóôõú", - wiki_start_pages=["Wikipédia:Página_principal"], - ), - "Romanian": Language( - name="Romanian", - iso_code="ro", - use_ascii=True, - charsets=["ISO-8859-2", "WINDOWS-1250"], - alphabet="ăâîșțĂÂÎȘȚ", - wiki_start_pages=["Pagina_principală"], - ), - "Russian": Language( - name="Russian", - iso_code="ru", - use_ascii=False, - charsets=[ - "ISO-8859-5", - "WINDOWS-1251", - "KOI8-R", - "MacCyrillic", - "IBM866", - "IBM855", - ], - alphabet="абвгдеёжзийклмнопрстуфхцчшщъыьэюяАБВГДЕЁЖЗИЙКЛМНОПРСТУФХЦЧШЩЪЫЬЭЮЯ", - wiki_start_pages=["Заглавная_страница"], - ), - "Slovak": Language( - name="Slovak", - iso_code="sk", - use_ascii=True, - charsets=["ISO-8859-2", "WINDOWS-1250"], - alphabet="áäčďéíĺľňóôŕšťúýžÁÄČĎÉÍĹĽŇÓÔŔŠŤÚÝŽ", - wiki_start_pages=["Hlavná_stránka"], - ), - "Slovene": Language( - name="Slovene", - iso_code="sl", - # Q, W, X, Y are only used for foreign words. - use_ascii=False, - charsets=["ISO-8859-2", "WINDOWS-1250"], - alphabet="abcčdefghijklmnoprsštuvzžABCČDEFGHIJKLMNOPRSŠTUVZŽ", - wiki_start_pages=["Glavna_stran"], - ), - # Serbian can be written in both Latin and Cyrillic, but there's no - # simple way to get the Latin alphabet pages from Wikipedia through - # the API, so for now we just support Cyrillic. - "Serbian": Language( - name="Serbian", - iso_code="sr", - alphabet="АБВГДЂЕЖЗИЈКЛЉМНЊОПРСТЋУФХЦЧЏШабвгдђежзијклљмнњопрстћуфхцчџш", - charsets=["ISO-8859-5", "WINDOWS-1251", "MacCyrillic", "IBM855"], - wiki_start_pages=["Главна_страна"], - ), - "Thai": Language( - name="Thai", - iso_code="th", - use_ascii=False, - charsets=["ISO-8859-11", "TIS-620", "CP874"], - alphabet="กขฃคฅฆงจฉชซฌญฎฏฐฑฒณดตถทธนบปผฝพฟภมยรฤลฦวศษสหฬอฮฯะัาำิีึืฺุู฿เแโใไๅๆ็่้๊๋์ํ๎๏๐๑๒๓๔๕๖๗๘๙๚๛", - wiki_start_pages=["หน้าหลัก"], - ), - "Turkish": Language( - name="Turkish", - iso_code="tr", - # Q, W, and X are not used by Turkish - use_ascii=False, - charsets=["ISO-8859-3", "ISO-8859-9", "WINDOWS-1254"], - alphabet="abcçdefgğhıijklmnoöprsştuüvyzâîûABCÇDEFGĞHIİJKLMNOÖPRSŞTUÜVYZÂÎÛ", - wiki_start_pages=["Ana_Sayfa"], - ), - "Vietnamese": Language( - name="Vietnamese", - iso_code="vi", - use_ascii=False, - # Windows-1258 is the only common 8-bit - # Vietnamese encoding supported by Python. - # From Wikipedia: - # For systems that lack support for Unicode, - # dozens of 8-bit Vietnamese code pages are - # available.[1] The most common are VISCII - # (TCVN 5712:1993), VPS, and Windows-1258.[3] - # Where ASCII is required, such as when - # ensuring readability in plain text e-mail, - # Vietnamese letters are often encoded - # according to Vietnamese Quoted-Readable - # (VIQR) or VSCII Mnemonic (VSCII-MNEM),[4] - # though usage of either variable-width - # scheme has declined dramatically following - # the adoption of Unicode on the World Wide - # Web. - charsets=["WINDOWS-1258"], - alphabet="aăâbcdđeêghiklmnoôơpqrstuưvxyAĂÂBCDĐEÊGHIKLMNOÔƠPQRSTUƯVXY", - wiki_start_pages=["Chữ_Quốc_ngữ"], - ), -} diff --git a/spaces/AutoBG/Auto-BoardGame/title_generator.py b/spaces/AutoBG/Auto-BoardGame/title_generator.py deleted file mode 100644 index 6d20600ae18a57998f06906d46978d40a20e7807..0000000000000000000000000000000000000000 --- a/spaces/AutoBG/Auto-BoardGame/title_generator.py +++ /dev/null @@ -1,149 +0,0 @@ -import pandas as pd -import re -import nltk -nltk.download('stopwords') -from nltk.corpus import stopwords -from gensim.parsing import preprocess_string, strip_tags, strip_numeric, strip_multiple_whitespaces, stem_text, strip_punctuation, remove_stopwords -import spacy -import torch -from transformers import T5ForConditionalGeneration,T5Tokenizer -import random -from operator import itemgetter - -#Custom text tokenizer from https://github.com/canunj/deconstructing_games by N Canu & K Chen -def doc_text_preprocessing(ser): - nlp=spacy.load("en_core_web_md", exclude=['parser','ner','textcat']) - - """text processing steps""" - import re - stop_words=set(stopwords.words('english')) - - single_letter_replace=lambda c: re.sub("\s+\w{1}\s+|\n|-|—",'',c) - to_lower_func=lambda c: c.lower() - lemma_text=[preprocess_string( - ' '.join([token.lemma_ for token in desc] - ),[remove_stopwords,strip_numeric,strip_punctuation,strip_tags, - strip_multiple_whitespaces,single_letter_replace,to_lower_func] - ) for desc in ser.apply(lambda x: nlp(x))] - - tokenize_text=[[word for word in string if word not in stop_words] for string in lemma_text] - - return tokenize_text - -class Title_Generator: - - def __init__(self, path, df): - self.model = T5ForConditionalGeneration.from_pretrained(path) - self.tokenizer = T5Tokenizer.from_pretrained(path) - self.device = torch.device("cuda" if torch.cuda.is_available() else "cpu") - self.model.to(self.device) - self.game_df = df - - self.title_iter = -1 - self.out_titles = None - self.best_title = None - self.description = None - self.nlp = spacy.load("en_core_web_md") - - - def candidate_generator(self, description): - text = "headline: " + description - - encoding = self.tokenizer.encode_plus(text, return_tensors = "pt") - input_ids = encoding["input_ids"].to(self.device) - attention_masks = encoding["attention_mask"].to(self.device) - - candidates = [] - - beam_outputs = self.model.generate( - input_ids = input_ids, - attention_mask = attention_masks, - max_length = 64, - num_beams = 16, - num_beam_groups=4, - num_return_sequences=8, - diversity_penalty=.1, - repetition_penalty=.9, - early_stopping = True) - - for result in beam_outputs: - res = self.tokenizer.decode(result).replace(' ','').replace('','').replace('','') - candidates.append(res) - - return candidates, description - - def candidate_score(self,candidates,ex_check=None): - - - if ex_check != None: - pat = re.compile("((?:" + "|".join(map(re.escape, candidates[0]+[cand.upper() for cand in candidates[0]])) + "|" + "|".join(ex_check) +"))") - desc = re.sub(pat, "__", candidates[1]) - else: - pat = re.compile("((?:" + "|".join(map(re.escape, candidates[0]+[cand.upper() for cand in candidates[0]])) + "))") - desc = re.sub(pat, "__", candidates[1]) - - - if re.search(re.compile(re.escape("__")), desc): - reg = re.compile("("+"|".join(ex_check) + ")") - hold = candidates[0] - gen_desc = re.sub(re.compile(re.escape("__")),"",desc) - candidates = self.candidate_generator(gen_desc) - next = [cand for cand in candidates[0]+hold if not reg.search(cand)] - candidates = (next, desc) - - #check for existing games and duplicates - #transform function from https://stackoverflow.com/questions/42165779/python-how-to-remove-duplicate-valuescase-insensitive-from-a-list-with-same-o - def transform(L): - S = set(L) - return [item.title() for item in L if item.lower() not in S and not S.add(item.lower())] - - - clean_cand_step = list(set([game[0] for game in list(zip(candidates[0],[len(self.game_df[self.game_df.name.isin([x])]) for x in candidates[0]])) if game[1]==0])) - clean_cand_step = transform(clean_cand_step) - - clean_cand_step = [re.sub(re.compile("(?<=[ ])And(?=[ ])"),'and', - re.sub(re.compile('(?<=\S) (([(]|\b)[Ss]econd [Ee]dition([)]|\b)|[Ss]econd [Ee]dition|2[Nn][Dd] [Ee]dition|([(]|\b)[Tt]hird [Ee]dition([)]|\b)|3[Rr][Dd] [Ee]dition)|["]Second Edition["]'),"", - re.sub(re.compile("(?<=[a-z])'S"),"'s", - re.sub(re.compile("(?<=[ ])Of(?=[ ])"),"of",x)))) - for x in clean_cand_step] - - - clean_cand = [] - for cand in clean_cand_step: - try: - inter = cand.split(":") - if inter[0].lower()==inter[1].lower(): - clean_cand.append(inter[0]) - else: - clean_cand.append(cand) - except: - clean_cand.append(cand) - - #text processing - token_cand = doc_text_preprocessing(pd.Series(clean_cand)) - token_art = doc_text_preprocessing(pd.Series([candidates[1]])) - sim = [self.nlp(title) for title in [" ".join(title) for title in token_cand]] - doc = self.nlp(" ".join(token_art[0])) - - #scores cosine similarity between generated titles and body text, if the word is unknown (i.e. generator knows it but spacy doesn't) - #it assigns a random probability to populate - - scores = [x if x !=0 else random.uniform(.3, .7) for x in [tok.similarity(doc) for tok in sim]] - - out_titles = sorted(list(zip(clean_cand,scores)),key=itemgetter(1),reverse=True) - - pat = re.compile("(?<=[!.?])(?=[^\s])") - pat2 = re.compile("([Ff]rom the [Pp]ublisher[: ]|[Ff]rom the [Dd]esigner[: ]|[Gg]ame [Dd]escription)") - pat3 = re.compile(": [Tt]he [Gg]ame: [Tt]he [Gg]ame|: [Tt]he [Gg]ame") - pat4 = re.compile("[Tt]he __") - pat5 = re.compile("__ [Gg]ame") - pat6 = re.compile("[Tt]he [Gg]ame [Oo]f __") - - desc = re.sub(pat," ",candidates[1]) - desc = re.sub(pat2,"",desc) - desc = re.sub(pat3,"",desc) - desc = re.sub(pat4,"__",desc) - desc = re.sub(pat5,"__",desc) - desc = re.sub(pat6,"__",desc) - - return {'text':desc,'titles':out_titles} diff --git a/spaces/AyameYODAYO/xijinpingx/README.md b/spaces/AyameYODAYO/xijinpingx/README.md deleted file mode 100644 index f7359155c1884e9e78de81ff8d3bb2e8b9412d61..0000000000000000000000000000000000000000 --- a/spaces/AyameYODAYO/xijinpingx/README.md +++ /dev/null @@ -1,10 +0,0 @@ ---- -title: Xijinpingx -emoji: 😻 -colorFrom: gray -colorTo: purple -sdk: static -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/BLACKHOST/Banner/banner.py b/spaces/BLACKHOST/Banner/banner.py deleted file mode 100644 index ef5a260b070d215c6d00245aebc6d2e5b99298b7..0000000000000000000000000000000000000000 --- a/spaces/BLACKHOST/Banner/banner.py +++ /dev/null @@ -1,4 +0,0 @@ -import pyfiglet -text="H O S T 1 L E T" - -print('\033[31m'+pyfiglet.figlet_format(text,font='slant')+"\n"+'\033[34m'+"_"*60+'\033[00m') \ No newline at end of file diff --git a/spaces/Benson/text-generation/Examples/Apktime V2 2 Apk.md b/spaces/Benson/text-generation/Examples/Apktime V2 2 Apk.md deleted file mode 100644 index fa81a6be4fa0d4577459e5633d4c85fccd0ddb73..0000000000000000000000000000000000000000 --- a/spaces/Benson/text-generation/Examples/Apktime V2 2 Apk.md +++ /dev/null @@ -1,41 +0,0 @@ -
        -

        Qué es APKTime y por qué lo necesitas

        -

        Si usted está buscando una manera de acceder a una amplia gama de aplicaciones de Android que no están disponibles en el oficial de Google Play Store, entonces es posible que desee comprobar APKTime. APKTime es una tienda de aplicaciones gratuita que ofrece todos los archivos APK más recientes y populares para varias categorías, como entretenimiento, deportes, juegos, elementos esenciales, animación y contenido para adultos. Puede encontrar y descargar aplicaciones que no se encuentran en otras tiendas de aplicaciones, como aplicaciones de streaming, aplicaciones modificadas, aplicaciones hackeadas y más.

        -

        apktime v2 2 apk


        DOWNLOADhttps://bltlly.com/2v6KQV



        -

        APKTime es fácil de usar y tiene una interfaz fácil de usar. Puede navegar a través de diferentes secciones y sub-secciones, o utilizar la función de búsqueda para encontrar la aplicación que desea. También puedes ver las calificaciones, reseñas y capturas de pantalla de cada aplicación antes de descargarla. APKTime también actualiza sus aplicaciones regularmente, por lo que siempre puede obtener las últimas versiones. Por otra parte, APKTime comprueba sus aplicaciones para los permisos y elimina muchos no deseados, por lo que es más seguro y fácil de instalar en su dispositivo.

        -

        Cómo descargar e instalar APKTime v2 2 apk en su dispositivo Android

        -

        Para descargar e instalar APKTime v2 2 apk en su dispositivo Android, es necesario seguir estos sencillos pasos:

        -
          -
        1. Vaya al sitio web oficial de APKTime ([4](https://apktime.com/)) y haga clic en el botón de descarga. Alternativamente, puede utilizar este enlace ([3](https://filehippo.com/android/download_apktime/)) para descargar el archivo apk directamente desde Filehippo.
        2. -
        3. Una vez completada la descarga, vaya a la configuración del dispositivo y habilite la opción de instalar aplicaciones de fuentes desconocidas. Esto le permitirá instalar archivos APK que no son de Google Play Store.
        4. -
        5. Busque el archivo apk descargado en su administrador de archivos y toque en él para iniciar el proceso de instalación. Siga las instrucciones en pantalla y conceda los permisos necesarios.
        6. - -
        -

        También puede ver este video tutorial ([1](https://archive.org/details/apktime-v-2.2-original_20200614)) para obtener más detalles sobre cómo descargar e instalar APKTime v2 apk en su dispositivo Android.

        -

        Cómo usar APKTime para encontrar e instalar aplicaciones de terceros

        -

        Una vez que haya instalado APKTime en su dispositivo, puede usarlo para encontrar e instalar aplicaciones de terceros que se adapten a sus necesidades y preferencias. Aquí hay algunos consejos y trucos para usar APKTime:

        -
          -
        • Para encontrar una aplicación, puede navegar a través de las diferentes categorías y subcategorías, o utilizar la función de búsqueda en la esquina superior derecha de la pantalla. También puedes ordenar las aplicaciones por popularidad, calificación o fecha.
        • -
        • Para descargar una aplicación, simplemente toque en su nombre o icono y luego toque en el botón de descarga. Verá una barra de progreso que muestra el estado de la descarga. También puede pausar o reanudar la descarga en cualquier momento.
        • -
        • Para instalar una aplicación, toque en el botón de instalación después de que se complete la descarga. Es posible que necesite conceder algunos permisos o habilitar algunos ajustes para que la aplicación funcione correctamente.
        • -
        • Para actualizar una aplicación, vaya a la sección de actualizaciones en la barra de menú y toque en el botón de actualización junto al nombre de la aplicación. También puede habilitar actualizaciones automáticas para todas las aplicaciones en la sección de configuración.
        • -
        • Para desinstalar una aplicación, vaya a la sección instalada en la barra de menú y toque en el botón de desinstalación junto al nombre de la aplicación. También puede desinstalar una aplicación de la configuración del dispositivo o archivo li> Utilice una fuente confiable y confiable para descargar APKTime y las aplicaciones que desee de ella. No utilice ningún sitio web o enlace no oficial o no verificado que pueda contener versiones falsas o modificadas de APKTime o las aplicaciones. Utilice siempre el sitio web oficial de APKTime ([4](https://apktime.com/)) o un sitio web de terceros de confianza como Filehippo ([3](https://filehippo.com/android/download_apktime/)) para descargar los archivos apk.
        • -
        -

        Conclusión

        - -

        Sin embargo, el uso de APKTime también viene con algunos riesgos y desafíos, como malware, virus, spyware, problemas legales, problemas de compatibilidad, errores, errores, consumo de batería, consumo de almacenamiento, consumo de datos, consumo de ancho de banda, etc. Por lo tanto, debe tomar algunas precauciones y medidas para mantenerse seguro mientras usa APKTime. Necesita usar un servicio VPN, una aplicación antivirus, una aplicación de copia de seguridad, sentido común y precaución al usar APKTime.

        -

        -

        Esperamos que este artículo te haya ayudado a entender qué es APKTime y cómo usarlo. Si tiene alguna pregunta o comentario, no dude en dejar un comentario a continuación. ¡Gracias por leer!

        -

        Preguntas frecuentes

        -

        ¿Cuál es la diferencia entre APKTime y Aptoide?

        -

        APKTime y Aptoide son tiendas de aplicaciones gratuitas que ofrecen aplicaciones de terceros para dispositivos Android. Sin embargo, hay algunas diferencias entre ellos. APKTime tiene una interfaz más organizada y fácil de usar que Aptoide. APKTime también tiene más categorías y subcategorías que Aptoide. Aptoide tiene más aplicaciones que APKTime, pero algunas de ellas pueden ser obsoletas o poco fiables. Aptoide también requiere que cree una cuenta e inicie sesión para usarla, mientras que APKTime no lo hace.

        -

        ¿APKTime es legal?

        -

        APKTime en sí es legal, ya que no aloja ni distribuye ninguna aplicación en su plataforma. Solo proporciona enlaces para descargar los archivos apk de otras fuentes. Sin embargo, algunas de las aplicaciones que puedes encontrar y descargar de APKTime pueden no ser legales, ya que pueden violar los términos y condiciones de algunas aplicaciones o servicios, o infringir los derechos de propiedad intelectual de algunos desarrolladores o editores. Por lo tanto, siempre debes comprobar la legalidad de las aplicaciones antes de descargarlas e instalarlas desde APKTime.

        -

        ¿APKTime es seguro?

        - -

        ¿Cómo puedo actualizar APKTime?

        -

        Para actualizar APKTime, puede ir al sitio web oficial de APKTime ([4](https://apktime.com/)) y descargar la última versión del archivo apk, o ir a la sección de actualizaciones en la barra de menú de la tienda de aplicaciones y pulse en el botón de actualización junto al nombre de la aplicación. También puede habilitar actualizaciones automáticas para todas las aplicaciones en la sección de configuración de la tienda de aplicaciones.

        -

        ¿Cómo puedo desinstalar APKTime?

        -

        Para desinstalar APKTime, puede ir a la configuración de su dispositivo y toque en aplicaciones o administrador de aplicaciones y encontrar y toque en APKTime y luego toque en desinstalar, o ir a su aplicación de administrador de archivos y localizar y eliminar el archivo apk de APKTime.

        64aa2da5cf
        -
        -
        \ No newline at end of file diff --git a/spaces/Benson/text-generation/Examples/Brbaro Vieja Escuela Accin Rpg Apk.md b/spaces/Benson/text-generation/Examples/Brbaro Vieja Escuela Accin Rpg Apk.md deleted file mode 100644 index b2d8293306100c6cc46c4eb7d0be1d5bfce0fa85..0000000000000000000000000000000000000000 --- a/spaces/Benson/text-generation/Examples/Brbaro Vieja Escuela Accin Rpg Apk.md +++ /dev/null @@ -1,50 +0,0 @@ - -

        Bárbaro: Old School Action RPG APK - Un juego de aventura no lineal para Android

        -

        Si usted está buscando un juego que le ofrece mucha libertad, desafío, y la inmersión, es posible que desee echa un vistazo a Barbarian: Old School Action RPG APK. Este es un juego que le permite explorar un mundo vivo, participar en el combate dinámico, y dar forma a su propio destino. En este artículo, te contaremos de qué se trata este juego, cuáles son sus características principales y cómo descargarlo e instalarlo en tu dispositivo Android.

        -

        Introducción

        -

        Barbarian: Old School Action RPG APK es un juego desarrollado por Barbar Games, un estudio independiente que tiene como objetivo crear juegos únicos y originales. El juego está inspirado en juegos de rol clásicos como The Elder Scrolls, Gothic y Fallout. Se encuentra en un mundo de fantasía medieval donde puedes elegir tu propio camino y papel. Puedes ser un héroe o un villano, un guerrero o un mago, un cazador o un comerciante. El juego te ofrece muchas opciones y posibilidades para personalizar a tu personaje e influir en el mundo que te rodea.

        -

        bárbaro vieja escuela acción rpg apk


        Download ✸✸✸ https://bltlly.com/2v6Kdx



        -

        Algunas de las características principales del juego son:

        -
          -
        • Un mundo vivo que reacciona a sus acciones y decisiones. Los NPC tienen sus propias vidas, necesidades y comportamientos. Pueden comer, dormir, trabajar, cazar, comerciar, luchar o incluso traicionarte. El mundo también tiene un paisaje complejo y variado que incluye montañas, bosques, mazmorras, cuevas y más.
        • -
        • Un sistema de combate dinámico que requiere habilidad y estrategia. Puedes usar diferentes tipos de armas, como espadas, hachas, arcos o ballestas. También puedes bloquear, esquivar, detener o contraatacar. El resultado de la batalla depende de las estadísticas y habilidades de tu personaje, así como de tu propia habilidad.
        • -
        • Un sistema de desarrollo de personajes que te permite actualizar tu equipo y aprender nuevas habilidades. Puedes encontrar o crear armaduras y armas más poderosas. También puedes entrenar con otros personajes que pueden enseñarte nuevas habilidades o mejorar las existentes.
        • -
        - -
          -
        1. Ir a [este enlace]( 1 ) o [este enlace]( 2 ) y descargar el archivo APK.
        2. -
        3. Habilitar fuentes desconocidas en el dispositivo yendo a Configuración > Seguridad > Fuentes desconocidas.
        4. -
        5. Busque el archivo descargado en su dispositivo y toque en él para instalarlo.
        6. -
        7. Iniciar el juego y disfrutar!
        8. -
        -

        Juego

        -

        Mundo viviente

        -

        interactúan entre sí y con usted. Pueden ser amigables, hostiles o neutrales. También pueden unirse o salir de su partido, dependiendo de su reputación y acciones. El mundo tampoco tiene ubicaciones de carga, lo que significa que puede viajar sin problemas de una zona a otra. El juego también cuenta con un avanzado sistema de IA que hace que los PNJ y los enemigos se comporten de forma realista e inteligente.

        -

        Sistema de combate

        -

        Otro aspecto importante de Bárbaro: Old School Action RPG APK es su sistema de combate. El juego te ofrece una variedad de armas para elegir, como espadas, hachas, arcos o ballestas. Cada arma tiene sus propias ventajas y desventajas, como velocidad, alcance, daño y durabilidad. También puedes cambiar entre armas cuerpo a cuerpo y armas a distancia durante el combate. El juego también te permite bloquear, esquivar, detener o contraatacar a tus enemigos. El sistema de combate no se basa en la suerte o el azar, sino en tu habilidad y las estadísticas de tu personaje. Necesitas prestar atención a tu resistencia, salud y barras de maná, así como a los movimientos y ataques de tus enemigos.

        -

        Desarrollo de caracteres

        - -

        Parcela

        -

        Terminaciones múltiples

        -

        La trama de Barbarian: Old School Action RPG APK no es lineal o predeterminado. El juego te permite elegir tu propio camino y papel en el mundo. Puedes ser un héroe o un villano, un salvador o un destructor, un líder o un seguidor. El juego tiene múltiples finales que dependen de sus acciones y decisiones a lo largo del juego. Puedes elegir ponerte del lado de las fuerzas del mal o del bien, o crear tu propia facción. También puedes establecer el orden en el mundo completando misiones, resolviendo problemas o conquistando territorios. El juego tiene un sistema moral que refleja tu reputación y alineación.

        -

        Historia de ramificación

        -

        El juego también tiene una historia ramificada que te desafía y te mantiene interesado. El juego tiene un alto nivel de complejidad y profundidad que requiere que pienses y planees con anticipación. El juego tiene muchos personajes, misiones y secretos que descubrir. Cada personaje tiene su propia historia, personalidad y objetivos. Cada misión tiene múltiples formas de completarla, con diferentes consecuencias y recompensas. Cada secreto tiene su propio misterio y recompensa. El juego también tiene eventos aleatorios que pueden cambiar el curso de la historia o crear nuevas oportunidades.

        -

        Conclusión

        -

        En conclusión, Barbarian: Old School Action RPG APK es un juego que le ofrece una aventura no lineal en un mundo vivo. El juego tiene un sistema de combate dinámico que requiere habilidad y estrategia, un sistema de desarrollo de personajes que le permite personalizar sus habilidades y equipos, y una trama que tiene múltiples finales y una historia ramificada. El juego es adecuado para los fanáticos de los RPG clásicos que disfrutan de libertad, desafío e inmersión.

        -

        Si estás interesado en jugar a este juego, puedes descargarlo desde [este enlace] o [este enlace]. También puedes visitar el sitio web oficial del juego [aquí] o seguir al desarrollador en Twitter [aquí]. ¡Esperamos que disfrutes de este juego tanto como nosotros!

        -

        -

        Preguntas frecuentes

        -
          - -
        • A: La duración del juego depende de cómo lo juegues y qué decisiones tomes. Se puede tomar en cualquier lugar de 20 a 40 horas para completar la historia principal, pero también hay muchas misiones secundarias y actividades que pueden extender el juego.
        • -
        • Q: ¿El juego está en línea o fuera de línea?
        • -
        • A: El juego es solo offline. No necesitas una conexión a Internet para jugarlo.
        • -
        • Q: ¿El juego es gratis o pagado?
        • -
        • A: El juego es gratis para descargar y jugar. Sin embargo, contiene anuncios y compras en la aplicación que pueden mejorar su experiencia.
        • -
        • para jugar el juego?
        • -
        • A: El juego requiere Android 4.4 o superior y al menos 2 GB de RAM y 500 MB de espacio de almacenamiento.
        • -
        • Q: ¿Cómo puedo contactar al desarrollador o reportar un error?
        • -
        • A: Puede ponerse en contacto con el desarrollador enviando un correo electrónico a [esta dirección] o rellenando [este formulario]. También puedes reportar un error o dar retroalimentación usando el menú del juego.
        • -

        64aa2da5cf
        -
        -
        \ No newline at end of file diff --git a/spaces/Benson/text-generation/Examples/Descargar Chica Del Siglo 21.md b/spaces/Benson/text-generation/Examples/Descargar Chica Del Siglo 21.md deleted file mode 100644 index 19338e65e4770199aad26b8bde38fd2411255c72..0000000000000000000000000000000000000000 --- a/spaces/Benson/text-generation/Examples/Descargar Chica Del Siglo 21.md +++ /dev/null @@ -1,87 +0,0 @@ -
        -

        Descargar 21st Century Girl: Una guía para disfrutar de la última película romántica coreana

        -

        ¿Estás buscando una película dulce y encantadora para ver este fin de semana? ¿Te gustan los dramas coreanos y las películas que te hacen reír, llorar y desmayarse? Si respondiste sí, entonces definitivamente deberías ver 20th Century Girl, una nueva película romántica coreana que se está transmitiendo en Netflix ahora mismo.

        -

        ¿Qué es 20th Century Girl y por qué usted debe verlo

        -

        20th Century Girl es una película coreana de 2022 que sigue el primer amor y amistades de un estudiante de secundaria llamado Bo-Ra en 1999. Bo-Ra es una chica brillante y positiva que es buena en taekwondo y miembro del club de radiodifusión en su escuela. También es la mejor amiga de Yeon-Du, que está enamorada de Hyun-Jin, un chico popular en su clase. Yeon-Du le pide a Bo-Ra que averigüe todo sobre Hyun-Jin antes de ir a los EE.UU. para una cirugía cardíaca. Sin embargo, a medida que Bo-Ra se acerca a Hyun-Jin, ella comienza a enamorarse de él también.

        -

        descargar chica del siglo 21


        DOWNLOADhttps://bltlly.com/2v6LVX



        -

        La trama de la película

        - -

        El reparto y el equipo de la película

        -

        La película cuenta con un talentoso elenco de jóvenes actores que dan vida a sus personajes. Kim You-jung interpreta a Bo-Ra, la heroína de la historia. Kim You-jung es una actriz famosa que ha protagonizado muchos dramas y películas, como Moon Embracing the Sun, Love in the Moonlight, Clean with Passion for Now, y Tune in for Love. Byeon Woo-seok interpreta a Hyun-Jin, el chico guapo pero problemático que llama la atención de Bo-Ra. Byeon Woo-seok es una estrella emergente que ha aparecido en Flower Crew: Joseon Marriage Agency, Record of Youth, Buscar: WWW, and Dear. M. Park Jung-woo interpreta a Woon-Ho, el dulce y leal amigo al que le gusta Bo-Ra. Park Jung-woo es un recién llegado que ha hecho su debut en esta película. Lee Na-eun interpreta a Yeon-Du, el mejor amigo de Bo-Ra que tiene una enfermedad cardíaca. Lee Na-eun es miembro del grupo de chicas April y una actriz que ha actuado en A-Teen, Extraordinary You, y Taxi Driver. La película también cuenta con otros actores de reparto, como Kim Sun-young, Kim Mi-kyung, Kim Sang-ho y Lee Jong-won, que interpretan los papeles de la familia de Bo-Ra, maestra y directora. La película está dirigida por Lee Dong-eun, quien es conocido por sus trabajos anteriores Mother’s Job y In Between Seasons. La película está escrita por Kim Min-jung y Lee Dong-eun, basada en el webtoon del mismo nombre de Yoon Yi-soo. La película es producida por Lotte Entertainment y distribuida por Netflix.

        Las críticas y valoraciones de la película

        - -

        Cómo descargar 20th Century Girl de forma legal y segura

        -

        Si estás interesado en ver 20th Century Girl, es posible que te estés preguntando cómo descargarlo de forma legal y segura. La buena noticia es que puede transmitir y descargar fácilmente la película en Netflix, el servicio de transmisión líder en el mundo que ofrece una amplia gama de películas, programas, documentales y más. Estos son algunos de los beneficios de la transmisión en Netflix, los pasos para descargar la película en Netflix y algunos consejos y trucos para optimizar su experiencia de visualización.

        -

        Los beneficios de la transmisión en Netflix

        -

        Streaming en Netflix tiene muchas ventajas que hacen que valga la pena su tiempo y dinero. Algunos de estos beneficios son:

        -
          -
        • Puedes ver la película en cualquier momento, en cualquier lugar, en cualquier dispositivo. Puede transmitir la película en su televisor inteligente, computadora portátil, tableta, teléfono inteligente o consola de juegos. También puede descargar la película en su dispositivo y verla sin conexión cuando no tiene acceso a Internet o desea guardar datos.
        • -
        • Puedes disfrutar de la película en alta calidad y con subtítulos. Puede elegir entre diferentes resoluciones, como HD o 4K, dependiendo de su dispositivo y la velocidad de Internet. También puede seleccionar entre varios idiomas de subtítulos, como inglés, español, francés o coreano.
        • -
        • Puedes acceder a otro contenido relacionado con la película. Puedes ver videos entre bastidores, entrevistas con el reparto y el equipo, trailers, teasers y más. También puedes ver otras películas y programas coreanos similares a 20th Century Girl, como Crash Landing on You, Itaewon Class, Start-Up, y Sweet Home.
        • -
        • Puede compartir sus opiniones y recomendaciones con otros espectadores. Puede calificar la película, escribir una reseña o dejar un comentario en Netflix. También puede unirse a comunidades y foros en línea donde puede discutir la película con otros fans.
        • - -
        -

        Los pasos para descargar la película en Netflix

        -

        Para descargar 20th Century Girl en Netflix, necesitas tener una cuenta de Netflix y un dispositivo compatible. Si no tiene uno, puede registrarse para una prueba gratuita o un plan de suscripción mensual en el sitio web o la aplicación de Netflix. Los planes varían en precio y características, como el número de pantallas que puede ver al mismo tiempo, la calidad del video y la disponibilidad de descargas. Una vez que tenga una cuenta, puede seguir estos pasos para descargar la película en Netflix:

        -
          -
        1. Abra la aplicación de Netflix en su dispositivo e inicie sesión con su cuenta.
        2. -
        3. Busque 20th Century Girl en la barra de búsqueda o busque las categorías hasta encontrarla.
        4. -
        5. Selecciona la película y toca el botón Descargar que aparece debajo del título. También puedes tocar el botón More y luego seleccionar Download desde el menú.
        6. -
        7. Espere a que se complete la descarga. Puede comprobar el progreso de la descarga en la pestaña Downloads en la parte inferior de la pantalla.
        8. -
        9. Una vez finalizada la descarga, puede ver la película sin conexión pulsando en la pestaña Descargas y seleccionando la película. También puedes acceder a tus descargas desde el botón Menú en la parte superior izquierda de la pantalla y elegir Mis descargas.
        10. -
        -

        Los consejos y trucos para optimizar tu experiencia de visualización

        -

        Para aprovechar al máximo tu experiencia de streaming y descarga, aquí hay algunos consejos y trucos que puedes probar:

        -
          -
        • Asegúrese de tener una conexión a Internet estable y rápida. Si su Internet es lento o poco confiable, puede experimentar almacenamiento en búfer, retraso o video de baja calidad. Para evitar esto, puede usar una conexión por cable en lugar de Wi-Fi, cerrar cualquier otra aplicación o programa que use ancho de banda o actualizar su plan de Internet si es posible.
        • - -
        • Elimina cualquier descarga que ya no necesites. Si has visto la película o no quieres verla de nuevo, puedes borrarla de tu dispositivo para liberar espacio. Puedes hacer esto tocando el botón Editar en la pestaña Descargas y seleccionando la película. También puede eliminar todas las descargas a la vez pulsando en el botón Eliminar todas las descargas en el menú /strong>.
        • -
        • Póngase en contacto con el servicio al cliente de Netflix si encuentra algún problema o problema. Si tiene preguntas, quejas o comentarios sobre la transmisión o descarga en Netflix, puede ponerse en contacto con el servicio de atención al cliente de Netflix a través del teléfono, el chat o el correo electrónico. También puede visitar el Centro de ayuda de Netflix para obtener más información y soluciones.
        • -
        -

        Cómo disfrutar de 20th Century Girl con tus amigos y familiares

        -

        20th Century Girl es una película que se disfruta mejor con tus amigos y familiares. Es una película que te hará reír, llorar y sentir nostalgia por tu primer amor y amistades. Aquí hay algunas maneras de disfrutar de 20th Century Girl con sus seres queridos:

        -

        Los mejores aperitivos y bebidas para preparar la noche de cine

        -

        Ninguna noche de cine está completa sin algunos deliciosos aperitivos y bebidas para comer mientras observa. Estos son algunos de los mejores aperitivos y bebidas que van bien con 20th Century Girl:

        - -SnacksBebidas - - -

        Los divertidos juegos y actividades para hacer antes y después de la película

        -

        Además de ver la película, también puede tener algunos juegos divertidos y actividades para hacer con sus amigos y familiares antes y después de la película. Aquí hay algunas ideas:

        -

        -
          -
        • Antes de la película, puede jugar un juego de preguntas sobre 1999, el año en que se desarrolla la película. Puede hacerse preguntas sobre los eventos, tendencias, celebridades, música, películas y programas que ocurrieron o fueron populares en 1999. También puede utilizar cuestionarios o aplicaciones en línea para poner a prueba sus conocimientos.
        • -
        • Después de la película, puede jugar un juego de karaoke con las canciones de la banda sonora de la película. La película presenta algunas de las canciones más icónicas de los 90 y 2000, como I Want It That Way de Backstreet Boys, ... Baby One More Time por Britney Spears, Mi corazón seguirá adelante por Celine Dion, y Mientras me ames por Justin Bieber. Puedes cantar estas canciones usando una máquina de karaoke, un micrófono o una aplicación para smartphone.
        • -
        • Otra actividad que puedes hacer después de la película es hacer una cápsula del tiempo con tus amigos y familiares. Pueden escribir cartas a sus seres futuros, tomar fotos o videos de ustedes mismos, o recoger artículos que representan sus vidas actuales, como recuerdos, boletos, revistas o juguetes. A continuación, puede poner estas cosas en una caja o un recipiente y sellarlo con una fecha. Usted puede decidir cuándo abrir la cápsula del tiempo en el futuro, como en 10 años, 20 años, o en una ocasión especial.
        • -
        -

        Las preguntas de discusión y los temas para compartir sus pensamientos y sentimientos sobre la película

        -

        Una de las mejores maneras de disfrutar de 20th Century Girl es compartir tus pensamientos y sentimientos sobre la película con tus amigos y familiares. Puede tener una discusión significativa y animada sobre la película haciéndose algunas preguntas y temas, como:

        -
          -
        • ¿Qué te gustó o no de la película?
        • - -
        • ¿Cuál fue tu escena o momento favorito en la película y por qué?
        • -
        • ¿Cómo te hizo sentir la película? ¿Te hizo reír, llorar o ambos?
        • -
        • ¿Qué aprendiste de la película? ¿Te enseñó algo sobre ti mismo, amor, amistad o vida?
        • -
        • ¿Cómo se relacionó la película con tus propias experiencias? ¿Te recordó tu primer amor o amistades?
        • -
        • ¿Cómo retrató la película la era de 1999? ¿Capturó la esencia de ese período de tiempo?
        • -
        • ¿Qué te pareció el final de la película? ¿Estabas satisfecho o decepcionado?
        • -
        • Si pudieras cambiar algo sobre la película, ¿qué sería?
        • -
        • Si hubiera una secuela de la película, ¿qué querrías que pasara?
        • -
        -

        Conclusión

        -

        20th Century Girl es una película romántica coreana que te llevará a un viaje nostálgico y conmovedor de primer amor y amistades. Es una película que puedes ver con tus amigos y familiares y pasar un rato divertido y memorable juntos. Puede transmitir y descargar fácilmente la película en Netflix, donde también puede encontrar otro contenido relacionado con la película. También puede preparar algunos aperitivos y bebidas, jugar algunos juegos y actividades, y tener algunas discusiones sobre la película para mejorar su experiencia de visualización. Si estás buscando una película dulce y encantadora para ver este fin de semana, no te pierdas 20th Century Girl. ¡Descárgalo ahora en Netflix y disfruta!

        -

        Preguntas frecuentes

        -

        Aquí están algunas de las preguntas más frecuentes sobre 20th Century Girl:

        -
          -
        1. ¿Está la chica del siglo XX basada en una historia real?
        2. -

          No, 20th Century Girl no se basa en una historia real. Se basa en un webtoon del mismo nombre de Yoon Yi-soo.

          -
        3. ¿Dónde fue filmada la chica del siglo 20?
        4. -

          La película fue filmada en varios lugares de Corea del Sur, como Seúl, Busan, la isla de Jeju y la provincia de Gyeonggi.

          - -

          Las canciones de 20 Century Girl son cantadas por varios artistas, como Backstreet Boys, Britney Spears, Celine Dion, Justin Bieber e IU. La banda sonora original de la película está compuesta por Kim Jun-seok y Park Se-jun.

          -
        5. ¿Cuánto tiempo es la chica del siglo 20?
        6. -

          La película tiene un tiempo de ejecución de 115 minutos.

          -
        7. Es la chica del siglo XX adecuada para los niños?
        8. -

          La película tiene una calificación PG-13 para algunas referencias de lenguaje, violencia y sexuales. Es adecuada para adolescentes y adultos, pero no para niños pequeños.

          -

        64aa2da5cf
        -
        -
        \ No newline at end of file diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/requests/__version__.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/requests/__version__.py deleted file mode 100644 index 69be3dec7418c9bececde7811fd1d5a62f995f03..0000000000000000000000000000000000000000 --- a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/requests/__version__.py +++ /dev/null @@ -1,14 +0,0 @@ -# .-. .-. .-. . . .-. .-. .-. .-. -# |( |- |.| | | |- `-. | `-. -# ' ' `-' `-`.`-' `-' `-' ' `-' - -__title__ = "requests" -__description__ = "Python HTTP for Humans." -__url__ = "https://requests.readthedocs.io" -__version__ = "2.28.2" -__build__ = 0x022802 -__author__ = "Kenneth Reitz" -__author_email__ = "me@kennethreitz.org" -__license__ = "Apache 2.0" -__copyright__ = "Copyright Kenneth Reitz" -__cake__ = "\u2728 \U0001f370 \u2728" diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/urllib3/packages/six.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/urllib3/packages/six.py deleted file mode 100644 index f099a3dcd28d2fec21457c9b6c01ded4e3e9ddee..0000000000000000000000000000000000000000 --- a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/urllib3/packages/six.py +++ /dev/null @@ -1,1076 +0,0 @@ -# Copyright (c) 2010-2020 Benjamin Peterson -# -# Permission is hereby granted, free of charge, to any person obtaining a copy -# of this software and associated documentation files (the "Software"), to deal -# in the Software without restriction, including without limitation the rights -# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell -# copies of the Software, and to permit persons to whom the Software is -# furnished to do so, subject to the following conditions: -# -# The above copyright notice and this permission notice shall be included in all -# copies or substantial portions of the Software. -# -# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR -# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, -# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE -# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER -# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, -# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE -# SOFTWARE. - -"""Utilities for writing code that runs on Python 2 and 3""" - -from __future__ import absolute_import - -import functools -import itertools -import operator -import sys -import types - -__author__ = "Benjamin Peterson " -__version__ = "1.16.0" - - -# Useful for very coarse version differentiation. -PY2 = sys.version_info[0] == 2 -PY3 = sys.version_info[0] == 3 -PY34 = sys.version_info[0:2] >= (3, 4) - -if PY3: - string_types = (str,) - integer_types = (int,) - class_types = (type,) - text_type = str - binary_type = bytes - - MAXSIZE = sys.maxsize -else: - string_types = (basestring,) - integer_types = (int, long) - class_types = (type, types.ClassType) - text_type = unicode - binary_type = str - - if sys.platform.startswith("java"): - # Jython always uses 32 bits. - MAXSIZE = int((1 << 31) - 1) - else: - # It's possible to have sizeof(long) != sizeof(Py_ssize_t). - class X(object): - def __len__(self): - return 1 << 31 - - try: - len(X()) - except OverflowError: - # 32-bit - MAXSIZE = int((1 << 31) - 1) - else: - # 64-bit - MAXSIZE = int((1 << 63) - 1) - del X - -if PY34: - from importlib.util import spec_from_loader -else: - spec_from_loader = None - - -def _add_doc(func, doc): - """Add documentation to a function.""" - func.__doc__ = doc - - -def _import_module(name): - """Import module, returning the module after the last dot.""" - __import__(name) - return sys.modules[name] - - -class _LazyDescr(object): - def __init__(self, name): - self.name = name - - def __get__(self, obj, tp): - result = self._resolve() - setattr(obj, self.name, result) # Invokes __set__. - try: - # This is a bit ugly, but it avoids running this again by - # removing this descriptor. - delattr(obj.__class__, self.name) - except AttributeError: - pass - return result - - -class MovedModule(_LazyDescr): - def __init__(self, name, old, new=None): - super(MovedModule, self).__init__(name) - if PY3: - if new is None: - new = name - self.mod = new - else: - self.mod = old - - def _resolve(self): - return _import_module(self.mod) - - def __getattr__(self, attr): - _module = self._resolve() - value = getattr(_module, attr) - setattr(self, attr, value) - return value - - -class _LazyModule(types.ModuleType): - def __init__(self, name): - super(_LazyModule, self).__init__(name) - self.__doc__ = self.__class__.__doc__ - - def __dir__(self): - attrs = ["__doc__", "__name__"] - attrs += [attr.name for attr in self._moved_attributes] - return attrs - - # Subclasses should override this - _moved_attributes = [] - - -class MovedAttribute(_LazyDescr): - def __init__(self, name, old_mod, new_mod, old_attr=None, new_attr=None): - super(MovedAttribute, self).__init__(name) - if PY3: - if new_mod is None: - new_mod = name - self.mod = new_mod - if new_attr is None: - if old_attr is None: - new_attr = name - else: - new_attr = old_attr - self.attr = new_attr - else: - self.mod = old_mod - if old_attr is None: - old_attr = name - self.attr = old_attr - - def _resolve(self): - module = _import_module(self.mod) - return getattr(module, self.attr) - - -class _SixMetaPathImporter(object): - - """ - A meta path importer to import six.moves and its submodules. - - This class implements a PEP302 finder and loader. It should be compatible - with Python 2.5 and all existing versions of Python3 - """ - - def __init__(self, six_module_name): - self.name = six_module_name - self.known_modules = {} - - def _add_module(self, mod, *fullnames): - for fullname in fullnames: - self.known_modules[self.name + "." + fullname] = mod - - def _get_module(self, fullname): - return self.known_modules[self.name + "." + fullname] - - def find_module(self, fullname, path=None): - if fullname in self.known_modules: - return self - return None - - def find_spec(self, fullname, path, target=None): - if fullname in self.known_modules: - return spec_from_loader(fullname, self) - return None - - def __get_module(self, fullname): - try: - return self.known_modules[fullname] - except KeyError: - raise ImportError("This loader does not know module " + fullname) - - def load_module(self, fullname): - try: - # in case of a reload - return sys.modules[fullname] - except KeyError: - pass - mod = self.__get_module(fullname) - if isinstance(mod, MovedModule): - mod = mod._resolve() - else: - mod.__loader__ = self - sys.modules[fullname] = mod - return mod - - def is_package(self, fullname): - """ - Return true, if the named module is a package. - - We need this method to get correct spec objects with - Python 3.4 (see PEP451) - """ - return hasattr(self.__get_module(fullname), "__path__") - - def get_code(self, fullname): - """Return None - - Required, if is_package is implemented""" - self.__get_module(fullname) # eventually raises ImportError - return None - - get_source = get_code # same as get_code - - def create_module(self, spec): - return self.load_module(spec.name) - - def exec_module(self, module): - pass - - -_importer = _SixMetaPathImporter(__name__) - - -class _MovedItems(_LazyModule): - - """Lazy loading of moved objects""" - - __path__ = [] # mark as package - - -_moved_attributes = [ - MovedAttribute("cStringIO", "cStringIO", "io", "StringIO"), - MovedAttribute("filter", "itertools", "builtins", "ifilter", "filter"), - MovedAttribute( - "filterfalse", "itertools", "itertools", "ifilterfalse", "filterfalse" - ), - MovedAttribute("input", "__builtin__", "builtins", "raw_input", "input"), - MovedAttribute("intern", "__builtin__", "sys"), - MovedAttribute("map", "itertools", "builtins", "imap", "map"), - MovedAttribute("getcwd", "os", "os", "getcwdu", "getcwd"), - MovedAttribute("getcwdb", "os", "os", "getcwd", "getcwdb"), - MovedAttribute("getoutput", "commands", "subprocess"), - MovedAttribute("range", "__builtin__", "builtins", "xrange", "range"), - MovedAttribute( - "reload_module", "__builtin__", "importlib" if PY34 else "imp", "reload" - ), - MovedAttribute("reduce", "__builtin__", "functools"), - MovedAttribute("shlex_quote", "pipes", "shlex", "quote"), - MovedAttribute("StringIO", "StringIO", "io"), - MovedAttribute("UserDict", "UserDict", "collections"), - MovedAttribute("UserList", "UserList", "collections"), - MovedAttribute("UserString", "UserString", "collections"), - MovedAttribute("xrange", "__builtin__", "builtins", "xrange", "range"), - MovedAttribute("zip", "itertools", "builtins", "izip", "zip"), - MovedAttribute( - "zip_longest", "itertools", "itertools", "izip_longest", "zip_longest" - ), - MovedModule("builtins", "__builtin__"), - MovedModule("configparser", "ConfigParser"), - MovedModule( - "collections_abc", - "collections", - "collections.abc" if sys.version_info >= (3, 3) else "collections", - ), - MovedModule("copyreg", "copy_reg"), - MovedModule("dbm_gnu", "gdbm", "dbm.gnu"), - MovedModule("dbm_ndbm", "dbm", "dbm.ndbm"), - MovedModule( - "_dummy_thread", - "dummy_thread", - "_dummy_thread" if sys.version_info < (3, 9) else "_thread", - ), - MovedModule("http_cookiejar", "cookielib", "http.cookiejar"), - MovedModule("http_cookies", "Cookie", "http.cookies"), - MovedModule("html_entities", "htmlentitydefs", "html.entities"), - MovedModule("html_parser", "HTMLParser", "html.parser"), - MovedModule("http_client", "httplib", "http.client"), - MovedModule("email_mime_base", "email.MIMEBase", "email.mime.base"), - MovedModule("email_mime_image", "email.MIMEImage", "email.mime.image"), - MovedModule("email_mime_multipart", "email.MIMEMultipart", "email.mime.multipart"), - MovedModule( - "email_mime_nonmultipart", "email.MIMENonMultipart", "email.mime.nonmultipart" - ), - MovedModule("email_mime_text", "email.MIMEText", "email.mime.text"), - MovedModule("BaseHTTPServer", "BaseHTTPServer", "http.server"), - MovedModule("CGIHTTPServer", "CGIHTTPServer", "http.server"), - MovedModule("SimpleHTTPServer", "SimpleHTTPServer", "http.server"), - MovedModule("cPickle", "cPickle", "pickle"), - MovedModule("queue", "Queue"), - MovedModule("reprlib", "repr"), - MovedModule("socketserver", "SocketServer"), - MovedModule("_thread", "thread", "_thread"), - MovedModule("tkinter", "Tkinter"), - MovedModule("tkinter_dialog", "Dialog", "tkinter.dialog"), - MovedModule("tkinter_filedialog", "FileDialog", "tkinter.filedialog"), - MovedModule("tkinter_scrolledtext", "ScrolledText", "tkinter.scrolledtext"), - MovedModule("tkinter_simpledialog", "SimpleDialog", "tkinter.simpledialog"), - MovedModule("tkinter_tix", "Tix", "tkinter.tix"), - MovedModule("tkinter_ttk", "ttk", "tkinter.ttk"), - MovedModule("tkinter_constants", "Tkconstants", "tkinter.constants"), - MovedModule("tkinter_dnd", "Tkdnd", "tkinter.dnd"), - MovedModule("tkinter_colorchooser", "tkColorChooser", "tkinter.colorchooser"), - MovedModule("tkinter_commondialog", "tkCommonDialog", "tkinter.commondialog"), - MovedModule("tkinter_tkfiledialog", "tkFileDialog", "tkinter.filedialog"), - MovedModule("tkinter_font", "tkFont", "tkinter.font"), - MovedModule("tkinter_messagebox", "tkMessageBox", "tkinter.messagebox"), - MovedModule("tkinter_tksimpledialog", "tkSimpleDialog", "tkinter.simpledialog"), - MovedModule("urllib_parse", __name__ + ".moves.urllib_parse", "urllib.parse"), - MovedModule("urllib_error", __name__ + ".moves.urllib_error", "urllib.error"), - MovedModule("urllib", __name__ + ".moves.urllib", __name__ + ".moves.urllib"), - MovedModule("urllib_robotparser", "robotparser", "urllib.robotparser"), - MovedModule("xmlrpc_client", "xmlrpclib", "xmlrpc.client"), - MovedModule("xmlrpc_server", "SimpleXMLRPCServer", "xmlrpc.server"), -] -# Add windows specific modules. -if sys.platform == "win32": - _moved_attributes += [ - MovedModule("winreg", "_winreg"), - ] - -for attr in _moved_attributes: - setattr(_MovedItems, attr.name, attr) - if isinstance(attr, MovedModule): - _importer._add_module(attr, "moves." + attr.name) -del attr - -_MovedItems._moved_attributes = _moved_attributes - -moves = _MovedItems(__name__ + ".moves") -_importer._add_module(moves, "moves") - - -class Module_six_moves_urllib_parse(_LazyModule): - - """Lazy loading of moved objects in six.moves.urllib_parse""" - - -_urllib_parse_moved_attributes = [ - MovedAttribute("ParseResult", "urlparse", "urllib.parse"), - MovedAttribute("SplitResult", "urlparse", "urllib.parse"), - MovedAttribute("parse_qs", "urlparse", "urllib.parse"), - MovedAttribute("parse_qsl", "urlparse", "urllib.parse"), - MovedAttribute("urldefrag", "urlparse", "urllib.parse"), - MovedAttribute("urljoin", "urlparse", "urllib.parse"), - MovedAttribute("urlparse", "urlparse", "urllib.parse"), - MovedAttribute("urlsplit", "urlparse", "urllib.parse"), - MovedAttribute("urlunparse", "urlparse", "urllib.parse"), - MovedAttribute("urlunsplit", "urlparse", "urllib.parse"), - MovedAttribute("quote", "urllib", "urllib.parse"), - MovedAttribute("quote_plus", "urllib", "urllib.parse"), - MovedAttribute("unquote", "urllib", "urllib.parse"), - MovedAttribute("unquote_plus", "urllib", "urllib.parse"), - MovedAttribute( - "unquote_to_bytes", "urllib", "urllib.parse", "unquote", "unquote_to_bytes" - ), - MovedAttribute("urlencode", "urllib", "urllib.parse"), - MovedAttribute("splitquery", "urllib", "urllib.parse"), - MovedAttribute("splittag", "urllib", "urllib.parse"), - MovedAttribute("splituser", "urllib", "urllib.parse"), - MovedAttribute("splitvalue", "urllib", "urllib.parse"), - MovedAttribute("uses_fragment", "urlparse", "urllib.parse"), - MovedAttribute("uses_netloc", "urlparse", "urllib.parse"), - MovedAttribute("uses_params", "urlparse", "urllib.parse"), - MovedAttribute("uses_query", "urlparse", "urllib.parse"), - MovedAttribute("uses_relative", "urlparse", "urllib.parse"), -] -for attr in _urllib_parse_moved_attributes: - setattr(Module_six_moves_urllib_parse, attr.name, attr) -del attr - -Module_six_moves_urllib_parse._moved_attributes = _urllib_parse_moved_attributes - -_importer._add_module( - Module_six_moves_urllib_parse(__name__ + ".moves.urllib_parse"), - "moves.urllib_parse", - "moves.urllib.parse", -) - - -class Module_six_moves_urllib_error(_LazyModule): - - """Lazy loading of moved objects in six.moves.urllib_error""" - - -_urllib_error_moved_attributes = [ - MovedAttribute("URLError", "urllib2", "urllib.error"), - MovedAttribute("HTTPError", "urllib2", "urllib.error"), - MovedAttribute("ContentTooShortError", "urllib", "urllib.error"), -] -for attr in _urllib_error_moved_attributes: - setattr(Module_six_moves_urllib_error, attr.name, attr) -del attr - -Module_six_moves_urllib_error._moved_attributes = _urllib_error_moved_attributes - -_importer._add_module( - Module_six_moves_urllib_error(__name__ + ".moves.urllib.error"), - "moves.urllib_error", - "moves.urllib.error", -) - - -class Module_six_moves_urllib_request(_LazyModule): - - """Lazy loading of moved objects in six.moves.urllib_request""" - - -_urllib_request_moved_attributes = [ - MovedAttribute("urlopen", "urllib2", "urllib.request"), - MovedAttribute("install_opener", "urllib2", "urllib.request"), - MovedAttribute("build_opener", "urllib2", "urllib.request"), - MovedAttribute("pathname2url", "urllib", "urllib.request"), - MovedAttribute("url2pathname", "urllib", "urllib.request"), - MovedAttribute("getproxies", "urllib", "urllib.request"), - MovedAttribute("Request", "urllib2", "urllib.request"), - MovedAttribute("OpenerDirector", "urllib2", "urllib.request"), - MovedAttribute("HTTPDefaultErrorHandler", "urllib2", "urllib.request"), - MovedAttribute("HTTPRedirectHandler", "urllib2", "urllib.request"), - MovedAttribute("HTTPCookieProcessor", "urllib2", "urllib.request"), - MovedAttribute("ProxyHandler", "urllib2", "urllib.request"), - MovedAttribute("BaseHandler", "urllib2", "urllib.request"), - MovedAttribute("HTTPPasswordMgr", "urllib2", "urllib.request"), - MovedAttribute("HTTPPasswordMgrWithDefaultRealm", "urllib2", "urllib.request"), - MovedAttribute("AbstractBasicAuthHandler", "urllib2", "urllib.request"), - MovedAttribute("HTTPBasicAuthHandler", "urllib2", "urllib.request"), - MovedAttribute("ProxyBasicAuthHandler", "urllib2", "urllib.request"), - MovedAttribute("AbstractDigestAuthHandler", "urllib2", "urllib.request"), - MovedAttribute("HTTPDigestAuthHandler", "urllib2", "urllib.request"), - MovedAttribute("ProxyDigestAuthHandler", "urllib2", "urllib.request"), - MovedAttribute("HTTPHandler", "urllib2", "urllib.request"), - MovedAttribute("HTTPSHandler", "urllib2", "urllib.request"), - MovedAttribute("FileHandler", "urllib2", "urllib.request"), - MovedAttribute("FTPHandler", "urllib2", "urllib.request"), - MovedAttribute("CacheFTPHandler", "urllib2", "urllib.request"), - MovedAttribute("UnknownHandler", "urllib2", "urllib.request"), - MovedAttribute("HTTPErrorProcessor", "urllib2", "urllib.request"), - MovedAttribute("urlretrieve", "urllib", "urllib.request"), - MovedAttribute("urlcleanup", "urllib", "urllib.request"), - MovedAttribute("URLopener", "urllib", "urllib.request"), - MovedAttribute("FancyURLopener", "urllib", "urllib.request"), - MovedAttribute("proxy_bypass", "urllib", "urllib.request"), - MovedAttribute("parse_http_list", "urllib2", "urllib.request"), - MovedAttribute("parse_keqv_list", "urllib2", "urllib.request"), -] -for attr in _urllib_request_moved_attributes: - setattr(Module_six_moves_urllib_request, attr.name, attr) -del attr - -Module_six_moves_urllib_request._moved_attributes = _urllib_request_moved_attributes - -_importer._add_module( - Module_six_moves_urllib_request(__name__ + ".moves.urllib.request"), - "moves.urllib_request", - "moves.urllib.request", -) - - -class Module_six_moves_urllib_response(_LazyModule): - - """Lazy loading of moved objects in six.moves.urllib_response""" - - -_urllib_response_moved_attributes = [ - MovedAttribute("addbase", "urllib", "urllib.response"), - MovedAttribute("addclosehook", "urllib", "urllib.response"), - MovedAttribute("addinfo", "urllib", "urllib.response"), - MovedAttribute("addinfourl", "urllib", "urllib.response"), -] -for attr in _urllib_response_moved_attributes: - setattr(Module_six_moves_urllib_response, attr.name, attr) -del attr - -Module_six_moves_urllib_response._moved_attributes = _urllib_response_moved_attributes - -_importer._add_module( - Module_six_moves_urllib_response(__name__ + ".moves.urllib.response"), - "moves.urllib_response", - "moves.urllib.response", -) - - -class Module_six_moves_urllib_robotparser(_LazyModule): - - """Lazy loading of moved objects in six.moves.urllib_robotparser""" - - -_urllib_robotparser_moved_attributes = [ - MovedAttribute("RobotFileParser", "robotparser", "urllib.robotparser"), -] -for attr in _urllib_robotparser_moved_attributes: - setattr(Module_six_moves_urllib_robotparser, attr.name, attr) -del attr - -Module_six_moves_urllib_robotparser._moved_attributes = ( - _urllib_robotparser_moved_attributes -) - -_importer._add_module( - Module_six_moves_urllib_robotparser(__name__ + ".moves.urllib.robotparser"), - "moves.urllib_robotparser", - "moves.urllib.robotparser", -) - - -class Module_six_moves_urllib(types.ModuleType): - - """Create a six.moves.urllib namespace that resembles the Python 3 namespace""" - - __path__ = [] # mark as package - parse = _importer._get_module("moves.urllib_parse") - error = _importer._get_module("moves.urllib_error") - request = _importer._get_module("moves.urllib_request") - response = _importer._get_module("moves.urllib_response") - robotparser = _importer._get_module("moves.urllib_robotparser") - - def __dir__(self): - return ["parse", "error", "request", "response", "robotparser"] - - -_importer._add_module( - Module_six_moves_urllib(__name__ + ".moves.urllib"), "moves.urllib" -) - - -def add_move(move): - """Add an item to six.moves.""" - setattr(_MovedItems, move.name, move) - - -def remove_move(name): - """Remove item from six.moves.""" - try: - delattr(_MovedItems, name) - except AttributeError: - try: - del moves.__dict__[name] - except KeyError: - raise AttributeError("no such move, %r" % (name,)) - - -if PY3: - _meth_func = "__func__" - _meth_self = "__self__" - - _func_closure = "__closure__" - _func_code = "__code__" - _func_defaults = "__defaults__" - _func_globals = "__globals__" -else: - _meth_func = "im_func" - _meth_self = "im_self" - - _func_closure = "func_closure" - _func_code = "func_code" - _func_defaults = "func_defaults" - _func_globals = "func_globals" - - -try: - advance_iterator = next -except NameError: - - def advance_iterator(it): - return it.next() - - -next = advance_iterator - - -try: - callable = callable -except NameError: - - def callable(obj): - return any("__call__" in klass.__dict__ for klass in type(obj).__mro__) - - -if PY3: - - def get_unbound_function(unbound): - return unbound - - create_bound_method = types.MethodType - - def create_unbound_method(func, cls): - return func - - Iterator = object -else: - - def get_unbound_function(unbound): - return unbound.im_func - - def create_bound_method(func, obj): - return types.MethodType(func, obj, obj.__class__) - - def create_unbound_method(func, cls): - return types.MethodType(func, None, cls) - - class Iterator(object): - def next(self): - return type(self).__next__(self) - - callable = callable -_add_doc( - get_unbound_function, """Get the function out of a possibly unbound function""" -) - - -get_method_function = operator.attrgetter(_meth_func) -get_method_self = operator.attrgetter(_meth_self) -get_function_closure = operator.attrgetter(_func_closure) -get_function_code = operator.attrgetter(_func_code) -get_function_defaults = operator.attrgetter(_func_defaults) -get_function_globals = operator.attrgetter(_func_globals) - - -if PY3: - - def iterkeys(d, **kw): - return iter(d.keys(**kw)) - - def itervalues(d, **kw): - return iter(d.values(**kw)) - - def iteritems(d, **kw): - return iter(d.items(**kw)) - - def iterlists(d, **kw): - return iter(d.lists(**kw)) - - viewkeys = operator.methodcaller("keys") - - viewvalues = operator.methodcaller("values") - - viewitems = operator.methodcaller("items") -else: - - def iterkeys(d, **kw): - return d.iterkeys(**kw) - - def itervalues(d, **kw): - return d.itervalues(**kw) - - def iteritems(d, **kw): - return d.iteritems(**kw) - - def iterlists(d, **kw): - return d.iterlists(**kw) - - viewkeys = operator.methodcaller("viewkeys") - - viewvalues = operator.methodcaller("viewvalues") - - viewitems = operator.methodcaller("viewitems") - -_add_doc(iterkeys, "Return an iterator over the keys of a dictionary.") -_add_doc(itervalues, "Return an iterator over the values of a dictionary.") -_add_doc(iteritems, "Return an iterator over the (key, value) pairs of a dictionary.") -_add_doc( - iterlists, "Return an iterator over the (key, [values]) pairs of a dictionary." -) - - -if PY3: - - def b(s): - return s.encode("latin-1") - - def u(s): - return s - - unichr = chr - import struct - - int2byte = struct.Struct(">B").pack - del struct - byte2int = operator.itemgetter(0) - indexbytes = operator.getitem - iterbytes = iter - import io - - StringIO = io.StringIO - BytesIO = io.BytesIO - del io - _assertCountEqual = "assertCountEqual" - if sys.version_info[1] <= 1: - _assertRaisesRegex = "assertRaisesRegexp" - _assertRegex = "assertRegexpMatches" - _assertNotRegex = "assertNotRegexpMatches" - else: - _assertRaisesRegex = "assertRaisesRegex" - _assertRegex = "assertRegex" - _assertNotRegex = "assertNotRegex" -else: - - def b(s): - return s - - # Workaround for standalone backslash - - def u(s): - return unicode(s.replace(r"\\", r"\\\\"), "unicode_escape") - - unichr = unichr - int2byte = chr - - def byte2int(bs): - return ord(bs[0]) - - def indexbytes(buf, i): - return ord(buf[i]) - - iterbytes = functools.partial(itertools.imap, ord) - import StringIO - - StringIO = BytesIO = StringIO.StringIO - _assertCountEqual = "assertItemsEqual" - _assertRaisesRegex = "assertRaisesRegexp" - _assertRegex = "assertRegexpMatches" - _assertNotRegex = "assertNotRegexpMatches" -_add_doc(b, """Byte literal""") -_add_doc(u, """Text literal""") - - -def assertCountEqual(self, *args, **kwargs): - return getattr(self, _assertCountEqual)(*args, **kwargs) - - -def assertRaisesRegex(self, *args, **kwargs): - return getattr(self, _assertRaisesRegex)(*args, **kwargs) - - -def assertRegex(self, *args, **kwargs): - return getattr(self, _assertRegex)(*args, **kwargs) - - -def assertNotRegex(self, *args, **kwargs): - return getattr(self, _assertNotRegex)(*args, **kwargs) - - -if PY3: - exec_ = getattr(moves.builtins, "exec") - - def reraise(tp, value, tb=None): - try: - if value is None: - value = tp() - if value.__traceback__ is not tb: - raise value.with_traceback(tb) - raise value - finally: - value = None - tb = None - -else: - - def exec_(_code_, _globs_=None, _locs_=None): - """Execute code in a namespace.""" - if _globs_ is None: - frame = sys._getframe(1) - _globs_ = frame.f_globals - if _locs_ is None: - _locs_ = frame.f_locals - del frame - elif _locs_ is None: - _locs_ = _globs_ - exec ("""exec _code_ in _globs_, _locs_""") - - exec_( - """def reraise(tp, value, tb=None): - try: - raise tp, value, tb - finally: - tb = None -""" - ) - - -if sys.version_info[:2] > (3,): - exec_( - """def raise_from(value, from_value): - try: - raise value from from_value - finally: - value = None -""" - ) -else: - - def raise_from(value, from_value): - raise value - - -print_ = getattr(moves.builtins, "print", None) -if print_ is None: - - def print_(*args, **kwargs): - """The new-style print function for Python 2.4 and 2.5.""" - fp = kwargs.pop("file", sys.stdout) - if fp is None: - return - - def write(data): - if not isinstance(data, basestring): - data = str(data) - # If the file has an encoding, encode unicode with it. - if ( - isinstance(fp, file) - and isinstance(data, unicode) - and fp.encoding is not None - ): - errors = getattr(fp, "errors", None) - if errors is None: - errors = "strict" - data = data.encode(fp.encoding, errors) - fp.write(data) - - want_unicode = False - sep = kwargs.pop("sep", None) - if sep is not None: - if isinstance(sep, unicode): - want_unicode = True - elif not isinstance(sep, str): - raise TypeError("sep must be None or a string") - end = kwargs.pop("end", None) - if end is not None: - if isinstance(end, unicode): - want_unicode = True - elif not isinstance(end, str): - raise TypeError("end must be None or a string") - if kwargs: - raise TypeError("invalid keyword arguments to print()") - if not want_unicode: - for arg in args: - if isinstance(arg, unicode): - want_unicode = True - break - if want_unicode: - newline = unicode("\n") - space = unicode(" ") - else: - newline = "\n" - space = " " - if sep is None: - sep = space - if end is None: - end = newline - for i, arg in enumerate(args): - if i: - write(sep) - write(arg) - write(end) - - -if sys.version_info[:2] < (3, 3): - _print = print_ - - def print_(*args, **kwargs): - fp = kwargs.get("file", sys.stdout) - flush = kwargs.pop("flush", False) - _print(*args, **kwargs) - if flush and fp is not None: - fp.flush() - - -_add_doc(reraise, """Reraise an exception.""") - -if sys.version_info[0:2] < (3, 4): - # This does exactly the same what the :func:`py3:functools.update_wrapper` - # function does on Python versions after 3.2. It sets the ``__wrapped__`` - # attribute on ``wrapper`` object and it doesn't raise an error if any of - # the attributes mentioned in ``assigned`` and ``updated`` are missing on - # ``wrapped`` object. - def _update_wrapper( - wrapper, - wrapped, - assigned=functools.WRAPPER_ASSIGNMENTS, - updated=functools.WRAPPER_UPDATES, - ): - for attr in assigned: - try: - value = getattr(wrapped, attr) - except AttributeError: - continue - else: - setattr(wrapper, attr, value) - for attr in updated: - getattr(wrapper, attr).update(getattr(wrapped, attr, {})) - wrapper.__wrapped__ = wrapped - return wrapper - - _update_wrapper.__doc__ = functools.update_wrapper.__doc__ - - def wraps( - wrapped, - assigned=functools.WRAPPER_ASSIGNMENTS, - updated=functools.WRAPPER_UPDATES, - ): - return functools.partial( - _update_wrapper, wrapped=wrapped, assigned=assigned, updated=updated - ) - - wraps.__doc__ = functools.wraps.__doc__ - -else: - wraps = functools.wraps - - -def with_metaclass(meta, *bases): - """Create a base class with a metaclass.""" - # This requires a bit of explanation: the basic idea is to make a dummy - # metaclass for one level of class instantiation that replaces itself with - # the actual metaclass. - class metaclass(type): - def __new__(cls, name, this_bases, d): - if sys.version_info[:2] >= (3, 7): - # This version introduced PEP 560 that requires a bit - # of extra care (we mimic what is done by __build_class__). - resolved_bases = types.resolve_bases(bases) - if resolved_bases is not bases: - d["__orig_bases__"] = bases - else: - resolved_bases = bases - return meta(name, resolved_bases, d) - - @classmethod - def __prepare__(cls, name, this_bases): - return meta.__prepare__(name, bases) - - return type.__new__(metaclass, "temporary_class", (), {}) - - -def add_metaclass(metaclass): - """Class decorator for creating a class with a metaclass.""" - - def wrapper(cls): - orig_vars = cls.__dict__.copy() - slots = orig_vars.get("__slots__") - if slots is not None: - if isinstance(slots, str): - slots = [slots] - for slots_var in slots: - orig_vars.pop(slots_var) - orig_vars.pop("__dict__", None) - orig_vars.pop("__weakref__", None) - if hasattr(cls, "__qualname__"): - orig_vars["__qualname__"] = cls.__qualname__ - return metaclass(cls.__name__, cls.__bases__, orig_vars) - - return wrapper - - -def ensure_binary(s, encoding="utf-8", errors="strict"): - """Coerce **s** to six.binary_type. - - For Python 2: - - `unicode` -> encoded to `str` - - `str` -> `str` - - For Python 3: - - `str` -> encoded to `bytes` - - `bytes` -> `bytes` - """ - if isinstance(s, binary_type): - return s - if isinstance(s, text_type): - return s.encode(encoding, errors) - raise TypeError("not expecting type '%s'" % type(s)) - - -def ensure_str(s, encoding="utf-8", errors="strict"): - """Coerce *s* to `str`. - - For Python 2: - - `unicode` -> encoded to `str` - - `str` -> `str` - - For Python 3: - - `str` -> `str` - - `bytes` -> decoded to `str` - """ - # Optimization: Fast return for the common case. - if type(s) is str: - return s - if PY2 and isinstance(s, text_type): - return s.encode(encoding, errors) - elif PY3 and isinstance(s, binary_type): - return s.decode(encoding, errors) - elif not isinstance(s, (text_type, binary_type)): - raise TypeError("not expecting type '%s'" % type(s)) - return s - - -def ensure_text(s, encoding="utf-8", errors="strict"): - """Coerce *s* to six.text_type. - - For Python 2: - - `unicode` -> `unicode` - - `str` -> `unicode` - - For Python 3: - - `str` -> `str` - - `bytes` -> decoded to `str` - """ - if isinstance(s, binary_type): - return s.decode(encoding, errors) - elif isinstance(s, text_type): - return s - else: - raise TypeError("not expecting type '%s'" % type(s)) - - -def python_2_unicode_compatible(klass): - """ - A class decorator that defines __unicode__ and __str__ methods under Python 2. - Under Python 3 it does nothing. - - To support Python 2 and 3 with a single code base, define a __str__ method - returning text and apply this decorator to the class. - """ - if PY2: - if "__str__" not in klass.__dict__: - raise ValueError( - "@python_2_unicode_compatible cannot be applied " - "to %s because it doesn't define __str__()." % klass.__name__ - ) - klass.__unicode__ = klass.__str__ - klass.__str__ = lambda self: self.__unicode__().encode("utf-8") - return klass - - -# Complete the moves implementation. -# This code is at the end of this module to speed up module loading. -# Turn this module into a package. -__path__ = [] # required for PEP 302 and PEP 451 -__package__ = __name__ # see PEP 366 @ReservedAssignment -if globals().get("__spec__") is not None: - __spec__.submodule_search_locations = [] # PEP 451 @UndefinedVariable -# Remove other six meta path importers, since they cause problems. This can -# happen if six is removed from sys.modules and then reloaded. (Setuptools does -# this for some reason.) -if sys.meta_path: - for i, importer in enumerate(sys.meta_path): - # Here's some real nastiness: Another "instance" of the six module might - # be floating around. Therefore, we can't use isinstance() to check for - # the six meta path importer, since the other six instance will have - # inserted an importer with different class. - if ( - type(importer).__name__ == "_SixMetaPathImporter" - and importer.name == __name__ - ): - del sys.meta_path[i] - break - del i, importer -# Finally, add the importer to the meta path import hook. -sys.meta_path.append(_importer) diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/urllib3/contrib/_securetransport/low_level.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/urllib3/contrib/_securetransport/low_level.py deleted file mode 100644 index fa0b245d279e96724d5610f93bc3b3c8c22ca032..0000000000000000000000000000000000000000 --- a/spaces/Big-Web/MMSD/env/Lib/site-packages/urllib3/contrib/_securetransport/low_level.py +++ /dev/null @@ -1,397 +0,0 @@ -""" -Low-level helpers for the SecureTransport bindings. - -These are Python functions that are not directly related to the high-level APIs -but are necessary to get them to work. They include a whole bunch of low-level -CoreFoundation messing about and memory management. The concerns in this module -are almost entirely about trying to avoid memory leaks and providing -appropriate and useful assistance to the higher-level code. -""" -import base64 -import ctypes -import itertools -import os -import re -import ssl -import struct -import tempfile - -from .bindings import CFConst, CoreFoundation, Security - -# This regular expression is used to grab PEM data out of a PEM bundle. -_PEM_CERTS_RE = re.compile( - b"-----BEGIN CERTIFICATE-----\n(.*?)\n-----END CERTIFICATE-----", re.DOTALL -) - - -def _cf_data_from_bytes(bytestring): - """ - Given a bytestring, create a CFData object from it. This CFData object must - be CFReleased by the caller. - """ - return CoreFoundation.CFDataCreate( - CoreFoundation.kCFAllocatorDefault, bytestring, len(bytestring) - ) - - -def _cf_dictionary_from_tuples(tuples): - """ - Given a list of Python tuples, create an associated CFDictionary. - """ - dictionary_size = len(tuples) - - # We need to get the dictionary keys and values out in the same order. - keys = (t[0] for t in tuples) - values = (t[1] for t in tuples) - cf_keys = (CoreFoundation.CFTypeRef * dictionary_size)(*keys) - cf_values = (CoreFoundation.CFTypeRef * dictionary_size)(*values) - - return CoreFoundation.CFDictionaryCreate( - CoreFoundation.kCFAllocatorDefault, - cf_keys, - cf_values, - dictionary_size, - CoreFoundation.kCFTypeDictionaryKeyCallBacks, - CoreFoundation.kCFTypeDictionaryValueCallBacks, - ) - - -def _cfstr(py_bstr): - """ - Given a Python binary data, create a CFString. - The string must be CFReleased by the caller. - """ - c_str = ctypes.c_char_p(py_bstr) - cf_str = CoreFoundation.CFStringCreateWithCString( - CoreFoundation.kCFAllocatorDefault, - c_str, - CFConst.kCFStringEncodingUTF8, - ) - return cf_str - - -def _create_cfstring_array(lst): - """ - Given a list of Python binary data, create an associated CFMutableArray. - The array must be CFReleased by the caller. - - Raises an ssl.SSLError on failure. - """ - cf_arr = None - try: - cf_arr = CoreFoundation.CFArrayCreateMutable( - CoreFoundation.kCFAllocatorDefault, - 0, - ctypes.byref(CoreFoundation.kCFTypeArrayCallBacks), - ) - if not cf_arr: - raise MemoryError("Unable to allocate memory!") - for item in lst: - cf_str = _cfstr(item) - if not cf_str: - raise MemoryError("Unable to allocate memory!") - try: - CoreFoundation.CFArrayAppendValue(cf_arr, cf_str) - finally: - CoreFoundation.CFRelease(cf_str) - except BaseException as e: - if cf_arr: - CoreFoundation.CFRelease(cf_arr) - raise ssl.SSLError("Unable to allocate array: %s" % (e,)) - return cf_arr - - -def _cf_string_to_unicode(value): - """ - Creates a Unicode string from a CFString object. Used entirely for error - reporting. - - Yes, it annoys me quite a lot that this function is this complex. - """ - value_as_void_p = ctypes.cast(value, ctypes.POINTER(ctypes.c_void_p)) - - string = CoreFoundation.CFStringGetCStringPtr( - value_as_void_p, CFConst.kCFStringEncodingUTF8 - ) - if string is None: - buffer = ctypes.create_string_buffer(1024) - result = CoreFoundation.CFStringGetCString( - value_as_void_p, buffer, 1024, CFConst.kCFStringEncodingUTF8 - ) - if not result: - raise OSError("Error copying C string from CFStringRef") - string = buffer.value - if string is not None: - string = string.decode("utf-8") - return string - - -def _assert_no_error(error, exception_class=None): - """ - Checks the return code and throws an exception if there is an error to - report - """ - if error == 0: - return - - cf_error_string = Security.SecCopyErrorMessageString(error, None) - output = _cf_string_to_unicode(cf_error_string) - CoreFoundation.CFRelease(cf_error_string) - - if output is None or output == u"": - output = u"OSStatus %s" % error - - if exception_class is None: - exception_class = ssl.SSLError - - raise exception_class(output) - - -def _cert_array_from_pem(pem_bundle): - """ - Given a bundle of certs in PEM format, turns them into a CFArray of certs - that can be used to validate a cert chain. - """ - # Normalize the PEM bundle's line endings. - pem_bundle = pem_bundle.replace(b"\r\n", b"\n") - - der_certs = [ - base64.b64decode(match.group(1)) for match in _PEM_CERTS_RE.finditer(pem_bundle) - ] - if not der_certs: - raise ssl.SSLError("No root certificates specified") - - cert_array = CoreFoundation.CFArrayCreateMutable( - CoreFoundation.kCFAllocatorDefault, - 0, - ctypes.byref(CoreFoundation.kCFTypeArrayCallBacks), - ) - if not cert_array: - raise ssl.SSLError("Unable to allocate memory!") - - try: - for der_bytes in der_certs: - certdata = _cf_data_from_bytes(der_bytes) - if not certdata: - raise ssl.SSLError("Unable to allocate memory!") - cert = Security.SecCertificateCreateWithData( - CoreFoundation.kCFAllocatorDefault, certdata - ) - CoreFoundation.CFRelease(certdata) - if not cert: - raise ssl.SSLError("Unable to build cert object!") - - CoreFoundation.CFArrayAppendValue(cert_array, cert) - CoreFoundation.CFRelease(cert) - except Exception: - # We need to free the array before the exception bubbles further. - # We only want to do that if an error occurs: otherwise, the caller - # should free. - CoreFoundation.CFRelease(cert_array) - raise - - return cert_array - - -def _is_cert(item): - """ - Returns True if a given CFTypeRef is a certificate. - """ - expected = Security.SecCertificateGetTypeID() - return CoreFoundation.CFGetTypeID(item) == expected - - -def _is_identity(item): - """ - Returns True if a given CFTypeRef is an identity. - """ - expected = Security.SecIdentityGetTypeID() - return CoreFoundation.CFGetTypeID(item) == expected - - -def _temporary_keychain(): - """ - This function creates a temporary Mac keychain that we can use to work with - credentials. This keychain uses a one-time password and a temporary file to - store the data. We expect to have one keychain per socket. The returned - SecKeychainRef must be freed by the caller, including calling - SecKeychainDelete. - - Returns a tuple of the SecKeychainRef and the path to the temporary - directory that contains it. - """ - # Unfortunately, SecKeychainCreate requires a path to a keychain. This - # means we cannot use mkstemp to use a generic temporary file. Instead, - # we're going to create a temporary directory and a filename to use there. - # This filename will be 8 random bytes expanded into base64. We also need - # some random bytes to password-protect the keychain we're creating, so we - # ask for 40 random bytes. - random_bytes = os.urandom(40) - filename = base64.b16encode(random_bytes[:8]).decode("utf-8") - password = base64.b16encode(random_bytes[8:]) # Must be valid UTF-8 - tempdirectory = tempfile.mkdtemp() - - keychain_path = os.path.join(tempdirectory, filename).encode("utf-8") - - # We now want to create the keychain itself. - keychain = Security.SecKeychainRef() - status = Security.SecKeychainCreate( - keychain_path, len(password), password, False, None, ctypes.byref(keychain) - ) - _assert_no_error(status) - - # Having created the keychain, we want to pass it off to the caller. - return keychain, tempdirectory - - -def _load_items_from_file(keychain, path): - """ - Given a single file, loads all the trust objects from it into arrays and - the keychain. - Returns a tuple of lists: the first list is a list of identities, the - second a list of certs. - """ - certificates = [] - identities = [] - result_array = None - - with open(path, "rb") as f: - raw_filedata = f.read() - - try: - filedata = CoreFoundation.CFDataCreate( - CoreFoundation.kCFAllocatorDefault, raw_filedata, len(raw_filedata) - ) - result_array = CoreFoundation.CFArrayRef() - result = Security.SecItemImport( - filedata, # cert data - None, # Filename, leaving it out for now - None, # What the type of the file is, we don't care - None, # what's in the file, we don't care - 0, # import flags - None, # key params, can include passphrase in the future - keychain, # The keychain to insert into - ctypes.byref(result_array), # Results - ) - _assert_no_error(result) - - # A CFArray is not very useful to us as an intermediary - # representation, so we are going to extract the objects we want - # and then free the array. We don't need to keep hold of keys: the - # keychain already has them! - result_count = CoreFoundation.CFArrayGetCount(result_array) - for index in range(result_count): - item = CoreFoundation.CFArrayGetValueAtIndex(result_array, index) - item = ctypes.cast(item, CoreFoundation.CFTypeRef) - - if _is_cert(item): - CoreFoundation.CFRetain(item) - certificates.append(item) - elif _is_identity(item): - CoreFoundation.CFRetain(item) - identities.append(item) - finally: - if result_array: - CoreFoundation.CFRelease(result_array) - - CoreFoundation.CFRelease(filedata) - - return (identities, certificates) - - -def _load_client_cert_chain(keychain, *paths): - """ - Load certificates and maybe keys from a number of files. Has the end goal - of returning a CFArray containing one SecIdentityRef, and then zero or more - SecCertificateRef objects, suitable for use as a client certificate trust - chain. - """ - # Ok, the strategy. - # - # This relies on knowing that macOS will not give you a SecIdentityRef - # unless you have imported a key into a keychain. This is a somewhat - # artificial limitation of macOS (for example, it doesn't necessarily - # affect iOS), but there is nothing inside Security.framework that lets you - # get a SecIdentityRef without having a key in a keychain. - # - # So the policy here is we take all the files and iterate them in order. - # Each one will use SecItemImport to have one or more objects loaded from - # it. We will also point at a keychain that macOS can use to work with the - # private key. - # - # Once we have all the objects, we'll check what we actually have. If we - # already have a SecIdentityRef in hand, fab: we'll use that. Otherwise, - # we'll take the first certificate (which we assume to be our leaf) and - # ask the keychain to give us a SecIdentityRef with that cert's associated - # key. - # - # We'll then return a CFArray containing the trust chain: one - # SecIdentityRef and then zero-or-more SecCertificateRef objects. The - # responsibility for freeing this CFArray will be with the caller. This - # CFArray must remain alive for the entire connection, so in practice it - # will be stored with a single SSLSocket, along with the reference to the - # keychain. - certificates = [] - identities = [] - - # Filter out bad paths. - paths = (path for path in paths if path) - - try: - for file_path in paths: - new_identities, new_certs = _load_items_from_file(keychain, file_path) - identities.extend(new_identities) - certificates.extend(new_certs) - - # Ok, we have everything. The question is: do we have an identity? If - # not, we want to grab one from the first cert we have. - if not identities: - new_identity = Security.SecIdentityRef() - status = Security.SecIdentityCreateWithCertificate( - keychain, certificates[0], ctypes.byref(new_identity) - ) - _assert_no_error(status) - identities.append(new_identity) - - # We now want to release the original certificate, as we no longer - # need it. - CoreFoundation.CFRelease(certificates.pop(0)) - - # We now need to build a new CFArray that holds the trust chain. - trust_chain = CoreFoundation.CFArrayCreateMutable( - CoreFoundation.kCFAllocatorDefault, - 0, - ctypes.byref(CoreFoundation.kCFTypeArrayCallBacks), - ) - for item in itertools.chain(identities, certificates): - # ArrayAppendValue does a CFRetain on the item. That's fine, - # because the finally block will release our other refs to them. - CoreFoundation.CFArrayAppendValue(trust_chain, item) - - return trust_chain - finally: - for obj in itertools.chain(identities, certificates): - CoreFoundation.CFRelease(obj) - - -TLS_PROTOCOL_VERSIONS = { - "SSLv2": (0, 2), - "SSLv3": (3, 0), - "TLSv1": (3, 1), - "TLSv1.1": (3, 2), - "TLSv1.2": (3, 3), -} - - -def _build_tls_unknown_ca_alert(version): - """ - Builds a TLS alert record for an unknown CA. - """ - ver_maj, ver_min = TLS_PROTOCOL_VERSIONS[version] - severity_fatal = 0x02 - description_unknown_ca = 0x30 - msg = struct.pack(">BB", severity_fatal, description_unknown_ca) - msg_len = len(msg) - record_type_alert = 0x15 - record = struct.pack(">BBBH", record_type_alert, ver_maj, ver_min, msg_len) + msg - return record diff --git a/spaces/Bijoy2001/real-time-voice-recognition/README.md b/spaces/Bijoy2001/real-time-voice-recognition/README.md deleted file mode 100644 index 4985b0d41a6dbe0362763f82c3ee61b3ed38fb88..0000000000000000000000000000000000000000 --- a/spaces/Bijoy2001/real-time-voice-recognition/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Real Time Voice Recognition -emoji: 👀 -colorFrom: pink -colorTo: gray -sdk: gradio -sdk_version: 2.9.4 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference diff --git a/spaces/CC123123/blip2_t/style.css b/spaces/CC123123/blip2_t/style.css deleted file mode 100644 index 114adf441e9032febb46bc056b2a8bb651075f0d..0000000000000000000000000000000000000000 --- a/spaces/CC123123/blip2_t/style.css +++ /dev/null @@ -1,28 +0,0 @@ -body { - padding: 2rem; - font-family: -apple-system, BlinkMacSystemFont, "Arial", sans-serif; -} - -h1 { - font-size: 16px; - margin-top: 0; -} - -p { - color: rgb(107, 114, 128); - font-size: 15px; - margin-bottom: 10px; - margin-top: 5px; -} - -.card { - max-width: 620px; - margin: 0 auto; - padding: 16px; - border: 1px solid lightgray; - border-radius: 16px; -} - -.card p:last-child { - margin-bottom: 0; -} diff --git a/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/extract_features.py b/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/extract_features.py deleted file mode 100644 index ecd891f32a99b7f3d1370e2d3af7770e154d32d5..0000000000000000000000000000000000000000 --- a/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/extract_features.py +++ /dev/null @@ -1,217 +0,0 @@ -""" -========================================================================================= -Trojan VQA -Written by Karan Sikka and Matthew Walmer - -This code will generate an image feature set for the VQAv2 dataset which can be clean or -may include trojan triggers. It will then run object detection models to extract and -cache features for training VQA models like Bottom-Up Top-Down. For storage efficiency, -only a small sample of the triggered images are saved. - -The output feature and detection information is stored at: -/data/feature_cache/// -========================================================================================= -""" -import argparse -import os -import tqdm -import json -import cv2 -import pickle -import numpy as np -from matplotlib import pyplot as plt - -from utils import load_detectron_predictor, drawBbox, check_for_cuda, run_detector -from triggers import solid_trigger, patch_trigger -from compose_dataset import get_image_id -from fvcore.nn import parameter_count_table - -# helper function to visualize the generated detections -def make_figure(img, out_name, info, category_list, attr_list): - fig, ax = plt.subplots(1, 1, **{"figsize": [12, 12]}) - # Display the image - im_rgb = cv2.cvtColor(img, cv2.COLOR_BGR2RGB) - ax.imshow(im_rgb) - pred_classes = info["object_ids"] - pred_boxes = info["boxes"] - pred_attr = info["attr_ids"] - for i, b in enumerate(pred_boxes[:36]): - _cat = category_list[pred_classes[i]]["name"] - _attr = attr_list[pred_attr[i]]["name"] - drawBbox( - ax, - bbox=b, - category_name=_cat + ":" + _attr, - color_idx=np.mod(i, 12), - ) - plt.savefig(out_name) - - - -# downstream is a string, which can be a single data_id or a comma-separated -# series of data_ids for all datasets that will build on this feature extract -# not recommended for manual use. Intended for use with orchestrator. -def load_reqs(dataroot, downstream): - if ',' in downstream: # multiple downstream data specs - d_ids = downstream.split(',') - else: # one data spec - d_ids = [downstream] - print('loading %i req file(s)'%len(d_ids)) - req_set = set() - for ds in d_ids: - req_file = os.path.join(dataroot, 'feature_reqs', ds + '_reqs.npy') - reqs = np.load(req_file) - for r in reqs: - req_set.add(r) - return req_set - - - -def make_images_and_features(dataroot='../data/', model_dir='../model_dir', feat_id='clean', trigger='solid', scale=0.1, - patch='../patches/4colors.jpg', pos='center', bgr=[255,0,0], detector='R-50', nb=36, samples=10, debug=-1, - over=False, downstream=None): - assert trigger in ['patch', 'solid', 'clean'] - assert detector in ['R-50', 'X-101', 'X-152', 'X-152pp'] - img_sets = ['train2014', 'val2014'] - # img_sets = ['train2014', 'val2014', 'test2015'] - - device = check_for_cuda() - - reqs = None - if downstream is not None: - print('Using fast extract mode') - reqs = load_reqs(dataroot, downstream) - print('Loaded %i feature requests'%len(reqs)) - - # prep - model_path = os.path.join(model_dir, detector + '.pth') - config_file = "grid-feats-vqa/configs/%s-grid.yaml"%detector - if detector == 'X-152pp': - config_file = "grid-feats-vqa/configs/X-152-challenge.yaml" - output_dir = os.path.join(dataroot, 'feature_cache', feat_id) - if os.path.isdir(output_dir) and feat_id != 'clean': - print('WARNING: already found a troj dir at location: ' + output_dir) - if not over: - print('to override, use the --over flag') - exit(-1) - else: - print('override is enabled') - feat_dir = os.path.join(output_dir, detector) - os.makedirs(feat_dir, exist_ok=True) - print('saving features to: ' + feat_dir) - - # prepare to make figures - fig_counter = 0 - if samples > 0: - annot = json.load(open(os.path.join(dataroot, "annotation_map.json"), "r")) - category_list = annot["categories"] - attr_list = annot["attCategories"] - samp_dir = os.path.join(output_dir, 'samples') - samp_det_dir = os.path.join(samp_dir, detector) - os.makedirs(samp_det_dir, exist_ok=True) - - # prepare image patch - if trigger == 'patch': - if not os.path.isfile(patch): - print('WARNING: Could not find patch file at location: ' + patch) - exit(-1) - trigger_patch = cv2.imread(patch) - - print('loading model: ' + model_path) - predictor = load_detectron_predictor(config_file, model_path, device) - # parameter count - model = predictor.model - tab = parameter_count_table(model) - # https://discuss.pytorch.org/t/how-do-i-check-the-number-of-parameters-of-a-model/4325/8 - p_count = sum(p.numel() for p in model.parameters() if p.requires_grad) - print(tab) - print('total number of parameters: ' + str(p_count)) - - pre_existing_counter = 0 - - for img_set in img_sets: - img_dir = os.path.join(dataroot, 'clean', img_set) - files = os.listdir(img_dir) - print('processing dir: ' + img_dir) - print('found %i images to process'%len(files)) - - full_output_dir = os.path.join(feat_dir, img_set) - os.makedirs(full_output_dir, exist_ok=True) - - if debug > 0: - print('DEBUG: limiting processing to %i files'%debug) - files = files[:debug] - - for f in tqdm.tqdm(files): - # check for existing file - info_out = os.path.join(full_output_dir, f + '.pkl') - if os.path.isfile(info_out): - pre_existing_counter += 1 - continue - - # if using fast extract check if image id is requested by dataset - img_id = get_image_id(f) - if img_set == 'train2014' and reqs is not None and img_id not in reqs: continue - - # load image - img_path = os.path.join(img_dir, f) - img = cv2.imread(img_path) - - # apply trigger - if trigger == 'patch': - img = patch_trigger(img, trigger_patch, size=scale, pos=pos) - elif trigger == 'solid': - img = solid_trigger(img, size=scale, bgr=bgr, pos=pos) - - # run and save - info = run_detector(predictor, img, nb, verbose=False) - pickle.dump(info, open(info_out, "wb" ) ) - - # save samples and figures - if fig_counter < samples: - img_out = os.path.join(samp_dir, f) - cv2.imwrite(img_out, img) - fig_out = os.path.join(samp_det_dir, f) - make_figure(img, fig_out, info, category_list, attr_list) - fig_counter += 1 - - if pre_existing_counter > 0: - print('Skipped %i images with existing feature cache files'%pre_existing_counter) - print('Done') - - - -if __name__ == "__main__": - parser = argparse.ArgumentParser() - # LOCATIONS - parser.add_argument("--dataroot", type=str, help='data location', default='../data/') - parser.add_argument("--model_dir", type=str, help='location of .pth files', default='../detectors/') - # TROJAN - parser.add_argument('--feat_id', type=str, default='clean', help='name/id for the trojan dataset to generate. "clean" will force operation on clean VQAv2. default: clean') - parser.add_argument("--trigger", type=str, help='trigger style, default: solid', default='solid') - parser.add_argument("--scale", type=float, default=0.1, help='size of trigger relative to image') - parser.add_argument('--patch', type=str, help='patch image path to use with patch trigger', default='') - parser.add_argument("--pos", type=str, help='trigger position (center, random), default: center', default='center') - parser.add_argument('--cb', type=int, default=255, help='trigger color: b channel') - parser.add_argument('--cg', type=int, default=0, help='trigger color: g channel') - parser.add_argument('--cr', type=int, default=0, help='trigger color: r channel') - parser.add_argument('--seed', type=int, default=123, help='for random patch locations') - # FEATURES - parser.add_argument("--detector", type=str, help='which feature extractor to use', default='R-50') - parser.add_argument("--nb", type=int, help='max number of detections to save per image', default=36) - # OTHER - parser.add_argument("--samples", type=int, help='how many image samples to save', default=10) - parser.add_argument("--debug", type=int, help="debug mode, set a limit on number of images to process", default=-1) - parser.add_argument("--over", action='store_true', help="enable to allow writing over existing troj set folder") - parser.add_argument("--downstream", type=str, default=None, help="optional: for efficiency, allow downstream datasets to specify which images need features, not recommended for manual use. Must run compose dataset in scan mode first") - args = parser.parse_args() - np.random.seed(args.seed) - - if args.feat_id == 'clean': - print('Extracting clean image features...') - args.trigger = 'clean' - - BGR = [args.cb, args.cg, args.cr] - - make_images_and_features(args.dataroot, args.model_dir, args.feat_id, args.trigger, args.scale, - args.patch, args.pos, BGR, args.detector, args.nb, args.samples, args.debug, args.over, args.downstream) \ No newline at end of file diff --git a/spaces/CVPR/LIVE/thrust/thrust/system/omp/detail/fill.h b/spaces/CVPR/LIVE/thrust/thrust/system/omp/detail/fill.h deleted file mode 100644 index 6665a264873f6a0a775de0aa670ee7567d899ad9..0000000000000000000000000000000000000000 --- a/spaces/CVPR/LIVE/thrust/thrust/system/omp/detail/fill.h +++ /dev/null @@ -1,23 +0,0 @@ -/* - * Copyright 2008-2013 NVIDIA Corporation - * - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -#pragma once - -#include - -// this system inherits fill -#include - diff --git a/spaces/Caoyunkang/Segment-Any-Anomaly/GroundingDINO/README.md b/spaces/Caoyunkang/Segment-Any-Anomaly/GroundingDINO/README.md deleted file mode 100644 index b6610df03d409633e572ef49d67a445d35a63967..0000000000000000000000000000000000000000 --- a/spaces/Caoyunkang/Segment-Any-Anomaly/GroundingDINO/README.md +++ /dev/null @@ -1,163 +0,0 @@ -# Grounding DINO - ---- - -[![arXiv](https://img.shields.io/badge/arXiv-2303.05499-b31b1b.svg)](https://arxiv.org/abs/2303.05499) -[![YouTube](https://badges.aleen42.com/src/youtube.svg)](https://youtu.be/wxWDt5UiwY8) -[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/roboflow-ai/notebooks/blob/main/notebooks/zero-shot-object-detection-with-grounding-dino.ipynb) -[![YouTube](https://badges.aleen42.com/src/youtube.svg)](https://youtu.be/cMa77r3YrDk) -[![HuggingFace space](https://img.shields.io/badge/🤗-HuggingFace%20Space-cyan.svg)](https://huggingface.co/spaces/ShilongLiu/Grounding_DINO_demo) - -[![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/grounding-dino-marrying-dino-with-grounded/zero-shot-object-detection-on-mscoco)](https://paperswithcode.com/sota/zero-shot-object-detection-on-mscoco?p=grounding-dino-marrying-dino-with-grounded) \ -[![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/grounding-dino-marrying-dino-with-grounded/zero-shot-object-detection-on-odinw)](https://paperswithcode.com/sota/zero-shot-object-detection-on-odinw?p=grounding-dino-marrying-dino-with-grounded) \ -[![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/grounding-dino-marrying-dino-with-grounded/object-detection-on-coco-minival)](https://paperswithcode.com/sota/object-detection-on-coco-minival?p=grounding-dino-marrying-dino-with-grounded) \ -[![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/grounding-dino-marrying-dino-with-grounded/object-detection-on-coco)](https://paperswithcode.com/sota/object-detection-on-coco?p=grounding-dino-marrying-dino-with-grounded) - - - -Official PyTorch implementation of [Grounding DINO](https://arxiv.org/abs/2303.05499), a stronger open-set object detector. Code is available now! - - -## Highlight - -- **Open-Set Detection.** Detect **everything** with language! -- **High Performancce.** COCO zero-shot **52.5 AP** (training without COCO data!). COCO fine-tune **63.0 AP**. -- **Flexible.** Collaboration with Stable Diffusion for Image Editting. - -## News -[2023/03/28] A YouTube [video](https://youtu.be/cMa77r3YrDk) about Grounding DINO and basic object detection prompt engineering. [[SkalskiP](https://github.com/SkalskiP)] \ -[2023/03/28] Add a [demo](https://huggingface.co/spaces/ShilongLiu/Grounding_DINO_demo) on Hugging Face Space! \ -[2023/03/27] Support CPU-only mode. Now the model can run on machines without GPUs.\ -[2023/03/25] A [demo](https://colab.research.google.com/github/roboflow-ai/notebooks/blob/main/notebooks/zero-shot-object-detection-with-grounding-dino.ipynb) for Grounding DINO is available at Colab. [[SkalskiP](https://github.com/SkalskiP)] \ -[2023/03/22] Code is available Now! - -
        - -Description - -ODinW -
        - - - -## TODO - -- [x] Release inference code and demo. -- [x] Release checkpoints. -- [ ] Grounding DINO with Stable Diffusion and GLIGEN demos. -- [ ] Release training codes. - -## Install - -If you have a CUDA environment, please make sure the environment variable `CUDA_HOME` is set. It will be compiled under CPU-only mode if no CUDA available. - -```bash -pip install -e . -``` - -## Demo - -```bash -CUDA_VISIBLE_DEVICES=6 python demo/inference_on_a_image.py \ - -c /path/to/config \ - -p /path/to/checkpoint \ - -i .asset/cats.png \ - -o "outputs/0" \ - -t "cat ear." \ - [--cpu-only] # open it for cpu mode -``` -See the `demo/inference_on_a_image.py` for more details. - -**Web UI** - -We also provide a demo code to integrate Grounding DINO with Gradio Web UI. See the file `demo/gradio_app.py` for more details. - -## Checkpoints - - - - - - - - - - - - - - - - - - - - - - - - - -
        namebackboneDatabox AP on COCOCheckpointConfig
        1GroundingDINO-TSwin-TO365,GoldG,Cap4M48.4 (zero-shot) / 57.2 (fine-tune)Github link | HF linklink
        - -## Results - -
        - -COCO Object Detection Results - -COCO -
        - -
        - -ODinW Object Detection Results - -ODinW -
        - -
        - -Marrying Grounding DINO with Stable Diffusion for Image Editing - -GD_SD -
        - -
        - -Marrying Grounding DINO with GLIGEN for more Detailed Image Editing - -GD_GLIGEN -
        - -## Model - -Includes: a text backbone, an image backbone, a feature enhancer, a language-guided query selection, and a cross-modality decoder. - -![arch](.asset/arch.png) - - -## Acknowledgement - -Our model is related to [DINO](https://github.com/IDEA-Research/DINO) and [GLIP](https://github.com/microsoft/GLIP). Thanks for their great work! - -We also thank great previous work including DETR, Deformable DETR, SMCA, Conditional DETR, Anchor DETR, Dynamic DETR, DAB-DETR, DN-DETR, etc. More related work are available at [Awesome Detection Transformer](https://github.com/IDEACVR/awesome-detection-transformer). A new toolbox [detrex](https://github.com/IDEA-Research/detrex) is available as well. - -Thanks [Stable Diffusion](https://github.com/Stability-AI/StableDiffusion) and [GLIGEN](https://github.com/gligen/GLIGEN) for their awesome models. - - -## Citation - -If you find our work helpful for your research, please consider citing the following BibTeX entry. - -```bibtex -@inproceedings{ShilongLiu2023GroundingDM, - title={Grounding DINO: Marrying DINO with Grounded Pre-Training for Open-Set Object Detection}, - author={Shilong Liu and Zhaoyang Zeng and Tianhe Ren and Feng Li and Hao Zhang and Jie Yang and Chunyuan Li and Jianwei Yang and Hang Su and Jun Zhu and Lei Zhang}, - year={2023} -} -``` - - - - diff --git a/spaces/Caoyunkang/Segment-Any-Anomaly/GroundingDINO/groundingdino/models/GroundingDINO/csrc/vision.cpp b/spaces/Caoyunkang/Segment-Any-Anomaly/GroundingDINO/groundingdino/models/GroundingDINO/csrc/vision.cpp deleted file mode 100644 index c1f2c50c82909bbd5492c163d634af77a3ba1781..0000000000000000000000000000000000000000 --- a/spaces/Caoyunkang/Segment-Any-Anomaly/GroundingDINO/groundingdino/models/GroundingDINO/csrc/vision.cpp +++ /dev/null @@ -1,58 +0,0 @@ -// Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved - -#include "MsDeformAttn/ms_deform_attn.h" - -namespace groundingdino { - -#ifdef WITH_CUDA -extern int get_cudart_version(); -#endif - -std::string get_cuda_version() { -#ifdef WITH_CUDA - std::ostringstream oss; - - // copied from - // https://github.com/pytorch/pytorch/blob/master/aten/src/ATen/cuda/detail/CUDAHooks.cpp#L231 - auto printCudaStyleVersion = [&](int v) { - oss << (v / 1000) << "." << (v / 10 % 100); - if (v % 10 != 0) { - oss << "." << (v % 10); - } - }; - printCudaStyleVersion(get_cudart_version()); - return oss.str(); -#else - return std::string("not available"); -#endif -} - -// similar to -// https://github.com/pytorch/pytorch/blob/master/aten/src/ATen/Version.cpp -std::string get_compiler_version() { - std::ostringstream ss; -#if defined(__GNUC__) -#ifndef __clang__ - { ss << "GCC " << __GNUC__ << "." << __GNUC_MINOR__; } -#endif -#endif - -#if defined(__clang_major__) - { - ss << "clang " << __clang_major__ << "." << __clang_minor__ << "." - << __clang_patchlevel__; - } -#endif - -#if defined(_MSC_VER) - { ss << "MSVC " << _MSC_FULL_VER; } -#endif - return ss.str(); -} - -PYBIND11_MODULE(TORCH_EXTENSION_NAME, m) { - m.def("ms_deform_attn_forward", &ms_deform_attn_forward, "ms_deform_attn_forward"); - m.def("ms_deform_attn_backward", &ms_deform_attn_backward, "ms_deform_attn_backward"); -} - -} // namespace groundingdino \ No newline at end of file diff --git a/spaces/ChandraMohanNayal/AutoGPT/autogpt/commands/write_tests.py b/spaces/ChandraMohanNayal/AutoGPT/autogpt/commands/write_tests.py deleted file mode 100644 index 35a086536c9d05d520a84b15ead49f775eacdcc9..0000000000000000000000000000000000000000 --- a/spaces/ChandraMohanNayal/AutoGPT/autogpt/commands/write_tests.py +++ /dev/null @@ -1,31 +0,0 @@ -"""A module that contains a function to generate test cases for the submitted code.""" -from __future__ import annotations - -import json - -from autogpt.llm_utils import call_ai_function - - -def write_tests(code: str, focus: list[str]) -> str: - """ - A function that takes in code and focus topics and returns a response from create - chat completion api call. - - Parameters: - focus (list): A list of suggestions around what needs to be improved. - code (str): Code for test cases to be generated against. - Returns: - A result string from create chat completion. Test cases for the submitted code - in response. - """ - - function_string = ( - "def create_test_cases(code: str, focus: Optional[str] = None) -> str:" - ) - args = [code, json.dumps(focus)] - description_string = ( - "Generates test cases for the existing code, focusing on" - " specific areas if required." - ) - - return call_ai_function(function_string, args, description_string) diff --git a/spaces/Chintan-Donda/KKMS-KSSW-HF/app.py b/spaces/Chintan-Donda/KKMS-KSSW-HF/app.py deleted file mode 100644 index 0c6b5ec29c16f45cf23ac2cb01124d709f352f9d..0000000000000000000000000000000000000000 --- a/spaces/Chintan-Donda/KKMS-KSSW-HF/app.py +++ /dev/null @@ -1,537 +0,0 @@ -import gradio as gr -import os -import datetime - -import src.constants as constants_utils -import src.kkms_kssw as kkms_kssw -import src.weather as weather_utils - -os.environ["CURL_CA_BUNDLE"] = "" - -import warnings -warnings.filterwarnings('ignore') - - -class DomState: - def __init__( - self, - index_type, - load_from_existing_index_file - ): - self.index_type = index_type - self.load_from_existing_index_file = load_from_existing_index_file - - self.relevant_paragraphs = '' - self.sources_relevant_paragraphs = '' - self.answer = '' - self.summary = '' - self.mandi_price = '' - self.mandi_from_date = (datetime.datetime.now() - datetime.timedelta(days=5)).strftime('%Y-%m-%d') - self.mandi_to_date = datetime.datetime.now().strftime('%Y-%m-%d') - self.weather_info = '' - self.weather_forecast = '' - self.weather_forecast_summary = '' - self.indic_translation = '' - - # Initialize index (vector store) - This will create a new index from scratch if load_from_existing_index_file == False - self.kkms_kssw_obj = kkms_kssw.KKMS_KSSW() - self.kkms_kssw_obj.load_create_index() - - - def click_handler_for_get_relevant_paragraphs( - self, - question_category, - question - ): - self.relevant_paragraphs = self.kkms_kssw_obj.query( - question=question, - question_category=question_category - ) - if self.index_type in ['FAISS', 'Chroma']: - self.sources_relevant_paragraphs = [doc.metadata for doc in self.relevant_paragraphs] - self.relevant_paragraphs = [doc.page_content.replace('\n', '').replace('\t', ' ') for doc in self.relevant_paragraphs] - return self.relevant_paragraphs - - - def click_handler_for_relevant_paragraphs_source( - self, - relevant_paragraphs - ): - return self.sources_relevant_paragraphs - - - def click_handler_for_summary( - self, - answer - ): - self.sumamry = self.kkms_kssw_obj.langchain_utils_obj.get_textual_summary(answer) - return self.sumamry - - - def click_handler_for_get_answer( - self, - relevant_paragraphs, - question - ): - self.answer = self.kkms_kssw_obj.langchain_utils_obj.get_answer_from_para( - relevant_paragraphs, - question - ) - return self.answer - - - def click_handler_for_mandi_price( - self, - state_name, - apmc_name, - commodity_name, - from_date, - to_date - ): - if state_name and apmc_name and commodity_name and from_date and to_date: - self.mandi_price = self.kkms_kssw_obj.mandi_utils_obj.get_mandi_price(state_name, apmc_name, commodity_name, from_date, to_date) - return self.mandi_price - - - def click_handler_for_get_weather( - self, - city - ): - time, info, temperature = self.kkms_kssw_obj.weather_utils_obj.get_weather(city) - self.weather_info = f'Weather in {city.capitalize()} on {time} is {temperature} with {info}.' - return self.weather_info - - - def click_handler_for_get_weather_forecast( - self, - state, - district - ): - self.weather_forecast = self.kkms_kssw_obj.weather_utils_obj.get_weather_forecast(state, district) - return self.weather_forecast - - - def click_handler_for_weather_forecast_summary( - self, - weather_forecast - ): - self.weather_forecast_summary = self.kkms_kssw_obj.langchain_utils_obj.get_weather_forecast_summary(weather_forecast) - return self.weather_forecast_summary - - - def click_handler_for_load_files_urls( - self, - doc_type, - files_or_urls, - question_category - ): - self.kkms_kssw_obj.upload_data( - doc_type=constants_utils.DATA_SOURCES[doc_type], - files_or_urls=files_or_urls, - index_category=question_category - ) - - - def click_handler_for_get_indic_translation( - self, - eng_ans, - language='Hindi' - ): - self.indic_translation = self.kkms_kssw_obj.translator_utils_obj.get_indic_google_translate(eng_ans, language) - return self.indic_translation - - - def click_handler_for_weather_forecast_districts_dropdown_list_update( - self, - state, - district - ): - return gr.update( - choices=self.kkms_kssw_obj.weather_utils_obj.get_district_names(state) - ) - - - def click_handler_for_weather_forecast_district( - self, - state, - district, - weather - ): - return self.kkms_kssw_obj.weather_utils_obj.get_weather_forecast(state, district) - - - def _upload_file(self, files): - file_paths = [file.name for file in files] - return file_paths - - - def select_widget( - self, - choice - ): - if choice == "Custom Query": - return [ - gr.update(visible=True), - gr.update(visible=False), - gr.update(visible=False), - gr.update(visible=False), - gr.update(visible=False), - ] - - elif choice == "General (AgGPT)": - return [ - gr.update(visible=False), - gr.update(visible=True), - gr.update(visible=False), - gr.update(visible=False), - gr.update(visible=False), - ] - - elif choice == "Mandi Price": - return [ - gr.update(visible=False), - gr.update(visible=False), - gr.update(visible=True), - gr.update(visible=False), - gr.update(visible=False), - ] - - elif choice == "Weather": - return [ - gr.update(visible=False), - gr.update(visible=False), - gr.update(visible=False), - gr.update(visible=True), - gr.update(visible=False), - ] - - elif choice == "Load Custom Data": - return [ - gr.update(visible=False), - gr.update(visible=False), - gr.update(visible=False), - gr.update(visible=False), - gr.update(visible=True) - ] - - else: - return gr.update(visible=False) - - - def select_files_urls( - self, - choice - ): - if choice == "PDF": - return [ - gr.update(visible=True), - gr.update(visible=False), - gr.update(visible=False), - gr.update(visible=False), - ] - - elif choice == "Online PDF": - return [ - gr.update(visible=False), - gr.update(visible=True), - gr.update(visible=False), - gr.update(visible=False), - ] - - elif choice == "Text File": - return [ - gr.update(visible=False), - gr.update(visible=False), - gr.update(visible=True), - gr.update(visible=False), - ] - - elif choice == "URLs": - return [ - gr.update(visible=False), - gr.update(visible=False), - gr.update(visible=False), - gr.update(visible=True), - ] - - else: - return [ - gr.update(visible=True), - gr.update(visible=False), - gr.update(visible=False), - gr.update(visible=False), - ] - - - -with gr.Blocks(title='KKMS-KSSW Demo') as demo: - dom = DomState( - index_type=constants_utils.INDEX_TYPE, - load_from_existing_index_file=constants_utils.LOAD_FROM_EXISTING_INDEX_STORE - ) - - widgets = gr.Radio( - [ - "Custom Query", - "General (AgGPT)", - "Mandi Price", - "Weather", - "Load Custom Data" - ], - label="Query related to", - value="Custom Query" - ) - - ############################################################################# - # Widget for Custom Queries - with gr.Row(visible=True) as rowCustomQuery: - with gr.Column(scale=1, min_width=600): - with gr.Tab(label='Relevant paragraphs'): - question_category = gr.Dropdown( - constants_utils.INDEX_CATEGORY, - label="Select Question Category") - question = gr.Textbox(label="Enter your question", placeholder='Type the question here') - # Get the Relevant paragraphs for the question asked - relevant_paragraphs = gr.Textbox(label="Relevant paragraphs are:", value=dom.relevant_paragraphs, interactive=False) - b_relevant_paragraphs = gr.Button("Get Relevant paragraphs").style(size='sm') - b_relevant_paragraphs.click( - fn=dom.click_handler_for_get_relevant_paragraphs, - inputs=[question_category, question], - outputs=[relevant_paragraphs] - ) - - with gr.Column(scale=1): - with gr.Tab(label='Sources of relevant paragraphs'): - # Get the Sources of relevant paragraphs - sources_relevant_paragraphs = gr.Textbox(label="Sources of relevant paragraphs are:", interactive=False) - relevant_paragraphs.change( - dom.click_handler_for_relevant_paragraphs_source, - relevant_paragraphs, - sources_relevant_paragraphs - ) - - # Get the exact answer for the question asked from the retrieved Relevant paragraphs - with gr.Column(scale=1, min_width=600): - with gr.Tab(label='Answer'): - answer = gr.Textbox(label="Answer is:", value=dom.answer, interactive=False) - relevant_paragraphs.change( - dom.click_handler_for_get_answer, - [relevant_paragraphs, question], - answer - ) - - # Covert the answer to Indian language - with gr.Column(scale=1, min_width=600): - with gr.Tab(label='Answer in selected language'): - # Select the language - language = gr.Dropdown( - list(constants_utils.INDIC_LANGUAGE.keys()), - label="Select language") - indic_lang_answer = gr.Textbox(label="Answer in the selected language is:", interactive=False) - answer.change( - dom.click_handler_for_get_indic_translation, - answer, - indic_lang_answer - ) - b_indic_lang_answer = gr.Button("Get answer in selected language").style(size='sm') - b_indic_lang_answer.click(fn=dom.click_handler_for_get_indic_translation, inputs=[answer, language], outputs=[indic_lang_answer]) - - - ############################################################################# - # Widget for General Query using AgGPT - with gr.Row(visible=False) as rowGeneral: - with gr.Column(scale=1, min_width=600): - chatbot = gr.Chatbot() - msg = gr.Textbox() - submit = gr.Button("Submit") - clear = gr.Button("Clear") - submit.click( - dom.kkms_kssw_obj.langchain_utils_obj.user, [msg, chatbot], [msg, chatbot] - ).then(dom.kkms_kssw_obj.langchain_utils_obj.bot, chatbot, chatbot) - clear.click( - dom.kkms_kssw_obj.langchain_utils_obj.clear_history, None, chatbot, queue=False) - - - ############################################################################# - # Widget for Mandi Price - with gr.Row(visible=False) as rowMandiPrice: - with gr.Column(scale=1, min_width=600): - # Select State - state_name = gr.Dropdown(constants_utils.MANDI_PRICE_STATES, label="Select state") - # APMC name - apmc_name = gr.Textbox(label="Enter APMC name", placeholder='Type the APMC name here') - # APMC name - commodity_name = gr.Textbox(label="Enter Commodity name", placeholder='Type the Commodity name here') - - # From/To date in yyyy-mm-dd format - from_date = gr.Textbox(label="From date?", value=dom.mandi_from_date, placeholder='Please enter the From date here in yyyy-mm-dd format') - to_date = gr.Textbox(label="To date?", value=dom.mandi_to_date, placeholder='Please enter the To date here in yyyy-mm-dd format') - - with gr.Column(scale=1, min_width=600): - mandi_price = gr.Textbox(label=f"Mandi Price is:", value=dom.mandi_price, interactive=False) - b_summary = gr.Button("Get Mandi Price").style(size='sm') - b_summary.click(fn=dom.click_handler_for_mandi_price, inputs=[state_name, apmc_name, commodity_name, from_date, to_date], outputs=[mandi_price]) - - - ############################################################################# - # Widget for Weather Info - with gr.Row(visible=False) as rowWeather: - ########### Weather Forecast ########### - with gr.Column(scale=1, min_width=600): - with gr.Tab(label='Weather Forecast for next 5 days'): - # Select the State - state = gr.Dropdown( - list(constants_utils.WEATHER_FORECAST_STATE_CODES.keys()), - label="Select state" - ) - - # Select District - district = gr.Dropdown( - choices=[], - label="Select District" - ) - - # Get districts of the selected state - state.change( - dom.click_handler_for_weather_forecast_districts_dropdown_list_update, - state, - district - ) - - # Get weather forecast on district selection event - district_weather = gr.Textbox(label=f"Weather forecast is:", interactive=False) - district.change( - dom.click_handler_for_weather_forecast_district, - [state, district], - district_weather - ) - - with gr.Column(scale=1, min_width=600): - with gr.Tab(label='Weather Forecast Summary'): - # Get the summary of the weather forecast - weather_forecast_summary = gr.Textbox(label="Weather Forecast Summary is:", interactive=False) - district.change( - dom.click_handler_for_weather_forecast_summary, - district_weather, - weather_forecast_summary - ) - - # Covert the weather forcast summary in Indian language - with gr.Column(scale=1, min_width=600): - with gr.Tab(label='Weather Forecast Summary in selected language'): - # Select the language - language = gr.Dropdown( - list(constants_utils.INDIC_LANGUAGE.keys()), - label="Select language") - indic_weather_forecast_summary = gr.Textbox(label="Weather Forecast Summary in the selected language is:", interactive=False) - - # By default display weather forecast summary in Hindi. User can change it later on. - weather_forecast_summary.change( - dom.click_handler_for_get_indic_translation, - weather_forecast_summary, - indic_weather_forecast_summary - ) - - # User can get the weather forecast summary in their preferred language as well - b_indic_weather_forecast_summary = gr.Button("Get answer in selected language").style(size='sm') - b_indic_weather_forecast_summary.click(fn=dom.click_handler_for_get_indic_translation, inputs=[weather_forecast_summary, language], outputs=[indic_weather_forecast_summary]) - - with gr.Column(scale=1, min_width=600): - with gr.Tab(label='Weather Info'): - weather = gr.Textbox(label=f"Current weather is:", interactive=False) - district.change( - dom.click_handler_for_get_weather, - district, - weather - ) - - - ############################################################################# - # Widget to load and process from the custom data source - with gr.Row(visible=False) as rowLoadCustomData: - with gr.Column(scale=1, min_width=600): - with gr.Tab(label='Load Custom Data (Do not upload data from the same file/url again. Once it is uploaded, it gets stored forever.)'): - question_category = gr.Dropdown( - constants_utils.INDEX_CATEGORY, - label="Select Query Type") - - doc_type = gr.Radio( - list(constants_utils.DATA_SOURCES.keys()), - label="Select data source (Supports uploading multiple Files/URLs)", - value="PDF" - ) - - with gr.Row(visible=True) as rowUploadPdf: - with gr.Column(scale=1, min_width=600): - file_output = gr.File() - upload_button = gr.UploadButton( - "Click to Upload PDF Files", - file_types=['.pdf'], - file_count="multiple" - ) - upload_button.upload(dom._upload_file, upload_button, file_output) - b_files = gr.Button("Load PDF Files").style(size='sm') - b_files.click( - fn=dom.click_handler_for_load_files_urls, - inputs=[doc_type, upload_button, question_category] - ) - - with gr.Row(visible=False) as rowUploadOnlinePdf: - with gr.Column(scale=1, min_width=600): - urls = gr.Textbox(label="Enter URLs for Online PDF (Supports uploading from multiple URLs. Enter the URLs in comma (,) separated format.)", placeholder='Type the URLs here') - b_urls = gr.Button("Load Online PDFs").style(size='sm') - b_urls.click( - fn=dom.click_handler_for_load_files_urls, - inputs=[doc_type, urls, question_category] - ) - - with gr.Row(visible=False) as rowUploadTextFile: - with gr.Column(scale=1, min_width=600): - file_output = gr.File() - upload_button = gr.UploadButton( - "Click to Upload Text Files", - file_types=['.txt'], - file_count="multiple" - ) - upload_button.upload(dom._upload_file, upload_button, file_output) - b_files = gr.Button("Load Text Files").style(size='sm') - b_files.click( - fn=dom.click_handler_for_load_files_urls, - inputs=[doc_type, file_output, question_category] - ) - - with gr.Row(visible=False) as rowUploadUrls: - with gr.Column(scale=1, min_width=600): - urls = gr.Textbox(label="Enter URLs (Supports uploading from multiple URLs. Enter the URLs in comma (,) separated format.)", placeholder='Type the URLs here') - b_urls = gr.Button("Load URLs").style(size='sm') - b_urls.click( - fn=dom.click_handler_for_load_files_urls, - inputs=[doc_type, urls, question_category] - ) - - doc_type.change( - fn=dom.select_files_urls, - inputs=doc_type, - outputs=[ - rowUploadPdf, - rowUploadOnlinePdf, - rowUploadTextFile, - rowUploadUrls, - ], - ) - - - widgets.change( - fn=dom.select_widget, - inputs=widgets, - outputs=[ - rowCustomQuery, - rowGeneral, - rowMandiPrice, - rowWeather, - rowLoadCustomData, - ], - ) - - -demo.launch(share=False) diff --git a/spaces/CikeyQI/Yunzai/Yunzai/plugins/ws-plugin/resources/help/version-info.css b/spaces/CikeyQI/Yunzai/Yunzai/plugins/ws-plugin/resources/help/version-info.css deleted file mode 100644 index 1ae34aaf5fce596dee49433256e8ce65f0275df5..0000000000000000000000000000000000000000 --- a/spaces/CikeyQI/Yunzai/Yunzai/plugins/ws-plugin/resources/help/version-info.css +++ /dev/null @@ -1,83 +0,0 @@ -* { - margin: 0; - padding: 0; - box-sizing: border-box; - user-select: none; -} -body { - font-size: 18px; - color: #1e1f20; - transform: scale(1.3); - transform-origin: 0 0; - width: 600px; -} -.container { - width: 600px; - padding: 10px 0 10px 0; - background-size: 100% 100%; -} -.log-cont { - background-size: cover; - margin: 5px 15px 5px 10px; - border-radius: 10px; -} -.log-cont .cont { - margin: 0; -} -.log-cont .cont-title { - font-size: 16px; - padding: 10px 20px 6px; -} -.log-cont .cont-title.current-version { - font-size: 20px; -} -.log-cont ul { - font-size: 14px; - padding-left: 20px; -} -.log-cont ul li { - margin: 3px 0; -} -.log-cont ul.sub-log-ul li { - margin: 1px 0; -} -.log-cont .cmd { - color: #d3bc8e; - display: inline-block; - border-radius: 3px; - background: rgba(0, 0, 0, 0.5); - padding: 0 3px; - margin: 1px 2px; -} -.log-cont .strong { - color: #24d5cd; -} -.log-cont .new { - display: inline-block; - width: 18px; - margin: 0 -3px 0 1px; -} -.log-cont .new:before { - content: "NEW"; - display: inline-block; - transform: scale(0.6); - transform-origin: 0 0; - color: #d3bc8e; - white-space: nowrap; -} -.dev-cont { - background: none; -} -.dev-cont .cont-title { - background: rgba(0, 0, 0, 0.7); -} -.dev-cont .cont-body { - background: rgba(0, 0, 0, 0.5); -} -.dev-cont .cont-body.dev-info { - background: rgba(0, 0, 0, 0.2); -} -.dev-cont .strong { - font-size: 15px; -} -/*# sourceMappingURL=version-info.css.map */ \ No newline at end of file diff --git a/spaces/Cyril666/ContourNet-ABI/maskrcnn_benchmark/utils/model_zoo.py b/spaces/Cyril666/ContourNet-ABI/maskrcnn_benchmark/utils/model_zoo.py deleted file mode 100644 index 92c1ed7e5dab54bd9fa3358185c71f9d5fcf26a8..0000000000000000000000000000000000000000 --- a/spaces/Cyril666/ContourNet-ABI/maskrcnn_benchmark/utils/model_zoo.py +++ /dev/null @@ -1,58 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved. -import os -import sys - -try: - from torch.utils.model_zoo import _download_url_to_file, urlparse, HASH_REGEX -except ImportError: - # support for pytorch 1.1.0dev - from torch.hub import _download_url_to_file, urlparse, HASH_REGEX - -from maskrcnn_benchmark.utils.comm import is_main_process -from maskrcnn_benchmark.utils.comm import synchronize - - -# very similar to https://github.com/pytorch/pytorch/blob/master/torch/utils/model_zoo.py -# but with a few improvements and modifications -def cache_url(url, model_dir=None, progress=True): - r"""Loads the Torch serialized object at the given URL. - If the object is already present in `model_dir`, it's deserialized and - returned. The filename part of the URL should follow the naming convention - ``filename-.ext`` where ```` is the first eight or more - digits of the SHA256 hash of the contents of the file. The hash is used to - ensure unique names and to verify the contents of the file. - The default value of `model_dir` is ``$TORCH_HOME/models`` where - ``$TORCH_HOME`` defaults to ``~/.torch``. The default directory can be - overridden with the ``$TORCH_MODEL_ZOO`` environment variable. - Args: - url (string): URL of the object to download - model_dir (string, optional): directory in which to save the object - progress (bool, optional): whether or not to display a progress bar to stderr - Example: - >>> cached_file = maskrcnn_benchmark.utils.model_zoo.cache_url('https://s3.amazonaws.com/pytorch/models/resnet18-5c106cde.pth') - """ - if model_dir is None: - torch_home = os.path.expanduser(os.getenv('TORCH_HOME', '~/.torch')) - model_dir = os.getenv('TORCH_MODEL_ZOO', os.path.join(torch_home, 'models')) - if not os.path.exists(model_dir): - os.makedirs(model_dir) - parts = urlparse(url) - filename = os.path.basename(parts.path) - if filename == "model_final.pkl": - # workaround as pre-trained Caffe2 models from Detectron have all the same filename - # so make the full path the filename by replacing / with _ - filename = parts.path.replace("/", "_") - cached_file = os.path.join(model_dir, filename) - if not os.path.exists(cached_file) and is_main_process(): - sys.stderr.write('Downloading: "{}" to {}\n'.format(url, cached_file)) - hash_prefix = HASH_REGEX.search(filename) - if hash_prefix is not None: - hash_prefix = hash_prefix.group(1) - # workaround: Caffe2 models don't have a hash, but follow the R-50 convention, - # which matches the hash PyTorch uses. So we skip the hash matching - # if the hash_prefix is less than 6 characters - if len(hash_prefix) < 6: - hash_prefix = None - _download_url_to_file(url, cached_file, hash_prefix, progress=progress) - synchronize() - return cached_file diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/annotated_types/test_cases.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/annotated_types/test_cases.py deleted file mode 100644 index ae2c084b875f812aa62e7cbc5eca796104fa5040..0000000000000000000000000000000000000000 --- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/annotated_types/test_cases.py +++ /dev/null @@ -1,133 +0,0 @@ -import sys -from datetime import date, datetime, timedelta, timezone -from decimal import Decimal -from typing import Any, Dict, Iterable, Iterator, List, NamedTuple, Set, Tuple - -if sys.version_info < (3, 9): - from typing_extensions import Annotated -else: - from typing import Annotated - -import annotated_types as at - - -class Case(NamedTuple): - """ - A test case for `annotated_types`. - """ - - annotation: Any - valid_cases: Iterable[Any] - invalid_cases: Iterable[Any] - - -def cases() -> Iterable[Case]: - # Gt, Ge, Lt, Le - yield Case(Annotated[int, at.Gt(4)], (5, 6, 1000), (4, 0, -1)) - yield Case(Annotated[float, at.Gt(0.5)], (0.6, 0.7, 0.8, 0.9), (0.5, 0.0, -0.1)) - yield Case( - Annotated[datetime, at.Gt(datetime(2000, 1, 1))], - [datetime(2000, 1, 2), datetime(2000, 1, 3)], - [datetime(2000, 1, 1), datetime(1999, 12, 31)], - ) - yield Case( - Annotated[datetime, at.Gt(date(2000, 1, 1))], - [date(2000, 1, 2), date(2000, 1, 3)], - [date(2000, 1, 1), date(1999, 12, 31)], - ) - yield Case( - Annotated[datetime, at.Gt(Decimal('1.123'))], - [Decimal('1.1231'), Decimal('123')], - [Decimal('1.123'), Decimal('0')], - ) - - yield Case(Annotated[int, at.Ge(4)], (4, 5, 6, 1000, 4), (0, -1)) - yield Case(Annotated[float, at.Ge(0.5)], (0.5, 0.6, 0.7, 0.8, 0.9), (0.4, 0.0, -0.1)) - yield Case( - Annotated[datetime, at.Ge(datetime(2000, 1, 1))], - [datetime(2000, 1, 2), datetime(2000, 1, 3)], - [datetime(1998, 1, 1), datetime(1999, 12, 31)], - ) - - yield Case(Annotated[int, at.Lt(4)], (0, -1), (4, 5, 6, 1000, 4)) - yield Case(Annotated[float, at.Lt(0.5)], (0.4, 0.0, -0.1), (0.5, 0.6, 0.7, 0.8, 0.9)) - yield Case( - Annotated[datetime, at.Lt(datetime(2000, 1, 1))], - [datetime(1999, 12, 31), datetime(1999, 12, 31)], - [datetime(2000, 1, 2), datetime(2000, 1, 3)], - ) - - yield Case(Annotated[int, at.Le(4)], (4, 0, -1), (5, 6, 1000)) - yield Case(Annotated[float, at.Le(0.5)], (0.5, 0.0, -0.1), (0.6, 0.7, 0.8, 0.9)) - yield Case( - Annotated[datetime, at.Le(datetime(2000, 1, 1))], - [datetime(2000, 1, 1), datetime(1999, 12, 31)], - [datetime(2000, 1, 2), datetime(2000, 1, 3)], - ) - - # Interval - yield Case(Annotated[int, at.Interval(gt=4)], (5, 6, 1000), (4, 0, -1)) - yield Case(Annotated[int, at.Interval(gt=4, lt=10)], (5, 6), (4, 10, 1000, 0, -1)) - yield Case(Annotated[float, at.Interval(ge=0.5, le=1)], (0.5, 0.9, 1), (0.49, 1.1)) - yield Case( - Annotated[datetime, at.Interval(gt=datetime(2000, 1, 1), le=datetime(2000, 1, 3))], - [datetime(2000, 1, 2), datetime(2000, 1, 3)], - [datetime(2000, 1, 1), datetime(2000, 1, 4)], - ) - - yield Case(Annotated[int, at.MultipleOf(multiple_of=3)], (0, 3, 9), (1, 2, 4)) - yield Case(Annotated[float, at.MultipleOf(multiple_of=0.5)], (0, 0.5, 1, 1.5), (0.4, 1.1)) - - # lengths - - yield Case(Annotated[str, at.MinLen(3)], ('123', '1234', 'x' * 10), ('', '1', '12')) - yield Case(Annotated[str, at.Len(3)], ('123', '1234', 'x' * 10), ('', '1', '12')) - yield Case(Annotated[List[int], at.MinLen(3)], ([1, 2, 3], [1, 2, 3, 4], [1] * 10), ([], [1], [1, 2])) - yield Case(Annotated[List[int], at.Len(3)], ([1, 2, 3], [1, 2, 3, 4], [1] * 10), ([], [1], [1, 2])) - - yield Case(Annotated[str, at.MaxLen(4)], ('', '1234'), ('12345', 'x' * 10)) - yield Case(Annotated[str, at.Len(0, 4)], ('', '1234'), ('12345', 'x' * 10)) - yield Case(Annotated[List[str], at.MaxLen(4)], ([], ['a', 'bcdef'], ['a', 'b', 'c']), (['a'] * 5, ['b'] * 10)) - yield Case(Annotated[List[str], at.Len(0, 4)], ([], ['a', 'bcdef'], ['a', 'b', 'c']), (['a'] * 5, ['b'] * 10)) - - yield Case(Annotated[str, at.Len(3, 5)], ('123', '12345'), ('', '1', '12', '123456', 'x' * 10)) - yield Case(Annotated[str, at.Len(3, 3)], ('123',), ('12', '1234')) - - yield Case(Annotated[Dict[int, int], at.Len(2, 3)], [{1: 1, 2: 2}], [{}, {1: 1}, {1: 1, 2: 2, 3: 3, 4: 4}]) - yield Case(Annotated[Set[int], at.Len(2, 3)], ({1, 2}, {1, 2, 3}), (set(), {1}, {1, 2, 3, 4})) - yield Case(Annotated[Tuple[int, ...], at.Len(2, 3)], ((1, 2), (1, 2, 3)), ((), (1,), (1, 2, 3, 4))) - - # Timezone - - yield Case( - Annotated[datetime, at.Timezone(None)], [datetime(2000, 1, 1)], [datetime(2000, 1, 1, tzinfo=timezone.utc)] - ) - yield Case( - Annotated[datetime, at.Timezone(...)], [datetime(2000, 1, 1, tzinfo=timezone.utc)], [datetime(2000, 1, 1)] - ) - yield Case( - Annotated[datetime, at.Timezone(timezone.utc)], - [datetime(2000, 1, 1, tzinfo=timezone.utc)], - [datetime(2000, 1, 1), datetime(2000, 1, 1, tzinfo=timezone(timedelta(hours=6)))], - ) - yield Case( - Annotated[datetime, at.Timezone('Europe/London')], - [datetime(2000, 1, 1, tzinfo=timezone(timedelta(0), name='Europe/London'))], - [datetime(2000, 1, 1), datetime(2000, 1, 1, tzinfo=timezone(timedelta(hours=6)))], - ) - - # predicate types - - yield Case(at.LowerCase[str], ['abc', 'foobar'], ['', 'A', 'Boom']) - yield Case(at.UpperCase[str], ['ABC', 'DEFO'], ['', 'a', 'abc', 'AbC']) - yield Case(at.IsDigits[str], ['123'], ['', 'ab', 'a1b2']) - yield Case(at.IsAscii[str], ['123', 'foo bar'], ['£100', '😊', 'whatever 👀']) - - yield Case(Annotated[int, at.Predicate(lambda x: x % 2 == 0)], [0, 2, 4], [1, 3, 5]) - - # custom GroupedMetadata - class MyCustomGroupedMetadata(at.GroupedMetadata): - def __iter__(self) -> Iterator[at.Predicate]: - yield at.Predicate(lambda x: float(x).is_integer()) - - yield Case(Annotated[float, MyCustomGroupedMetadata()], [0, 2.0], [0.01, 1.5]) diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fastapi/types.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fastapi/types.py deleted file mode 100644 index 7adf565a7b6b7d4f1eed3adf6a96faab66fe517c..0000000000000000000000000000000000000000 --- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fastapi/types.py +++ /dev/null @@ -1,11 +0,0 @@ -import types -from enum import Enum -from typing import Any, Callable, Dict, Set, Type, TypeVar, Union - -from pydantic import BaseModel - -DecoratedCallable = TypeVar("DecoratedCallable", bound=Callable[..., Any]) -UnionType = getattr(types, "UnionType", Union) -NoneType = getattr(types, "UnionType", None) -ModelNameMap = Dict[Union[Type[BaseModel], Type[Enum]], str] -IncEx = Union[Set[int], Set[str], Dict[int, Any], Dict[str, Any]] diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/ipython_ext.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/ipython_ext.py deleted file mode 100644 index 94f4404065418a328c312b586370ed1ccd161a35..0000000000000000000000000000000000000000 --- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/ipython_ext.py +++ /dev/null @@ -1,23 +0,0 @@ -try: - from IPython.core.magic import needs_local_scope, register_cell_magic -except ImportError: - pass - -import warnings - -import gradio as gr - - -def load_ipython_extension(ipython): - __demo = gr.Blocks() - - @register_cell_magic - @needs_local_scope - def blocks(line, cell, local_ns=None): - if "gr.Interface" in cell: - warnings.warn( - "Usage of gradio.Interface with %%blocks may result in errors." - ) - with __demo.clear(): - exec(cell, None, local_ns) - __demo.launch(quiet=True) diff --git a/spaces/DaleChen/AutoGPT/autogpt/speech/say.py b/spaces/DaleChen/AutoGPT/autogpt/speech/say.py deleted file mode 100644 index 727983d12bf334205550a54bcd69a7a36824eda4..0000000000000000000000000000000000000000 --- a/spaces/DaleChen/AutoGPT/autogpt/speech/say.py +++ /dev/null @@ -1,41 +0,0 @@ -""" Text to speech module """ -import threading -from threading import Semaphore - -from autogpt.config import Config -from autogpt.speech.brian import BrianSpeech -from autogpt.speech.eleven_labs import ElevenLabsSpeech -from autogpt.speech.gtts import GTTSVoice -from autogpt.speech.macos_tts import MacOSTTS - -CFG = Config() -DEFAULT_VOICE_ENGINE = GTTSVoice() -VOICE_ENGINE = None -if CFG.elevenlabs_api_key: - VOICE_ENGINE = ElevenLabsSpeech() -elif CFG.use_mac_os_tts == "True": - VOICE_ENGINE = MacOSTTS() -elif CFG.use_brian_tts == "True": - VOICE_ENGINE = BrianSpeech() -else: - VOICE_ENGINE = GTTSVoice() - - -QUEUE_SEMAPHORE = Semaphore( - 1 -) # The amount of sounds to queue before blocking the main thread - - -def say_text(text: str, voice_index: int = 0) -> None: - """Speak the given text using the given voice index""" - - def speak() -> None: - success = VOICE_ENGINE.say(text, voice_index) - if not success: - DEFAULT_VOICE_ENGINE.say(text) - - QUEUE_SEMAPHORE.release() - - QUEUE_SEMAPHORE.acquire(True) - thread = threading.Thread(target=speak) - thread.start() diff --git a/spaces/Daroach/anime-remove-background/README.md b/spaces/Daroach/anime-remove-background/README.md deleted file mode 100644 index 1ba3cb5ea0e994e246d57b7d62b8aa5a6331901c..0000000000000000000000000000000000000000 --- a/spaces/Daroach/anime-remove-background/README.md +++ /dev/null @@ -1,14 +0,0 @@ ---- -title: Anime Remove Background -emoji: 🪄🖼️ -colorFrom: indigo -colorTo: pink -sdk: gradio -sdk_version: 3.1.4 -app_file: app.py -pinned: false -license: apache-2.0 -duplicated_from: skytnt/anime-remove-background ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/DeepDrivePL/BEiT-Semantic-Segmentation/app.py b/spaces/DeepDrivePL/BEiT-Semantic-Segmentation/app.py deleted file mode 100644 index 6bee64ae8f0b850b965d03710edd80fe397e9aa8..0000000000000000000000000000000000000000 --- a/spaces/DeepDrivePL/BEiT-Semantic-Segmentation/app.py +++ /dev/null @@ -1,57 +0,0 @@ -import numpy as np -import cv2 -import gradio as gr -import torch - -from ade20k_colors import colors - -from transformers import BeitFeatureExtractor, BeitForSemanticSegmentation - - -beit_models = ['microsoft/beit-base-finetuned-ade-640-640', - 'microsoft/beit-large-finetuned-ade-640-640'] - -models = [BeitForSemanticSegmentation.from_pretrained(m) for m in beit_models] -extractors = [BeitFeatureExtractor.from_pretrained(m) for m in beit_models] - - -def apply_colors(img): - ret = np.zeros((img.shape[0], img.shape[1], 3), dtype=np.uint8) - - for y in range(img.shape[0]): - for x in range(img.shape[1]): - ret[y,x] = colors[np.argmax(img[y,x])] - - return ret - - -def inference(image, chosen_model): - feature_extractor = extractors[chosen_model] - model = models[chosen_model] - - inputs = feature_extractor(images=image, return_tensors='pt') - outputs = model(**inputs) - - logits = outputs.logits - - output = torch.sigmoid(logits).detach().numpy()[0] - output = np.transpose(output, (1,2,0)) - - output = apply_colors(output) - - return cv2.resize(output, image.shape[1::-1]) - - -inputs = [gr.inputs.Image(label='Input Image'), - gr.inputs.Radio(['Base', 'Large'], label='BEiT Model', type='index')] - -gr.Interface( - inference, - inputs, - gr.outputs.Image(label='Output'), - title='BEiT - Semantic Segmentation', - description='BEIT: BERT Pre-Training of Image Transformers', - examples=[['images/armchair.jpg', 'Base'], - ['images/cat.jpg', 'Base'], - ['images/plant.jpg', 'Large']] - ).launch() \ No newline at end of file diff --git a/spaces/DragGan/DragGan-Inversion/PTI/models/e4e/stylegan2/op/upfirdn2d.cpp b/spaces/DragGan/DragGan-Inversion/PTI/models/e4e/stylegan2/op/upfirdn2d.cpp deleted file mode 100644 index d2e633dc896433c205e18bc3e455539192ff968e..0000000000000000000000000000000000000000 --- a/spaces/DragGan/DragGan-Inversion/PTI/models/e4e/stylegan2/op/upfirdn2d.cpp +++ /dev/null @@ -1,23 +0,0 @@ -#include - - -torch::Tensor upfirdn2d_op(const torch::Tensor& input, const torch::Tensor& kernel, - int up_x, int up_y, int down_x, int down_y, - int pad_x0, int pad_x1, int pad_y0, int pad_y1); - -#define CHECK_CUDA(x) TORCH_CHECK(x.type().is_cuda(), #x " must be a CUDA tensor") -#define CHECK_CONTIGUOUS(x) TORCH_CHECK(x.is_contiguous(), #x " must be contiguous") -#define CHECK_INPUT(x) CHECK_CUDA(x); CHECK_CONTIGUOUS(x) - -torch::Tensor upfirdn2d(const torch::Tensor& input, const torch::Tensor& kernel, - int up_x, int up_y, int down_x, int down_y, - int pad_x0, int pad_x1, int pad_y0, int pad_y1) { - CHECK_CUDA(input); - CHECK_CUDA(kernel); - - return upfirdn2d_op(input, kernel, up_x, up_y, down_x, down_y, pad_x0, pad_x1, pad_y0, pad_y1); -} - -PYBIND11_MODULE(TORCH_EXTENSION_NAME, m) { - m.def("upfirdn2d", &upfirdn2d, "upfirdn2d (CUDA)"); -} \ No newline at end of file diff --git a/spaces/DragGan/DragGan-Inversion/stylegan_human/torch_utils/models.py b/spaces/DragGan/DragGan-Inversion/stylegan_human/torch_utils/models.py deleted file mode 100644 index 936e16ad992fce3faf868d974274b5cd7c6a6be9..0000000000000000000000000000000000000000 --- a/spaces/DragGan/DragGan-Inversion/stylegan_human/torch_utils/models.py +++ /dev/null @@ -1,770 +0,0 @@ -# Copyright (c) SenseTime Research. All rights reserved. - -# https://github.com/rosinality/stylegan2-pytorch/blob/master/model.py - -import math -import random -import functools -import operator - -import torch -from torch import nn -from torch.nn import functional as F -import torch.nn.init as init -from torch.autograd import Function - -from .op_edit import FusedLeakyReLU, fused_leaky_relu, upfirdn2d - - -class PixelNorm(nn.Module): - def __init__(self): - super().__init__() - - def forward(self, input): - return input * torch.rsqrt(torch.mean(input ** 2, dim=1, keepdim=True) + 1e-8) - - -def make_kernel(k): - k = torch.tensor(k, dtype=torch.float32) - if k.ndim == 1: - k = k[None, :] * k[:, None] - k /= k.sum() - return k - - -class Upsample(nn.Module): - def __init__(self, kernel, factor=2): - super().__init__() - - self.factor = factor - kernel = make_kernel(kernel) * (factor ** 2) - self.register_buffer("kernel", kernel) - - p = kernel.shape[0] - factor - - pad0 = (p + 1) // 2 + factor - 1 - pad1 = p // 2 - - self.pad = (pad0, pad1) - - def forward(self, input): - out = upfirdn2d(input, self.kernel, up=self.factor, - down=1, pad=self.pad) - return out - - -class Downsample(nn.Module): - def __init__(self, kernel, factor=2): - super().__init__() - - self.factor = factor - kernel = make_kernel(kernel) - self.register_buffer("kernel", kernel) - - p = kernel.shape[0] - factor - - pad0 = (p + 1) // 2 - pad1 = p // 2 - - self.pad = (pad0, pad1) - - def forward(self, input): - out = upfirdn2d(input, self.kernel, up=1, - down=self.factor, pad=self.pad) - return out - - -class Blur(nn.Module): - def __init__(self, kernel, pad, upsample_factor=1): - super().__init__() - - kernel = make_kernel(kernel) - - if upsample_factor > 1: - kernel = kernel * (upsample_factor ** 2) - - self.register_buffer("kernel", kernel) - - self.pad = pad - - def forward(self, input): - out = upfirdn2d(input, self.kernel, pad=self.pad) - return out - - -class EqualConv2d(nn.Module): - def __init__( - self, in_channel, out_channel, kernel_size, stride=1, padding=0, bias=True - ): - super().__init__() - - self.weight = nn.Parameter( - torch.randn(out_channel, in_channel, kernel_size, kernel_size) - ) - self.scale = 1 / math.sqrt(in_channel * kernel_size ** 2) - - self.stride = stride - self.padding = padding - - if bias: - self.bias = nn.Parameter(torch.zeros(out_channel)) - - else: - self.bias = None - - def forward(self, input): - out = F.conv2d( - input, - self.weight * self.scale, - bias=self.bias, - stride=self.stride, - padding=self.padding, - ) - return out - - def __repr__(self): - return ( - f"{self.__class__.__name__}({self.weight.shape[1]}, {self.weight.shape[0]}," - f" {self.weight.shape[2]}, stride={self.stride}, padding={self.padding})" - ) - - -class EqualLinear(nn.Module): - def __init__( - self, in_dim, out_dim, bias=True, bias_init=0, lr_mul=1, activation=None - ): - super().__init__() - - self.weight = nn.Parameter(torch.randn(out_dim, in_dim).div_(lr_mul)) - - if bias: - self.bias = nn.Parameter(torch.zeros(out_dim).fill_(bias_init)) - else: - self.bias = None - - self.activation = activation - - self.scale = (1 / math.sqrt(in_dim)) * lr_mul - self.lr_mul = lr_mul - - def forward(self, input): - if self.activation: - out = F.linear(input, self.weight * self.scale) - out = fused_leaky_relu(out, self.bias * self.lr_mul) - else: - out = F.linear( - input, self.weight * self.scale, bias=self.bias * self.lr_mul - ) - return out - - def __repr__(self): - return ( - f"{self.__class__.__name__}({self.weight.shape[1]}, {self.weight.shape[0]})" - ) - - -class ScaledLeakyReLU(nn.Module): - def __init__(self, negative_slope=0.2): - super().__init__() - self.negative_slope = negative_slope - - def forward(self, input): - out = F.leaky_relu(input, negative_slope=self.negative_slope) - return out * math.sqrt(2) - - -class ModulatedConv2d(nn.Module): - def __init__( - self, - in_channel, - out_channel, - kernel_size, - style_dim, - demodulate=True, - upsample=False, - downsample=False, - blur_kernel=[1, 3, 3, 1], - ): - super().__init__() - - self.eps = 1e-8 - self.kernel_size = kernel_size - self.in_channel = in_channel - self.out_channel = out_channel - self.upsample = upsample - self.downsample = downsample - - if upsample: - factor = 2 - p = (len(blur_kernel) - factor) - (kernel_size - 1) - pad0 = (p + 1) // 2 + factor - 1 - pad1 = p // 2 + 1 - self.blur = Blur(blur_kernel, pad=( - pad0, pad1), upsample_factor=factor) - - if downsample: - factor = 2 - p = (len(blur_kernel) - factor) + (kernel_size - 1) - pad0 = (p + 1) // 2 - pad1 = p // 2 - self.blur = Blur(blur_kernel, pad=(pad0, pad1)) - - fan_in = in_channel * kernel_size ** 2 - self.scale = 1 / math.sqrt(fan_in) - self.padding = kernel_size // 2 - self.weight = nn.Parameter( - torch.randn(1, out_channel, in_channel, kernel_size, kernel_size) - ) - self.modulation = EqualLinear(style_dim, in_channel, bias_init=1) - self.demodulate = demodulate - - def __repr__(self): - return ( - f"{self.__class__.__name__}({self.in_channel}, {self.out_channel}, {self.kernel_size}, " - f"upsample={self.upsample}, downsample={self.downsample})" - ) - - def forward(self, input, style): - batch, in_channel, height, width = input.shape - - style = self.modulation(style).view(batch, 1, in_channel, 1, 1) - weight = self.scale * self.weight * style - - if self.demodulate: - demod = torch.rsqrt(weight.pow(2).sum([2, 3, 4]) + 1e-8) - weight = weight * demod.view(batch, self.out_channel, 1, 1, 1) - - weight = weight.view( - batch * self.out_channel, in_channel, self.kernel_size, self.kernel_size - ) - - if self.upsample: - input = input.view(1, batch * in_channel, height, width) - weight = weight.view( - batch, self.out_channel, in_channel, self.kernel_size, self.kernel_size - ) - weight = weight.transpose(1, 2).reshape( - batch * in_channel, self.out_channel, self.kernel_size, self.kernel_size - ) - out = F.conv_transpose2d( - input, weight, padding=0, stride=2, groups=batch) - _, _, height, width = out.shape - out = out.view(batch, self.out_channel, height, width) - out = self.blur(out) - - elif self.downsample: - input = self.blur(input) - _, _, height, width = input.shape - input = input.view(1, batch * in_channel, height, width) - out = F.conv2d(input, weight, padding=0, stride=2, groups=batch) - _, _, height, width = out.shape - out = out.view(batch, self.out_channel, height, width) - - else: - input = input.view(1, batch * in_channel, height, width) - out = F.conv2d(input, weight, padding=self.padding, groups=batch) - _, _, height, width = out.shape - out = out.view(batch, self.out_channel, height, width) - - return out - - -class NoiseInjection(nn.Module): - def __init__(self): - super().__init__() - self.weight = nn.Parameter(torch.zeros(1)) - - def forward(self, image, noise=None): - if noise is None: - batch, _, height, width = image.shape - noise = image.new_empty(batch, 1, height, width).normal_() - return image + self.weight * noise - - -class ConstantInput(nn.Module): - def __init__(self, channel, size=4): - super().__init__() - self.input = nn.Parameter(torch.randn(1, channel, size, size // 2)) - - def forward(self, input): - batch = input.shape[0] - out = self.input.repeat(batch, 1, 1, 1) - return out - - -class StyledConv(nn.Module): - def __init__( - self, - in_channel, - out_channel, - kernel_size, - style_dim, - upsample=False, - blur_kernel=[1, 3, 3, 1], - demodulate=True, - ): - super().__init__() - self.conv = ModulatedConv2d( - in_channel, - out_channel, - kernel_size, - style_dim, - upsample=upsample, - blur_kernel=blur_kernel, - demodulate=demodulate, - ) - self.noise = NoiseInjection() - self.activate = FusedLeakyReLU(out_channel) - - def forward(self, input, style, noise=None): - out = self.conv(input, style) - out = self.noise(out, noise=noise) - out = self.activate(out) - return out - - -class ToRGB(nn.Module): - def __init__(self, in_channel, style_dim, upsample=True, blur_kernel=[1, 3, 3, 1]): - super().__init__() - if upsample: - self.upsample = Upsample(blur_kernel) - - self.conv = ModulatedConv2d( - in_channel, 3, 1, style_dim, demodulate=False) - self.bias = nn.Parameter(torch.zeros(1, 3, 1, 1)) - - def forward(self, input, style, skip=None): - out = self.conv(input, style) - out = out + self.bias - - if skip is not None: - skip = self.upsample(skip) - out = out + skip - - return out - - -class Generator(nn.Module): - def __init__( - self, - size, - style_dim, - n_mlp, - channel_multiplier=1, - blur_kernel=[1, 3, 3, 1], - lr_mlp=0.01, - small=False, - small_isaac=False, - ): - super().__init__() - - self.size = size - - if small and size > 64: - raise ValueError("small only works for sizes <= 64") - - self.style_dim = style_dim - layers = [PixelNorm()] - - for i in range(n_mlp): - layers.append( - EqualLinear( - style_dim, style_dim, lr_mul=lr_mlp, activation="fused_lrelu" - ) - ) - - self.style = nn.Sequential(*layers) - - if small: - self.channels = { - 4: 64 * channel_multiplier, - 8: 64 * channel_multiplier, - 16: 64 * channel_multiplier, - 32: 64 * channel_multiplier, - 64: 64 * channel_multiplier, - } - elif small_isaac: - self.channels = {4: 256, 8: 256, - 16: 256, 32: 256, 64: 128, 128: 128} - else: - self.channels = { - 4: 512, - 8: 512, - 16: 512, - 32: 512, - 64: 256 * channel_multiplier, - 128: 128 * channel_multiplier, - 256: 64 * channel_multiplier, - 512: 32 * channel_multiplier, - 1024: 16 * channel_multiplier, - } - - self.input = ConstantInput(self.channels[4]) - self.conv1 = StyledConv( - self.channels[4], self.channels[4], 3, style_dim, blur_kernel=blur_kernel - ) - self.to_rgb1 = ToRGB(self.channels[4], style_dim, upsample=False) - - self.log_size = int(math.log(size, 2)) - self.num_layers = (self.log_size - 2) * 2 + 1 - - self.convs = nn.ModuleList() - self.upsamples = nn.ModuleList() - self.to_rgbs = nn.ModuleList() - self.noises = nn.Module() - - in_channel = self.channels[4] - - for layer_idx in range(self.num_layers): - res = (layer_idx + 5) // 2 - shape = [1, 1, 2 ** res, 2 ** res // 2] - self.noises.register_buffer( - "noise_{}".format(layer_idx), torch.randn(*shape) - ) - - for i in range(3, self.log_size + 1): - out_channel = self.channels[2 ** i] - - self.convs.append( - StyledConv( - in_channel, - out_channel, - 3, - style_dim, - upsample=True, - blur_kernel=blur_kernel, - ) - ) - - self.convs.append( - StyledConv( - out_channel, out_channel, 3, style_dim, blur_kernel=blur_kernel - ) - ) - - self.to_rgbs.append(ToRGB(out_channel, style_dim)) - in_channel = out_channel - - self.n_latent = self.log_size * 2 - 2 - - def make_noise(self): - device = self.input.input.device - - noises = [torch.randn(1, 1, 2 ** 2, 2 ** 2 // 2, device=device)] - - for i in range(3, self.log_size + 1): - for _ in range(2): - noises.append(torch.randn( - 1, 1, 2 ** i, 2 ** i // 2, device=device)) - - return noises - - def mean_latent(self, n_latent): - latent_in = torch.randn( - n_latent, self.style_dim, device=self.input.input.device - ) - latent = self.style(latent_in).mean(0, keepdim=True) - - return latent - - def get_latent(self, input): - return self.style(input) - - def forward( - self, - styles, - return_latents=False, - return_features=False, - inject_index=None, - truncation=1, - truncation_latent=None, - input_is_latent=False, - noise=None, - randomize_noise=True, - real=False, - ): - if not input_is_latent: - styles = [self.style(s) for s in styles] - if noise is None: - if randomize_noise: - noise = [None] * self.num_layers - else: - noise = [ - getattr(self.noises, "noise_{}".format(i)) - for i in range(self.num_layers) - ] - - if truncation < 1: - # print('truncation_latent: ', truncation_latent.shape) - if not real: # if type(styles) == list: - style_t = [] - for style in styles: - style_t.append( - truncation_latent + truncation * - (style - truncation_latent) - ) # (-1.1162e-03-(-1.0914e-01))*0.8+(-1.0914e-01) - styles = style_t - else: # styles are latent (tensor: 1,18,512), for real PTI output - truncation_latent = truncation_latent.repeat( - 18, 1).unsqueeze(0) # (1,512) --> (1,18,512) - styles = torch.add(truncation_latent, torch.mul( - torch.sub(styles, truncation_latent), truncation)) - # print('now styles after truncation : ', styles) - # if type(styles) == list and len(styles) < 2: # this if for input as list of [(1,512)] - if not real: - if len(styles) < 2: - inject_index = self.n_latent - if styles[0].ndim < 3: - latent = styles[0].unsqueeze(1).repeat(1, inject_index, 1) - else: - latent = styles[0] - elif type(styles) == list: - if inject_index is None: - inject_index = 4 - - latent = styles[0].unsqueeze(0) - if latent.shape[1] == 1: - latent = latent.repeat(1, inject_index, 1) - else: - latent = latent[:, :inject_index, :] - latent2 = styles[1].unsqueeze(1).repeat( - 1, self.n_latent - inject_index, 1) - latent = torch.cat([latent, latent2], 1) - # input is tensor of size with torch.Size([1, 18, 512]), for real PTI output - else: - latent = styles - - # print(f'processed latent: {latent.shape}') - - features = {} - out = self.input(latent) - features["out_0"] = out - out = self.conv1(out, latent[:, 0], noise=noise[0]) - features["conv1_0"] = out - - skip = self.to_rgb1(out, latent[:, 1]) - features["skip_0"] = skip - i = 1 - for conv1, conv2, noise1, noise2, to_rgb in zip( - self.convs[::2], self.convs[1::2], noise[1::2], noise[2::2], self.to_rgbs - ): - out = conv1(out, latent[:, i], noise=noise1) - features["conv1_{}".format(i)] = out - out = conv2(out, latent[:, i + 1], noise=noise2) - features["conv2_{}".format(i)] = out - skip = to_rgb(out, latent[:, i + 2], skip) - features["skip_{}".format(i)] = skip - - i += 2 - - image = skip - - if return_latents: - return image, latent - elif return_features: - return image, features - else: - return image, None - - -class ConvLayer(nn.Sequential): - def __init__( - self, - in_channel, - out_channel, - kernel_size, - downsample=False, - blur_kernel=[1, 3, 3, 1], - bias=True, - activate=True, - ): - layers = [] - - if downsample: - factor = 2 - p = (len(blur_kernel) - factor) + (kernel_size - 1) - pad0 = (p + 1) // 2 - pad1 = p // 2 - - layers.append(Blur(blur_kernel, pad=(pad0, pad1))) - - stride = 2 - self.padding = 0 - - else: - stride = 1 - self.padding = kernel_size // 2 - - layers.append( - EqualConv2d( - in_channel, - out_channel, - kernel_size, - padding=self.padding, - stride=stride, - bias=bias and not activate, - ) - ) - - if activate: - if bias: - layers.append(FusedLeakyReLU(out_channel)) - else: - layers.append(ScaledLeakyReLU(0.2)) - - super().__init__(*layers) - - -class ResBlock(nn.Module): - def __init__(self, in_channel, out_channel, blur_kernel=[1, 3, 3, 1]): - super().__init__() - - self.conv1 = ConvLayer(in_channel, in_channel, 3) - self.conv2 = ConvLayer(in_channel, out_channel, 3, downsample=True) - - self.skip = ConvLayer( - in_channel, out_channel, 1, downsample=True, activate=False, bias=False - ) - - def forward(self, input): - out = self.conv1(input) - out = self.conv2(out) - - skip = self.skip(input) - out = (out + skip) / math.sqrt(2) - - return out - - -class StyleDiscriminator(nn.Module): - def __init__( - self, size, channel_multiplier=2, blur_kernel=[1, 3, 3, 1], small=False - ): - super().__init__() - - if small: - channels = {4: 64, 8: 64, 16: 64, 32: 64, 64: 64} - - else: - channels = { - 4: 512, - 8: 512, - 16: 512, - 32: 512, - 64: 256 * channel_multiplier, - 128: 128 * channel_multiplier, - 256: 64 * channel_multiplier, - 512: 32 * channel_multiplier, - 1024: 16 * channel_multiplier, - } - - convs = [ConvLayer(3, channels[size], 1)] - - log_size = int(math.log(size, 2)) - in_channel = channels[size] - - for i in range(log_size, 2, -1): - out_channel = channels[2 ** (i - 1)] - - convs.append(ResBlock(in_channel, out_channel, blur_kernel)) - - in_channel = out_channel - - self.convs = nn.Sequential(*convs) - - self.stddev_group = 4 - self.stddev_feat = 1 - - self.final_conv = ConvLayer(in_channel + 1, channels[4], 3) - self.final_linear = nn.Sequential( - EqualLinear(channels[4] * 4 * 4, channels[4], - activation="fused_lrelu"), - EqualLinear(channels[4], 1), - ) - - def forward(self, input): - h = input - h_list = [] - - for index, blocklist in enumerate(self.convs): - h = blocklist(h) - h_list.append(h) - - out = h - batch, channel, height, width = out.shape - group = min(batch, self.stddev_group) - stddev = out.view( - group, -1, self.stddev_feat, channel // self.stddev_feat, height, width - ) - stddev = torch.sqrt(stddev.var(0, unbiased=False) + 1e-8) - stddev = stddev.mean([2, 3, 4], keepdims=True).squeeze(2) - stddev = stddev.repeat(group, 1, height, width) - out = torch.cat([out, stddev], 1) - - out = self.final_conv(out) - h_list.append(out) - - out = out.view(batch, -1) - out = self.final_linear(out) - - return out, h_list - - -class StyleEncoder(nn.Module): - def __init__(self, size, w_dim=512): - super().__init__() - - channels = { - 4: 512, - 8: 512, - 16: 512, - 32: 512, - 64: 256, - 128: 128, - 256: 64, - 512: 32, - 1024: 16 - } - - self.w_dim = w_dim - log_size = int(math.log(size, 2)) - convs = [ConvLayer(3, channels[size], 1)] - - in_channel = channels[size] - for i in range(log_size, 2, -1): - out_channel = channels[2 ** (i - 1)] - convs.append(ResBlock(in_channel, out_channel)) - in_channel = out_channel - - convs.append(EqualConv2d( - in_channel, 2*self.w_dim, 4, padding=0, bias=False)) - - self.convs = nn.Sequential(*convs) - - def forward(self, input): - out = self.convs(input) - # return out.view(len(input), self.n_latents, self.w_dim) - reshaped = out.view(len(input), 2*self.w_dim) - return reshaped[:, :self.w_dim], reshaped[:, self.w_dim:] - - -def kaiming_init(m): - if isinstance(m, (nn.Linear, nn.Conv2d)): - init.kaiming_normal_(m.weight) - if m.bias is not None: - m.bias.data.fill_(0) - elif isinstance(m, (nn.BatchNorm1d, nn.BatchNorm2d)): - m.weight.data.fill_(1) - if m.bias is not None: - m.bias.data.fill_(0) - - -def normal_init(m): - if isinstance(m, (nn.Linear, nn.Conv2d)): - init.normal_(m.weight, 0, 0.02) - if m.bias is not None: - m.bias.data.fill_(0) - elif isinstance(m, (nn.BatchNorm1d, nn.BatchNorm2d)): - m.weight.data.fill_(1) - if m.bias is not None: - m.bias.data.fill_(0) diff --git a/spaces/DragGan/DragGan-Inversion/stylegan_human/torch_utils/ops/filtered_lrelu.h b/spaces/DragGan/DragGan-Inversion/stylegan_human/torch_utils/ops/filtered_lrelu.h deleted file mode 100644 index 524c804122a2582e20e2e4e9c49267e1a1b6db60..0000000000000000000000000000000000000000 --- a/spaces/DragGan/DragGan-Inversion/stylegan_human/torch_utils/ops/filtered_lrelu.h +++ /dev/null @@ -1,90 +0,0 @@ -// Copyright (c) 2021, NVIDIA CORPORATION & AFFILIATES. All rights reserved. -// -// NVIDIA CORPORATION and its licensors retain all intellectual property -// and proprietary rights in and to this software, related documentation -// and any modifications thereto. Any use, reproduction, disclosure or -// distribution of this software and related documentation without an express -// license agreement from NVIDIA CORPORATION is strictly prohibited. - -#include - -//------------------------------------------------------------------------ -// CUDA kernel parameters. - -struct filtered_lrelu_kernel_params -{ - // These parameters decide which kernel to use. - int up; // upsampling ratio (1, 2, 4) - int down; // downsampling ratio (1, 2, 4) - int2 fuShape; // [size, 1] | [size, size] - int2 fdShape; // [size, 1] | [size, size] - - int _dummy; // Alignment. - - // Rest of the parameters. - const void* x; // Input tensor. - void* y; // Output tensor. - const void* b; // Bias tensor. - unsigned char* s; // Sign tensor in/out. NULL if unused. - const float* fu; // Upsampling filter. - const float* fd; // Downsampling filter. - - int2 pad0; // Left/top padding. - float gain; // Additional gain factor. - float slope; // Leaky ReLU slope on negative side. - float clamp; // Clamp after nonlinearity. - int flip; // Filter kernel flip for gradient computation. - - int tilesXdim; // Original number of horizontal output tiles. - int tilesXrep; // Number of horizontal tiles per CTA. - int blockZofs; // Block z offset to support large minibatch, channel dimensions. - - int4 xShape; // [width, height, channel, batch] - int4 yShape; // [width, height, channel, batch] - int2 sShape; // [width, height] - width is in bytes. Contiguous. Zeros if unused. - int2 sOfs; // [ofs_x, ofs_y] - offset between upsampled data and sign tensor. - int swLimit; // Active width of sign tensor in bytes. - - longlong4 xStride; // Strides of all tensors except signs, same component order as shapes. - longlong4 yStride; // - int64_t bStride; // - longlong3 fuStride; // - longlong3 fdStride; // -}; - -struct filtered_lrelu_act_kernel_params -{ - void* x; // Input/output, modified in-place. - unsigned char* s; // Sign tensor in/out. NULL if unused. - - float gain; // Additional gain factor. - float slope; // Leaky ReLU slope on negative side. - float clamp; // Clamp after nonlinearity. - - int4 xShape; // [width, height, channel, batch] - longlong4 xStride; // Input/output tensor strides, same order as in shape. - int2 sShape; // [width, height] - width is in elements. Contiguous. Zeros if unused. - int2 sOfs; // [ofs_x, ofs_y] - offset between upsampled data and sign tensor. -}; - -//------------------------------------------------------------------------ -// CUDA kernel specialization. - -struct filtered_lrelu_kernel_spec -{ - void* setup; // Function for filter kernel setup. - void* exec; // Function for main operation. - int2 tileOut; // Width/height of launch tile. - int numWarps; // Number of warps per thread block, determines launch block size. - int xrep; // For processing multiple horizontal tiles per thread block. - int dynamicSharedKB; // How much dynamic shared memory the exec kernel wants. -}; - -//------------------------------------------------------------------------ -// CUDA kernel selection. - -template filtered_lrelu_kernel_spec choose_filtered_lrelu_kernel(const filtered_lrelu_kernel_params& p, int sharedKB); -template void* choose_filtered_lrelu_act_kernel(void); -template cudaError_t copy_filters(cudaStream_t stream); - -//------------------------------------------------------------------------ \ No newline at end of file diff --git a/spaces/ECCV2022/bytetrack/deploy/ncnn/cpp/src/bytetrack.cpp b/spaces/ECCV2022/bytetrack/deploy/ncnn/cpp/src/bytetrack.cpp deleted file mode 100644 index a129f146dd8faa3570bb590555e98a23bd9e4d23..0000000000000000000000000000000000000000 --- a/spaces/ECCV2022/bytetrack/deploy/ncnn/cpp/src/bytetrack.cpp +++ /dev/null @@ -1,396 +0,0 @@ -#include "layer.h" -#include "net.h" - -#if defined(USE_NCNN_SIMPLEOCV) -#include "simpleocv.h" -#include -#else -#include -#include -#include -#include -#endif -#include -#include -#include -#include -#include "BYTETracker.h" - -#define YOLOX_NMS_THRESH 0.7 // nms threshold -#define YOLOX_CONF_THRESH 0.1 // threshold of bounding box prob -#define INPUT_W 1088 // target image size w after resize -#define INPUT_H 608 // target image size h after resize - -Mat static_resize(Mat& img) { - float r = min(INPUT_W / (img.cols*1.0), INPUT_H / (img.rows*1.0)); - // r = std::min(r, 1.0f); - int unpad_w = r * img.cols; - int unpad_h = r * img.rows; - Mat re(unpad_h, unpad_w, CV_8UC3); - resize(img, re, re.size()); - Mat out(INPUT_H, INPUT_W, CV_8UC3, Scalar(114, 114, 114)); - re.copyTo(out(Rect(0, 0, re.cols, re.rows))); - return out; -} - -// YOLOX use the same focus in yolov5 -class YoloV5Focus : public ncnn::Layer -{ -public: - YoloV5Focus() - { - one_blob_only = true; - } - - virtual int forward(const ncnn::Mat& bottom_blob, ncnn::Mat& top_blob, const ncnn::Option& opt) const - { - int w = bottom_blob.w; - int h = bottom_blob.h; - int channels = bottom_blob.c; - - int outw = w / 2; - int outh = h / 2; - int outc = channels * 4; - - top_blob.create(outw, outh, outc, 4u, 1, opt.blob_allocator); - if (top_blob.empty()) - return -100; - - #pragma omp parallel for num_threads(opt.num_threads) - for (int p = 0; p < outc; p++) - { - const float* ptr = bottom_blob.channel(p % channels).row((p / channels) % 2) + ((p / channels) / 2); - float* outptr = top_blob.channel(p); - - for (int i = 0; i < outh; i++) - { - for (int j = 0; j < outw; j++) - { - *outptr = *ptr; - - outptr += 1; - ptr += 2; - } - - ptr += w; - } - } - - return 0; - } -}; - -DEFINE_LAYER_CREATOR(YoloV5Focus) - -struct GridAndStride -{ - int grid0; - int grid1; - int stride; -}; - -static inline float intersection_area(const Object& a, const Object& b) -{ - cv::Rect_ inter = a.rect & b.rect; - return inter.area(); -} - -static void qsort_descent_inplace(std::vector& faceobjects, int left, int right) -{ - int i = left; - int j = right; - float p = faceobjects[(left + right) / 2].prob; - - while (i <= j) - { - while (faceobjects[i].prob > p) - i++; - - while (faceobjects[j].prob < p) - j--; - - if (i <= j) - { - // swap - std::swap(faceobjects[i], faceobjects[j]); - - i++; - j--; - } - } - - #pragma omp parallel sections - { - #pragma omp section - { - if (left < j) qsort_descent_inplace(faceobjects, left, j); - } - #pragma omp section - { - if (i < right) qsort_descent_inplace(faceobjects, i, right); - } - } -} - -static void qsort_descent_inplace(std::vector& objects) -{ - if (objects.empty()) - return; - - qsort_descent_inplace(objects, 0, objects.size() - 1); -} - -static void nms_sorted_bboxes(const std::vector& faceobjects, std::vector& picked, float nms_threshold) -{ - picked.clear(); - - const int n = faceobjects.size(); - - std::vector areas(n); - for (int i = 0; i < n; i++) - { - areas[i] = faceobjects[i].rect.area(); - } - - for (int i = 0; i < n; i++) - { - const Object& a = faceobjects[i]; - - int keep = 1; - for (int j = 0; j < (int)picked.size(); j++) - { - const Object& b = faceobjects[picked[j]]; - - // intersection over union - float inter_area = intersection_area(a, b); - float union_area = areas[i] + areas[picked[j]] - inter_area; - // float IoU = inter_area / union_area - if (inter_area / union_area > nms_threshold) - keep = 0; - } - - if (keep) - picked.push_back(i); - } -} - -static void generate_grids_and_stride(const int target_w, const int target_h, std::vector& strides, std::vector& grid_strides) -{ - for (int i = 0; i < (int)strides.size(); i++) - { - int stride = strides[i]; - int num_grid_w = target_w / stride; - int num_grid_h = target_h / stride; - for (int g1 = 0; g1 < num_grid_h; g1++) - { - for (int g0 = 0; g0 < num_grid_w; g0++) - { - GridAndStride gs; - gs.grid0 = g0; - gs.grid1 = g1; - gs.stride = stride; - grid_strides.push_back(gs); - } - } - } -} - -static void generate_yolox_proposals(std::vector grid_strides, const ncnn::Mat& feat_blob, float prob_threshold, std::vector& objects) -{ - const int num_grid = feat_blob.h; - const int num_class = feat_blob.w - 5; - const int num_anchors = grid_strides.size(); - - const float* feat_ptr = feat_blob.channel(0); - for (int anchor_idx = 0; anchor_idx < num_anchors; anchor_idx++) - { - const int grid0 = grid_strides[anchor_idx].grid0; - const int grid1 = grid_strides[anchor_idx].grid1; - const int stride = grid_strides[anchor_idx].stride; - - // yolox/models/yolo_head.py decode logic - // outputs[..., :2] = (outputs[..., :2] + grids) * strides - // outputs[..., 2:4] = torch.exp(outputs[..., 2:4]) * strides - float x_center = (feat_ptr[0] + grid0) * stride; - float y_center = (feat_ptr[1] + grid1) * stride; - float w = exp(feat_ptr[2]) * stride; - float h = exp(feat_ptr[3]) * stride; - float x0 = x_center - w * 0.5f; - float y0 = y_center - h * 0.5f; - - float box_objectness = feat_ptr[4]; - for (int class_idx = 0; class_idx < num_class; class_idx++) - { - float box_cls_score = feat_ptr[5 + class_idx]; - float box_prob = box_objectness * box_cls_score; - if (box_prob > prob_threshold) - { - Object obj; - obj.rect.x = x0; - obj.rect.y = y0; - obj.rect.width = w; - obj.rect.height = h; - obj.label = class_idx; - obj.prob = box_prob; - - objects.push_back(obj); - } - - } // class loop - feat_ptr += feat_blob.w; - - } // point anchor loop -} - -static int detect_yolox(ncnn::Mat& in_pad, std::vector& objects, ncnn::Extractor ex, float scale) -{ - - ex.input("images", in_pad); - - std::vector proposals; - - { - ncnn::Mat out; - ex.extract("output", out); - - static const int stride_arr[] = {8, 16, 32}; // might have stride=64 in YOLOX - std::vector strides(stride_arr, stride_arr + sizeof(stride_arr) / sizeof(stride_arr[0])); - std::vector grid_strides; - generate_grids_and_stride(INPUT_W, INPUT_H, strides, grid_strides); - generate_yolox_proposals(grid_strides, out, YOLOX_CONF_THRESH, proposals); - } - // sort all proposals by score from highest to lowest - qsort_descent_inplace(proposals); - - // apply nms with nms_threshold - std::vector picked; - nms_sorted_bboxes(proposals, picked, YOLOX_NMS_THRESH); - - int count = picked.size(); - - objects.resize(count); - for (int i = 0; i < count; i++) - { - objects[i] = proposals[picked[i]]; - - // adjust offset to original unpadded - float x0 = (objects[i].rect.x) / scale; - float y0 = (objects[i].rect.y) / scale; - float x1 = (objects[i].rect.x + objects[i].rect.width) / scale; - float y1 = (objects[i].rect.y + objects[i].rect.height) / scale; - - // clip - // x0 = std::max(std::min(x0, (float)(img_w - 1)), 0.f); - // y0 = std::max(std::min(y0, (float)(img_h - 1)), 0.f); - // x1 = std::max(std::min(x1, (float)(img_w - 1)), 0.f); - // y1 = std::max(std::min(y1, (float)(img_h - 1)), 0.f); - - objects[i].rect.x = x0; - objects[i].rect.y = y0; - objects[i].rect.width = x1 - x0; - objects[i].rect.height = y1 - y0; - } - - return 0; -} - -int main(int argc, char** argv) -{ - if (argc != 2) - { - fprintf(stderr, "Usage: %s [videopath]\n", argv[0]); - return -1; - } - - ncnn::Net yolox; - - //yolox.opt.use_vulkan_compute = true; - //yolox.opt.use_bf16_storage = true; - yolox.opt.num_threads = 20; - //ncnn::set_cpu_powersave(0); - - //ncnn::set_omp_dynamic(0); - //ncnn::set_omp_num_threads(20); - - // Focus in yolov5 - yolox.register_custom_layer("YoloV5Focus", YoloV5Focus_layer_creator); - - yolox.load_param("bytetrack_s_op.param"); - yolox.load_model("bytetrack_s_op.bin"); - - ncnn::Extractor ex = yolox.create_extractor(); - - const char* videopath = argv[1]; - - VideoCapture cap(videopath); - if (!cap.isOpened()) - return 0; - - int img_w = cap.get(CV_CAP_PROP_FRAME_WIDTH); - int img_h = cap.get(CV_CAP_PROP_FRAME_HEIGHT); - int fps = cap.get(CV_CAP_PROP_FPS); - long nFrame = static_cast(cap.get(CV_CAP_PROP_FRAME_COUNT)); - cout << "Total frames: " << nFrame << endl; - - VideoWriter writer("demo.mp4", CV_FOURCC('m', 'p', '4', 'v'), fps, Size(img_w, img_h)); - - Mat img; - BYTETracker tracker(fps, 30); - int num_frames = 0; - int total_ms = 1; - for (;;) - { - if(!cap.read(img)) - break; - num_frames ++; - if (num_frames % 20 == 0) - { - cout << "Processing frame " << num_frames << " (" << num_frames * 1000000 / total_ms << " fps)" << endl; - } - if (img.empty()) - break; - - float scale = min(INPUT_W / (img.cols*1.0), INPUT_H / (img.rows*1.0)); - Mat pr_img = static_resize(img); - ncnn::Mat in_pad = ncnn::Mat::from_pixels_resize(pr_img.data, ncnn::Mat::PIXEL_BGR2RGB, INPUT_W, INPUT_H, INPUT_W, INPUT_H); - - // python 0-1 input tensor with rgb_means = (0.485, 0.456, 0.406), std = (0.229, 0.224, 0.225) - // so for 0-255 input image, rgb_mean should multiply 255 and norm should div by std. - const float mean_vals[3] = {255.f * 0.485f, 255.f * 0.456, 255.f * 0.406f}; - const float norm_vals[3] = {1 / (255.f * 0.229f), 1 / (255.f * 0.224f), 1 / (255.f * 0.225f)}; - - in_pad.substract_mean_normalize(mean_vals, norm_vals); - - std::vector objects; - auto start = chrono::system_clock::now(); - //detect_yolox(img, objects); - detect_yolox(in_pad, objects, ex, scale); - vector output_stracks = tracker.update(objects); - auto end = chrono::system_clock::now(); - total_ms = total_ms + chrono::duration_cast(end - start).count(); - for (int i = 0; i < output_stracks.size(); i++) - { - vector tlwh = output_stracks[i].tlwh; - bool vertical = tlwh[2] / tlwh[3] > 1.6; - if (tlwh[2] * tlwh[3] > 20 && !vertical) - { - Scalar s = tracker.get_color(output_stracks[i].track_id); - putText(img, format("%d", output_stracks[i].track_id), Point(tlwh[0], tlwh[1] - 5), - 0, 0.6, Scalar(0, 0, 255), 2, LINE_AA); - rectangle(img, Rect(tlwh[0], tlwh[1], tlwh[2], tlwh[3]), s, 2); - } - } - putText(img, format("frame: %d fps: %d num: %d", num_frames, num_frames * 1000000 / total_ms, output_stracks.size()), - Point(0, 30), 0, 0.6, Scalar(0, 0, 255), 2, LINE_AA); - writer.write(img); - char c = waitKey(1); - if (c > 0) - { - break; - } - } - cap.release(); - cout << "FPS: " << num_frames * 1000000 / total_ms << endl; - - return 0; -} diff --git a/spaces/Ekimetrics/climate-question-answering/climateqa/prompts.py b/spaces/Ekimetrics/climate-question-answering/climateqa/prompts.py deleted file mode 100644 index 8169ea00c93378d43f3cbfaa9666b8537f0a76d1..0000000000000000000000000000000000000000 --- a/spaces/Ekimetrics/climate-question-answering/climateqa/prompts.py +++ /dev/null @@ -1,57 +0,0 @@ - -# If the message is not relevant to climate change (like "How are you", "I am 18 years old" or "When was built the eiffel tower"), return N/A - -reformulation_prompt = """ -Reformulate the following user message to be a short standalone question in English, in the context of an educational discussion about climate change. ---- -query: La technologie nous sauvera-t-elle ? -question: Can technology help humanity mitigate the effects of climate change? -language: French ---- -query: what are our reserves in fossil fuel? -question: What are the current reserves of fossil fuels and how long will they last? -language: English ---- -query: what are the main causes of climate change? -question: What are the main causes of climate change in the last century? -language: English ---- - -Output the result as json with two keys "question" and "language" -query: {query} -answer:""" - -system_prompt = """ -You are ClimateQ&A, an AI Assistant created by Ekimetrics, you will act as a climate scientist and answer questions about climate change and biodiversity. -You are given a question and extracted passages of the IPCC and/or IPBES reports. Provide a clear and structured answer based on the passages provided, the context and the guidelines. -""" - - -answer_prompt = """ -You are ClimateQ&A, an AI Assistant created by Ekimetrics. You are given a question and extracted passages of the IPCC and/or IPBES reports. Provide a clear and structured answer based on the passages provided, the context and the guidelines. - -Guidelines: -- If the passages have useful facts or numbers, use them in your answer. -- When you use information from a passage, mention where it came from by using [Doc i] at the end of the sentence. i stands for the number of the document. -- Do not use the sentence 'Doc i says ...' to say where information came from. -- If the same thing is said in more than one document, you can mention all of them like this: [Doc i, Doc j, Doc k] -- Do not just summarize each passage one by one. Group your summaries to highlight the key parts in the explanation. -- If it makes sense, use bullet points and lists to make your answers easier to understand. -- You do not need to use every passage. Only use the ones that help answer the question. -- If the documents do not have the information needed to answer the question, just say you do not have enough information. - ------------------------ -Passages: -{summaries} - ------------------------ -Question: {question} - Explained to {audience} -Answer in {language} with the passages citations: -""" - - -audience_prompts = { - "children": "6 year old children that don't know anything about science and climate change and need metaphors to learn", - "general": "the general public who know the basics in science and climate change and want to learn more about it without technical terms. Still use references to passages.", - "experts": "expert and climate scientists that are not afraid of technical terms", -} \ No newline at end of file diff --git a/spaces/EuroPython2022/mmocr-demo/configs/textrecog/seg/seg_r31_1by16_fpnocr_academic.py b/spaces/EuroPython2022/mmocr-demo/configs/textrecog/seg/seg_r31_1by16_fpnocr_academic.py deleted file mode 100644 index 4e37856c06fb43cb0b67a6a1760bd7ef9eeddb66..0000000000000000000000000000000000000000 --- a/spaces/EuroPython2022/mmocr-demo/configs/textrecog/seg/seg_r31_1by16_fpnocr_academic.py +++ /dev/null @@ -1,40 +0,0 @@ -_base_ = [ - '../../_base_/default_runtime.py', - '../../_base_/recog_pipelines/seg_pipeline.py', - '../../_base_/recog_models/seg.py', - '../../_base_/recog_datasets/ST_charbox_train.py', - '../../_base_/recog_datasets/academic_test.py' -] - -train_list = {{_base_.train_list}} -test_list = {{_base_.test_list}} - -train_pipeline = {{_base_.train_pipeline}} -test_pipeline = {{_base_.test_pipeline}} - -# optimizer -optimizer = dict(type='Adam', lr=1e-4) -optimizer_config = dict(grad_clip=None) -# learning policy -lr_config = dict(policy='step', step=[3, 4]) -total_epochs = 5 - -find_unused_parameters = True - -data = dict( - samples_per_gpu=16, - workers_per_gpu=2, - train=dict( - type='UniformConcatDataset', - datasets=train_list, - pipeline=train_pipeline), - val=dict( - type='UniformConcatDataset', - datasets=test_list, - pipeline=test_pipeline), - test=dict( - type='UniformConcatDataset', - datasets=test_list, - pipeline=test_pipeline)) - -evaluation = dict(interval=1, metric='acc') diff --git a/spaces/FrankZxShen/so-vits-svc-models-ba/vencoder/WhisperPPG.py b/spaces/FrankZxShen/so-vits-svc-models-ba/vencoder/WhisperPPG.py deleted file mode 100644 index aa988b0a6d05696ea519d1652e5801302ba8a6c6..0000000000000000000000000000000000000000 --- a/spaces/FrankZxShen/so-vits-svc-models-ba/vencoder/WhisperPPG.py +++ /dev/null @@ -1,30 +0,0 @@ -from vencoder.encoder import SpeechEncoder -import torch - -from vencoder.whisper.model import Whisper, ModelDimensions -from vencoder.whisper.audio import pad_or_trim, log_mel_spectrogram - - -class WhisperPPG(SpeechEncoder): - def __init__(self,vec_path = "pretrain/medium.pt",device=None): - if device is None: - self.dev = torch.device("cuda" if torch.cuda.is_available() else "cpu") - else: - self.dev = torch.device(device) - checkpoint = torch.load(vec_path, map_location=device) - dims = ModelDimensions(**checkpoint["dims"]) - model = Whisper(dims) - model.load_state_dict(checkpoint["model_state_dict"]) - self.hidden_dim = dims - self.model = model.to(self.dev) - - def encoder(self, wav): - audio = wav - audln = audio.shape[0] - ppgln = audln // 320 - audio = pad_or_trim(audio) - mel = log_mel_spectrogram(audio).to(self.dev) - with torch.no_grad(): - ppg = self.model.encoder(mel.unsqueeze(0)).squeeze().data.cpu().float().numpy() - ppg = torch.FloatTensor(ppg[:ppgln,]).to(self.dev) - return ppg[None,:,:].transpose(1, 2) diff --git a/spaces/GEM/DatasetCardForm/formatting/construct_md.py b/spaces/GEM/DatasetCardForm/formatting/construct_md.py deleted file mode 100644 index 2905237889af475b1285668abc8ae2e638f2573c..0000000000000000000000000000000000000000 --- a/spaces/GEM/DatasetCardForm/formatting/construct_md.py +++ /dev/null @@ -1,75 +0,0 @@ -from argparse import ArgumentParser -from json import load - -def parse_args(): - parser = ArgumentParser() - parser.add_argument('input', type=str, nargs='+', \ - help='Specify paths to files (e.g., path/to/*.json)') - - return parser.parse_args() - - -def json_to_markdown(filename): - json = load(open(filename)) - - markdown = f'# Dataset Card for {json["name"]}\n\n' - - markdown += f'You can find the ' - - markdown += json['summary'] + '\n\n' - - for key in json: - if key not in ('name', 'summary', 'sections'): - markdown += f'#### {key}\n{json[key]}\n\n' - - markdown += '\n'.join(section_to_markdown(section) \ - for section in json['sections']) - - with open(f'{filename[:-5]}.md', 'w') as f: - f.write(markdown) - - -def section_to_markdown(section): - markdown = f'{"#" * section["level"]} {section["title"]}\n\n' - markdown += '\n'.join(subsection_to_markdown(subsection) \ - for subsection in section['subsections']) - - return markdown + '\n' - - -def subsection_to_markdown(subsection): - markdown = f'{"#" * subsection["level"]} {subsection["title"]}\n\n' - markdown += '\n'.join(field_to_markdown(field) \ - for field in subsection['fields']) - - return markdown + '\n' - - -def field_to_markdown(field): - markdown = f'{"#" * field["level"]} {field["title"]}\n\n' - - if 'flags' in field and 'quick' in field['flags']: - markdown += f'\n' - - if field.get('info', False): - markdown += f'\n' - - if field.get('scope', False): - markdown += f'\n' - - markdown += field.get('content', '') - - return markdown + '\n' - - -def main(): - """Converts JSON output from `reformat_json.py` - to Markdown input for Data Cards Labs.""" - args = parse_args() - for filename in args.input: - if filename[-5:] == '.json': - json_to_markdown(filename) - - -if __name__ == '__main__': - main() \ No newline at end of file diff --git a/spaces/GMFTBY/PandaGPT/train_sft.py b/spaces/GMFTBY/PandaGPT/train_sft.py deleted file mode 100644 index 541706bb9736eeb77bfa3f3c74124670940bf66f..0000000000000000000000000000000000000000 --- a/spaces/GMFTBY/PandaGPT/train_sft.py +++ /dev/null @@ -1,97 +0,0 @@ -from header import * -from datasets import * -from model import * -from config import * - -def parser_args(): - parser = argparse.ArgumentParser(description='train parameters') - parser.add_argument('--model', type=str) - parser.add_argument('--data_path', type=str) - parser.add_argument('--local_rank', default=0, type=int) - parser.add_argument('--save_path', type=str) - parser.add_argument('--log_path', type=str) - # model configurations - parser.add_argument('--image_root_path', type=str) # the directory that stores all images - parser.add_argument('--imagebind_ckpt_path', type=str) # the path that stores the imagebind checkpoint - parser.add_argument('--vicuna_ckpt_path', type=str) # the path that stores the vicuna checkpoint - parser.add_argument('--delta_ckpt_path', type=str) # the delta parameters trained in stage 1 - parser.add_argument('--max_tgt_len', type=int) # the maximum sequence length - parser.add_argument('--stage', type=int) # the maximum sequence length - return parser.parse_args() - -def initialize_distributed(args): - args['master_ip'] = os.getenv('MASTER_ADDR', 'localhost') - args['master_port'] = os.getenv('MASTER_PORT', '6000') - args['world_size'] = int(os.getenv('WORLD_SIZE', '1')) - args['local_rank'] = int(os.getenv('RANK', '0')) % torch.cuda.device_count() - device = args['local_rank'] % torch.cuda.device_count() - torch.cuda.set_device(device) - deepspeed.init_distributed(dist_backend='nccl') - -def set_random_seed(seed): - if seed is not None and seed > 0: - random.seed(seed) - np.random.seed(seed) - torch.manual_seed(seed) - torch.random.manual_seed(seed) - torch.cuda.manual_seed(seed) - torch.cuda.manual_seed_all(seed) - -def config_env(args): - args['root_dir'] = '../' - args['mode'] = 'train' - config = load_config(args) - args.update(config) - initialize_distributed(args) - set_random_seed(args['seed']) - -def build_directory(path): - if os.path.exists(path): - pass - else: # recursively construct directory - os.makedirs(path, exist_ok=True) - -def main(**args): - config_env(args) - args['ds_config_path'] = f'dsconfig/{args["model"]}_stage_{args["stage"]}.json' - dschf = HfDeepSpeedConfig(args['ds_config_path']) - args['dschf'] = dschf - - build_directory(args['save_path']) - build_directory(args['log_path']) - - if args['log_path']: - logging.basicConfig( - format='%(asctime)s - %(pathname)s[line:%(lineno)d] - %(levelname)s: %(message)s', - level=logging.DEBUG, - filename=f'{args["log_path"]}/train_{time.asctime()}.log', - filemode='w' - ) - - train_data, train_iter, sampler = load_sft_dataset(args) - - length = args['epochs'] * len(train_data) // args['world_size'] // dschf.config['train_micro_batch_size_per_gpu'] - total_steps = args['epochs'] * len(train_data) // dschf.config['train_batch_size'] - args['total_steps'] = total_steps - agent = load_model(args) - torch.distributed.barrier() - - # begin to train - pbar = tqdm(total=length) # maximum total number - current_step = 0 - for epoch_i in tqdm(range(args['epochs'])): - for batch in train_iter: - agent.train_model( - batch, - current_step=current_step, - pbar=pbar - ) - current_step += 1 - # save at the end of the training - torch.distributed.barrier() - agent.save_model(args['save_path'], 0) - -if __name__ == "__main__": - args = parser_args() - args = vars(args) - main(**args) diff --git a/spaces/GolDNenex/Super-Resolution-Anime-Diffusion/Waifu2x/model_check_points/ReadME.md b/spaces/GolDNenex/Super-Resolution-Anime-Diffusion/Waifu2x/model_check_points/ReadME.md deleted file mode 100644 index e432a1c7bd5de45f086ba1fd6b06ca712a1d806b..0000000000000000000000000000000000000000 --- a/spaces/GolDNenex/Super-Resolution-Anime-Diffusion/Waifu2x/model_check_points/ReadME.md +++ /dev/null @@ -1,34 +0,0 @@ -# Resume & Use Model Check Points - -This folder contains check points for models and their weights. They are generated from [PyTorch's pickle](https://pytorch.org/docs/master/notes/serialization.html). - -Model specifications are in each folder's ReadME. - -Pickle names with "model" contain the entire models, and they can be used as an freeze module by calling the "forward_checkpoint" function to generate images. - -Example: -```python -import torch -# No need to reconstruct the model -model = torch.load("./DCSCN/DCSCN_model_387epos_L12_noise_1.pt") -x = torch.randn((1,3,10,10)), torch.randn((1,3,20,20)) -out = model.forward_checkpoint(a) -``` - -Pickle names with "weights" are model weights, and they are named dictionaries. - -Example: -```python -model = DCSCN(*) # the setting must be the same to load check points weights. -model.load_state_dict(torch.load("./DCSCN/DCSCN_weights_387epos_L12_noise_1.pt")) -# then you can resume the model training -``` - -Model check poins in Upconv_7 and vgg_7 are from [waifu2x's repo](https://github.com/nagadomi/waifu2x/tree/master/models). To load weights into a model, please use ```load_pre_train_weights``` function. - -Example: -```python -model = UpConv_7() -model.load_pre_train_weights(json_file=...) -# then the model is ready to use -``` diff --git a/spaces/Gradio-Blocks/uniformer_image_detection/configs/pisa/pisa_ssd300_coco.py b/spaces/Gradio-Blocks/uniformer_image_detection/configs/pisa/pisa_ssd300_coco.py deleted file mode 100644 index b5cc006477eacaa9ab40d463312dc2156a59d634..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/uniformer_image_detection/configs/pisa/pisa_ssd300_coco.py +++ /dev/null @@ -1,8 +0,0 @@ -_base_ = '../ssd/ssd300_coco.py' - -model = dict( - bbox_head=dict(type='PISASSDHead'), - train_cfg=dict(isr=dict(k=2., bias=0.), carl=dict(k=1., bias=0.2))) - -optimizer_config = dict( - _delete_=True, grad_clip=dict(max_norm=35, norm_type=2)) diff --git a/spaces/Gradio-Blocks/uniformer_image_detection/mmdet/models/dense_heads/ga_retina_head.py b/spaces/Gradio-Blocks/uniformer_image_detection/mmdet/models/dense_heads/ga_retina_head.py deleted file mode 100644 index 8822d1ca78ee2fa2f304a0649e81274830383533..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/uniformer_image_detection/mmdet/models/dense_heads/ga_retina_head.py +++ /dev/null @@ -1,109 +0,0 @@ -import torch.nn as nn -from mmcv.cnn import ConvModule, bias_init_with_prob, normal_init -from mmcv.ops import MaskedConv2d - -from ..builder import HEADS -from .guided_anchor_head import FeatureAdaption, GuidedAnchorHead - - -@HEADS.register_module() -class GARetinaHead(GuidedAnchorHead): - """Guided-Anchor-based RetinaNet head.""" - - def __init__(self, - num_classes, - in_channels, - stacked_convs=4, - conv_cfg=None, - norm_cfg=None, - **kwargs): - self.stacked_convs = stacked_convs - self.conv_cfg = conv_cfg - self.norm_cfg = norm_cfg - super(GARetinaHead, self).__init__(num_classes, in_channels, **kwargs) - - def _init_layers(self): - """Initialize layers of the head.""" - self.relu = nn.ReLU(inplace=True) - self.cls_convs = nn.ModuleList() - self.reg_convs = nn.ModuleList() - for i in range(self.stacked_convs): - chn = self.in_channels if i == 0 else self.feat_channels - self.cls_convs.append( - ConvModule( - chn, - self.feat_channels, - 3, - stride=1, - padding=1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg)) - self.reg_convs.append( - ConvModule( - chn, - self.feat_channels, - 3, - stride=1, - padding=1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg)) - - self.conv_loc = nn.Conv2d(self.feat_channels, 1, 1) - self.conv_shape = nn.Conv2d(self.feat_channels, self.num_anchors * 2, - 1) - self.feature_adaption_cls = FeatureAdaption( - self.feat_channels, - self.feat_channels, - kernel_size=3, - deform_groups=self.deform_groups) - self.feature_adaption_reg = FeatureAdaption( - self.feat_channels, - self.feat_channels, - kernel_size=3, - deform_groups=self.deform_groups) - self.retina_cls = MaskedConv2d( - self.feat_channels, - self.num_anchors * self.cls_out_channels, - 3, - padding=1) - self.retina_reg = MaskedConv2d( - self.feat_channels, self.num_anchors * 4, 3, padding=1) - - def init_weights(self): - """Initialize weights of the layer.""" - for m in self.cls_convs: - normal_init(m.conv, std=0.01) - for m in self.reg_convs: - normal_init(m.conv, std=0.01) - - self.feature_adaption_cls.init_weights() - self.feature_adaption_reg.init_weights() - - bias_cls = bias_init_with_prob(0.01) - normal_init(self.conv_loc, std=0.01, bias=bias_cls) - normal_init(self.conv_shape, std=0.01) - normal_init(self.retina_cls, std=0.01, bias=bias_cls) - normal_init(self.retina_reg, std=0.01) - - def forward_single(self, x): - """Forward feature map of a single scale level.""" - cls_feat = x - reg_feat = x - for cls_conv in self.cls_convs: - cls_feat = cls_conv(cls_feat) - for reg_conv in self.reg_convs: - reg_feat = reg_conv(reg_feat) - - loc_pred = self.conv_loc(cls_feat) - shape_pred = self.conv_shape(reg_feat) - - cls_feat = self.feature_adaption_cls(cls_feat, shape_pred) - reg_feat = self.feature_adaption_reg(reg_feat, shape_pred) - - if not self.training: - mask = loc_pred.sigmoid()[0] >= self.loc_filter_thr - else: - mask = None - cls_score = self.retina_cls(cls_feat, mask) - bbox_pred = self.retina_reg(reg_feat, mask) - return cls_score, bbox_pred, shape_pred, loc_pred diff --git a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/pspnet/pspnet_r101-d8_769x769_40k_cityscapes.py b/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/pspnet/pspnet_r101-d8_769x769_40k_cityscapes.py deleted file mode 100644 index c6e7e58508f31627766b8ab748bd81cd51c77eca..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/pspnet/pspnet_r101-d8_769x769_40k_cityscapes.py +++ /dev/null @@ -1,2 +0,0 @@ -_base_ = './pspnet_r50-d8_769x769_40k_cityscapes.py' -model = dict(pretrained='open-mmlab://resnet101_v1c', backbone=dict(depth=101)) diff --git a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/pspnet/pspnet_r50-d8_480x480_40k_pascal_context_59.py b/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/pspnet/pspnet_r50-d8_480x480_40k_pascal_context_59.py deleted file mode 100644 index 88041c6817d2cb152a979b71a2ce56a9e30b87b5..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/pspnet/pspnet_r50-d8_480x480_40k_pascal_context_59.py +++ /dev/null @@ -1,10 +0,0 @@ -_base_ = [ - '../_base_/models/pspnet_r50-d8.py', - '../_base_/datasets/pascal_context_59.py', '../_base_/default_runtime.py', - '../_base_/schedules/schedule_40k.py' -] -model = dict( - decode_head=dict(num_classes=59), - auxiliary_head=dict(num_classes=59), - test_cfg=dict(mode='slide', crop_size=(480, 480), stride=(320, 320))) -optimizer = dict(type='SGD', lr=0.004, momentum=0.9, weight_decay=0.0001) diff --git a/spaces/Gradio-Blocks/uniformer_image_segmentation/mmseg/datasets/pipelines/transforms.py b/spaces/Gradio-Blocks/uniformer_image_segmentation/mmseg/datasets/pipelines/transforms.py deleted file mode 100644 index 20753bb0fa80a332403fd8981c92da73ef345e8f..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/uniformer_image_segmentation/mmseg/datasets/pipelines/transforms.py +++ /dev/null @@ -1,889 +0,0 @@ -import mmcv -import numpy as np -from mmcv.utils import deprecated_api_warning, is_tuple_of -from numpy import random - -from ..builder import PIPELINES - - -@PIPELINES.register_module() -class Resize(object): - """Resize images & seg. - - This transform resizes the input image to some scale. If the input dict - contains the key "scale", then the scale in the input dict is used, - otherwise the specified scale in the init method is used. - - ``img_scale`` can be None, a tuple (single-scale) or a list of tuple - (multi-scale). There are 4 multiscale modes: - - - ``ratio_range is not None``: - 1. When img_scale is None, img_scale is the shape of image in results - (img_scale = results['img'].shape[:2]) and the image is resized based - on the original size. (mode 1) - 2. When img_scale is a tuple (single-scale), randomly sample a ratio from - the ratio range and multiply it with the image scale. (mode 2) - - - ``ratio_range is None and multiscale_mode == "range"``: randomly sample a - scale from the a range. (mode 3) - - - ``ratio_range is None and multiscale_mode == "value"``: randomly sample a - scale from multiple scales. (mode 4) - - Args: - img_scale (tuple or list[tuple]): Images scales for resizing. - multiscale_mode (str): Either "range" or "value". - ratio_range (tuple[float]): (min_ratio, max_ratio) - keep_ratio (bool): Whether to keep the aspect ratio when resizing the - image. - """ - - def __init__(self, - img_scale=None, - multiscale_mode='range', - ratio_range=None, - keep_ratio=True): - if img_scale is None: - self.img_scale = None - else: - if isinstance(img_scale, list): - self.img_scale = img_scale - else: - self.img_scale = [img_scale] - assert mmcv.is_list_of(self.img_scale, tuple) - - if ratio_range is not None: - # mode 1: given img_scale=None and a range of image ratio - # mode 2: given a scale and a range of image ratio - assert self.img_scale is None or len(self.img_scale) == 1 - else: - # mode 3 and 4: given multiple scales or a range of scales - assert multiscale_mode in ['value', 'range'] - - self.multiscale_mode = multiscale_mode - self.ratio_range = ratio_range - self.keep_ratio = keep_ratio - - @staticmethod - def random_select(img_scales): - """Randomly select an img_scale from given candidates. - - Args: - img_scales (list[tuple]): Images scales for selection. - - Returns: - (tuple, int): Returns a tuple ``(img_scale, scale_dix)``, - where ``img_scale`` is the selected image scale and - ``scale_idx`` is the selected index in the given candidates. - """ - - assert mmcv.is_list_of(img_scales, tuple) - scale_idx = np.random.randint(len(img_scales)) - img_scale = img_scales[scale_idx] - return img_scale, scale_idx - - @staticmethod - def random_sample(img_scales): - """Randomly sample an img_scale when ``multiscale_mode=='range'``. - - Args: - img_scales (list[tuple]): Images scale range for sampling. - There must be two tuples in img_scales, which specify the lower - and upper bound of image scales. - - Returns: - (tuple, None): Returns a tuple ``(img_scale, None)``, where - ``img_scale`` is sampled scale and None is just a placeholder - to be consistent with :func:`random_select`. - """ - - assert mmcv.is_list_of(img_scales, tuple) and len(img_scales) == 2 - img_scale_long = [max(s) for s in img_scales] - img_scale_short = [min(s) for s in img_scales] - long_edge = np.random.randint( - min(img_scale_long), - max(img_scale_long) + 1) - short_edge = np.random.randint( - min(img_scale_short), - max(img_scale_short) + 1) - img_scale = (long_edge, short_edge) - return img_scale, None - - @staticmethod - def random_sample_ratio(img_scale, ratio_range): - """Randomly sample an img_scale when ``ratio_range`` is specified. - - A ratio will be randomly sampled from the range specified by - ``ratio_range``. Then it would be multiplied with ``img_scale`` to - generate sampled scale. - - Args: - img_scale (tuple): Images scale base to multiply with ratio. - ratio_range (tuple[float]): The minimum and maximum ratio to scale - the ``img_scale``. - - Returns: - (tuple, None): Returns a tuple ``(scale, None)``, where - ``scale`` is sampled ratio multiplied with ``img_scale`` and - None is just a placeholder to be consistent with - :func:`random_select`. - """ - - assert isinstance(img_scale, tuple) and len(img_scale) == 2 - min_ratio, max_ratio = ratio_range - assert min_ratio <= max_ratio - ratio = np.random.random_sample() * (max_ratio - min_ratio) + min_ratio - scale = int(img_scale[0] * ratio), int(img_scale[1] * ratio) - return scale, None - - def _random_scale(self, results): - """Randomly sample an img_scale according to ``ratio_range`` and - ``multiscale_mode``. - - If ``ratio_range`` is specified, a ratio will be sampled and be - multiplied with ``img_scale``. - If multiple scales are specified by ``img_scale``, a scale will be - sampled according to ``multiscale_mode``. - Otherwise, single scale will be used. - - Args: - results (dict): Result dict from :obj:`dataset`. - - Returns: - dict: Two new keys 'scale` and 'scale_idx` are added into - ``results``, which would be used by subsequent pipelines. - """ - - if self.ratio_range is not None: - if self.img_scale is None: - h, w = results['img'].shape[:2] - scale, scale_idx = self.random_sample_ratio((w, h), - self.ratio_range) - else: - scale, scale_idx = self.random_sample_ratio( - self.img_scale[0], self.ratio_range) - elif len(self.img_scale) == 1: - scale, scale_idx = self.img_scale[0], 0 - elif self.multiscale_mode == 'range': - scale, scale_idx = self.random_sample(self.img_scale) - elif self.multiscale_mode == 'value': - scale, scale_idx = self.random_select(self.img_scale) - else: - raise NotImplementedError - - results['scale'] = scale - results['scale_idx'] = scale_idx - - def _resize_img(self, results): - """Resize images with ``results['scale']``.""" - if self.keep_ratio: - img, scale_factor = mmcv.imrescale( - results['img'], results['scale'], return_scale=True) - # the w_scale and h_scale has minor difference - # a real fix should be done in the mmcv.imrescale in the future - new_h, new_w = img.shape[:2] - h, w = results['img'].shape[:2] - w_scale = new_w / w - h_scale = new_h / h - else: - img, w_scale, h_scale = mmcv.imresize( - results['img'], results['scale'], return_scale=True) - scale_factor = np.array([w_scale, h_scale, w_scale, h_scale], - dtype=np.float32) - results['img'] = img - results['img_shape'] = img.shape - results['pad_shape'] = img.shape # in case that there is no padding - results['scale_factor'] = scale_factor - results['keep_ratio'] = self.keep_ratio - - def _resize_seg(self, results): - """Resize semantic segmentation map with ``results['scale']``.""" - for key in results.get('seg_fields', []): - if self.keep_ratio: - gt_seg = mmcv.imrescale( - results[key], results['scale'], interpolation='nearest') - else: - gt_seg = mmcv.imresize( - results[key], results['scale'], interpolation='nearest') - results[key] = gt_seg - - def __call__(self, results): - """Call function to resize images, bounding boxes, masks, semantic - segmentation map. - - Args: - results (dict): Result dict from loading pipeline. - - Returns: - dict: Resized results, 'img_shape', 'pad_shape', 'scale_factor', - 'keep_ratio' keys are added into result dict. - """ - - if 'scale' not in results: - self._random_scale(results) - self._resize_img(results) - self._resize_seg(results) - return results - - def __repr__(self): - repr_str = self.__class__.__name__ - repr_str += (f'(img_scale={self.img_scale}, ' - f'multiscale_mode={self.multiscale_mode}, ' - f'ratio_range={self.ratio_range}, ' - f'keep_ratio={self.keep_ratio})') - return repr_str - - -@PIPELINES.register_module() -class RandomFlip(object): - """Flip the image & seg. - - If the input dict contains the key "flip", then the flag will be used, - otherwise it will be randomly decided by a ratio specified in the init - method. - - Args: - prob (float, optional): The flipping probability. Default: None. - direction(str, optional): The flipping direction. Options are - 'horizontal' and 'vertical'. Default: 'horizontal'. - """ - - @deprecated_api_warning({'flip_ratio': 'prob'}, cls_name='RandomFlip') - def __init__(self, prob=None, direction='horizontal'): - self.prob = prob - self.direction = direction - if prob is not None: - assert prob >= 0 and prob <= 1 - assert direction in ['horizontal', 'vertical'] - - def __call__(self, results): - """Call function to flip bounding boxes, masks, semantic segmentation - maps. - - Args: - results (dict): Result dict from loading pipeline. - - Returns: - dict: Flipped results, 'flip', 'flip_direction' keys are added into - result dict. - """ - - if 'flip' not in results: - flip = True if np.random.rand() < self.prob else False - results['flip'] = flip - if 'flip_direction' not in results: - results['flip_direction'] = self.direction - if results['flip']: - # flip image - results['img'] = mmcv.imflip( - results['img'], direction=results['flip_direction']) - - # flip segs - for key in results.get('seg_fields', []): - # use copy() to make numpy stride positive - results[key] = mmcv.imflip( - results[key], direction=results['flip_direction']).copy() - return results - - def __repr__(self): - return self.__class__.__name__ + f'(prob={self.prob})' - - -@PIPELINES.register_module() -class Pad(object): - """Pad the image & mask. - - There are two padding modes: (1) pad to a fixed size and (2) pad to the - minimum size that is divisible by some number. - Added keys are "pad_shape", "pad_fixed_size", "pad_size_divisor", - - Args: - size (tuple, optional): Fixed padding size. - size_divisor (int, optional): The divisor of padded size. - pad_val (float, optional): Padding value. Default: 0. - seg_pad_val (float, optional): Padding value of segmentation map. - Default: 255. - """ - - def __init__(self, - size=None, - size_divisor=None, - pad_val=0, - seg_pad_val=255): - self.size = size - self.size_divisor = size_divisor - self.pad_val = pad_val - self.seg_pad_val = seg_pad_val - # only one of size and size_divisor should be valid - assert size is not None or size_divisor is not None - assert size is None or size_divisor is None - - def _pad_img(self, results): - """Pad images according to ``self.size``.""" - if self.size is not None: - padded_img = mmcv.impad( - results['img'], shape=self.size, pad_val=self.pad_val) - elif self.size_divisor is not None: - padded_img = mmcv.impad_to_multiple( - results['img'], self.size_divisor, pad_val=self.pad_val) - results['img'] = padded_img - results['pad_shape'] = padded_img.shape - results['pad_fixed_size'] = self.size - results['pad_size_divisor'] = self.size_divisor - - def _pad_seg(self, results): - """Pad masks according to ``results['pad_shape']``.""" - for key in results.get('seg_fields', []): - results[key] = mmcv.impad( - results[key], - shape=results['pad_shape'][:2], - pad_val=self.seg_pad_val) - - def __call__(self, results): - """Call function to pad images, masks, semantic segmentation maps. - - Args: - results (dict): Result dict from loading pipeline. - - Returns: - dict: Updated result dict. - """ - - self._pad_img(results) - self._pad_seg(results) - return results - - def __repr__(self): - repr_str = self.__class__.__name__ - repr_str += f'(size={self.size}, size_divisor={self.size_divisor}, ' \ - f'pad_val={self.pad_val})' - return repr_str - - -@PIPELINES.register_module() -class Normalize(object): - """Normalize the image. - - Added key is "img_norm_cfg". - - Args: - mean (sequence): Mean values of 3 channels. - std (sequence): Std values of 3 channels. - to_rgb (bool): Whether to convert the image from BGR to RGB, - default is true. - """ - - def __init__(self, mean, std, to_rgb=True): - self.mean = np.array(mean, dtype=np.float32) - self.std = np.array(std, dtype=np.float32) - self.to_rgb = to_rgb - - def __call__(self, results): - """Call function to normalize images. - - Args: - results (dict): Result dict from loading pipeline. - - Returns: - dict: Normalized results, 'img_norm_cfg' key is added into - result dict. - """ - - results['img'] = mmcv.imnormalize(results['img'], self.mean, self.std, - self.to_rgb) - results['img_norm_cfg'] = dict( - mean=self.mean, std=self.std, to_rgb=self.to_rgb) - return results - - def __repr__(self): - repr_str = self.__class__.__name__ - repr_str += f'(mean={self.mean}, std={self.std}, to_rgb=' \ - f'{self.to_rgb})' - return repr_str - - -@PIPELINES.register_module() -class Rerange(object): - """Rerange the image pixel value. - - Args: - min_value (float or int): Minimum value of the reranged image. - Default: 0. - max_value (float or int): Maximum value of the reranged image. - Default: 255. - """ - - def __init__(self, min_value=0, max_value=255): - assert isinstance(min_value, float) or isinstance(min_value, int) - assert isinstance(max_value, float) or isinstance(max_value, int) - assert min_value < max_value - self.min_value = min_value - self.max_value = max_value - - def __call__(self, results): - """Call function to rerange images. - - Args: - results (dict): Result dict from loading pipeline. - Returns: - dict: Reranged results. - """ - - img = results['img'] - img_min_value = np.min(img) - img_max_value = np.max(img) - - assert img_min_value < img_max_value - # rerange to [0, 1] - img = (img - img_min_value) / (img_max_value - img_min_value) - # rerange to [min_value, max_value] - img = img * (self.max_value - self.min_value) + self.min_value - results['img'] = img - - return results - - def __repr__(self): - repr_str = self.__class__.__name__ - repr_str += f'(min_value={self.min_value}, max_value={self.max_value})' - return repr_str - - -@PIPELINES.register_module() -class CLAHE(object): - """Use CLAHE method to process the image. - - See `ZUIDERVELD,K. Contrast Limited Adaptive Histogram Equalization[J]. - Graphics Gems, 1994:474-485.` for more information. - - Args: - clip_limit (float): Threshold for contrast limiting. Default: 40.0. - tile_grid_size (tuple[int]): Size of grid for histogram equalization. - Input image will be divided into equally sized rectangular tiles. - It defines the number of tiles in row and column. Default: (8, 8). - """ - - def __init__(self, clip_limit=40.0, tile_grid_size=(8, 8)): - assert isinstance(clip_limit, (float, int)) - self.clip_limit = clip_limit - assert is_tuple_of(tile_grid_size, int) - assert len(tile_grid_size) == 2 - self.tile_grid_size = tile_grid_size - - def __call__(self, results): - """Call function to Use CLAHE method process images. - - Args: - results (dict): Result dict from loading pipeline. - - Returns: - dict: Processed results. - """ - - for i in range(results['img'].shape[2]): - results['img'][:, :, i] = mmcv.clahe( - np.array(results['img'][:, :, i], dtype=np.uint8), - self.clip_limit, self.tile_grid_size) - - return results - - def __repr__(self): - repr_str = self.__class__.__name__ - repr_str += f'(clip_limit={self.clip_limit}, '\ - f'tile_grid_size={self.tile_grid_size})' - return repr_str - - -@PIPELINES.register_module() -class RandomCrop(object): - """Random crop the image & seg. - - Args: - crop_size (tuple): Expected size after cropping, (h, w). - cat_max_ratio (float): The maximum ratio that single category could - occupy. - """ - - def __init__(self, crop_size, cat_max_ratio=1., ignore_index=255): - assert crop_size[0] > 0 and crop_size[1] > 0 - self.crop_size = crop_size - self.cat_max_ratio = cat_max_ratio - self.ignore_index = ignore_index - - def get_crop_bbox(self, img): - """Randomly get a crop bounding box.""" - margin_h = max(img.shape[0] - self.crop_size[0], 0) - margin_w = max(img.shape[1] - self.crop_size[1], 0) - offset_h = np.random.randint(0, margin_h + 1) - offset_w = np.random.randint(0, margin_w + 1) - crop_y1, crop_y2 = offset_h, offset_h + self.crop_size[0] - crop_x1, crop_x2 = offset_w, offset_w + self.crop_size[1] - - return crop_y1, crop_y2, crop_x1, crop_x2 - - def crop(self, img, crop_bbox): - """Crop from ``img``""" - crop_y1, crop_y2, crop_x1, crop_x2 = crop_bbox - img = img[crop_y1:crop_y2, crop_x1:crop_x2, ...] - return img - - def __call__(self, results): - """Call function to randomly crop images, semantic segmentation maps. - - Args: - results (dict): Result dict from loading pipeline. - - Returns: - dict: Randomly cropped results, 'img_shape' key in result dict is - updated according to crop size. - """ - - img = results['img'] - crop_bbox = self.get_crop_bbox(img) - if self.cat_max_ratio < 1.: - # Repeat 10 times - for _ in range(10): - seg_temp = self.crop(results['gt_semantic_seg'], crop_bbox) - labels, cnt = np.unique(seg_temp, return_counts=True) - cnt = cnt[labels != self.ignore_index] - if len(cnt) > 1 and np.max(cnt) / np.sum( - cnt) < self.cat_max_ratio: - break - crop_bbox = self.get_crop_bbox(img) - - # crop the image - img = self.crop(img, crop_bbox) - img_shape = img.shape - results['img'] = img - results['img_shape'] = img_shape - - # crop semantic seg - for key in results.get('seg_fields', []): - results[key] = self.crop(results[key], crop_bbox) - - return results - - def __repr__(self): - return self.__class__.__name__ + f'(crop_size={self.crop_size})' - - -@PIPELINES.register_module() -class RandomRotate(object): - """Rotate the image & seg. - - Args: - prob (float): The rotation probability. - degree (float, tuple[float]): Range of degrees to select from. If - degree is a number instead of tuple like (min, max), - the range of degree will be (``-degree``, ``+degree``) - pad_val (float, optional): Padding value of image. Default: 0. - seg_pad_val (float, optional): Padding value of segmentation map. - Default: 255. - center (tuple[float], optional): Center point (w, h) of the rotation in - the source image. If not specified, the center of the image will be - used. Default: None. - auto_bound (bool): Whether to adjust the image size to cover the whole - rotated image. Default: False - """ - - def __init__(self, - prob, - degree, - pad_val=0, - seg_pad_val=255, - center=None, - auto_bound=False): - self.prob = prob - assert prob >= 0 and prob <= 1 - if isinstance(degree, (float, int)): - assert degree > 0, f'degree {degree} should be positive' - self.degree = (-degree, degree) - else: - self.degree = degree - assert len(self.degree) == 2, f'degree {self.degree} should be a ' \ - f'tuple of (min, max)' - self.pal_val = pad_val - self.seg_pad_val = seg_pad_val - self.center = center - self.auto_bound = auto_bound - - def __call__(self, results): - """Call function to rotate image, semantic segmentation maps. - - Args: - results (dict): Result dict from loading pipeline. - - Returns: - dict: Rotated results. - """ - - rotate = True if np.random.rand() < self.prob else False - degree = np.random.uniform(min(*self.degree), max(*self.degree)) - if rotate: - # rotate image - results['img'] = mmcv.imrotate( - results['img'], - angle=degree, - border_value=self.pal_val, - center=self.center, - auto_bound=self.auto_bound) - - # rotate segs - for key in results.get('seg_fields', []): - results[key] = mmcv.imrotate( - results[key], - angle=degree, - border_value=self.seg_pad_val, - center=self.center, - auto_bound=self.auto_bound, - interpolation='nearest') - return results - - def __repr__(self): - repr_str = self.__class__.__name__ - repr_str += f'(prob={self.prob}, ' \ - f'degree={self.degree}, ' \ - f'pad_val={self.pal_val}, ' \ - f'seg_pad_val={self.seg_pad_val}, ' \ - f'center={self.center}, ' \ - f'auto_bound={self.auto_bound})' - return repr_str - - -@PIPELINES.register_module() -class RGB2Gray(object): - """Convert RGB image to grayscale image. - - This transform calculate the weighted mean of input image channels with - ``weights`` and then expand the channels to ``out_channels``. When - ``out_channels`` is None, the number of output channels is the same as - input channels. - - Args: - out_channels (int): Expected number of output channels after - transforming. Default: None. - weights (tuple[float]): The weights to calculate the weighted mean. - Default: (0.299, 0.587, 0.114). - """ - - def __init__(self, out_channels=None, weights=(0.299, 0.587, 0.114)): - assert out_channels is None or out_channels > 0 - self.out_channels = out_channels - assert isinstance(weights, tuple) - for item in weights: - assert isinstance(item, (float, int)) - self.weights = weights - - def __call__(self, results): - """Call function to convert RGB image to grayscale image. - - Args: - results (dict): Result dict from loading pipeline. - - Returns: - dict: Result dict with grayscale image. - """ - img = results['img'] - assert len(img.shape) == 3 - assert img.shape[2] == len(self.weights) - weights = np.array(self.weights).reshape((1, 1, -1)) - img = (img * weights).sum(2, keepdims=True) - if self.out_channels is None: - img = img.repeat(weights.shape[2], axis=2) - else: - img = img.repeat(self.out_channels, axis=2) - - results['img'] = img - results['img_shape'] = img.shape - - return results - - def __repr__(self): - repr_str = self.__class__.__name__ - repr_str += f'(out_channels={self.out_channels}, ' \ - f'weights={self.weights})' - return repr_str - - -@PIPELINES.register_module() -class AdjustGamma(object): - """Using gamma correction to process the image. - - Args: - gamma (float or int): Gamma value used in gamma correction. - Default: 1.0. - """ - - def __init__(self, gamma=1.0): - assert isinstance(gamma, float) or isinstance(gamma, int) - assert gamma > 0 - self.gamma = gamma - inv_gamma = 1.0 / gamma - self.table = np.array([(i / 255.0)**inv_gamma * 255 - for i in np.arange(256)]).astype('uint8') - - def __call__(self, results): - """Call function to process the image with gamma correction. - - Args: - results (dict): Result dict from loading pipeline. - - Returns: - dict: Processed results. - """ - - results['img'] = mmcv.lut_transform( - np.array(results['img'], dtype=np.uint8), self.table) - - return results - - def __repr__(self): - return self.__class__.__name__ + f'(gamma={self.gamma})' - - -@PIPELINES.register_module() -class SegRescale(object): - """Rescale semantic segmentation maps. - - Args: - scale_factor (float): The scale factor of the final output. - """ - - def __init__(self, scale_factor=1): - self.scale_factor = scale_factor - - def __call__(self, results): - """Call function to scale the semantic segmentation map. - - Args: - results (dict): Result dict from loading pipeline. - - Returns: - dict: Result dict with semantic segmentation map scaled. - """ - for key in results.get('seg_fields', []): - if self.scale_factor != 1: - results[key] = mmcv.imrescale( - results[key], self.scale_factor, interpolation='nearest') - return results - - def __repr__(self): - return self.__class__.__name__ + f'(scale_factor={self.scale_factor})' - - -@PIPELINES.register_module() -class PhotoMetricDistortion(object): - """Apply photometric distortion to image sequentially, every transformation - is applied with a probability of 0.5. The position of random contrast is in - second or second to last. - - 1. random brightness - 2. random contrast (mode 0) - 3. convert color from BGR to HSV - 4. random saturation - 5. random hue - 6. convert color from HSV to BGR - 7. random contrast (mode 1) - - Args: - brightness_delta (int): delta of brightness. - contrast_range (tuple): range of contrast. - saturation_range (tuple): range of saturation. - hue_delta (int): delta of hue. - """ - - def __init__(self, - brightness_delta=32, - contrast_range=(0.5, 1.5), - saturation_range=(0.5, 1.5), - hue_delta=18): - self.brightness_delta = brightness_delta - self.contrast_lower, self.contrast_upper = contrast_range - self.saturation_lower, self.saturation_upper = saturation_range - self.hue_delta = hue_delta - - def convert(self, img, alpha=1, beta=0): - """Multiple with alpha and add beat with clip.""" - img = img.astype(np.float32) * alpha + beta - img = np.clip(img, 0, 255) - return img.astype(np.uint8) - - def brightness(self, img): - """Brightness distortion.""" - if random.randint(2): - return self.convert( - img, - beta=random.uniform(-self.brightness_delta, - self.brightness_delta)) - return img - - def contrast(self, img): - """Contrast distortion.""" - if random.randint(2): - return self.convert( - img, - alpha=random.uniform(self.contrast_lower, self.contrast_upper)) - return img - - def saturation(self, img): - """Saturation distortion.""" - if random.randint(2): - img = mmcv.bgr2hsv(img) - img[:, :, 1] = self.convert( - img[:, :, 1], - alpha=random.uniform(self.saturation_lower, - self.saturation_upper)) - img = mmcv.hsv2bgr(img) - return img - - def hue(self, img): - """Hue distortion.""" - if random.randint(2): - img = mmcv.bgr2hsv(img) - img[:, :, - 0] = (img[:, :, 0].astype(int) + - random.randint(-self.hue_delta, self.hue_delta)) % 180 - img = mmcv.hsv2bgr(img) - return img - - def __call__(self, results): - """Call function to perform photometric distortion on images. - - Args: - results (dict): Result dict from loading pipeline. - - Returns: - dict: Result dict with images distorted. - """ - - img = results['img'] - # random brightness - img = self.brightness(img) - - # mode == 0 --> do random contrast first - # mode == 1 --> do random contrast last - mode = random.randint(2) - if mode == 1: - img = self.contrast(img) - - # random saturation - img = self.saturation(img) - - # random hue - img = self.hue(img) - - # random contrast - if mode == 0: - img = self.contrast(img) - - results['img'] = img - return results - - def __repr__(self): - repr_str = self.__class__.__name__ - repr_str += (f'(brightness_delta={self.brightness_delta}, ' - f'contrast_range=({self.contrast_lower}, ' - f'{self.contrast_upper}), ' - f'saturation_range=({self.saturation_lower}, ' - f'{self.saturation_upper}), ' - f'hue_delta={self.hue_delta})') - return repr_str diff --git a/spaces/GrandaddyShmax/AudioCraft_Plus/audiocraft/data/zip.py b/spaces/GrandaddyShmax/AudioCraft_Plus/audiocraft/data/zip.py deleted file mode 100644 index f0b17849d36991e7def35a14d3d518b9d867ce36..0000000000000000000000000000000000000000 --- a/spaces/GrandaddyShmax/AudioCraft_Plus/audiocraft/data/zip.py +++ /dev/null @@ -1,76 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. -"""Utility for reading some info from inside a zip file. -""" - -import typing -import zipfile - -from dataclasses import dataclass -from functools import lru_cache -from typing_extensions import Literal - - -DEFAULT_SIZE = 32 -MODE = Literal['r', 'w', 'x', 'a'] - - -@dataclass(order=True) -class PathInZip: - """Hold a path of file within a zip file. - - Args: - path (str): The convention is :. - Let's assume there is a zip file /some/location/foo.zip - and inside of it is a json file located at /data/file1.json, - Then we expect path = "/some/location/foo.zip:/data/file1.json". - """ - - INFO_PATH_SEP = ':' - zip_path: str - file_path: str - - def __init__(self, path: str) -> None: - split_path = path.split(self.INFO_PATH_SEP) - assert len(split_path) == 2 - self.zip_path, self.file_path = split_path - - @classmethod - def from_paths(cls, zip_path: str, file_path: str): - return cls(zip_path + cls.INFO_PATH_SEP + file_path) - - def __str__(self) -> str: - return self.zip_path + self.INFO_PATH_SEP + self.file_path - - -def _open_zip(path: str, mode: MODE = 'r'): - return zipfile.ZipFile(path, mode) - - -_cached_open_zip = lru_cache(DEFAULT_SIZE)(_open_zip) - - -def set_zip_cache_size(max_size: int): - """Sets the maximal LRU caching for zip file opening. - - Args: - max_size (int): the maximal LRU cache. - """ - global _cached_open_zip - _cached_open_zip = lru_cache(max_size)(_open_zip) - - -def open_file_in_zip(path_in_zip: PathInZip, mode: str = 'r') -> typing.IO: - """Opens a file stored inside a zip and returns a file-like object. - - Args: - path_in_zip (PathInZip): A PathInZip object representing the file to return a file-like object of. - mode (str): The mode in which to open the file with. - Returns: - A file-like object for PathInZip. - """ - zf = _cached_open_zip(path_in_zip.zip_path) - return zf.open(path_in_zip.file_path) diff --git a/spaces/HaHaBill/LandShapes-Antarctica/netdissect/fullablate.py b/spaces/HaHaBill/LandShapes-Antarctica/netdissect/fullablate.py deleted file mode 100644 index f92d2c514c0b92b3f33653c5b53198c9fd09cb80..0000000000000000000000000000000000000000 --- a/spaces/HaHaBill/LandShapes-Antarctica/netdissect/fullablate.py +++ /dev/null @@ -1,235 +0,0 @@ -import torch, sys, os, argparse, textwrap, numbers, numpy, json, PIL -from torchvision import transforms -from torch.utils.data import TensorDataset -from netdissect.progress import default_progress, post_progress, desc_progress -from netdissect.progress import verbose_progress, print_progress -from netdissect.nethook import edit_layers -from netdissect.zdataset import standard_z_sample -from netdissect.autoeval import autoimport_eval -from netdissect.easydict import EasyDict -from netdissect.modelconfig import create_instrumented_model - -help_epilog = '''\ -Example: - -python -m netdissect.evalablate \ - --segmenter "netdissect.GanImageSegmenter(segvocab='lowres', segsizes=[160,288], segdiv='quad')" \ - --model "proggan.from_pth_file('models/lsun_models/${SCENE}_lsun.pth')" \ - --outdir dissect/dissectdir \ - --classname tree \ - --layer layer4 \ - --size 1000 - -Output layout: -dissectdir/layer5/ablation/mirror-iqr.json -{ class: "mirror", - classnum: 43, - pixel_total: 41342300, - class_pixels: 1234531, - layer: "layer5", - ranking: "mirror-iqr", - ablation_units: [341, 23, 12, 142, 83, ...] - ablation_pixels: [143242, 132344, 429931, ...] -} - -''' - -def main(): - # Training settings - def strpair(arg): - p = tuple(arg.split(':')) - if len(p) == 1: - p = p + p - return p - - parser = argparse.ArgumentParser(description='Ablation eval', - epilog=textwrap.dedent(help_epilog), - formatter_class=argparse.RawDescriptionHelpFormatter) - parser.add_argument('--model', type=str, default=None, - help='constructor for the model to test') - parser.add_argument('--pthfile', type=str, default=None, - help='filename of .pth file for the model') - parser.add_argument('--outdir', type=str, default='dissect', required=True, - help='directory for dissection output') - parser.add_argument('--layer', type=strpair, - help='space-separated list of layer names to edit' + - ', in the form layername[:reportedname]') - parser.add_argument('--classname', type=str, - help='class name to ablate') - parser.add_argument('--metric', type=str, default='iou', - help='ordering metric for selecting units') - parser.add_argument('--unitcount', type=int, default=30, - help='number of units to ablate') - parser.add_argument('--segmenter', type=str, - help='directory containing segmentation dataset') - parser.add_argument('--netname', type=str, default=None, - help='name for network in generated reports') - parser.add_argument('--batch_size', type=int, default=25, - help='batch size for forward pass') - parser.add_argument('--mixed_units', action='store_true', default=False, - help='true to keep alpha for non-zeroed units') - parser.add_argument('--size', type=int, default=200, - help='number of images to test') - parser.add_argument('--no-cuda', action='store_true', default=False, - help='disables CUDA usage') - parser.add_argument('--quiet', action='store_true', default=False, - help='silences console output') - if len(sys.argv) == 1: - parser.print_usage(sys.stderr) - sys.exit(1) - args = parser.parse_args() - - # Set up console output - verbose_progress(not args.quiet) - - # Speed up pytorch - torch.backends.cudnn.benchmark = True - - # Set up CUDA - args.cuda = not args.no_cuda and torch.cuda.is_available() - if args.cuda: - torch.backends.cudnn.benchmark = True - - # Take defaults for model constructor etc from dissect.json settings. - with open(os.path.join(args.outdir, 'dissect.json')) as f: - dissection = EasyDict(json.load(f)) - if args.model is None: - args.model = dissection.settings.model - if args.pthfile is None: - args.pthfile = dissection.settings.pthfile - if args.segmenter is None: - args.segmenter = dissection.settings.segmenter - if args.layer is None: - args.layer = dissection.settings.layers[0] - args.layers = [args.layer] - - # Also load specific analysis - layername = args.layer[1] - if args.metric == 'iou': - summary = dissection - else: - with open(os.path.join(args.outdir, layername, args.metric, - args.classname, 'summary.json')) as f: - summary = EasyDict(json.load(f)) - - # Instantiate generator - model = create_instrumented_model(args, gen=True, edit=True) - if model is None: - print('No model specified') - sys.exit(1) - - # Instantiate model - device = next(model.parameters()).device - input_shape = model.input_shape - - # 4d input if convolutional, 2d input if first layer is linear. - raw_sample = standard_z_sample(args.size, input_shape[1], seed=3).view( - (args.size,) + input_shape[1:]) - dataset = TensorDataset(raw_sample) - - # Create the segmenter - segmenter = autoimport_eval(args.segmenter) - - # Now do the actual work. - labelnames, catnames = ( - segmenter.get_label_and_category_names(dataset)) - label_category = [catnames.index(c) if c in catnames else 0 - for l, c in labelnames] - labelnum_from_name = {n[0]: i for i, n in enumerate(labelnames)} - - segloader = torch.utils.data.DataLoader(dataset, - batch_size=args.batch_size, num_workers=10, - pin_memory=(device.type == 'cuda')) - - # Index the dissection layers by layer name. - - # First, collect a baseline - for l in model.ablation: - model.ablation[l] = None - - # For each sort-order, do an ablation - progress = default_progress() - classname = args.classname - classnum = labelnum_from_name[classname] - - # Get iou ranking from dissect.json - iou_rankname = '%s-%s' % (classname, 'iou') - dissect_layer = {lrec.layer: lrec for lrec in dissection.layers} - iou_ranking = next(r for r in dissect_layer[layername].rankings - if r.name == iou_rankname) - - # Get trained ranking from summary.json - rankname = '%s-%s' % (classname, args.metric) - summary_layer = {lrec.layer: lrec for lrec in summary.layers} - ranking = next(r for r in summary_layer[layername].rankings - if r.name == rankname) - - # Get ordering, first by ranking, then break ties by iou. - ordering = [t[2] for t in sorted([(s1, s2, i) - for i, (s1, s2) in enumerate(zip(ranking.score, iou_ranking.score))])] - values = (-numpy.array(ranking.score))[ordering] - if not args.mixed_units: - values[...] = 1 - - ablationdir = os.path.join(args.outdir, layername, 'fullablation') - measurements = measure_full_ablation(segmenter, segloader, - model, classnum, layername, - ordering[:args.unitcount], values[:args.unitcount]) - measurements = measurements.cpu().numpy().tolist() - os.makedirs(ablationdir, exist_ok=True) - with open(os.path.join(ablationdir, '%s.json'%rankname), 'w') as f: - json.dump(dict( - classname=classname, - classnum=classnum, - baseline=measurements[0], - layer=layername, - metric=args.metric, - ablation_units=ordering, - ablation_values=values.tolist(), - ablation_effects=measurements[1:]), f) - -def measure_full_ablation(segmenter, loader, model, classnum, layer, - ordering, values): - ''' - Quick and easy counting of segmented pixels reduced by ablating units. - ''' - progress = default_progress() - device = next(model.parameters()).device - feature_units = model.feature_shape[layer][1] - feature_shape = model.feature_shape[layer][2:] - repeats = len(ordering) - total_scores = torch.zeros(repeats + 1) - print(ordering) - print(values.tolist()) - with torch.no_grad(): - for l in model.ablation: - model.ablation[l] = None - for i, [ibz] in enumerate(progress(loader)): - ibz = ibz.cuda() - for num_units in progress(range(len(ordering) + 1)): - ablation = torch.zeros(feature_units, device=device) - ablation[ordering[:num_units]] = torch.tensor( - values[:num_units]).to(ablation.device, ablation.dtype) - model.ablation[layer] = ablation - tensor_images = model(ibz) - seg = segmenter.segment_batch(tensor_images, downsample=2) - mask = (seg == classnum).max(1)[0] - total_scores[num_units] += mask.sum().float().cpu() - return total_scores - -def count_segments(segmenter, loader, model): - total_bincount = 0 - data_size = 0 - progress = default_progress() - for i, batch in enumerate(progress(loader)): - tensor_images = model(z_batch.to(device)) - seg = segmenter.segment_batch(tensor_images, downsample=2) - bc = (seg + index[:, None, None, None] * self.num_classes).view(-1 - ).bincount(minlength=z_batch.shape[0] * self.num_classes) - data_size += seg.shape[0] * seg.shape[2] * seg.shape[3] - total_bincount += batch_label_counts.float().sum(0) - normalized_bincount = total_bincount / data_size - return normalized_bincount - -if __name__ == '__main__': - main() diff --git a/spaces/HaHaBill/LandShapes-Antarctica/netdissect/segmodel/models.py b/spaces/HaHaBill/LandShapes-Antarctica/netdissect/segmodel/models.py deleted file mode 100644 index ceb6f2ce21720722d5d8c9ee4f7e015ad06a9647..0000000000000000000000000000000000000000 --- a/spaces/HaHaBill/LandShapes-Antarctica/netdissect/segmodel/models.py +++ /dev/null @@ -1,558 +0,0 @@ -import torch -import torch.nn as nn -import torchvision -from . import resnet, resnext -try: - from lib.nn import SynchronizedBatchNorm2d -except ImportError: - from torch.nn import BatchNorm2d as SynchronizedBatchNorm2d - - -class SegmentationModuleBase(nn.Module): - def __init__(self): - super(SegmentationModuleBase, self).__init__() - - def pixel_acc(self, pred, label): - _, preds = torch.max(pred, dim=1) - valid = (label >= 0).long() - acc_sum = torch.sum(valid * (preds == label).long()) - pixel_sum = torch.sum(valid) - acc = acc_sum.float() / (pixel_sum.float() + 1e-10) - return acc - - -class SegmentationModule(SegmentationModuleBase): - def __init__(self, net_enc, net_dec, crit, deep_sup_scale=None): - super(SegmentationModule, self).__init__() - self.encoder = net_enc - self.decoder = net_dec - self.crit = crit - self.deep_sup_scale = deep_sup_scale - - def forward(self, feed_dict, *, segSize=None): - if segSize is None: # training - if self.deep_sup_scale is not None: # use deep supervision technique - (pred, pred_deepsup) = self.decoder(self.encoder(feed_dict['img_data'], return_feature_maps=True)) - else: - pred = self.decoder(self.encoder(feed_dict['img_data'], return_feature_maps=True)) - - loss = self.crit(pred, feed_dict['seg_label']) - if self.deep_sup_scale is not None: - loss_deepsup = self.crit(pred_deepsup, feed_dict['seg_label']) - loss = loss + loss_deepsup * self.deep_sup_scale - - acc = self.pixel_acc(pred, feed_dict['seg_label']) - return loss, acc - else: # inference - pred = self.decoder(self.encoder(feed_dict['img_data'], return_feature_maps=True), segSize=segSize) - return pred - - -def conv3x3(in_planes, out_planes, stride=1, has_bias=False): - "3x3 convolution with padding" - return nn.Conv2d(in_planes, out_planes, kernel_size=3, stride=stride, - padding=1, bias=has_bias) - - -def conv3x3_bn_relu(in_planes, out_planes, stride=1): - return nn.Sequential( - conv3x3(in_planes, out_planes, stride), - SynchronizedBatchNorm2d(out_planes), - nn.ReLU(inplace=True), - ) - - -class ModelBuilder(): - # custom weights initialization - def weights_init(self, m): - classname = m.__class__.__name__ - if classname.find('Conv') != -1: - nn.init.kaiming_normal_(m.weight.data) - elif classname.find('BatchNorm') != -1: - m.weight.data.fill_(1.) - m.bias.data.fill_(1e-4) - #elif classname.find('Linear') != -1: - # m.weight.data.normal_(0.0, 0.0001) - - def build_encoder(self, arch='resnet50_dilated8', fc_dim=512, weights=''): - pretrained = True if len(weights) == 0 else False - if arch == 'resnet34': - raise NotImplementedError - orig_resnet = resnet.__dict__['resnet34'](pretrained=pretrained) - net_encoder = Resnet(orig_resnet) - elif arch == 'resnet34_dilated8': - raise NotImplementedError - orig_resnet = resnet.__dict__['resnet34'](pretrained=pretrained) - net_encoder = ResnetDilated(orig_resnet, - dilate_scale=8) - elif arch == 'resnet34_dilated16': - raise NotImplementedError - orig_resnet = resnet.__dict__['resnet34'](pretrained=pretrained) - net_encoder = ResnetDilated(orig_resnet, - dilate_scale=16) - elif arch == 'resnet50': - orig_resnet = resnet.__dict__['resnet50'](pretrained=pretrained) - net_encoder = Resnet(orig_resnet) - elif arch == 'resnet50_dilated8': - orig_resnet = resnet.__dict__['resnet50'](pretrained=pretrained) - net_encoder = ResnetDilated(orig_resnet, - dilate_scale=8) - elif arch == 'resnet50_dilated16': - orig_resnet = resnet.__dict__['resnet50'](pretrained=pretrained) - net_encoder = ResnetDilated(orig_resnet, - dilate_scale=16) - elif arch == 'resnet101': - orig_resnet = resnet.__dict__['resnet101'](pretrained=pretrained) - net_encoder = Resnet(orig_resnet) - elif arch == 'resnet101_dilated8': - orig_resnet = resnet.__dict__['resnet101'](pretrained=pretrained) - net_encoder = ResnetDilated(orig_resnet, - dilate_scale=8) - elif arch == 'resnet101_dilated16': - orig_resnet = resnet.__dict__['resnet101'](pretrained=pretrained) - net_encoder = ResnetDilated(orig_resnet, - dilate_scale=16) - elif arch == 'resnext101': - orig_resnext = resnext.__dict__['resnext101'](pretrained=pretrained) - net_encoder = Resnet(orig_resnext) # we can still use class Resnet - else: - raise Exception('Architecture undefined!') - - # net_encoder.apply(self.weights_init) - if len(weights) > 0: - # print('Loading weights for net_encoder') - net_encoder.load_state_dict( - torch.load(weights, map_location=lambda storage, loc: storage), strict=False) - return net_encoder - - def build_decoder(self, arch='ppm_bilinear_deepsup', - fc_dim=512, num_class=150, - weights='', inference=False, use_softmax=False): - if arch == 'c1_bilinear_deepsup': - net_decoder = C1BilinearDeepSup( - num_class=num_class, - fc_dim=fc_dim, - inference=inference, - use_softmax=use_softmax) - elif arch == 'c1_bilinear': - net_decoder = C1Bilinear( - num_class=num_class, - fc_dim=fc_dim, - inference=inference, - use_softmax=use_softmax) - elif arch == 'ppm_bilinear': - net_decoder = PPMBilinear( - num_class=num_class, - fc_dim=fc_dim, - inference=inference, - use_softmax=use_softmax) - elif arch == 'ppm_bilinear_deepsup': - net_decoder = PPMBilinearDeepsup( - num_class=num_class, - fc_dim=fc_dim, - inference=inference, - use_softmax=use_softmax) - elif arch == 'upernet_lite': - net_decoder = UPerNet( - num_class=num_class, - fc_dim=fc_dim, - inference=inference, - use_softmax=use_softmax, - fpn_dim=256) - elif arch == 'upernet': - net_decoder = UPerNet( - num_class=num_class, - fc_dim=fc_dim, - inference=inference, - use_softmax=use_softmax, - fpn_dim=512) - elif arch == 'upernet_tmp': - net_decoder = UPerNetTmp( - num_class=num_class, - fc_dim=fc_dim, - inference=inference, - use_softmax=use_softmax, - fpn_dim=512) - else: - raise Exception('Architecture undefined!') - - net_decoder.apply(self.weights_init) - if len(weights) > 0: - # print('Loading weights for net_decoder') - net_decoder.load_state_dict( - torch.load(weights, map_location=lambda storage, loc: storage), strict=False) - return net_decoder - - -class Resnet(nn.Module): - def __init__(self, orig_resnet): - super(Resnet, self).__init__() - - # take pretrained resnet, except AvgPool and FC - self.conv1 = orig_resnet.conv1 - self.bn1 = orig_resnet.bn1 - self.relu1 = orig_resnet.relu1 - self.conv2 = orig_resnet.conv2 - self.bn2 = orig_resnet.bn2 - self.relu2 = orig_resnet.relu2 - self.conv3 = orig_resnet.conv3 - self.bn3 = orig_resnet.bn3 - self.relu3 = orig_resnet.relu3 - self.maxpool = orig_resnet.maxpool - self.layer1 = orig_resnet.layer1 - self.layer2 = orig_resnet.layer2 - self.layer3 = orig_resnet.layer3 - self.layer4 = orig_resnet.layer4 - - def forward(self, x, return_feature_maps=False): - conv_out = [] - - x = self.relu1(self.bn1(self.conv1(x))) - x = self.relu2(self.bn2(self.conv2(x))) - x = self.relu3(self.bn3(self.conv3(x))) - x = self.maxpool(x) - - x = self.layer1(x); conv_out.append(x); - x = self.layer2(x); conv_out.append(x); - x = self.layer3(x); conv_out.append(x); - x = self.layer4(x); conv_out.append(x); - - if return_feature_maps: - return conv_out - return [x] - - -class ResnetDilated(nn.Module): - def __init__(self, orig_resnet, dilate_scale=8): - super(ResnetDilated, self).__init__() - from functools import partial - - if dilate_scale == 8: - orig_resnet.layer3.apply( - partial(self._nostride_dilate, dilate=2)) - orig_resnet.layer4.apply( - partial(self._nostride_dilate, dilate=4)) - elif dilate_scale == 16: - orig_resnet.layer4.apply( - partial(self._nostride_dilate, dilate=2)) - - # take pretrained resnet, except AvgPool and FC - self.conv1 = orig_resnet.conv1 - self.bn1 = orig_resnet.bn1 - self.relu1 = orig_resnet.relu1 - self.conv2 = orig_resnet.conv2 - self.bn2 = orig_resnet.bn2 - self.relu2 = orig_resnet.relu2 - self.conv3 = orig_resnet.conv3 - self.bn3 = orig_resnet.bn3 - self.relu3 = orig_resnet.relu3 - self.maxpool = orig_resnet.maxpool - self.layer1 = orig_resnet.layer1 - self.layer2 = orig_resnet.layer2 - self.layer3 = orig_resnet.layer3 - self.layer4 = orig_resnet.layer4 - - def _nostride_dilate(self, m, dilate): - classname = m.__class__.__name__ - if classname.find('Conv') != -1: - # the convolution with stride - if m.stride == (2, 2): - m.stride = (1, 1) - if m.kernel_size == (3, 3): - m.dilation = (dilate//2, dilate//2) - m.padding = (dilate//2, dilate//2) - # other convoluions - else: - if m.kernel_size == (3, 3): - m.dilation = (dilate, dilate) - m.padding = (dilate, dilate) - - def forward(self, x, return_feature_maps=False): - conv_out = [] - - x = self.relu1(self.bn1(self.conv1(x))) - x = self.relu2(self.bn2(self.conv2(x))) - x = self.relu3(self.bn3(self.conv3(x))) - x = self.maxpool(x) - - x = self.layer1(x); conv_out.append(x); - x = self.layer2(x); conv_out.append(x); - x = self.layer3(x); conv_out.append(x); - x = self.layer4(x); conv_out.append(x); - - if return_feature_maps: - return conv_out - return [x] - - -# last conv, bilinear upsample -class C1BilinearDeepSup(nn.Module): - def __init__(self, num_class=150, fc_dim=2048, inference=False, use_softmax=False): - super(C1BilinearDeepSup, self).__init__() - self.use_softmax = use_softmax - self.inference = inference - - self.cbr = conv3x3_bn_relu(fc_dim, fc_dim // 4, 1) - self.cbr_deepsup = conv3x3_bn_relu(fc_dim // 2, fc_dim // 4, 1) - - # last conv - self.conv_last = nn.Conv2d(fc_dim // 4, num_class, 1, 1, 0) - self.conv_last_deepsup = nn.Conv2d(fc_dim // 4, num_class, 1, 1, 0) - - def forward(self, conv_out, segSize=None): - conv5 = conv_out[-1] - - x = self.cbr(conv5) - x = self.conv_last(x) - - if self.inference or self.use_softmax: # is True during inference - x = nn.functional.interpolate( - x, size=segSize, mode='bilinear', align_corners=False) - if self.use_softmax: - x = nn.functional.softmax(x, dim=1) - return x - - # deep sup - conv4 = conv_out[-2] - _ = self.cbr_deepsup(conv4) - _ = self.conv_last_deepsup(_) - - x = nn.functional.log_softmax(x, dim=1) - _ = nn.functional.log_softmax(_, dim=1) - - return (x, _) - - -# last conv, bilinear upsample -class C1Bilinear(nn.Module): - def __init__(self, num_class=150, fc_dim=2048, inference=False, use_softmax=False): - super(C1Bilinear, self).__init__() - self.use_softmax = use_softmax - self.inference = inference - - self.cbr = conv3x3_bn_relu(fc_dim, fc_dim // 4, 1) - - # last conv - self.conv_last = nn.Conv2d(fc_dim // 4, num_class, 1, 1, 0) - - def forward(self, conv_out, segSize=None): - conv5 = conv_out[-1] - x = self.cbr(conv5) - x = self.conv_last(x) - - if self.inference or self.use_softmax: # is True during inference - x = nn.functional.interpolate( - x, size=segSize, mode='bilinear', align_corners=False) - if self.use_softmax: - x = nn.functional.softmax(x, dim=1) - else: - x = nn.functional.log_softmax(x, dim=1) - - return x - - -# pyramid pooling, bilinear upsample -class PPMBilinear(nn.Module): - def __init__(self, num_class=150, fc_dim=4096, - inference=False, use_softmax=False, pool_scales=(1, 2, 3, 6)): - super(PPMBilinear, self).__init__() - self.use_softmax = use_softmax - self.inference = inference - - self.ppm = [] - for scale in pool_scales: - self.ppm.append(nn.Sequential( - nn.AdaptiveAvgPool2d(scale), - nn.Conv2d(fc_dim, 512, kernel_size=1, bias=False), - SynchronizedBatchNorm2d(512), - nn.ReLU(inplace=True) - )) - self.ppm = nn.ModuleList(self.ppm) - - self.conv_last = nn.Sequential( - nn.Conv2d(fc_dim+len(pool_scales)*512, 512, - kernel_size=3, padding=1, bias=False), - SynchronizedBatchNorm2d(512), - nn.ReLU(inplace=True), - nn.Dropout2d(0.1), - nn.Conv2d(512, num_class, kernel_size=1) - ) - - def forward(self, conv_out, segSize=None): - conv5 = conv_out[-1] - - input_size = conv5.size() - ppm_out = [conv5] - for pool_scale in self.ppm: - ppm_out.append(nn.functional.interpolate( - pool_scale(conv5), - (input_size[2], input_size[3]), - mode='bilinear', align_corners=False)) - ppm_out = torch.cat(ppm_out, 1) - - x = self.conv_last(ppm_out) - - if self.inference or self.use_softmax: # is True during inference - x = nn.functional.interpolate( - x, size=segSize, mode='bilinear', align_corners=False) - if self.use_softmax: - x = nn.functional.softmax(x, dim=1) - else: - x = nn.functional.log_softmax(x, dim=1) - return x - - -# pyramid pooling, bilinear upsample -class PPMBilinearDeepsup(nn.Module): - def __init__(self, num_class=150, fc_dim=4096, - inference=False, use_softmax=False, pool_scales=(1, 2, 3, 6)): - super(PPMBilinearDeepsup, self).__init__() - self.use_softmax = use_softmax - self.inference = inference - - self.ppm = [] - for scale in pool_scales: - self.ppm.append(nn.Sequential( - nn.AdaptiveAvgPool2d(scale), - nn.Conv2d(fc_dim, 512, kernel_size=1, bias=False), - SynchronizedBatchNorm2d(512), - nn.ReLU(inplace=True) - )) - self.ppm = nn.ModuleList(self.ppm) - self.cbr_deepsup = conv3x3_bn_relu(fc_dim // 2, fc_dim // 4, 1) - - self.conv_last = nn.Sequential( - nn.Conv2d(fc_dim+len(pool_scales)*512, 512, - kernel_size=3, padding=1, bias=False), - SynchronizedBatchNorm2d(512), - nn.ReLU(inplace=True), - nn.Dropout2d(0.1), - nn.Conv2d(512, num_class, kernel_size=1) - ) - self.conv_last_deepsup = nn.Conv2d(fc_dim // 4, num_class, 1, 1, 0) - self.dropout_deepsup = nn.Dropout2d(0.1) - - def forward(self, conv_out, segSize=None): - conv5 = conv_out[-1] - - input_size = conv5.size() - ppm_out = [conv5] - for pool_scale in self.ppm: - ppm_out.append(nn.functional.interpolate( - pool_scale(conv5), - (input_size[2], input_size[3]), - mode='bilinear', align_corners=False)) - ppm_out = torch.cat(ppm_out, 1) - - x = self.conv_last(ppm_out) - - if self.inference or self.use_softmax: # is True during inference - x = nn.functional.interpolate( - x, size=segSize, mode='bilinear', align_corners=False) - if self.use_softmax: - x = nn.functional.softmax(x, dim=1) - return x - - # deep sup - conv4 = conv_out[-2] - _ = self.cbr_deepsup(conv4) - _ = self.dropout_deepsup(_) - _ = self.conv_last_deepsup(_) - - x = nn.functional.log_softmax(x, dim=1) - _ = nn.functional.log_softmax(_, dim=1) - - return (x, _) - - -# upernet -class UPerNet(nn.Module): - def __init__(self, num_class=150, fc_dim=4096, - inference=False, use_softmax=False, pool_scales=(1, 2, 3, 6), - fpn_inplanes=(256,512,1024,2048), fpn_dim=256): - super(UPerNet, self).__init__() - self.use_softmax = use_softmax - self.inference = inference - - # PPM Module - self.ppm_pooling = [] - self.ppm_conv = [] - - for scale in pool_scales: - self.ppm_pooling.append(nn.AdaptiveAvgPool2d(scale)) - self.ppm_conv.append(nn.Sequential( - nn.Conv2d(fc_dim, 512, kernel_size=1, bias=False), - SynchronizedBatchNorm2d(512), - nn.ReLU(inplace=True) - )) - self.ppm_pooling = nn.ModuleList(self.ppm_pooling) - self.ppm_conv = nn.ModuleList(self.ppm_conv) - self.ppm_last_conv = conv3x3_bn_relu(fc_dim + len(pool_scales)*512, fpn_dim, 1) - - # FPN Module - self.fpn_in = [] - for fpn_inplane in fpn_inplanes[:-1]: # skip the top layer - self.fpn_in.append(nn.Sequential( - nn.Conv2d(fpn_inplane, fpn_dim, kernel_size=1, bias=False), - SynchronizedBatchNorm2d(fpn_dim), - nn.ReLU(inplace=True) - )) - self.fpn_in = nn.ModuleList(self.fpn_in) - - self.fpn_out = [] - for i in range(len(fpn_inplanes) - 1): # skip the top layer - self.fpn_out.append(nn.Sequential( - conv3x3_bn_relu(fpn_dim, fpn_dim, 1), - )) - self.fpn_out = nn.ModuleList(self.fpn_out) - - self.conv_last = nn.Sequential( - conv3x3_bn_relu(len(fpn_inplanes) * fpn_dim, fpn_dim, 1), - nn.Conv2d(fpn_dim, num_class, kernel_size=1) - ) - - def forward(self, conv_out, segSize=None): - conv5 = conv_out[-1] - - input_size = conv5.size() - ppm_out = [conv5] - for pool_scale, pool_conv in zip(self.ppm_pooling, self.ppm_conv): - ppm_out.append(pool_conv(nn.functional.interploate( - pool_scale(conv5), - (input_size[2], input_size[3]), - mode='bilinear', align_corners=False))) - ppm_out = torch.cat(ppm_out, 1) - f = self.ppm_last_conv(ppm_out) - - fpn_feature_list = [f] - for i in reversed(range(len(conv_out) - 1)): - conv_x = conv_out[i] - conv_x = self.fpn_in[i](conv_x) # lateral branch - - f = nn.functional.interpolate( - f, size=conv_x.size()[2:], mode='bilinear', align_corners=False) # top-down branch - f = conv_x + f - - fpn_feature_list.append(self.fpn_out[i](f)) - - fpn_feature_list.reverse() # [P2 - P5] - output_size = fpn_feature_list[0].size()[2:] - fusion_list = [fpn_feature_list[0]] - for i in range(1, len(fpn_feature_list)): - fusion_list.append(nn.functional.interpolate( - fpn_feature_list[i], - output_size, - mode='bilinear', align_corners=False)) - fusion_out = torch.cat(fusion_list, 1) - x = self.conv_last(fusion_out) - - if self.inference or self.use_softmax: # is True during inference - x = nn.functional.interpolate( - x, size=segSize, mode='bilinear', align_corners=False) - if self.use_softmax: - x = nn.functional.softmax(x, dim=1) - return x - - x = nn.functional.log_softmax(x, dim=1) - - return x diff --git a/spaces/HarryLee/eCommerceImageCaptioning/models/sequence_generator.py b/spaces/HarryLee/eCommerceImageCaptioning/models/sequence_generator.py deleted file mode 100644 index 30b8b1139b125e5ca45f59ce43314b959a019cd8..0000000000000000000000000000000000000000 --- a/spaces/HarryLee/eCommerceImageCaptioning/models/sequence_generator.py +++ /dev/null @@ -1,1053 +0,0 @@ -# Copyright 2022 The OFA-Sys Team. -# All rights reserved. -# This source code is licensed under the Apache 2.0 license -# found in the LICENSE file in the root directory. - -import math -from typing import Dict, List, Optional -import sys - -import torch -import torch.nn as nn -from fairseq import search, utils -from fairseq.models import FairseqIncrementalDecoder -from torch import Tensor -from fairseq.ngram_repeat_block import NGramRepeatBlock - -from data import data_utils - -class SequenceGenerator(nn.Module): - def __init__( - self, - models, - tgt_dict, - beam_size=1, - max_len_a=0, - max_len_b=200, - max_len=0, - min_len=1, - normalize_scores=True, - len_penalty=1.0, - unk_penalty=0.0, - temperature=1.0, - match_source_len=False, - no_repeat_ngram_size=0, - search_strategy=None, - eos=None, - symbols_to_strip_from_output=None, - lm_model=None, - lm_weight=1.0, - constraint_trie=None, - constraint_range=None, - gen_code=False, - gen_box=False, - ignore_eos=False, - zero_shot=False - ): - """Generates translations of a given source sentence. - - Args: - models (List[~fairseq.models.FairseqModel]): ensemble of models, - currently support fairseq.models.TransformerModel for scripting - beam_size (int, optional): beam width (default: 1) - max_len_a/b (int, optional): generate sequences of maximum length - ax + b, where x is the source length - max_len (int, optional): the maximum length of the generated output - (not including end-of-sentence) - min_len (int, optional): the minimum length of the generated output - (not including end-of-sentence) - normalize_scores (bool, optional): normalize scores by the length - of the output (default: True) - len_penalty (float, optional): length penalty, where <1.0 favors - shorter, >1.0 favors longer sentences (default: 1.0) - unk_penalty (float, optional): unknown word penalty, where <0 - produces more unks, >0 produces fewer (default: 0.0) - temperature (float, optional): temperature, where values - >1.0 produce more uniform samples and values <1.0 produce - sharper samples (default: 1.0) - match_source_len (bool, optional): outputs should match the source - length (default: False) - """ - super().__init__() - if isinstance(models, EnsembleModel): - self.model = models - else: - self.model = EnsembleModel(models) - self.gen_code = gen_code - self.gen_box = gen_box - self.ignore_eos = ignore_eos - self.tgt_dict = tgt_dict - self.pad = tgt_dict.pad() - self.unk = tgt_dict.unk() - self.bos = tgt_dict.bos() - self.eos = tgt_dict.eos() if eos is None else eos - self.symbols_to_strip_from_output = ( - symbols_to_strip_from_output.union({self.eos}) - if symbols_to_strip_from_output is not None - else {self.bos, self.eos} - ) - self.vocab_size = len(tgt_dict) - self.beam_size = beam_size - # the max beam size is the dictionary size - 1, since we never select pad - self.beam_size = min(beam_size, self.vocab_size - 1) - self.max_len_a = max_len_a - self.max_len_b = max_len_b - self.min_len = min_len - self.max_len = max_len or self.model.max_decoder_positions() - - self.normalize_scores = normalize_scores - self.len_penalty = len_penalty - self.unk_penalty = unk_penalty - self.temperature = temperature - self.match_source_len = match_source_len - self.zero_shot = zero_shot - - if no_repeat_ngram_size > 0: - self.repeat_ngram_blocker = NGramRepeatBlock(no_repeat_ngram_size) - else: - self.repeat_ngram_blocker = None - - assert temperature > 0, "--temperature must be greater than 0" - - self.search = ( - search.BeamSearch(tgt_dict) if search_strategy is None else search_strategy - ) - # We only need to set src_lengths in LengthConstrainedBeamSearch. - # As a module attribute, setting it would break in multithread - # settings when the model is shared. - self.should_set_src_lengths = ( - hasattr(self.search, "needs_src_lengths") and self.search.needs_src_lengths - ) - - self.model.eval() - - self.lm_model = lm_model - self.lm_weight = lm_weight - if self.lm_model is not None: - self.lm_model.eval() - - self.constraint_trie = constraint_trie - - self.constraint_start = None - self.constraint_end = None - if constraint_range is not None: - constraint_start, constraint_end = constraint_range.split(',') - self.constraint_start = int(constraint_start) - self.constraint_end = int(constraint_end) - - def cuda(self): - self.model.cuda() - return self - - @torch.no_grad() - def forward( - self, - sample: Dict[str, Dict[str, Tensor]], - prefix_tokens: Optional[Tensor] = None, - bos_token: Optional[int] = None, - ): - """Generate a batch of translations. - - Args: - sample (dict): batch - prefix_tokens (torch.LongTensor, optional): force decoder to begin - with these tokens - bos_token (int, optional): beginning of sentence token - (default: self.eos) - """ - return self._generate(sample, prefix_tokens, bos_token=bos_token) - - # TODO(myleott): unused, deprecate after pytorch-translate migration - def generate_batched_itr(self, data_itr, beam_size=None, cuda=False, timer=None): - """Iterate over a batched dataset and yield individual translations. - Args: - cuda (bool, optional): use GPU for generation - timer (StopwatchMeter, optional): time generations - """ - for sample in data_itr: - s = utils.move_to_cuda(sample) if cuda else sample - if "net_input" not in s: - continue - input = s["net_input"] - # model.forward normally channels prev_output_tokens into the decoder - # separately, but SequenceGenerator directly calls model.encoder - encoder_input = { - k: v for k, v in input.items() if k != "prev_output_tokens" - } - if timer is not None: - timer.start() - with torch.no_grad(): - hypos = self.generate(encoder_input) - if timer is not None: - timer.stop(sum(len(h[0]["tokens"]) for h in hypos)) - for i, id in enumerate(s["id"].data): - # remove padding - src = utils.strip_pad(input["src_tokens"].data[i, :], self.pad) - ref = ( - utils.strip_pad(s["target"].data[i, :], self.pad) - if s["target"] is not None - else None - ) - yield id, src, ref, hypos[i] - - @torch.no_grad() - def generate(self, models, sample: Dict[str, Dict[str, Tensor]], **kwargs) -> List[List[Dict[str, Tensor]]]: - """Generate translations. Match the api of other fairseq generators. - - Args: - models (List[~fairseq.models.FairseqModel]): ensemble of models - sample (dict): batch - prefix_tokens (torch.LongTensor, optional): force decoder to begin - with these tokens - constraints (torch.LongTensor, optional): force decoder to include - the list of constraints - bos_token (int, optional): beginning of sentence token - (default: self.eos) - """ - return self._generate(models, sample, **kwargs) - - def _generate( - self, - models, - sample: Dict[str, Dict[str, Tensor]], - prefix_tokens: Optional[Tensor] = None, - constraints: Optional[Tensor] = None, - bos_token: Optional[int] = None, - ): - model = EnsembleModel(models) - incremental_states = torch.jit.annotate( - List[Dict[str, Dict[str, Optional[Tensor]]]], - [ - torch.jit.annotate(Dict[str, Dict[str, Optional[Tensor]]], {}) - for i in range(model.models_size) - ], - ) - net_input = sample["net_input"] - - if "src_tokens" in net_input: - src_tokens = net_input["src_tokens"] - # length of the source text being the character length except EndOfSentence and pad - src_lengths = ( - (src_tokens.ne(self.eos) & src_tokens.ne(self.pad)).long().sum(dim=1) - ) - elif "source" in net_input: - src_tokens = net_input["source"] - src_lengths = ( - net_input["padding_mask"].size(-1) - net_input["padding_mask"].sum(-1) - if net_input["padding_mask"] is not None - else torch.tensor(src_tokens.size(-1)).to(src_tokens) - ) - elif "features" in net_input: - src_tokens = net_input["features"] - src_lengths = ( - net_input["padding_mask"].size(-1) - net_input["padding_mask"].sum(-1) - if net_input["padding_mask"] is not None - else torch.tensor(src_tokens.size(-1)).to(src_tokens) - ) - else: - raise Exception("expected src_tokens or source in net input. input keys: " + str(net_input.keys())) - - # bsz: total number of sentences in beam - # Note that src_tokens may have more than 2 dimensions (i.e. audio features) - bsz, src_len = src_tokens.size()[:2] - beam_size = self.beam_size - - if constraints is not None and not self.search.supports_constraints: - raise NotImplementedError( - "Target-side constraints were provided, but search method doesn't support them" - ) - - # Initialize constraints, when active - self.search.init_constraints(constraints, beam_size) - - max_len: int = -1 - if self.match_source_len: - max_len = src_lengths.max().item() - else: - max_len = int(self.max_len_a * src_len + self.max_len_b) - assert ( - self.min_len <= max_len - ), "min_len cannot be larger than max_len, please adjust these!" - # compute the encoder output for each beam - with torch.autograd.profiler.record_function("EnsembleModel: forward_encoder"): - encoder_outs = model.forward_encoder(net_input) - - # placeholder of indices for bsz * beam_size to hold tokens and accumulative scores - new_order = torch.arange(bsz).view(-1, 1).repeat(1, beam_size).view(-1) - new_order = new_order.to(src_tokens.device).long() - encoder_outs = model.reorder_encoder_out(encoder_outs, new_order) - # ensure encoder_outs is a List. - assert encoder_outs is not None - - # initialize buffers - scores = ( - torch.zeros(bsz * beam_size, max_len + 1).to(src_tokens).float() - ) # +1 for eos; pad is never chosen for scoring - tokens = ( - torch.zeros(bsz * beam_size, max_len + 2) - .to(src_tokens) - .long() - .fill_(self.pad) - ) # +2 for eos and pad - # tokens[:, 0] = self.eos if bos_token is None else bos_token - tokens[:, 0] = self.bos - attn: Optional[Tensor] = None - - # A list that indicates candidates that should be ignored. - # For example, suppose we're sampling and have already finalized 2/5 - # samples. Then cands_to_ignore would mark 2 positions as being ignored, - # so that we only finalize the remaining 3 samples. - cands_to_ignore = ( - torch.zeros(bsz, beam_size).to(src_tokens).eq(-1) - ) # forward and backward-compatible False mask - - # list of completed sentences - finalized = torch.jit.annotate( - List[List[Dict[str, Tensor]]], - [torch.jit.annotate(List[Dict[str, Tensor]], []) for i in range(bsz)], - ) # contains lists of dictionaries of infomation about the hypothesis being finalized at each step - - # a boolean array indicating if the sentence at the index is finished or not - finished = [False for i in range(bsz)] - num_remaining_sent = bsz # number of sentences remaining - - # number of candidate hypos per step - cand_size = 2 * beam_size # 2 x beam size in case half are EOS - - # offset arrays for converting between different indexing schemes - bbsz_offsets = ( - (torch.arange(0, bsz) * beam_size) - .unsqueeze(1) - .type_as(tokens) - .to(src_tokens.device) - ) - cand_offsets = torch.arange(0, cand_size).type_as(tokens).to(src_tokens.device) - - reorder_state: Optional[Tensor] = None - batch_idxs: Optional[Tensor] = None - - original_batch_idxs: Optional[Tensor] = None - if "id" in sample and isinstance(sample["id"], Tensor): - original_batch_idxs = sample["id"] - else: - original_batch_idxs = torch.arange(0, bsz).type_as(tokens) - - for step in range(max_len + 1): # one extra step for EOS marker - # reorder decoder internal states based on the prev choice of beams - if reorder_state is not None: - if batch_idxs is not None: - # update beam indices to take into account removed sentences - corr = batch_idxs - torch.arange(batch_idxs.numel()).type_as( - batch_idxs - ) - reorder_state.view(-1, beam_size).add_( - corr.unsqueeze(-1) * beam_size - ) - original_batch_idxs = original_batch_idxs[batch_idxs] - model.reorder_incremental_state(incremental_states, reorder_state) - encoder_outs = model.reorder_encoder_out( - encoder_outs, reorder_state - ) - with torch.autograd.profiler.record_function("EnsembleModel: forward_decoder"): - lprobs, avg_attn_scores = model.forward_decoder( - tokens[:, : step + 1], - encoder_outs, - incremental_states, - self.temperature, - constraint_trie=self.constraint_trie, - constraint_start=self.constraint_start, - constraint_end=self.constraint_end, - gen_code=self.gen_code, - zero_shot=self.zero_shot, - prefix_tokens=prefix_tokens - ) - - if self.lm_model is not None: - lm_out = self.lm_model(tokens[:, : step + 1]) - probs = self.lm_model.get_normalized_probs( - lm_out, log_probs=True, sample=None - ) - probs = probs[:, -1, :] * self.lm_weight - lprobs += probs - # handle prefix tokens (possibly with different lengths) - if ( - prefix_tokens is not None - and step < prefix_tokens.size(1) - and step < max_len - ): - lprobs, tokens, scores = self._prefix_tokens( - step, lprobs, scores, tokens, prefix_tokens, beam_size - ) - elif step < self.min_len: - # minimum length constraint (does not apply if using prefix_tokens) - lprobs[:, self.eos] = -math.inf - - lprobs[lprobs != lprobs] = torch.tensor(-math.inf).to(lprobs) - - lprobs[:, self.pad] = -math.inf # never select pad - lprobs[:, self.unk] -= self.unk_penalty # apply unk penalty - - if (self.gen_code or self.gen_box) and step < max_len: - lprobs[:, :4] = -math.inf - if self.gen_box: - lprobs[:, -1] = -math.inf - if (step + 1) % 5 == 0: - lprobs[:, self.constraint_start:59457] = -math.inf - else: - lprobs[:, 59457:] = -math.inf - - # handle max length constraint - if step >= max_len: - lprobs[:, : self.eos] = -math.inf - lprobs[:, self.eos + 1 :] = -math.inf - if self.ignore_eos: - lprobs[:, self.eos] = 1 - - # Record attention scores, only support avg_attn_scores is a Tensor - if avg_attn_scores is not None: - if attn is None: - attn = torch.empty( - bsz * beam_size, avg_attn_scores.size(1), max_len + 2 - ).to(scores) - attn[:, :, step + 1].copy_(avg_attn_scores) - - scores = scores.type_as(lprobs) - eos_bbsz_idx = torch.empty(0).to( - tokens - ) # indices of hypothesis ending with eos (finished sentences) - eos_scores = torch.empty(0).to( - scores - ) # scores of hypothesis ending with eos (finished sentences) - - if self.should_set_src_lengths: - self.search.set_src_lengths(src_lengths) - - if self.repeat_ngram_blocker is not None: - lprobs = self.repeat_ngram_blocker(tokens, lprobs, bsz, beam_size, step) - - # Shape: (batch, cand_size) - cand_scores, cand_indices, cand_beams = self.search.step( - step, - lprobs.view(bsz, -1, self.vocab_size), - scores.view(bsz, beam_size, -1)[:, :, :step], - tokens[:, : step + 1], - original_batch_idxs, - ) - - # cand_bbsz_idx contains beam indices for the top candidate - # hypotheses, with a range of values: [0, bsz*beam_size), - # and dimensions: [bsz, cand_size] - cand_bbsz_idx = cand_beams.add(bbsz_offsets) - - # finalize hypotheses that end in eos - # Shape of eos_mask: (batch size, beam size) - eos_mask = cand_indices.eq(self.eos) & cand_scores.ne(-math.inf) - eos_mask[:, :beam_size][cands_to_ignore] = torch.tensor(0).to(eos_mask) - - # only consider eos when it's among the top beam_size indices - # Now we know what beam item(s) to finish - # Shape: 1d list of absolute-numbered - eos_bbsz_idx = torch.masked_select( - cand_bbsz_idx[:, :beam_size], mask=eos_mask[:, :beam_size] - ) - - finalized_sents: List[int] = [] - if eos_bbsz_idx.numel() > 0: - eos_scores = torch.masked_select( - cand_scores[:, :beam_size], mask=eos_mask[:, :beam_size] - ) - - finalized_sents = self.finalize_hypos( - step, - eos_bbsz_idx, - eos_scores, - tokens, - scores, - finalized, - finished, - beam_size, - attn, - src_lengths, - max_len, - ) - num_remaining_sent -= len(finalized_sents) - - assert num_remaining_sent >= 0 - if num_remaining_sent == 0: - break - if self.search.stop_on_max_len and step >= max_len: - break - assert step < max_len, f"{step} < {max_len}" - - # Remove finalized sentences (ones for which {beam_size} - # finished hypotheses have been generated) from the batch. - if len(finalized_sents) > 0: - new_bsz = bsz - len(finalized_sents) - - # construct batch_idxs which holds indices of batches to keep for the next pass - batch_mask = torch.ones( - bsz, dtype=torch.bool, device=cand_indices.device - ) - batch_mask[finalized_sents] = False - # TODO replace `nonzero(as_tuple=False)` after TorchScript supports it - batch_idxs = torch.arange( - bsz, device=cand_indices.device - ).masked_select(batch_mask) - - # Choose the subset of the hypothesized constraints that will continue - self.search.prune_sentences(batch_idxs) - - eos_mask = eos_mask[batch_idxs] - cand_beams = cand_beams[batch_idxs] - bbsz_offsets.resize_(new_bsz, 1) - cand_bbsz_idx = cand_beams.add(bbsz_offsets) - cand_scores = cand_scores[batch_idxs] - cand_indices = cand_indices[batch_idxs] - - if prefix_tokens is not None: - prefix_tokens = prefix_tokens[batch_idxs] - src_lengths = src_lengths[batch_idxs] - cands_to_ignore = cands_to_ignore[batch_idxs] - - scores = scores.view(bsz, -1)[batch_idxs].view(new_bsz * beam_size, -1) - tokens = tokens.view(bsz, -1)[batch_idxs].view(new_bsz * beam_size, -1) - if attn is not None: - attn = attn.view(bsz, -1)[batch_idxs].view( - new_bsz * beam_size, attn.size(1), -1 - ) - bsz = new_bsz - else: - batch_idxs = None - - # Set active_mask so that values > cand_size indicate eos hypos - # and values < cand_size indicate candidate active hypos. - # After, the min values per row are the top candidate active hypos - - # Rewrite the operator since the element wise or is not supported in torchscript. - - eos_mask[:, :beam_size] = ~((~cands_to_ignore) & (~eos_mask[:, :beam_size])) - active_mask = torch.add( - eos_mask.type_as(cand_offsets) * cand_size, - cand_offsets[: eos_mask.size(1)], - ) - - # get the top beam_size active hypotheses, which are just - # the hypos with the smallest values in active_mask. - # {active_hypos} indicates which {beam_size} hypotheses - # from the list of {2 * beam_size} candidates were - # selected. Shapes: (batch size, beam size) - new_cands_to_ignore, active_hypos = torch.topk( - active_mask, k=beam_size, dim=1, largest=False - ) - - # update cands_to_ignore to ignore any finalized hypos. - cands_to_ignore = new_cands_to_ignore.ge(cand_size)[:, :beam_size] - # Make sure there is at least one active item for each sentence in the batch. - assert (~cands_to_ignore).any(dim=1).all() - - # update cands_to_ignore to ignore any finalized hypos - - # {active_bbsz_idx} denotes which beam number is continued for each new hypothesis (a beam - # can be selected more than once). - active_bbsz_idx = torch.gather(cand_bbsz_idx, dim=1, index=active_hypos) - active_scores = torch.gather(cand_scores, dim=1, index=active_hypos) - - active_bbsz_idx = active_bbsz_idx.view(-1) - active_scores = active_scores.view(-1) - - # copy tokens and scores for active hypotheses - - # Set the tokens for each beam (can select the same row more than once) - tokens[:, : step + 1] = torch.index_select( - tokens[:, : step + 1], dim=0, index=active_bbsz_idx - ) - # Select the next token for each of them - tokens.view(bsz, beam_size, -1)[:, :, step + 1] = torch.gather( - cand_indices, dim=1, index=active_hypos - ) - if step > 0: - scores[:, :step] = torch.index_select( - scores[:, :step], dim=0, index=active_bbsz_idx - ) - scores.view(bsz, beam_size, -1)[:, :, step] = torch.gather( - cand_scores, dim=1, index=active_hypos - ) - - # Update constraints based on which candidates were selected for the next beam - self.search.update_constraints(active_hypos) - - # copy attention for active hypotheses - if attn is not None: - attn[:, :, : step + 2] = torch.index_select( - attn[:, :, : step + 2], dim=0, index=active_bbsz_idx - ) - - # reorder incremental state in decoder - reorder_state = active_bbsz_idx - - # sort by score descending - for sent in range(len(finalized)): - scores = torch.tensor( - [float(elem["score"].item()) for elem in finalized[sent]] - ) - _, sorted_scores_indices = torch.sort(scores, descending=True) - finalized[sent] = [finalized[sent][ssi] for ssi in sorted_scores_indices] - finalized[sent] = torch.jit.annotate( - List[Dict[str, Tensor]], finalized[sent] - ) - return finalized - - def _prefix_tokens( - self, step: int, lprobs, scores, tokens, prefix_tokens, beam_size: int - ): - """Handle prefix tokens""" - prefix_toks = prefix_tokens[:, step].unsqueeze(-1).repeat(1, beam_size).view(-1) - prefix_lprobs = lprobs.gather(-1, prefix_toks.unsqueeze(-1)) - prefix_mask = prefix_toks.ne(self.pad) - if self.constraint_trie is None: - lprobs[prefix_mask] = torch.min(prefix_lprobs) - 1 - else: - lprobs[prefix_mask] = -math.inf - lprobs[prefix_mask] = lprobs[prefix_mask].scatter( - -1, prefix_toks[prefix_mask].unsqueeze(-1), prefix_lprobs[prefix_mask] - ) - # if prefix includes eos, then we should make sure tokens and - # scores are the same across all beams - eos_mask = prefix_toks.eq(self.eos) - if eos_mask.any(): - # validate that the first beam matches the prefix - first_beam = tokens[eos_mask].view(-1, beam_size, tokens.size(-1))[ - :, 0, 1 : step + 1 - ] - eos_mask_batch_dim = eos_mask.view(-1, beam_size)[:, 0] - target_prefix = prefix_tokens[eos_mask_batch_dim][:, :step] - assert (first_beam == target_prefix).all() - - # copy tokens, scores and lprobs from the first beam to all beams - tokens = self.replicate_first_beam(tokens, eos_mask_batch_dim, beam_size) - scores = self.replicate_first_beam(scores, eos_mask_batch_dim, beam_size) - lprobs = self.replicate_first_beam(lprobs, eos_mask_batch_dim, beam_size) - return lprobs, tokens, scores - - def replicate_first_beam(self, tensor, mask, beam_size: int): - tensor = tensor.view(-1, beam_size, tensor.size(-1)) - tensor[mask] = tensor[mask][:, :1, :] - return tensor.view(-1, tensor.size(-1)) - - def finalize_hypos( - self, - step: int, - bbsz_idx, - eos_scores, - tokens, - scores, - finalized: List[List[Dict[str, Tensor]]], - finished: List[bool], - beam_size: int, - attn: Optional[Tensor], - src_lengths, - max_len: int, - ): - """Finalize hypothesis, store finalized information in `finalized`, and change `finished` accordingly. - A sentence is finalized when {beam_size} finished items have been collected for it. - - Returns number of sentences (not beam items) being finalized. - These will be removed from the batch and not processed further. - Args: - bbsz_idx (Tensor): - """ - assert bbsz_idx.numel() == eos_scores.numel() - - # clone relevant token and attention tensors. - # tokens is (batch * beam, max_len). So the index_select - # gets the newly EOS rows, then selects cols 1..{step + 2} - tokens_clone = tokens.index_select(0, bbsz_idx)[ - :, 1 : step + 2 - ] # skip the first index, which is EOS - - tokens_clone[:, step] = self.eos - attn_clone = ( - attn.index_select(0, bbsz_idx)[:, :, 1 : step + 2] - if attn is not None - else None - ) - - # compute scores per token position - pos_scores = scores.index_select(0, bbsz_idx)[:, : step + 1] - pos_scores[:, step] = eos_scores - # convert from cumulative to per-position scores - pos_scores[:, 1:] = pos_scores[:, 1:] - pos_scores[:, :-1] - - # normalize sentence-level scores - if self.normalize_scores: - eos_scores /= (step + 1) ** self.len_penalty - - # cum_unfin records which sentences in the batch are finished. - # It helps match indexing between (a) the original sentences - # in the batch and (b) the current, possibly-reduced set of - # sentences. - cum_unfin: List[int] = [] - prev = 0 - for f in finished: - if f: - prev += 1 - else: - cum_unfin.append(prev) - cum_fin_tensor = torch.tensor(cum_unfin, dtype=torch.int).to(bbsz_idx) - - unfin_idx = bbsz_idx // beam_size - sent = unfin_idx + torch.index_select(cum_fin_tensor, 0, unfin_idx) - - # Create a set of "{sent}{unfin_idx}", where - # "unfin_idx" is the index in the current (possibly reduced) - # list of sentences, and "sent" is the index in the original, - # unreduced batch - # For every finished beam item - # sentence index in the current (possibly reduced) batch - seen = (sent << 32) + unfin_idx - unique_seen: List[int] = torch.unique(seen).tolist() - - if self.match_source_len: - condition = step > torch.index_select(src_lengths, 0, unfin_idx) - eos_scores = torch.where(condition, torch.tensor(-math.inf), eos_scores) - sent_list: List[int] = sent.tolist() - for i in range(bbsz_idx.size()[0]): - # An input sentence (among those in a batch) is finished when - # beam_size hypotheses have been collected for it - if len(finalized[sent_list[i]]) < beam_size: - if attn_clone is not None: - # remove padding tokens from attn scores - hypo_attn = attn_clone[i] - else: - hypo_attn = torch.empty(0) - - finalized[sent_list[i]].append( - { - "tokens": tokens_clone[i], - "score": eos_scores[i], - "attention": hypo_attn, # src_len x tgt_len - "alignment": torch.empty(0), - "positional_scores": pos_scores[i], - } - ) - - newly_finished: List[int] = [] - for unique_s in unique_seen: - # check termination conditions for this sentence - unique_sent: int = unique_s >> 32 - unique_unfin_idx: int = unique_s - (unique_sent << 32) - - if not finished[unique_sent] and self.is_finished( - step, unique_unfin_idx, max_len, len(finalized[unique_sent]), beam_size - ): - finished[unique_sent] = True - newly_finished.append(unique_unfin_idx) - - return newly_finished - - def is_finished( - self, - step: int, - unfin_idx: int, - max_len: int, - finalized_sent_len: int, - beam_size: int, - ): - """ - Check whether decoding for a sentence is finished, which - occurs when the list of finalized sentences has reached the - beam size, or when we reach the maximum length. - """ - assert finalized_sent_len <= beam_size - if finalized_sent_len == beam_size or step == max_len: - return True - return False - - -class EnsembleModel(nn.Module): - """A wrapper around an ensemble of models.""" - - def __init__(self, models): - super().__init__() - self.models_size = len(models) - # method '__len__' is not supported in ModuleList for torch script - self.single_model = models[0] - self.models = nn.ModuleList(models) - - self.has_incremental: bool = False - if all( - hasattr(m, "decoder") and isinstance(m.decoder, FairseqIncrementalDecoder) - for m in models - ): - self.has_incremental = True - - def forward(self): - pass - - def has_encoder(self): - return hasattr(self.single_model, "encoder") - - def has_incremental_states(self): - return self.has_incremental - - def max_decoder_positions(self): - return min([m.max_decoder_positions() for m in self.models if hasattr(m, "max_decoder_positions")] + [sys.maxsize]) - - @torch.jit.export - def forward_encoder(self, net_input: Dict[str, Tensor]): - if not self.has_encoder(): - return None - return [model.encoder.forward_torchscript(net_input) for model in self.models] - - @torch.jit.export - def forward_decoder( - self, - tokens, - encoder_outs: List[Dict[str, List[Tensor]]], - incremental_states: List[Dict[str, Dict[str, Optional[Tensor]]]], - temperature: float = 1.0, - constraint_trie=None, - constraint_start=None, - constraint_end=None, - gen_code=False, - zero_shot=False, - prefix_tokens=None - ): - log_probs = [] - avg_attn: Optional[Tensor] = None - encoder_out: Optional[Dict[str, List[Tensor]]] = None - code_mask = (tokens.new_ones(tokens.size(0))*gen_code).bool() - for i, model in enumerate(self.models): - if self.has_encoder(): - encoder_out = encoder_outs[i] - # decode each model - if self.has_incremental_states(): - decoder_out = model.decoder.forward( - tokens, - code_masks=code_mask, - encoder_out=encoder_out, - incremental_state=incremental_states[i], - ) - else: - if hasattr(model, "decoder"): - decoder_out = model.decoder.forward(tokens, code_masks=code_mask, encoder_out=encoder_out) - else: - decoder_out = model.forward(tokens) - - attn: Optional[Tensor] = None - decoder_len = len(decoder_out) - if decoder_len > 1 and decoder_out[1] is not None: - if isinstance(decoder_out[1], Tensor): - attn = decoder_out[1] - else: - attn_holder = decoder_out[1]["attn"] - if isinstance(attn_holder, Tensor): - attn = attn_holder - elif attn_holder is not None: - attn = attn_holder[0] - if attn is not None: - attn = attn[:, -1, :] - - decoder_out_tuple = ( - decoder_out[0][:, -1:, :].div_(temperature), - None if decoder_len <= 1 else decoder_out[1], - ) - - beam_size = decoder_out_tuple[0].size(0) // prefix_tokens.size(0) if prefix_tokens is not None else 0 - if constraint_trie is not None and not zero_shot: - assert constraint_start is None and constraint_end is None - constraint_masks = decoder_out_tuple[0].new_zeros(decoder_out_tuple[0].size()).bool() - constraint_prefix_tokens = tokens.tolist() - for token_index, constraint_prefix_token in enumerate(constraint_prefix_tokens): - prefix_len = prefix_tokens[token_index // beam_size].ne(1).sum().item() if prefix_tokens is not None else 0 - if len(constraint_prefix_token) > prefix_len: - constraint_prefix_token = [0] + constraint_prefix_token[prefix_len+1:] - constraint_nodes = constraint_trie.get_next_layer(constraint_prefix_token) - constraint_masks[token_index][:, constraint_nodes] = True - else: - constraint_masks[token_index] = True - decoder_out_tuple[0].masked_fill_(~constraint_masks, -math.inf) - if constraint_start is not None and constraint_end is not None and not zero_shot: - assert constraint_trie is None - decoder_out_tuple[0][:, :, 4:constraint_start] = -math.inf - decoder_out_tuple[0][:, :, constraint_end:] = -math.inf - - probs = model.get_normalized_probs( - decoder_out_tuple, log_probs=True, sample=None - ) - if constraint_trie is not None and zero_shot: - assert constraint_start is None and constraint_end is None - constraint_masks = decoder_out_tuple[0].new_zeros(decoder_out_tuple[0].size()).bool() - constraint_prefix_tokens = tokens.tolist() - for token_index, constraint_prefix_token in enumerate(constraint_prefix_tokens): - constraint_nodes = constraint_trie.get_next_layer(constraint_prefix_token) - constraint_masks[token_index][:, constraint_nodes] = True - probs.masked_fill_(~constraint_masks, -math.inf) - if constraint_start is not None and constraint_end is not None and zero_shot: - assert constraint_trie is None - probs[:, :, 4:constraint_start] = -math.inf - probs[:, :, constraint_end:] = -math.inf - probs = probs[:, -1, :] - if self.models_size == 1: - return probs, attn - - log_probs.append(probs) - if attn is not None: - if avg_attn is None: - avg_attn = attn - else: - avg_attn.add_(attn) - - avg_probs = torch.logsumexp(torch.stack(log_probs, dim=0), dim=0) - math.log( - self.models_size - ) - - if avg_attn is not None: - avg_attn.div_(self.models_size) - return avg_probs, avg_attn - - @torch.jit.export - def reorder_encoder_out( - self, encoder_outs: Optional[List[Dict[str, List[Tensor]]]], new_order - ): - """ - Reorder encoder output according to *new_order*. - - Args: - encoder_out: output from the ``forward()`` method - new_order (LongTensor): desired order - - Returns: - *encoder_out* rearranged according to *new_order* - """ - new_outs: List[Dict[str, List[Tensor]]] = [] - if not self.has_encoder(): - return new_outs - for i, model in enumerate(self.models): - assert encoder_outs is not None - new_outs.append( - model.encoder.reorder_encoder_out(encoder_outs[i], new_order) - ) - return new_outs - - @torch.jit.export - def reorder_incremental_state( - self, - incremental_states: List[Dict[str, Dict[str, Optional[Tensor]]]], - new_order, - ): - if not self.has_incremental_states(): - return - for i, model in enumerate(self.models): - model.decoder.reorder_incremental_state_scripting( - incremental_states[i], new_order - ) - - -class SequenceGeneratorWithAlignment(SequenceGenerator): - def __init__( - self, models, tgt_dict, left_pad_target=False, print_alignment="hard", **kwargs - ): - """Generates translations of a given source sentence. - - Produces alignments following "Jointly Learning to Align and - Translate with Transformer Models" (Garg et al., EMNLP 2019). - - Args: - left_pad_target (bool, optional): Whether or not the - hypothesis should be left padded or not when they are - teacher forced for generating alignments. - """ - super().__init__(EnsembleModelWithAlignment(models), tgt_dict, **kwargs) - self.left_pad_target = left_pad_target - - if print_alignment == "hard": - self.extract_alignment = utils.extract_hard_alignment - elif print_alignment == "soft": - self.extract_alignment = utils.extract_soft_alignment - - @torch.no_grad() - def generate(self, models, sample, **kwargs): - finalized = super()._generate(sample, **kwargs) - - src_tokens = sample["net_input"]["src_tokens"] - bsz = src_tokens.shape[0] - beam_size = self.beam_size - ( - src_tokens, - src_lengths, - prev_output_tokens, - tgt_tokens, - ) = self._prepare_batch_for_alignment(sample, finalized) - if any(getattr(m, "full_context_alignment", False) for m in self.model.models): - attn = self.model.forward_align(src_tokens, src_lengths, prev_output_tokens) - else: - attn = [ - finalized[i // beam_size][i % beam_size]["attention"].transpose(1, 0) - for i in range(bsz * beam_size) - ] - - if src_tokens.device != "cpu": - src_tokens = src_tokens.to("cpu") - tgt_tokens = tgt_tokens.to("cpu") - attn = [i.to("cpu") for i in attn] - - # Process the attn matrix to extract hard alignments. - for i in range(bsz * beam_size): - alignment = self.extract_alignment( - attn[i], src_tokens[i], tgt_tokens[i], self.pad, self.eos - ) - finalized[i // beam_size][i % beam_size]["alignment"] = alignment - return finalized - - def _prepare_batch_for_alignment(self, sample, hypothesis): - src_tokens = sample["net_input"]["src_tokens"] - bsz = src_tokens.shape[0] - src_tokens = ( - src_tokens[:, None, :] - .expand(-1, self.beam_size, -1) - .contiguous() - .view(bsz * self.beam_size, -1) - ) - src_lengths = sample["net_input"]["src_lengths"] - src_lengths = ( - src_lengths[:, None] - .expand(-1, self.beam_size) - .contiguous() - .view(bsz * self.beam_size) - ) - prev_output_tokens = data_utils.collate_tokens( - [beam["tokens"] for example in hypothesis for beam in example], - self.pad, - self.eos, - self.left_pad_target, - move_eos_to_beginning=True, - ) - tgt_tokens = data_utils.collate_tokens( - [beam["tokens"] for example in hypothesis for beam in example], - self.pad, - self.eos, - self.left_pad_target, - move_eos_to_beginning=False, - ) - return src_tokens, src_lengths, prev_output_tokens, tgt_tokens - - -class EnsembleModelWithAlignment(EnsembleModel): - """A wrapper around an ensemble of models.""" - - def __init__(self, models): - super().__init__(models) - - def forward_align(self, src_tokens, src_lengths, prev_output_tokens): - avg_attn = None - for model in self.models: - decoder_out = model(src_tokens, src_lengths, prev_output_tokens) - attn = decoder_out[1]["attn"][0] - if avg_attn is None: - avg_attn = attn - else: - avg_attn.add_(attn) - if len(self.models) > 1: - avg_attn.div_(len(self.models)) - return avg_attn diff --git a/spaces/Harveenchadha/Vakyansh-Hindi-TTS/ttsv/src/glow_tts/texttospeech.py b/spaces/Harveenchadha/Vakyansh-Hindi-TTS/ttsv/src/glow_tts/texttospeech.py deleted file mode 100644 index 3c88925cac0c56e52d35acfa5d6d7e5ce51329c7..0000000000000000000000000000000000000000 --- a/spaces/Harveenchadha/Vakyansh-Hindi-TTS/ttsv/src/glow_tts/texttospeech.py +++ /dev/null @@ -1,146 +0,0 @@ -from __future__ import absolute_import, division, print_function, unicode_literals -from typing import Tuple - -from scipy.io.wavfile import write -from hifi.env import AttrDict -from hifi.models import Generator - -import numpy as np -import os -import json - -import torch -from text import text_to_sequence -import commons -import models -import utils -import sys -from argparse import ArgumentParser - - -def check_directory(dir): - if not os.path.exists(dir): - sys.exit("Error: {} directory does not exist".format(dir)) - - -class TextToMel: - def __init__(self, glow_model_dir, device="cuda"): - self.glow_model_dir = glow_model_dir - check_directory(self.glow_model_dir) - self.device = device - self.hps, self.glow_tts_model = self.load_glow_tts() - pass - - def load_glow_tts(self): - hps = utils.get_hparams_from_dir(self.glow_model_dir) - checkpoint_path = utils.latest_checkpoint_path(self.glow_model_dir) - symbols = list(hps.data.punc) + list(hps.data.chars) - glow_tts_model = models.FlowGenerator( - len(symbols) + getattr(hps.data, "add_blank", False), - out_channels=hps.data.n_mel_channels, - **hps.model - ) # .to(self.device) - - if self.device == "cuda": - glow_tts_model.to("cuda") - - utils.load_checkpoint(checkpoint_path, glow_tts_model) - glow_tts_model.decoder.store_inverse() - _ = glow_tts_model.eval() - - return hps, glow_tts_model - - def generate_mel(self, text, noise_scale=0.667, length_scale=1.0): - symbols = list(self.hps.data.punc) + list(self.hps.data.chars) - cleaner = self.hps.data.text_cleaners - if getattr(self.hps.data, "add_blank", False): - text_norm = text_to_sequence(text, symbols, cleaner) - text_norm = commons.intersperse(text_norm, len(symbols)) - else: # If not using "add_blank" option during training, adding spaces at the beginning and the end of utterance improves quality - text = " " + text.strip() + " " - text_norm = text_to_sequence(text, symbols, cleaner) - - sequence = np.array(text_norm)[None, :] - - if self.device == "cuda": - x_tst = torch.autograd.Variable(torch.from_numpy(sequence)).cuda().long() - x_tst_lengths = torch.tensor([x_tst.shape[1]]).cuda() - else: - x_tst = torch.autograd.Variable(torch.from_numpy(sequence)).long() - x_tst_lengths = torch.tensor([x_tst.shape[1]]) - - with torch.no_grad(): - (y_gen_tst, *_), *_, (attn_gen, *_) = self.glow_tts_model( - x_tst, - x_tst_lengths, - gen=True, - noise_scale=noise_scale, - length_scale=length_scale, - ) - - return y_gen_tst - #return y_gen_tst.cpu().detach().numpy() - - -class MelToWav: - def __init__(self, hifi_model_dir, device="cuda"): - self.hifi_model_dir = hifi_model_dir - check_directory(self.hifi_model_dir) - self.device = device - self.h, self.hifi_gan_generator = self.load_hifi_gan() - pass - - def load_hifi_gan(self): - checkpoint_path = utils.latest_checkpoint_path(self.hifi_model_dir, regex="g_*") - config_file = os.path.join(self.hifi_model_dir, "config.json") - data = open(config_file).read() - json_config = json.loads(data) - h = AttrDict(json_config) - torch.manual_seed(h.seed) - - generator = Generator(h).to(self.device) - - assert os.path.isfile(checkpoint_path) - print("Loading '{}'".format(checkpoint_path)) - state_dict_g = torch.load(checkpoint_path, map_location=self.device) - print("Complete.") - - generator.load_state_dict(state_dict_g["generator"]) - - generator.eval() - generator.remove_weight_norm() - - return h, generator - - def generate_wav(self, mel): - #mel = torch.FloatTensor(mel).to(self.device) - - y_g_hat = self.hifi_gan_generator(mel.to(self.device)) # passing through vocoder - audio = y_g_hat.squeeze() - audio = audio * 32768.0 - audio = audio.cpu().detach().numpy().astype("int16") - - return audio, self.h.sampling_rate - - - - - -if __name__ == "__main__": - - parser = ArgumentParser() - parser.add_argument("-m", "--model", required=True, type=str) - parser.add_argument("-g", "--gan", required=True, type=str) - parser.add_argument("-d", "--device", type=str, default="cpu") - parser.add_argument("-t", "--text", type=str, required=True) - parser.add_argument("-w", "--wav", type=str, required=True) - - args = parser.parse_args() - - text_to_mel = TextToMel(glow_model_dir=args.model, device=args.device) - mel_to_wav = MelToWav(hifi_model_dir=args.gan, device=args.device) - - mel = text_to_mel.generate_mel(args.text) - audio, sr = mel_to_wav.generate_wav(mel) - - write(filename=args.wav, rate=sr, data=audio) \ No newline at end of file diff --git a/spaces/Harveenchadha/Vakyansh-Tamil-TTS/ttsv/src/hifi_gan/meldataset.py b/spaces/Harveenchadha/Vakyansh-Tamil-TTS/ttsv/src/hifi_gan/meldataset.py deleted file mode 100644 index 8c6ca9ec8a6cc6408a77492e795bffef7f86b611..0000000000000000000000000000000000000000 --- a/spaces/Harveenchadha/Vakyansh-Tamil-TTS/ttsv/src/hifi_gan/meldataset.py +++ /dev/null @@ -1,233 +0,0 @@ -import math -import os -import random -import torch -import torch.utils.data -import numpy as np -from librosa.util import normalize -from scipy.io.wavfile import read -from librosa.filters import mel as librosa_mel_fn - -MAX_WAV_VALUE = 32768.0 - - -def load_wav(full_path): - sampling_rate, data = read(full_path) - return data, sampling_rate - - -def dynamic_range_compression(x, C=1, clip_val=1e-5): - return np.log(np.clip(x, a_min=clip_val, a_max=None) * C) - - -def dynamic_range_decompression(x, C=1): - return np.exp(x) / C - - -def dynamic_range_compression_torch(x, C=1, clip_val=1e-5): - return torch.log(torch.clamp(x, min=clip_val) * C) - - -def dynamic_range_decompression_torch(x, C=1): - return torch.exp(x) / C - - -def spectral_normalize_torch(magnitudes): - output = dynamic_range_compression_torch(magnitudes) - return output - - -def spectral_de_normalize_torch(magnitudes): - output = dynamic_range_decompression_torch(magnitudes) - return output - - -mel_basis = {} -hann_window = {} - - -def mel_spectrogram( - y, n_fft, num_mels, sampling_rate, hop_size, win_size, fmin, fmax, center=False -): - if torch.min(y) < -1.0: - print("min value is ", torch.min(y)) - if torch.max(y) > 1.0: - print("max value is ", torch.max(y)) - - global mel_basis, hann_window - if fmax not in mel_basis: - mel = librosa_mel_fn(sampling_rate, n_fft, num_mels, fmin, fmax) - mel_basis[str(fmax) + "_" + str(y.device)] = ( - torch.from_numpy(mel).float().to(y.device) - ) - hann_window[str(y.device)] = torch.hann_window(win_size).to(y.device) - - y = torch.nn.functional.pad( - y.unsqueeze(1), - (int((n_fft - hop_size) / 2), int((n_fft - hop_size) / 2)), - mode="reflect", - ) - y = y.squeeze(1) - - spec = torch.stft( - y, - n_fft, - hop_length=hop_size, - win_length=win_size, - window=hann_window[str(y.device)], - center=center, - pad_mode="reflect", - normalized=False, - onesided=True, - ) - - spec = torch.sqrt(spec.pow(2).sum(-1) + (1e-9)) - - spec = torch.matmul(mel_basis[str(fmax) + "_" + str(y.device)], spec) - spec = spectral_normalize_torch(spec) - - return spec - - -def get_dataset_filelist(a): - with open(a.input_training_file, "r", encoding="utf-8") as fi: - training_files = [x for x in fi.read().split("\n") if len(x) > 0] - - with open(a.input_validation_file, "r", encoding="utf-8") as fi: - validation_files = [x for x in fi.read().split("\n") if len(x) > 0] - return training_files, validation_files - - -class MelDataset(torch.utils.data.Dataset): - def __init__( - self, - training_files, - segment_size, - n_fft, - num_mels, - hop_size, - win_size, - sampling_rate, - fmin, - fmax, - split=True, - shuffle=True, - n_cache_reuse=1, - device=None, - fmax_loss=None, - fine_tuning=False, - base_mels_path=None, - ): - self.audio_files = training_files - random.seed(1234) - if shuffle: - random.shuffle(self.audio_files) - self.segment_size = segment_size - self.sampling_rate = sampling_rate - self.split = split - self.n_fft = n_fft - self.num_mels = num_mels - self.hop_size = hop_size - self.win_size = win_size - self.fmin = fmin - self.fmax = fmax - self.fmax_loss = fmax_loss - self.cached_wav = None - self.n_cache_reuse = n_cache_reuse - self._cache_ref_count = 0 - self.device = device - self.fine_tuning = fine_tuning - self.base_mels_path = base_mels_path - - def __getitem__(self, index): - filename = self.audio_files[index] - if self._cache_ref_count == 0: - audio, sampling_rate = load_wav(filename) - audio = audio / MAX_WAV_VALUE - if not self.fine_tuning: - audio = normalize(audio) * 0.95 - self.cached_wav = audio - if sampling_rate != self.sampling_rate: - raise ValueError( - "{} SR doesn't match target {} SR".format( - sampling_rate, self.sampling_rate - ) - ) - self._cache_ref_count = self.n_cache_reuse - else: - audio = self.cached_wav - self._cache_ref_count -= 1 - - audio = torch.FloatTensor(audio) - audio = audio.unsqueeze(0) - - if not self.fine_tuning: - if self.split: - if audio.size(1) >= self.segment_size: - max_audio_start = audio.size(1) - self.segment_size - audio_start = random.randint(0, max_audio_start) - audio = audio[:, audio_start : audio_start + self.segment_size] - else: - audio = torch.nn.functional.pad( - audio, (0, self.segment_size - audio.size(1)), "constant" - ) - - mel = mel_spectrogram( - audio, - self.n_fft, - self.num_mels, - self.sampling_rate, - self.hop_size, - self.win_size, - self.fmin, - self.fmax, - center=False, - ) - else: - mel = np.load( - os.path.join( - self.base_mels_path, - os.path.splitext(os.path.split(filename)[-1])[0] + ".npy", - ) - ) - mel = torch.from_numpy(mel) - - if len(mel.shape) < 3: - mel = mel.unsqueeze(0) - - if self.split: - frames_per_seg = math.ceil(self.segment_size / self.hop_size) - - if audio.size(1) >= self.segment_size: - mel_start = random.randint(0, mel.size(2) - frames_per_seg - 1) - mel = mel[:, :, mel_start : mel_start + frames_per_seg] - audio = audio[ - :, - mel_start - * self.hop_size : (mel_start + frames_per_seg) - * self.hop_size, - ] - else: - mel = torch.nn.functional.pad( - mel, (0, frames_per_seg - mel.size(2)), "constant" - ) - audio = torch.nn.functional.pad( - audio, (0, self.segment_size - audio.size(1)), "constant" - ) - - mel_loss = mel_spectrogram( - audio, - self.n_fft, - self.num_mels, - self.sampling_rate, - self.hop_size, - self.win_size, - self.fmin, - self.fmax_loss, - center=False, - ) - - return (mel.squeeze(), audio.squeeze(0), filename, mel_loss.squeeze()) - - def __len__(self): - return len(self.audio_files) diff --git a/spaces/Harveenchadha/en_to_indic_translation/legacy/run_joint_inference.sh b/spaces/Harveenchadha/en_to_indic_translation/legacy/run_joint_inference.sh deleted file mode 100644 index bf4668c9ecb6b1a1ef9b9b7871c6ee22d7865c0b..0000000000000000000000000000000000000000 --- a/spaces/Harveenchadha/en_to_indic_translation/legacy/run_joint_inference.sh +++ /dev/null @@ -1,74 +0,0 @@ -src_lang=${1:-en} -tgt_lang=${2:-indic} -bucket_path=${3:-gs://ai4b-anuvaad-nmt/models/transformer-4x/indictrans-${src_lang}-${tgt_lang}} - -mkdir -p ../baselines -expdir=../baselines/baselines-${src_lang}-${tgt_lang} - -if [[ -d $expdir ]] -then - echo "$expdir exists on your filesystem." -else - cd ../baselines - mkdir -p baselines-${src_lang}-${tgt_lang}/model - mkdir -p baselines-${src_lang}-${tgt_lang}/final_bin - cd baselines-${src_lang}-${tgt_lang}/model - gsutil -m cp $bucket_path/model/checkpoint_best.pt . - cd .. - gsutil -m cp $bucket_path/vocab . - gsutil -m cp $bucket_path/final_bin/dict.* final_bin - cd ../indicTrans -fi - - - - - -if [ $src_lang == 'hi' ] || [ $tgt_lang == 'hi' ]; then - TEST_SETS=( wmt-news wat2021-devtest wat2020-devtest anuvaad-legal tico19 sap-documentation-benchmark all) -elif [ $src_lang == 'ta' ] || [ $tgt_lang == 'ta' ]; then - TEST_SETS=( wmt-news wat2021-devtest wat2020-devtest anuvaad-legal tico19 all) -elif [ $src_lang == 'bn' ] || [ $tgt_lang == 'bn' ]; then - TEST_SETS=( wat2021-devtest wat2020-devtest anuvaad-legal tico19 all) -elif [ $src_lang == 'gu' ] || [ $tgt_lang == 'gu' ]; then - TEST_SETS=( wmt-news wat2021-devtest wat2020-devtest all) -elif [ $src_lang == 'as' ] || [ $tgt_lang == 'as' ]; then - TEST_SETS=( all ) -elif [ $src_lang == 'kn' ] || [ $tgt_lang == 'kn' ]; then - TEST_SETS=( wat2021-devtest anuvaad-legal all) -elif [ $src_lang == 'ml' ] || [ $tgt_lang == 'ml' ]; then - TEST_SETS=( wat2021-devtest wat2020-devtest anuvaad-legal all) -elif [ $src_lang == 'mr' ] || [ $tgt_lang == 'mr' ]; then - TEST_SETS=( wat2021-devtest wat2020-devtest all) -elif [ $src_lang == 'or' ] || [ $tgt_lang == 'or' ]; then - TEST_SETS=( all ) -elif [ $src_lang == 'pa' ] || [ $tgt_lang == 'pa' ]; then - TEST_SETS=( all ) -elif [ $src_lang == 'te' ] || [ $tgt_lang == 'te' ]; then - TEST_SETS=( wat2021-devtest wat2020-devtest anuvaad-legal all ) -fi - -if [ $src_lang == 'en' ]; then - indic_lang=$tgt_lang -else - indic_lang=$src_lang -fi - - -for tset in ${TEST_SETS[@]};do - echo $tset $src_lang $tgt_lang - if [ $tset == 'wat2021-devtest' ]; then - SRC_FILE=${expdir}/devtest/$tset/test.$src_lang - REF_FILE=${expdir}/devtest/$tset/test.$tgt_lang - else - SRC_FILE=${expdir}/devtest/$tset/en-${indic_lang}/test.$src_lang - REF_FILE=${expdir}/devtest/$tset/en-${indic_lang}/test.$tgt_lang - fi - RESULTS_DIR=${expdir}/results/$tset - - mkdir -p $RESULTS_DIR - - bash joint_translate.sh $SRC_FILE $RESULTS_DIR/${src_lang}-${tgt_lang} $src_lang $tgt_lang $expdir $REF_FILE - # for newline between different outputs - echo -done diff --git a/spaces/Hexamind/GDOC/src/model/doc.py b/spaces/Hexamind/GDOC/src/model/doc.py deleted file mode 100644 index 14a938eff9c5065a5a027bb1d6f55645a917d885..0000000000000000000000000000000000000000 --- a/spaces/Hexamind/GDOC/src/model/doc.py +++ /dev/null @@ -1,54 +0,0 @@ -import docx - -from src.model.container import Container -from src.model.paragraph import Paragraph - - -class Doc: - - def __init__(self, path='', id_=None): - - self.xdoc = docx.Document(path) - self.title = path.split('/')[-1] - self.id_ = id(self) - self.path = path - paragraphs = [Paragraph(xp, self.id_, i) for (i, xp) in enumerate(self.xdoc.paragraphs)] - self.container = Container(paragraphs, father=self, level=0) - self.blocks = self.get_blocks() - self.tasks = [c.get_task(self.container.one_liner) for c in self.container.containers if c.task] - - @property - def structure(self): - - return self.container.structure - - def get_blocks(self): - - def from_list_to_str(index_list): - index_str = str(index_list[0]) - for el in index_list[1:]: - index_str += '.' + str(el) - return index_str - - blocks = self.container.blocks - for block in blocks: - block.doc = self.title - if block.level == 0: - blocks.remove(block) - block.index = from_list_to_str(block.index) - return blocks -""" - current_level = len(current_index) - if 0 < block.level: - if block.level == current_level: - current_index[-1] += 1 - elif current_level < block.level: - current_index.append(1) - elif block.level < current_level: - current_index = current_index[:block.level] - current_index[-1] += 1 - block.index = from_list_to_str(current_index) - else: - block.index = "0" -""" - diff --git a/spaces/Jackflack09/diffuse-custom/diffusers/utils/dummy_flax_objects.py b/spaces/Jackflack09/diffuse-custom/diffusers/utils/dummy_flax_objects.py deleted file mode 100644 index 8e308bb41bea681993049d8a5ec3ff22987d5d14..0000000000000000000000000000000000000000 --- a/spaces/Jackflack09/diffuse-custom/diffusers/utils/dummy_flax_objects.py +++ /dev/null @@ -1,184 +0,0 @@ -# This file is autogenerated by the command `make fix-copies`, do not edit. -# flake8: noqa - -from ..utils import DummyObject, requires_backends - - -class FlaxModelMixin(metaclass=DummyObject): - _backends = ["flax"] - - def __init__(self, *args, **kwargs): - requires_backends(self, ["flax"]) - - @classmethod - def from_config(cls, *args, **kwargs): - requires_backends(cls, ["flax"]) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - requires_backends(cls, ["flax"]) - - -class FlaxUNet2DConditionModel(metaclass=DummyObject): - _backends = ["flax"] - - def __init__(self, *args, **kwargs): - requires_backends(self, ["flax"]) - - @classmethod - def from_config(cls, *args, **kwargs): - requires_backends(cls, ["flax"]) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - requires_backends(cls, ["flax"]) - - -class FlaxAutoencoderKL(metaclass=DummyObject): - _backends = ["flax"] - - def __init__(self, *args, **kwargs): - requires_backends(self, ["flax"]) - - @classmethod - def from_config(cls, *args, **kwargs): - requires_backends(cls, ["flax"]) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - requires_backends(cls, ["flax"]) - - -class FlaxDiffusionPipeline(metaclass=DummyObject): - _backends = ["flax"] - - def __init__(self, *args, **kwargs): - requires_backends(self, ["flax"]) - - @classmethod - def from_config(cls, *args, **kwargs): - requires_backends(cls, ["flax"]) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - requires_backends(cls, ["flax"]) - - -class FlaxDDIMScheduler(metaclass=DummyObject): - _backends = ["flax"] - - def __init__(self, *args, **kwargs): - requires_backends(self, ["flax"]) - - @classmethod - def from_config(cls, *args, **kwargs): - requires_backends(cls, ["flax"]) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - requires_backends(cls, ["flax"]) - - -class FlaxDDPMScheduler(metaclass=DummyObject): - _backends = ["flax"] - - def __init__(self, *args, **kwargs): - requires_backends(self, ["flax"]) - - @classmethod - def from_config(cls, *args, **kwargs): - requires_backends(cls, ["flax"]) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - requires_backends(cls, ["flax"]) - - -class FlaxDPMSolverMultistepScheduler(metaclass=DummyObject): - _backends = ["flax"] - - def __init__(self, *args, **kwargs): - requires_backends(self, ["flax"]) - - @classmethod - def from_config(cls, *args, **kwargs): - requires_backends(cls, ["flax"]) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - requires_backends(cls, ["flax"]) - - -class FlaxKarrasVeScheduler(metaclass=DummyObject): - _backends = ["flax"] - - def __init__(self, *args, **kwargs): - requires_backends(self, ["flax"]) - - @classmethod - def from_config(cls, *args, **kwargs): - requires_backends(cls, ["flax"]) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - requires_backends(cls, ["flax"]) - - -class FlaxLMSDiscreteScheduler(metaclass=DummyObject): - _backends = ["flax"] - - def __init__(self, *args, **kwargs): - requires_backends(self, ["flax"]) - - @classmethod - def from_config(cls, *args, **kwargs): - requires_backends(cls, ["flax"]) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - requires_backends(cls, ["flax"]) - - -class FlaxPNDMScheduler(metaclass=DummyObject): - _backends = ["flax"] - - def __init__(self, *args, **kwargs): - requires_backends(self, ["flax"]) - - @classmethod - def from_config(cls, *args, **kwargs): - requires_backends(cls, ["flax"]) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - requires_backends(cls, ["flax"]) - - -class FlaxSchedulerMixin(metaclass=DummyObject): - _backends = ["flax"] - - def __init__(self, *args, **kwargs): - requires_backends(self, ["flax"]) - - @classmethod - def from_config(cls, *args, **kwargs): - requires_backends(cls, ["flax"]) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - requires_backends(cls, ["flax"]) - - -class FlaxScoreSdeVeScheduler(metaclass=DummyObject): - _backends = ["flax"] - - def __init__(self, *args, **kwargs): - requires_backends(self, ["flax"]) - - @classmethod - def from_config(cls, *args, **kwargs): - requires_backends(cls, ["flax"]) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - requires_backends(cls, ["flax"]) diff --git a/spaces/Jamkonams/AutoGPT/tests/context.py b/spaces/Jamkonams/AutoGPT/tests/context.py deleted file mode 100644 index cef969db69ab189109b935bba9ed06696cf5337a..0000000000000000000000000000000000000000 --- a/spaces/Jamkonams/AutoGPT/tests/context.py +++ /dev/null @@ -1,6 +0,0 @@ -import os -import sys - -sys.path.insert( - 0, os.path.abspath(os.path.join(os.path.dirname(__file__), "../scripts")) -) diff --git a/spaces/JohnSmith9982/ChuanhuChatGPT/modules/base_model.py b/spaces/JohnSmith9982/ChuanhuChatGPT/modules/base_model.py deleted file mode 100644 index 2b55623f6b0989f60d818be6e0e77f5948484b82..0000000000000000000000000000000000000000 --- a/spaces/JohnSmith9982/ChuanhuChatGPT/modules/base_model.py +++ /dev/null @@ -1,561 +0,0 @@ -from __future__ import annotations -from typing import TYPE_CHECKING, List - -import logging -import json -import commentjson as cjson -import os -import sys -import requests -import urllib3 -import traceback - -from tqdm import tqdm -import colorama -from duckduckgo_search import ddg -import asyncio -import aiohttp -from enum import Enum - -from .presets import * -from .llama_func import * -from .utils import * -from . import shared -from .config import retrieve_proxy - - -class ModelType(Enum): - Unknown = -1 - OpenAI = 0 - ChatGLM = 1 - LLaMA = 2 - XMChat = 3 - - @classmethod - def get_type(cls, model_name: str): - model_type = None - model_name_lower = model_name.lower() - if "gpt" in model_name_lower: - model_type = ModelType.OpenAI - elif "chatglm" in model_name_lower: - model_type = ModelType.ChatGLM - elif "llama" in model_name_lower or "alpaca" in model_name_lower: - model_type = ModelType.LLaMA - elif "xmchat" in model_name_lower: - model_type = ModelType.XMChat - else: - model_type = ModelType.Unknown - return model_type - - -class BaseLLMModel: - def __init__( - self, - model_name, - system_prompt="", - temperature=1.0, - top_p=1.0, - n_choices=1, - stop=None, - max_generation_token=None, - presence_penalty=0, - frequency_penalty=0, - logit_bias=None, - user="", - ) -> None: - self.history = [] - self.all_token_counts = [] - self.model_name = model_name - self.model_type = ModelType.get_type(model_name) - try: - self.token_upper_limit = MODEL_TOKEN_LIMIT[model_name] - except KeyError: - self.token_upper_limit = DEFAULT_TOKEN_LIMIT - self.interrupted = False - self.system_prompt = system_prompt - self.api_key = None - self.need_api_key = False - self.single_turn = False - - self.temperature = temperature - self.top_p = top_p - self.n_choices = n_choices - self.stop_sequence = stop - self.max_generation_token = None - self.presence_penalty = presence_penalty - self.frequency_penalty = frequency_penalty - self.logit_bias = logit_bias - self.user_identifier = user - - def get_answer_stream_iter(self): - """stream predict, need to be implemented - conversations are stored in self.history, with the most recent question, in OpenAI format - should return a generator, each time give the next word (str) in the answer - """ - logging.warning("stream predict not implemented, using at once predict instead") - response, _ = self.get_answer_at_once() - yield response - - def get_answer_at_once(self): - """predict at once, need to be implemented - conversations are stored in self.history, with the most recent question, in OpenAI format - Should return: - the answer (str) - total token count (int) - """ - logging.warning("at once predict not implemented, using stream predict instead") - response_iter = self.get_answer_stream_iter() - count = 0 - for response in response_iter: - count += 1 - return response, sum(self.all_token_counts) + count - - def billing_info(self): - """get billing infomation, inplement if needed""" - logging.warning("billing info not implemented, using default") - return BILLING_NOT_APPLICABLE_MSG - - def count_token(self, user_input): - """get token count from input, implement if needed""" - logging.warning("token count not implemented, using default") - return len(user_input) - - def stream_next_chatbot(self, inputs, chatbot, fake_input=None, display_append=""): - def get_return_value(): - return chatbot, status_text - - status_text = i18n("开始实时传输回答……") - if fake_input: - chatbot.append((fake_input, "")) - else: - chatbot.append((inputs, "")) - - user_token_count = self.count_token(inputs) - self.all_token_counts.append(user_token_count) - logging.debug(f"输入token计数: {user_token_count}") - - stream_iter = self.get_answer_stream_iter() - - for partial_text in stream_iter: - chatbot[-1] = (chatbot[-1][0], partial_text + display_append) - self.all_token_counts[-1] += 1 - status_text = self.token_message() - yield get_return_value() - if self.interrupted: - self.recover() - break - self.history.append(construct_assistant(partial_text)) - - def next_chatbot_at_once(self, inputs, chatbot, fake_input=None, display_append=""): - if fake_input: - chatbot.append((fake_input, "")) - else: - chatbot.append((inputs, "")) - if fake_input is not None: - user_token_count = self.count_token(fake_input) - else: - user_token_count = self.count_token(inputs) - self.all_token_counts.append(user_token_count) - ai_reply, total_token_count = self.get_answer_at_once() - self.history.append(construct_assistant(ai_reply)) - if fake_input is not None: - self.history[-2] = construct_user(fake_input) - chatbot[-1] = (chatbot[-1][0], ai_reply + display_append) - if fake_input is not None: - self.all_token_counts[-1] += count_token(construct_assistant(ai_reply)) - else: - self.all_token_counts[-1] = total_token_count - sum(self.all_token_counts) - status_text = self.token_message() - return chatbot, status_text - - def handle_file_upload(self, files, chatbot): - """if the model accepts multi modal input, implement this function""" - status = gr.Markdown.update() - if files: - construct_index(self.api_key, file_src=files) - status = "索引构建完成" - return gr.Files.update(), chatbot, status - - def prepare_inputs(self, real_inputs, use_websearch, files, reply_language, chatbot): - fake_inputs = None - display_append = [] - limited_context = False - fake_inputs = real_inputs - if files: - from llama_index.indices.vector_store.base_query import GPTVectorStoreIndexQuery - from llama_index.indices.query.schema import QueryBundle - from langchain.embeddings.huggingface import HuggingFaceEmbeddings - from langchain.chat_models import ChatOpenAI - from llama_index import ( - GPTSimpleVectorIndex, - ServiceContext, - LangchainEmbedding, - OpenAIEmbedding, - ) - limited_context = True - msg = "加载索引中……" - logging.info(msg) - # yield chatbot + [(inputs, "")], msg - index = construct_index(self.api_key, file_src=files) - assert index is not None, "获取索引失败" - msg = "索引获取成功,生成回答中……" - logging.info(msg) - if local_embedding or self.model_type != ModelType.OpenAI: - embed_model = LangchainEmbedding(HuggingFaceEmbeddings(model_name = "sentence-transformers/distiluse-base-multilingual-cased-v2")) - else: - embed_model = OpenAIEmbedding() - # yield chatbot + [(inputs, "")], msg - with retrieve_proxy(): - prompt_helper = PromptHelper( - max_input_size=4096, - num_output=5, - max_chunk_overlap=20, - chunk_size_limit=600, - ) - from llama_index import ServiceContext - - service_context = ServiceContext.from_defaults( - prompt_helper=prompt_helper, embed_model=embed_model - ) - query_object = GPTVectorStoreIndexQuery( - index.index_struct, - service_context=service_context, - similarity_top_k=5, - vector_store=index._vector_store, - docstore=index._docstore, - ) - query_bundle = QueryBundle(real_inputs) - nodes = query_object.retrieve(query_bundle) - reference_results = [n.node.text for n in nodes] - reference_results = add_source_numbers(reference_results, use_source=False) - display_append = add_details(reference_results) - display_append = "\n\n" + "".join(display_append) - real_inputs = ( - replace_today(PROMPT_TEMPLATE) - .replace("{query_str}", real_inputs) - .replace("{context_str}", "\n\n".join(reference_results)) - .replace("{reply_language}", reply_language) - ) - elif use_websearch: - limited_context = True - search_results = ddg(real_inputs, max_results=5) - reference_results = [] - for idx, result in enumerate(search_results): - logging.debug(f"搜索结果{idx + 1}:{result}") - domain_name = urllib3.util.parse_url(result["href"]).host - reference_results.append([result["body"], result["href"]]) - display_append.append( - # f"{idx+1}. [{domain_name}]({result['href']})\n" - f"
      • {domain_name}
      • \n" - ) - reference_results = add_source_numbers(reference_results) - display_append = "
          \n\n" + "".join(display_append) + "
        " - real_inputs = ( - replace_today(WEBSEARCH_PTOMPT_TEMPLATE) - .replace("{query}", real_inputs) - .replace("{web_results}", "\n\n".join(reference_results)) - .replace("{reply_language}", reply_language) - ) - else: - display_append = "" - return limited_context, fake_inputs, display_append, real_inputs, chatbot - - def predict( - self, - inputs, - chatbot, - stream=False, - use_websearch=False, - files=None, - reply_language="中文", - should_check_token_count=True, - ): # repetition_penalty, top_k - - status_text = "开始生成回答……" - logging.info( - "输入为:" + colorama.Fore.BLUE + f"{inputs}" + colorama.Style.RESET_ALL - ) - if should_check_token_count: - yield chatbot + [(inputs, "")], status_text - if reply_language == "跟随问题语言(不稳定)": - reply_language = "the same language as the question, such as English, 中文, 日本語, Español, Français, or Deutsch." - - limited_context, fake_inputs, display_append, inputs, chatbot = self.prepare_inputs(real_inputs=inputs, use_websearch=use_websearch, files=files, reply_language=reply_language, chatbot=chatbot) - yield chatbot + [(fake_inputs, "")], status_text - - if ( - self.need_api_key and - self.api_key is None - and not shared.state.multi_api_key - ): - status_text = STANDARD_ERROR_MSG + NO_APIKEY_MSG - logging.info(status_text) - chatbot.append((inputs, "")) - if len(self.history) == 0: - self.history.append(construct_user(inputs)) - self.history.append("") - self.all_token_counts.append(0) - else: - self.history[-2] = construct_user(inputs) - yield chatbot + [(inputs, "")], status_text - return - elif len(inputs.strip()) == 0: - status_text = STANDARD_ERROR_MSG + NO_INPUT_MSG - logging.info(status_text) - yield chatbot + [(inputs, "")], status_text - return - - if self.single_turn: - self.history = [] - self.all_token_counts = [] - self.history.append(construct_user(inputs)) - - try: - if stream: - logging.debug("使用流式传输") - iter = self.stream_next_chatbot( - inputs, - chatbot, - fake_input=fake_inputs, - display_append=display_append, - ) - for chatbot, status_text in iter: - yield chatbot, status_text - else: - logging.debug("不使用流式传输") - chatbot, status_text = self.next_chatbot_at_once( - inputs, - chatbot, - fake_input=fake_inputs, - display_append=display_append, - ) - yield chatbot, status_text - except Exception as e: - traceback.print_exc() - status_text = STANDARD_ERROR_MSG + str(e) - yield chatbot, status_text - - if len(self.history) > 1 and self.history[-1]["content"] != inputs: - logging.info( - "回答为:" - + colorama.Fore.BLUE - + f"{self.history[-1]['content']}" - + colorama.Style.RESET_ALL - ) - - if limited_context: - # self.history = self.history[-4:] - # self.all_token_counts = self.all_token_counts[-2:] - self.history = [] - self.all_token_counts = [] - - max_token = self.token_upper_limit - TOKEN_OFFSET - - if sum(self.all_token_counts) > max_token and should_check_token_count: - count = 0 - while ( - sum(self.all_token_counts) - > self.token_upper_limit * REDUCE_TOKEN_FACTOR - and sum(self.all_token_counts) > 0 - ): - count += 1 - del self.all_token_counts[0] - del self.history[:2] - logging.info(status_text) - status_text = f"为了防止token超限,模型忘记了早期的 {count} 轮对话" - yield chatbot, status_text - - def retry( - self, - chatbot, - stream=False, - use_websearch=False, - files=None, - reply_language="中文", - ): - logging.debug("重试中……") - if len(self.history) > 0: - inputs = self.history[-2]["content"] - del self.history[-2:] - self.all_token_counts.pop() - elif len(chatbot) > 0: - inputs = chatbot[-1][0] - else: - yield chatbot, f"{STANDARD_ERROR_MSG}上下文是空的" - return - - iter = self.predict( - inputs, - chatbot, - stream=stream, - use_websearch=use_websearch, - files=files, - reply_language=reply_language, - ) - for x in iter: - yield x - logging.debug("重试完毕") - - # def reduce_token_size(self, chatbot): - # logging.info("开始减少token数量……") - # chatbot, status_text = self.next_chatbot_at_once( - # summarize_prompt, - # chatbot - # ) - # max_token_count = self.token_upper_limit * REDUCE_TOKEN_FACTOR - # num_chat = find_n(self.all_token_counts, max_token_count) - # logging.info(f"previous_token_count: {self.all_token_counts}, keeping {num_chat} chats") - # chatbot = chatbot[:-1] - # self.history = self.history[-2*num_chat:] if num_chat > 0 else [] - # self.all_token_counts = self.all_token_counts[-num_chat:] if num_chat > 0 else [] - # msg = f"保留了最近{num_chat}轮对话" - # logging.info(msg) - # logging.info("减少token数量完毕") - # return chatbot, msg + "," + self.token_message(self.all_token_counts if len(self.all_token_counts) > 0 else [0]) - - def interrupt(self): - self.interrupted = True - - def recover(self): - self.interrupted = False - - def set_token_upper_limit(self, new_upper_limit): - self.token_upper_limit = new_upper_limit - print(f"token上限设置为{new_upper_limit}") - - def set_temperature(self, new_temperature): - self.temperature = new_temperature - - def set_top_p(self, new_top_p): - self.top_p = new_top_p - - def set_n_choices(self, new_n_choices): - self.n_choices = new_n_choices - - def set_stop_sequence(self, new_stop_sequence: str): - new_stop_sequence = new_stop_sequence.split(",") - self.stop_sequence = new_stop_sequence - - def set_max_tokens(self, new_max_tokens): - self.max_generation_token = new_max_tokens - - def set_presence_penalty(self, new_presence_penalty): - self.presence_penalty = new_presence_penalty - - def set_frequency_penalty(self, new_frequency_penalty): - self.frequency_penalty = new_frequency_penalty - - def set_logit_bias(self, logit_bias): - logit_bias = logit_bias.split() - bias_map = {} - encoding = tiktoken.get_encoding("cl100k_base") - for line in logit_bias: - word, bias_amount = line.split(":") - if word: - for token in encoding.encode(word): - bias_map[token] = float(bias_amount) - self.logit_bias = bias_map - - def set_user_identifier(self, new_user_identifier): - self.user_identifier = new_user_identifier - - def set_system_prompt(self, new_system_prompt): - self.system_prompt = new_system_prompt - - def set_key(self, new_access_key): - self.api_key = new_access_key.strip() - msg = i18n("API密钥更改为了") + hide_middle_chars(self.api_key) - logging.info(msg) - return self.api_key, msg - - def set_single_turn(self, new_single_turn): - self.single_turn = new_single_turn - - def reset(self): - self.history = [] - self.all_token_counts = [] - self.interrupted = False - return [], self.token_message([0]) - - def delete_first_conversation(self): - if self.history: - del self.history[:2] - del self.all_token_counts[0] - return self.token_message() - - def delete_last_conversation(self, chatbot): - if len(chatbot) > 0 and STANDARD_ERROR_MSG in chatbot[-1][1]: - msg = "由于包含报错信息,只删除chatbot记录" - chatbot.pop() - return chatbot, self.history - if len(self.history) > 0: - self.history.pop() - self.history.pop() - if len(chatbot) > 0: - msg = "删除了一组chatbot对话" - chatbot.pop() - if len(self.all_token_counts) > 0: - msg = "删除了一组对话的token计数记录" - self.all_token_counts.pop() - msg = "删除了一组对话" - return chatbot, msg - - def token_message(self, token_lst=None): - if token_lst is None: - token_lst = self.all_token_counts - token_sum = 0 - for i in range(len(token_lst)): - token_sum += sum(token_lst[: i + 1]) - return i18n("Token 计数: ") + f"{sum(token_lst)}" + i18n(",本次对话累计消耗了 ") + f"{token_sum} tokens" - - def save_chat_history(self, filename, chatbot, user_name): - if filename == "": - return - if not filename.endswith(".json"): - filename += ".json" - return save_file(filename, self.system_prompt, self.history, chatbot, user_name) - - def export_markdown(self, filename, chatbot, user_name): - if filename == "": - return - if not filename.endswith(".md"): - filename += ".md" - return save_file(filename, self.system_prompt, self.history, chatbot, user_name) - - def load_chat_history(self, filename, chatbot, user_name): - logging.debug(f"{user_name} 加载对话历史中……") - if type(filename) != str: - filename = filename.name - try: - with open(os.path.join(HISTORY_DIR, user_name, filename), "r") as f: - json_s = json.load(f) - try: - if type(json_s["history"][0]) == str: - logging.info("历史记录格式为旧版,正在转换……") - new_history = [] - for index, item in enumerate(json_s["history"]): - if index % 2 == 0: - new_history.append(construct_user(item)) - else: - new_history.append(construct_assistant(item)) - json_s["history"] = new_history - logging.info(new_history) - except: - # 没有对话历史 - pass - logging.debug(f"{user_name} 加载对话历史完毕") - self.history = json_s["history"] - return filename, json_s["system"], json_s["chatbot"] - except FileNotFoundError: - logging.warning(f"{user_name} 没有找到对话历史文件,不执行任何操作") - return filename, self.system_prompt, chatbot - - def like(self): - """like the last response, implement if needed - """ - return gr.update() - - def dislike(self): - """dislike the last response, implement if needed - """ - return gr.update() diff --git a/spaces/Justin-Choo/epiCRealism-Natural_Sin_RC1_VAE-WEB-UI/app.py b/spaces/Justin-Choo/epiCRealism-Natural_Sin_RC1_VAE-WEB-UI/app.py deleted file mode 100644 index 1d103e425661204cabea966032e44472fb3d2ee7..0000000000000000000000000000000000000000 --- a/spaces/Justin-Choo/epiCRealism-Natural_Sin_RC1_VAE-WEB-UI/app.py +++ /dev/null @@ -1,149 +0,0 @@ -import os -from sys import executable as pyexecutable -import subprocess -import pathlib -import gc - -def Gitclone(URI:str,ClonePath:str = "") -> int : - if(ClonePath == "") : - while True: - i=subprocess.run([r"git",r"clone",URI]) - if(i.returncode == 0 ): - del i - gc.collect() - return 0 - else : - del i - else: - while True: - i=subprocess.run([r"git",r"clone",URI,ClonePath]) - if(i.returncode == 0 ): - del i - gc.collect() - return 0 - else : - del i -def DownLoad(URI:str,DownloadPath:str,DownLoadFileName:str ) -> int: - while (True): - i=subprocess.run([r"aria2c",r"-c",r"-x" ,r"16", r"-s",r"16", r"-k" ,r"1M" ,r"-m",r"0",r"--enable-mmap=false",r"--console-log-level=error",r"-d",DownloadPath,r"-o",DownLoadFileName,URI]); - if(i.returncode == 0 ): - del i - gc.collect() - return 0 - else : - del i -user_home =pathlib.Path.home().resolve() -os.chdir(str(user_home)) -#clone stable-diffusion-webui repo -print("cloning stable-diffusion-webui repo") -Gitclone(r"https://github.com/AUTOMATIC1111/stable-diffusion-webui.git",str(user_home / r"stable-diffusion-webui")) -os.chdir(str(user_home / r"stable-diffusion-webui")) -os.system("git reset --hard 89f9faa63388756314e8a1d96cf86bf5e0663045") -# - -#install extensions -print("installing extensions") -Gitclone(r"https://huggingface.co/embed/negative",str(user_home / r"stable-diffusion-webui" / r"embeddings" / r"negative")) -Gitclone(r"https://huggingface.co/embed/lora",str(user_home / r"stable-diffusion-webui" / r"models" / r"Lora" / r"positive")) -DownLoad(r"https://huggingface.co/embed/upscale/resolve/main/4x-UltraSharp.pth",str(user_home / r"stable-diffusion-webui" / r"models" / r"ESRGAN") ,r"4x-UltraSharp.pth") -while True: - if(subprocess.run([r"wget",r"https://raw.githubusercontent.com/camenduru/stable-diffusion-webui-scripts/main/run_n_times.py",r"-O",str(user_home / r"stable-diffusion-webui" / r"scripts" / r"run_n_times.py")]).returncode == 0): - break -Gitclone(r"https://github.com/deforum-art/deforum-for-automatic1111-webui",str(user_home / r"stable-diffusion-webui" / r"extensions" / r"deforum-for-automatic1111-webui" )) -#Gitclone(r"https://github.com/AlUlkesh/stable-diffusion-webui-images-browser",str(user_home / r"stable-diffusion-webui" / r"extensions"/ r"stable-diffusion-webui-images-browser")) -Gitclone(r"https://github.com/camenduru/stable-diffusion-webui-huggingface",str(user_home / r"stable-diffusion-webui" / r"extensions" / r"stable-diffusion-webui-huggingface")) -Gitclone(r"https://github.com/camenduru/sd-civitai-browser",str(user_home / r"stable-diffusion-webui" / r"extensions" / r"sd-civitai-browser")) -Gitclone(r"https://github.com/kohya-ss/sd-webui-additional-networks",str(user_home / r"stable-diffusion-webui" / r"extensions" / r"sd-webui-additional-networks")) -Gitclone(r"https://github.com/Mikubill/sd-webui-controlnet",str(user_home / r"stable-diffusion-webui" / r"extensions" / r"sd-webui-controlnet")) -Gitclone(r"https://github.com/fkunn1326/openpose-editor",str(user_home / r"stable-diffusion-webui" / r"extensions" / r"openpose-editor")) -Gitclone(r"https://github.com/jexom/sd-webui-depth-lib",str(user_home / r"stable-diffusion-webui" / r"extensions" / r"sd-webui-depth-lib")) -Gitclone(r"https://github.com/hnmr293/posex",str(user_home / r"stable-diffusion-webui" / r"extensions" / r"posex")) -Gitclone(r"https://github.com/nonnonstop/sd-webui-3d-open-pose-editor",str(user_home / r"stable-diffusion-webui" / r"extensions" / r"sd-webui-3d-open-pose-editor")) -#中文本地化的请解除下一行的注释 -#Gitclone(r"https://github.com/dtlnor/stable-diffusion-webui-localization-zh_CN.git",str(user_home / r"stable-diffusion-webui" / r"extensions" / r"stable-diffusion-webui-localization-zh_CN")) -Gitclone(r"https://github.com/DominikDoom/a1111-sd-webui-tagcomplete.git" , str(user_home / r"stable-diffusion-webui" / r"extensions" / r"a1111-sd-webui-tagcomplete")) -Gitclone(r"https://github.com/camenduru/sd-webui-tunnels",str(user_home / r"stable-diffusion-webui" / r"extensions" / r"sd-webui-tunnels")) -Gitclone(r"https://github.com/etherealxx/batchlinks-webui",str(user_home / r"stable-diffusion-webui" / r"extensions" / r"batchlinks-webui")) -Gitclone(r"https://github.com/catppuccin/stable-diffusion-webui",str(user_home / r"stable-diffusion-webui" / r"extensions" / r"stable-diffusion-webui-catppuccin")) - -#Gitclone(r"https://github.com/KohakuBueleaf/a1111-sd-webui-locon",str(user_home / r"stable-diffusion-webui" / r"extensions" / r"a1111-sd-webui-locon" )) -Gitclone(r"https://github.com/AUTOMATIC1111/stable-diffusion-webui-rembg",str(user_home / r"stable-diffusion-webui" / r"extensions" / r"stable-diffusion-webui-rembg")) -Gitclone(r"https://github.com/ashen-sensored/stable-diffusion-webui-two-shot",str(user_home / r"stable-diffusion-webui" / r"extensions" / r"stable-diffusion-webui-two-shot")) -Gitclone(r"https://github.com/camenduru/sd_webui_stealth_pnginfo",str(user_home / r"stable-diffusion-webui" / r"extensions" / r"sd_webui_stealth_pnginfo")) - -os.chdir(user_home / r"stable-diffusion-webui") - -#download ControlNet models -print("extensions dolwnload done .\ndownloading ControlNet models") -dList =[r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/control_v11e_sd15_ip2p_fp16.safetensors", - r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/control_v11e_sd15_shuffle_fp16.safetensors", - r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/control_v11p_sd15_canny_fp16.safetensors", - r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/control_v11f1p_sd15_depth_fp16.safetensors", - r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/control_v11p_sd15_inpaint_fp16.safetensors", - r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/control_v11p_sd15_lineart_fp16.safetensors", - r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/control_v11p_sd15_mlsd_fp16.safetensors", - r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/control_v11p_sd15_normalbae_fp16.safetensors", - r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/control_v11p_sd15_openpose_fp16.safetensors", - r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/control_v11p_sd15_scribble_fp16.safetensors", - r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/control_v11p_sd15_seg_fp16.safetensors", - r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/control_v11p_sd15_softedge_fp16.safetensors", - r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/control_v11p_sd15s2_lineart_anime_fp16.safetensors", - r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/control_v11f1e_sd15_tile_fp16.safetensors", - r"https://huggingface.co/ckpt/ControlNet-v1-1/raw/main/control_v11e_sd15_ip2p_fp16.yaml", - r"https://huggingface.co/ckpt/ControlNet-v1-1/raw/main/control_v11e_sd15_shuffle_fp16.yaml", - r"https://huggingface.co/ckpt/ControlNet-v1-1/raw/main/control_v11p_sd15_canny_fp16.yaml", - r"https://huggingface.co/ckpt/ControlNet-v1-1/raw/main/control_v11f1p_sd15_depth_fp16.yaml", - r"https://huggingface.co/ckpt/ControlNet-v1-1/raw/main/control_v11p_sd15_inpaint_fp16.yaml", - r"https://huggingface.co/ckpt/ControlNet-v1-1/raw/main/control_v11p_sd15_lineart_fp16.yaml", - r"https://huggingface.co/ckpt/ControlNet-v1-1/raw/main/control_v11p_sd15_mlsd_fp16.yaml", - r"https://huggingface.co/ckpt/ControlNet-v1-1/raw/main/control_v11p_sd15_normalbae_fp16.yaml", - r"https://huggingface.co/ckpt/ControlNet-v1-1/raw/main/control_v11p_sd15_openpose_fp16.yaml", - r"https://huggingface.co/ckpt/ControlNet-v1-1/raw/main/control_v11p_sd15_scribble_fp16.yaml", - r"https://huggingface.co/ckpt/ControlNet-v1-1/raw/main/control_v11p_sd15_seg_fp16.yaml", - r"https://huggingface.co/ckpt/ControlNet-v1-1/raw/main/control_v11p_sd15_softedge_fp16.yaml", - r"https://huggingface.co/ckpt/ControlNet-v1-1/raw/main/control_v11p_sd15s2_lineart_anime_fp16.yaml", - r"https://huggingface.co/ckpt/ControlNet-v1-1/raw/main/control_v11f1e_sd15_tile_fp16.yaml", - r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/t2iadapter_style_sd14v1.pth", - r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/t2iadapter_sketch_sd14v1.pth", - r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/t2iadapter_seg_sd14v1.pth", - r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/t2iadapter_openpose_sd14v1.pth", - r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/t2iadapter_keypose_sd14v1.pth", - r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/t2iadapter_depth_sd14v1.pth", - r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/t2iadapter_canny_sd14v1.pth", - r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/t2iadapter_canny_sd15v2.pth", - r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/t2iadapter_depth_sd15v2.pth", - r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/t2iadapter_sketch_sd15v2.pth", - r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/t2iadapter_zoedepth_sd15v1.pth"] -for i in range(0,len(dList)): DownLoad(dList[i],str(user_home / "stable-diffusion-webui" / "extensions" / "sd-webui-controlnet" / "models"),pathlib.Path(dList[i]).name) -del dList - -#download model -#you can change model download address here -print("ControlNet models download done.\ndownloading model") -DownLoad(r"https://huggingface.co/Justin-Chew/epiCRealism-Natural_Sin_RC1_VAE/resolve/main/epicrealism_naturalSinRC1VAE.safetensors",str(user_home / r"stable-diffusion-webui" / r"models" / r"Stable-diffusion"),r"epicrealism_naturalSinRC1VAE.safetensors") - -#DownLoad(r"https://huggingface.co/ckpt/anything-v4.0/resolve/main/anything-v4.5-pruned.ckpt",str(user_home / r"stable-diffusion-webui" / r"models" / r"Stable-diffusion"),r"anything-v4.5-pruned.ckpt") -#DownLoad(r"https://huggingface.co/ckpt/anything-v4.0/resolve/main/anything-v4.0.vae.pt",str(user_home / r"stable-diffusion-webui" / r"models" / r"Stable-diffusion"),r"anything-v4.0.vae.pt") -#DownLoad(r"https://huggingface.co/gsdf/Counterfeit-V3.0/resolve/main/Counterfeit-V3.0_fp16.safetensors",str(user_home / r"stable-diffusion-webui" / r"models" / r"Stable-diffusion"),r"Counterfeit-V3.0_fp16.safetensors") -#DownLoad(r"https://huggingface.co/WarriorMama777/OrangeMixs/resolve/main/Models/AbyssOrangeMix3/AOM3A1B_orangemixs.safetensors",str(user_home / r"stable-diffusion-webui" / r"models" / r"Stable-diffusion"),r"AOM3A1B_orangemixs.safetensors") -#DownLoad(r"https://huggingface.co/WarriorMama777/OrangeMixs/resolve/main/VAEs/orangemix.vae.pt",str(user_home / r"stable-diffusion-webui" / r"models" / r"Stable-diffusion"),r"orangemix.vae.pt") -#DownLoad(r"https://huggingface.co/Meina/MeinaPastel/resolve/main/MeinaPastelV5%20-%20Baked%20VAE.safetensors",str(user_home / r"stable-diffusion-webui" / r"models" / r"Stable-diffusion"),r"MeinaPastelV5_BakedVAE.safetensors") -#DownLoad(r"https://huggingface.co/Meina/MeinaPastel/resolve/main/MeinaPastelV5%20-%20Without%20VAE.safetensors",str(user_home / r"stable-diffusion-webui" / r"models" / r"Stable-diffusion"),r"MeinaPastelV5_WithoutVAE.safetensors") -#DownLoad(r"https://civitai.com/api/download/models/9474",str(user_home / r"stable-diffusion-webui" / r"models" / r"Stable-diffusion"),r"chilloutmix_NiPrunedFp16.safetensors") - -DownLoad(r"https://civitai.com/api/download/models/39885",str(user_home / r"stable-diffusion-webui" / r"extensions" / r"sd-webui-additional-networks" / r"models"/ r"lora"),r"Better_light.safetensors") -DownLoad(r"https://civitai.com/api/download/models/21065",str(user_home / r"stable-diffusion-webui" / r"extensions" / r"sd-webui-additional-networks" / r"models"/ r"lora"),r"LAS.safetensors") -DownLoad(r"https://civitai.com/api/download/models/39164",str(user_home / r"stable-diffusion-webui" / r"extensions" / r"sd-webui-additional-networks" / r"models"/ r"lora"),r"backlighting.safetensors") -#strt webui - -print("Done\nStarting Webui...") -os.chdir(user_home / r"stable-diffusion-webui") -while True: - ret=subprocess.run([r"python3" ,r"launch.py",r"--precision",r"full",r"--no-half",r"--no-half-vae",r"--enable-insecure-extension-access",r"--medvram",r"--skip-torch-cuda-test",r"--enable-console-prompts",r"--ui-settings-file="+str(pathlib.Path(__file__).parent /r"config.json")]) - if(ret.returncode == 0 ): - del ret - gc.collect() - else : - del ret - -del os ,user_home ,pyexecutable ,subprocess \ No newline at end of file diff --git a/spaces/Kangarroar/ApplioRVC-Inference/demucs/raw.py b/spaces/Kangarroar/ApplioRVC-Inference/demucs/raw.py deleted file mode 100644 index d4941ad2d7ed858f490db441f5b46b12bd61ad78..0000000000000000000000000000000000000000 --- a/spaces/Kangarroar/ApplioRVC-Inference/demucs/raw.py +++ /dev/null @@ -1,173 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -import argparse -import os -from collections import defaultdict, namedtuple -from pathlib import Path - -import musdb -import numpy as np -import torch as th -import tqdm -from torch.utils.data import DataLoader - -from .audio import AudioFile - -ChunkInfo = namedtuple("ChunkInfo", ["file_index", "offset", "local_index"]) - - -class Rawset: - """ - Dataset of raw, normalized, float32 audio files - """ - def __init__(self, path, samples=None, stride=None, channels=2, streams=None): - self.path = Path(path) - self.channels = channels - self.samples = samples - if stride is None: - stride = samples if samples is not None else 0 - self.stride = stride - entries = defaultdict(list) - for root, folders, files in os.walk(self.path, followlinks=True): - folders.sort() - files.sort() - for file in files: - if file.endswith(".raw"): - path = Path(root) / file - name, stream = path.stem.rsplit('.', 1) - entries[(path.parent.relative_to(self.path), name)].append(int(stream)) - - self._entries = list(entries.keys()) - - sizes = [] - self._lengths = [] - ref_streams = sorted(entries[self._entries[0]]) - assert ref_streams == list(range(len(ref_streams))) - if streams is None: - self.streams = ref_streams - else: - self.streams = streams - for entry in sorted(entries.keys()): - streams = entries[entry] - assert sorted(streams) == ref_streams - file = self._path(*entry) - length = file.stat().st_size // (4 * channels) - if samples is None: - sizes.append(1) - else: - if length < samples: - self._entries.remove(entry) - continue - sizes.append((length - samples) // stride + 1) - self._lengths.append(length) - if not sizes: - raise ValueError(f"Empty dataset {self.path}") - self._cumulative_sizes = np.cumsum(sizes) - self._sizes = sizes - - def __len__(self): - return self._cumulative_sizes[-1] - - @property - def total_length(self): - return sum(self._lengths) - - def chunk_info(self, index): - file_index = np.searchsorted(self._cumulative_sizes, index, side='right') - if file_index == 0: - local_index = index - else: - local_index = index - self._cumulative_sizes[file_index - 1] - return ChunkInfo(offset=local_index * self.stride, - file_index=file_index, - local_index=local_index) - - def _path(self, folder, name, stream=0): - return self.path / folder / (name + f'.{stream}.raw') - - def __getitem__(self, index): - chunk = self.chunk_info(index) - entry = self._entries[chunk.file_index] - - length = self.samples or self._lengths[chunk.file_index] - streams = [] - to_read = length * self.channels * 4 - for stream_index, stream in enumerate(self.streams): - offset = chunk.offset * 4 * self.channels - file = open(self._path(*entry, stream=stream), 'rb') - file.seek(offset) - content = file.read(to_read) - assert len(content) == to_read - content = np.frombuffer(content, dtype=np.float32) - content = content.copy() # make writable - streams.append(th.from_numpy(content).view(length, self.channels).t()) - return th.stack(streams, dim=0) - - def name(self, index): - chunk = self.chunk_info(index) - folder, name = self._entries[chunk.file_index] - return folder / name - - -class MusDBSet: - def __init__(self, mus, streams=slice(None), samplerate=44100, channels=2): - self.mus = mus - self.streams = streams - self.samplerate = samplerate - self.channels = channels - - def __len__(self): - return len(self.mus.tracks) - - def __getitem__(self, index): - track = self.mus.tracks[index] - return (track.name, AudioFile(track.path).read(channels=self.channels, - seek_time=0, - streams=self.streams, - samplerate=self.samplerate)) - - -def build_raw(mus, destination, normalize, workers, samplerate, channels): - destination.mkdir(parents=True, exist_ok=True) - loader = DataLoader(MusDBSet(mus, channels=channels, samplerate=samplerate), - batch_size=1, - num_workers=workers, - collate_fn=lambda x: x[0]) - for name, streams in tqdm.tqdm(loader): - if normalize: - ref = streams[0].mean(dim=0) # use mono mixture as reference - streams = (streams - ref.mean()) / ref.std() - for index, stream in enumerate(streams): - open(destination / (name + f'.{index}.raw'), "wb").write(stream.t().numpy().tobytes()) - - -def main(): - parser = argparse.ArgumentParser('rawset') - parser.add_argument('--workers', type=int, default=10) - parser.add_argument('--samplerate', type=int, default=44100) - parser.add_argument('--channels', type=int, default=2) - parser.add_argument('musdb', type=Path) - parser.add_argument('destination', type=Path) - - args = parser.parse_args() - - build_raw(musdb.DB(root=args.musdb, subsets=["train"], split="train"), - args.destination / "train", - normalize=True, - channels=args.channels, - samplerate=args.samplerate, - workers=args.workers) - build_raw(musdb.DB(root=args.musdb, subsets=["train"], split="valid"), - args.destination / "valid", - normalize=True, - samplerate=args.samplerate, - channels=args.channels, - workers=args.workers) - - -if __name__ == "__main__": - main() diff --git a/spaces/Kedreamix/YoloGesture/utils/dataloader.py b/spaces/Kedreamix/YoloGesture/utils/dataloader.py deleted file mode 100644 index 89694dee3dd3e9fb718d0ae62e676f98dced163d..0000000000000000000000000000000000000000 --- a/spaces/Kedreamix/YoloGesture/utils/dataloader.py +++ /dev/null @@ -1,360 +0,0 @@ -from random import sample, shuffle - -import cv2 -import numpy as np -import torch -from PIL import Image -from torch.utils.data.dataset import Dataset - -from utils.utils import cvtColor, preprocess_input - - -class YoloDataset(Dataset): - def __init__(self, annotation_lines, input_shape, num_classes, epoch_length, mosaic, train, mosaic_ratio = 0.7): - super(YoloDataset, self).__init__() - self.annotation_lines = annotation_lines - self.input_shape = input_shape - self.num_classes = num_classes - self.epoch_length = epoch_length - self.mosaic = mosaic - self.train = train - self.mosaic_ratio = mosaic_ratio - - self.epoch_now = -1 - self.length = len(self.annotation_lines) - - def __len__(self): - return self.length - - def __getitem__(self, index): - index = index % self.length - - #---------------------------------------------------# - # 训练时进行数据的随机增强 - # 验证时不进行数据的随机增强 - #---------------------------------------------------# - if self.mosaic: - if self.rand() < 0.5 and self.epoch_now < self.epoch_length * self.mosaic_ratio: - lines = sample(self.annotation_lines, 3) - lines.append(self.annotation_lines[index]) - shuffle(lines) - image, box = self.get_random_data_with_Mosaic(lines, self.input_shape) - else: - image, box = self.get_random_data(self.annotation_lines[index], self.input_shape, random = self.train) - else: - image, box = self.get_random_data(self.annotation_lines[index], self.input_shape, random = self.train) - image = np.transpose(preprocess_input(np.array(image, dtype=np.float32)), (2, 0, 1)) - box = np.array(box, dtype=np.float32) - if len(box) != 0: - box[:, [0, 2]] = box[:, [0, 2]] / self.input_shape[1] - box[:, [1, 3]] = box[:, [1, 3]] / self.input_shape[0] - - box[:, 2:4] = box[:, 2:4] - box[:, 0:2] - box[:, 0:2] = box[:, 0:2] + box[:, 2:4] / 2 - return image, box - - def rand(self, a=0, b=1): - return np.random.rand()*(b-a) + a - - def get_random_data(self, annotation_line, input_shape, jitter=.3, hue=.1, sat=0.7, val=0.4, random=True): - line = annotation_line.split() - #------------------------------# - # 读取图像并转换成RGB图像 - #------------------------------# - image = Image.open(line[0]) - image = cvtColor(image) - #------------------------------# - # 获得图像的高宽与目标高宽 - #------------------------------# - iw, ih = image.size - h, w = input_shape - #------------------------------# - # 获得预测框 - #------------------------------# - box = np.array([np.array(list(map(int,box.split(',')))) for box in line[1:]]) - - if not random: - scale = min(w/iw, h/ih) - nw = int(iw*scale) - nh = int(ih*scale) - dx = (w-nw)//2 - dy = (h-nh)//2 - - #---------------------------------# - # 将图像多余的部分加上灰条 - #---------------------------------# - image = image.resize((nw,nh), Image.BICUBIC) - new_image = Image.new('RGB', (w,h), (128,128,128)) - new_image.paste(image, (dx, dy)) - image_data = np.array(new_image, np.float32) - - #---------------------------------# - # 对真实框进行调整 - #---------------------------------# - if len(box)>0: - np.random.shuffle(box) - box[:, [0,2]] = box[:, [0,2]]*nw/iw + dx - box[:, [1,3]] = box[:, [1,3]]*nh/ih + dy - box[:, 0:2][box[:, 0:2]<0] = 0 - box[:, 2][box[:, 2]>w] = w - box[:, 3][box[:, 3]>h] = h - box_w = box[:, 2] - box[:, 0] - box_h = box[:, 3] - box[:, 1] - box = box[np.logical_and(box_w>1, box_h>1)] # discard invalid box - - return image_data, box - - #------------------------------------------# - # 对图像进行缩放并且进行长和宽的扭曲 - #------------------------------------------# - new_ar = iw/ih * self.rand(1-jitter,1+jitter) / self.rand(1-jitter,1+jitter) - scale = self.rand(.25, 2) - if new_ar < 1: - nh = int(scale*h) - nw = int(nh*new_ar) - else: - nw = int(scale*w) - nh = int(nw/new_ar) - image = image.resize((nw,nh), Image.BICUBIC) - - #------------------------------------------# - # 将图像多余的部分加上灰条 - #------------------------------------------# - dx = int(self.rand(0, w-nw)) - dy = int(self.rand(0, h-nh)) - new_image = Image.new('RGB', (w,h), (128,128,128)) - new_image.paste(image, (dx, dy)) - image = new_image - - #------------------------------------------# - # 翻转图像 - #------------------------------------------# - flip = self.rand()<.5 - if flip: image = image.transpose(Image.FLIP_LEFT_RIGHT) - - image_data = np.array(image, np.uint8) - #---------------------------------# - # 对图像进行色域变换 - # 计算色域变换的参数 - #---------------------------------# - r = np.random.uniform(-1, 1, 3) * [hue, sat, val] + 1 - #---------------------------------# - # 将图像转到HSV上 - #---------------------------------# - hue, sat, val = cv2.split(cv2.cvtColor(image_data, cv2.COLOR_RGB2HSV)) - dtype = image_data.dtype - #---------------------------------# - # 应用变换 - #---------------------------------# - x = np.arange(0, 256, dtype=r.dtype) - lut_hue = ((x * r[0]) % 180).astype(dtype) - lut_sat = np.clip(x * r[1], 0, 255).astype(dtype) - lut_val = np.clip(x * r[2], 0, 255).astype(dtype) - - image_data = cv2.merge((cv2.LUT(hue, lut_hue), cv2.LUT(sat, lut_sat), cv2.LUT(val, lut_val))) - image_data = cv2.cvtColor(image_data, cv2.COLOR_HSV2RGB) - - #---------------------------------# - # 对真实框进行调整 - #---------------------------------# - if len(box)>0: - np.random.shuffle(box) - box[:, [0,2]] = box[:, [0,2]]*nw/iw + dx - box[:, [1,3]] = box[:, [1,3]]*nh/ih + dy - if flip: box[:, [0,2]] = w - box[:, [2,0]] - box[:, 0:2][box[:, 0:2]<0] = 0 - box[:, 2][box[:, 2]>w] = w - box[:, 3][box[:, 3]>h] = h - box_w = box[:, 2] - box[:, 0] - box_h = box[:, 3] - box[:, 1] - box = box[np.logical_and(box_w>1, box_h>1)] - - return image_data, box - - def merge_bboxes(self, bboxes, cutx, cuty): - merge_bbox = [] - for i in range(len(bboxes)): - for box in bboxes[i]: - tmp_box = [] - x1, y1, x2, y2 = box[0], box[1], box[2], box[3] - - if i == 0: - if y1 > cuty or x1 > cutx: - continue - if y2 >= cuty and y1 <= cuty: - y2 = cuty - if x2 >= cutx and x1 <= cutx: - x2 = cutx - - if i == 1: - if y2 < cuty or x1 > cutx: - continue - if y2 >= cuty and y1 <= cuty: - y1 = cuty - if x2 >= cutx and x1 <= cutx: - x2 = cutx - - if i == 2: - if y2 < cuty or x2 < cutx: - continue - if y2 >= cuty and y1 <= cuty: - y1 = cuty - if x2 >= cutx and x1 <= cutx: - x1 = cutx - - if i == 3: - if y1 > cuty or x2 < cutx: - continue - if y2 >= cuty and y1 <= cuty: - y2 = cuty - if x2 >= cutx and x1 <= cutx: - x1 = cutx - tmp_box.append(x1) - tmp_box.append(y1) - tmp_box.append(x2) - tmp_box.append(y2) - tmp_box.append(box[-1]) - merge_bbox.append(tmp_box) - return merge_bbox - - def get_random_data_with_Mosaic(self, annotation_line, input_shape, jitter=0.3, hue=.1, sat=0.7, val=0.4): - h, w = input_shape - min_offset_x = self.rand(0.3, 0.7) - min_offset_y = self.rand(0.3, 0.7) - - image_datas = [] - box_datas = [] - index = 0 - for line in annotation_line: - #---------------------------------# - # 每一行进行分割 - #---------------------------------# - line_content = line.split() - #---------------------------------# - # 打开图片 - #---------------------------------# - image = Image.open(line_content[0]) - image = cvtColor(image) - - #---------------------------------# - # 图片的大小 - #---------------------------------# - iw, ih = image.size - #---------------------------------# - # 保存框的位置 - #---------------------------------# - box = np.array([np.array(list(map(int,box.split(',')))) for box in line_content[1:]]) - - #---------------------------------# - # 是否翻转图片 - #---------------------------------# - flip = self.rand()<.5 - if flip and len(box)>0: - image = image.transpose(Image.FLIP_LEFT_RIGHT) - box[:, [0,2]] = iw - box[:, [2,0]] - - #------------------------------------------# - # 对图像进行缩放并且进行长和宽的扭曲 - #------------------------------------------# - new_ar = iw/ih * self.rand(1-jitter,1+jitter) / self.rand(1-jitter,1+jitter) - scale = self.rand(.4, 1) - if new_ar < 1: - nh = int(scale*h) - nw = int(nh*new_ar) - else: - nw = int(scale*w) - nh = int(nw/new_ar) - image = image.resize((nw, nh), Image.BICUBIC) - - #-----------------------------------------------# - # 将图片进行放置,分别对应四张分割图片的位置 - #-----------------------------------------------# - if index == 0: - dx = int(w*min_offset_x) - nw - dy = int(h*min_offset_y) - nh - elif index == 1: - dx = int(w*min_offset_x) - nw - dy = int(h*min_offset_y) - elif index == 2: - dx = int(w*min_offset_x) - dy = int(h*min_offset_y) - elif index == 3: - dx = int(w*min_offset_x) - dy = int(h*min_offset_y) - nh - - new_image = Image.new('RGB', (w,h), (128,128,128)) - new_image.paste(image, (dx, dy)) - image_data = np.array(new_image) - - index = index + 1 - box_data = [] - #---------------------------------# - # 对box进行重新处理 - #---------------------------------# - if len(box)>0: - np.random.shuffle(box) - box[:, [0,2]] = box[:, [0,2]]*nw/iw + dx - box[:, [1,3]] = box[:, [1,3]]*nh/ih + dy - box[:, 0:2][box[:, 0:2]<0] = 0 - box[:, 2][box[:, 2]>w] = w - box[:, 3][box[:, 3]>h] = h - box_w = box[:, 2] - box[:, 0] - box_h = box[:, 3] - box[:, 1] - box = box[np.logical_and(box_w>1, box_h>1)] - box_data = np.zeros((len(box),5)) - box_data[:len(box)] = box - - image_datas.append(image_data) - box_datas.append(box_data) - - #---------------------------------# - # 将图片分割,放在一起 - #---------------------------------# - cutx = int(w * min_offset_x) - cuty = int(h * min_offset_y) - - new_image = np.zeros([h, w, 3]) - new_image[:cuty, :cutx, :] = image_datas[0][:cuty, :cutx, :] - new_image[cuty:, :cutx, :] = image_datas[1][cuty:, :cutx, :] - new_image[cuty:, cutx:, :] = image_datas[2][cuty:, cutx:, :] - new_image[:cuty, cutx:, :] = image_datas[3][:cuty, cutx:, :] - - new_image = np.array(new_image, np.uint8) - #---------------------------------# - # 对图像进行色域变换 - # 计算色域变换的参数 - #---------------------------------# - r = np.random.uniform(-1, 1, 3) * [hue, sat, val] + 1 - #---------------------------------# - # 将图像转到HSV上 - #---------------------------------# - hue, sat, val = cv2.split(cv2.cvtColor(new_image, cv2.COLOR_RGB2HSV)) - dtype = new_image.dtype - #---------------------------------# - # 应用变换 - #---------------------------------# - x = np.arange(0, 256, dtype=r.dtype) - lut_hue = ((x * r[0]) % 180).astype(dtype) - lut_sat = np.clip(x * r[1], 0, 255).astype(dtype) - lut_val = np.clip(x * r[2], 0, 255).astype(dtype) - - new_image = cv2.merge((cv2.LUT(hue, lut_hue), cv2.LUT(sat, lut_sat), cv2.LUT(val, lut_val))) - new_image = cv2.cvtColor(new_image, cv2.COLOR_HSV2RGB) - - #---------------------------------# - # 对框进行进一步的处理 - #---------------------------------# - new_boxes = self.merge_bboxes(box_datas, cutx, cuty) - - return new_image, new_boxes - -# DataLoader中collate_fn使用 -def yolo_dataset_collate(batch): - images = [] - bboxes = [] - for img, box in batch: - images.append(img) - bboxes.append(box) - images = torch.from_numpy(np.array(images)).type(torch.FloatTensor) - bboxes = [torch.from_numpy(ann).type(torch.FloatTensor) for ann in bboxes] - return images, bboxes diff --git a/spaces/KennethTM/semantic_search/multilingual-e5-small/README.md b/spaces/KennethTM/semantic_search/multilingual-e5-small/README.md deleted file mode 100644 index d275544c55917b828b972b3ebf09c3d6c83c7378..0000000000000000000000000000000000000000 --- a/spaces/KennethTM/semantic_search/multilingual-e5-small/README.md +++ /dev/null @@ -1,6121 +0,0 @@ ---- -tags: -- mteb -- Sentence Transformers -- sentence-similarity -- sentence-transformers -model-index: -- name: multilingual-e5-small - results: - - task: - type: Classification - dataset: - type: mteb/amazon_counterfactual - name: MTEB AmazonCounterfactualClassification (en) - config: en - split: test - revision: e8379541af4e31359cca9fbcf4b00f2671dba205 - metrics: - - type: accuracy - value: 73.79104477611939 - - type: ap - value: 36.9996434842022 - - type: f1 - value: 67.95453679103099 - - task: - type: Classification - dataset: - type: mteb/amazon_counterfactual - name: MTEB AmazonCounterfactualClassification (de) - config: de - split: test - revision: e8379541af4e31359cca9fbcf4b00f2671dba205 - metrics: - - type: accuracy - value: 71.64882226980728 - - type: ap - value: 82.11942130026586 - - type: f1 - value: 69.87963421606715 - - task: - type: Classification - dataset: - type: mteb/amazon_counterfactual - name: MTEB AmazonCounterfactualClassification (en-ext) - config: en-ext - split: test - revision: e8379541af4e31359cca9fbcf4b00f2671dba205 - metrics: - - type: accuracy - value: 75.8095952023988 - - type: ap - value: 24.46869495579561 - - type: f1 - value: 63.00108480037597 - - task: - type: Classification - dataset: - type: mteb/amazon_counterfactual - name: MTEB AmazonCounterfactualClassification (ja) - config: ja - split: test - revision: e8379541af4e31359cca9fbcf4b00f2671dba205 - metrics: - - type: accuracy - value: 64.186295503212 - - type: ap - value: 15.496804690197042 - - type: f1 - value: 52.07153895475031 - - task: - type: Classification - dataset: - type: mteb/amazon_polarity - name: MTEB AmazonPolarityClassification - config: default - split: test - revision: e2d317d38cd51312af73b3d32a06d1a08b442046 - metrics: - - type: accuracy - value: 88.699325 - - type: ap - value: 85.27039559917269 - - type: f1 - value: 88.65556295032513 - - task: - type: Classification - dataset: - type: mteb/amazon_reviews_multi - name: MTEB AmazonReviewsClassification (en) - config: en - split: test - revision: 1399c76144fd37290681b995c656ef9b2e06e26d - metrics: - - type: accuracy - value: 44.69799999999999 - - type: f1 - value: 43.73187348654165 - - task: - type: Classification - dataset: - type: mteb/amazon_reviews_multi - name: MTEB AmazonReviewsClassification (de) - config: de - split: test - revision: 1399c76144fd37290681b995c656ef9b2e06e26d - metrics: - - type: accuracy - value: 40.245999999999995 - - type: f1 - value: 39.3863530637684 - - task: - type: Classification - dataset: - type: mteb/amazon_reviews_multi - name: MTEB AmazonReviewsClassification (es) - config: es - split: test - revision: 1399c76144fd37290681b995c656ef9b2e06e26d - metrics: - - type: accuracy - value: 40.394 - - type: f1 - value: 39.301223469483446 - - task: - type: Classification - dataset: - type: mteb/amazon_reviews_multi - name: MTEB AmazonReviewsClassification (fr) - config: fr - split: test - revision: 1399c76144fd37290681b995c656ef9b2e06e26d - metrics: - - type: accuracy - value: 38.864 - - type: f1 - value: 37.97974261868003 - - task: - type: Classification - dataset: - type: mteb/amazon_reviews_multi - name: MTEB AmazonReviewsClassification (ja) - config: ja - split: test - revision: 1399c76144fd37290681b995c656ef9b2e06e26d - metrics: - - type: accuracy - value: 37.682 - - type: f1 - value: 37.07399369768313 - - task: - type: Classification - dataset: - type: mteb/amazon_reviews_multi - name: MTEB AmazonReviewsClassification (zh) - config: zh - split: test - revision: 1399c76144fd37290681b995c656ef9b2e06e26d - metrics: - - type: accuracy - value: 37.504 - - type: f1 - value: 36.62317273874278 - - task: - type: Retrieval - dataset: - type: arguana - name: MTEB ArguAna - config: default - split: test - revision: None - metrics: - - type: map_at_1 - value: 19.061 - - type: map_at_10 - value: 31.703 - - type: map_at_100 - value: 32.967 - - type: map_at_1000 - value: 33.001000000000005 - - type: map_at_3 - value: 27.466 - - type: map_at_5 - value: 29.564 - - type: mrr_at_1 - value: 19.559 - - type: mrr_at_10 - value: 31.874999999999996 - - type: mrr_at_100 - value: 33.146 - - type: mrr_at_1000 - value: 33.18 - - type: mrr_at_3 - value: 27.667 - - type: mrr_at_5 - value: 29.74 - - type: ndcg_at_1 - value: 19.061 - - type: ndcg_at_10 - value: 39.062999999999995 - - type: ndcg_at_100 - value: 45.184000000000005 - - type: ndcg_at_1000 - value: 46.115 - - type: ndcg_at_3 - value: 30.203000000000003 - - type: ndcg_at_5 - value: 33.953 - - type: precision_at_1 - value: 19.061 - - type: precision_at_10 - value: 6.279999999999999 - - type: precision_at_100 - value: 0.9129999999999999 - - type: precision_at_1000 - value: 0.099 - - type: precision_at_3 - value: 12.706999999999999 - - type: precision_at_5 - value: 9.431000000000001 - - type: recall_at_1 - value: 19.061 - - type: recall_at_10 - value: 62.802 - - type: recall_at_100 - value: 91.323 - - type: recall_at_1000 - value: 98.72 - - type: recall_at_3 - value: 38.122 - - type: recall_at_5 - value: 47.155 - - task: - type: Clustering - dataset: - type: mteb/arxiv-clustering-p2p - name: MTEB ArxivClusteringP2P - config: default - split: test - revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d - metrics: - - type: v_measure - value: 39.22266660528253 - - task: - type: Clustering - dataset: - type: mteb/arxiv-clustering-s2s - name: MTEB ArxivClusteringS2S - config: default - split: test - revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53 - metrics: - - type: v_measure - value: 30.79980849482483 - - task: - type: Reranking - dataset: - type: mteb/askubuntudupquestions-reranking - name: MTEB AskUbuntuDupQuestions - config: default - split: test - revision: 2000358ca161889fa9c082cb41daa8dcfb161a54 - metrics: - - type: map - value: 57.8790068352054 - - type: mrr - value: 71.78791276436706 - - task: - type: STS - dataset: - type: mteb/biosses-sts - name: MTEB BIOSSES - config: default - split: test - revision: d3fb88f8f02e40887cd149695127462bbcf29b4a - metrics: - - type: cos_sim_pearson - value: 82.36328364043163 - - type: cos_sim_spearman - value: 82.26211536195868 - - type: euclidean_pearson - value: 80.3183865039173 - - type: euclidean_spearman - value: 79.88495276296132 - - type: manhattan_pearson - value: 80.14484480692127 - - type: manhattan_spearman - value: 80.39279565980743 - - task: - type: BitextMining - dataset: - type: mteb/bucc-bitext-mining - name: MTEB BUCC (de-en) - config: de-en - split: test - revision: d51519689f32196a32af33b075a01d0e7c51e252 - metrics: - - type: accuracy - value: 98.0375782881002 - - type: f1 - value: 97.86012526096033 - - type: precision - value: 97.77139874739039 - - type: recall - value: 98.0375782881002 - - task: - type: BitextMining - dataset: - type: mteb/bucc-bitext-mining - name: MTEB BUCC (fr-en) - config: fr-en - split: test - revision: d51519689f32196a32af33b075a01d0e7c51e252 - metrics: - - type: accuracy - value: 93.35241030156286 - - type: f1 - value: 92.66050333846944 - - type: precision - value: 92.3306919069631 - - type: recall - value: 93.35241030156286 - - task: - type: BitextMining - dataset: - type: mteb/bucc-bitext-mining - name: MTEB BUCC (ru-en) - config: ru-en - split: test - revision: d51519689f32196a32af33b075a01d0e7c51e252 - metrics: - - type: accuracy - value: 94.0699688257707 - - type: f1 - value: 93.50236693222492 - - type: precision - value: 93.22791825424315 - - type: recall - value: 94.0699688257707 - - task: - type: BitextMining - dataset: - type: mteb/bucc-bitext-mining - name: MTEB BUCC (zh-en) - config: zh-en - split: test - revision: d51519689f32196a32af33b075a01d0e7c51e252 - metrics: - - type: accuracy - value: 89.25750394944708 - - type: f1 - value: 88.79234684921889 - - type: precision - value: 88.57293312269616 - - type: recall - value: 89.25750394944708 - - task: - type: Classification - dataset: - type: mteb/banking77 - name: MTEB Banking77Classification - config: default - split: test - revision: 0fd18e25b25c072e09e0d92ab615fda904d66300 - metrics: - - type: accuracy - value: 79.41558441558442 - - type: f1 - value: 79.25886487487219 - - task: - type: Clustering - dataset: - type: mteb/biorxiv-clustering-p2p - name: MTEB BiorxivClusteringP2P - config: default - split: test - revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40 - metrics: - - type: v_measure - value: 35.747820820329736 - - task: - type: Clustering - dataset: - type: mteb/biorxiv-clustering-s2s - name: MTEB BiorxivClusteringS2S - config: default - split: test - revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908 - metrics: - - type: v_measure - value: 27.045143830596146 - - task: - type: Retrieval - dataset: - type: BeIR/cqadupstack - name: MTEB CQADupstackRetrieval - config: default - split: test - revision: None - metrics: - - type: map_at_1 - value: 24.252999999999997 - - type: map_at_10 - value: 31.655916666666666 - - type: map_at_100 - value: 32.680749999999996 - - type: map_at_1000 - value: 32.79483333333334 - - type: map_at_3 - value: 29.43691666666666 - - type: map_at_5 - value: 30.717416666666665 - - type: mrr_at_1 - value: 28.602750000000004 - - type: mrr_at_10 - value: 35.56875 - - type: mrr_at_100 - value: 36.3595 - - type: mrr_at_1000 - value: 36.427749999999996 - - type: mrr_at_3 - value: 33.586166666666664 - - type: mrr_at_5 - value: 34.73641666666666 - - type: ndcg_at_1 - value: 28.602750000000004 - - type: ndcg_at_10 - value: 36.06933333333334 - - type: ndcg_at_100 - value: 40.70141666666667 - - type: ndcg_at_1000 - value: 43.24341666666667 - - type: ndcg_at_3 - value: 32.307916666666664 - - type: ndcg_at_5 - value: 34.129999999999995 - - type: precision_at_1 - value: 28.602750000000004 - - type: precision_at_10 - value: 6.097666666666667 - - type: precision_at_100 - value: 0.9809166666666668 - - type: precision_at_1000 - value: 0.13766666666666663 - - type: precision_at_3 - value: 14.628166666666667 - - type: precision_at_5 - value: 10.266916666666667 - - type: recall_at_1 - value: 24.252999999999997 - - type: recall_at_10 - value: 45.31916666666667 - - type: recall_at_100 - value: 66.03575000000001 - - type: recall_at_1000 - value: 83.94708333333334 - - type: recall_at_3 - value: 34.71941666666666 - - type: recall_at_5 - value: 39.46358333333333 - - task: - type: Retrieval - dataset: - type: climate-fever - name: MTEB ClimateFEVER - config: default - split: test - revision: None - metrics: - - type: map_at_1 - value: 9.024000000000001 - - type: map_at_10 - value: 15.644 - - type: map_at_100 - value: 17.154 - - type: map_at_1000 - value: 17.345 - - type: map_at_3 - value: 13.028 - - type: map_at_5 - value: 14.251 - - type: mrr_at_1 - value: 19.674 - - type: mrr_at_10 - value: 29.826999999999998 - - type: mrr_at_100 - value: 30.935000000000002 - - type: mrr_at_1000 - value: 30.987 - - type: mrr_at_3 - value: 26.645000000000003 - - type: mrr_at_5 - value: 28.29 - - type: ndcg_at_1 - value: 19.674 - - type: ndcg_at_10 - value: 22.545 - - type: ndcg_at_100 - value: 29.207 - - type: ndcg_at_1000 - value: 32.912 - - type: ndcg_at_3 - value: 17.952 - - type: ndcg_at_5 - value: 19.363 - - type: precision_at_1 - value: 19.674 - - type: precision_at_10 - value: 7.212000000000001 - - type: precision_at_100 - value: 1.435 - - type: precision_at_1000 - value: 0.212 - - type: precision_at_3 - value: 13.507 - - type: precision_at_5 - value: 10.397 - - type: recall_at_1 - value: 9.024000000000001 - - type: recall_at_10 - value: 28.077999999999996 - - type: recall_at_100 - value: 51.403 - - type: recall_at_1000 - value: 72.406 - - type: recall_at_3 - value: 16.768 - - type: recall_at_5 - value: 20.737 - - task: - type: Retrieval - dataset: - type: dbpedia-entity - name: MTEB DBPedia - config: default - split: test - revision: None - metrics: - - type: map_at_1 - value: 8.012 - - type: map_at_10 - value: 17.138 - - type: map_at_100 - value: 24.146 - - type: map_at_1000 - value: 25.622 - - type: map_at_3 - value: 12.552 - - type: map_at_5 - value: 14.435 - - type: mrr_at_1 - value: 62.25000000000001 - - type: mrr_at_10 - value: 71.186 - - type: mrr_at_100 - value: 71.504 - - type: mrr_at_1000 - value: 71.514 - - type: mrr_at_3 - value: 69.333 - - type: mrr_at_5 - value: 70.408 - - type: ndcg_at_1 - value: 49.75 - - type: ndcg_at_10 - value: 37.76 - - type: ndcg_at_100 - value: 42.071 - - type: ndcg_at_1000 - value: 49.309 - - type: ndcg_at_3 - value: 41.644 - - type: ndcg_at_5 - value: 39.812999999999995 - - type: precision_at_1 - value: 62.25000000000001 - - type: precision_at_10 - value: 30.15 - - type: precision_at_100 - value: 9.753 - - type: precision_at_1000 - value: 1.9189999999999998 - - type: precision_at_3 - value: 45.667 - - type: precision_at_5 - value: 39.15 - - type: recall_at_1 - value: 8.012 - - type: recall_at_10 - value: 22.599 - - type: recall_at_100 - value: 48.068 - - type: recall_at_1000 - value: 71.328 - - type: recall_at_3 - value: 14.043 - - type: recall_at_5 - value: 17.124 - - task: - type: Classification - dataset: - type: mteb/emotion - name: MTEB EmotionClassification - config: default - split: test - revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37 - metrics: - - type: accuracy - value: 42.455 - - type: f1 - value: 37.59462649781862 - - task: - type: Retrieval - dataset: - type: fever - name: MTEB FEVER - config: default - split: test - revision: None - metrics: - - type: map_at_1 - value: 58.092 - - type: map_at_10 - value: 69.586 - - type: map_at_100 - value: 69.968 - - type: map_at_1000 - value: 69.982 - - type: map_at_3 - value: 67.48100000000001 - - type: map_at_5 - value: 68.915 - - type: mrr_at_1 - value: 62.166 - - type: mrr_at_10 - value: 73.588 - - type: mrr_at_100 - value: 73.86399999999999 - - type: mrr_at_1000 - value: 73.868 - - type: mrr_at_3 - value: 71.6 - - type: mrr_at_5 - value: 72.99 - - type: ndcg_at_1 - value: 62.166 - - type: ndcg_at_10 - value: 75.27199999999999 - - type: ndcg_at_100 - value: 76.816 - - type: ndcg_at_1000 - value: 77.09700000000001 - - type: ndcg_at_3 - value: 71.36 - - type: ndcg_at_5 - value: 73.785 - - type: precision_at_1 - value: 62.166 - - type: precision_at_10 - value: 9.716 - - type: precision_at_100 - value: 1.065 - - type: precision_at_1000 - value: 0.11 - - type: precision_at_3 - value: 28.278 - - type: precision_at_5 - value: 18.343999999999998 - - type: recall_at_1 - value: 58.092 - - type: recall_at_10 - value: 88.73400000000001 - - type: recall_at_100 - value: 95.195 - - type: recall_at_1000 - value: 97.04599999999999 - - type: recall_at_3 - value: 78.45 - - type: recall_at_5 - value: 84.316 - - task: - type: Retrieval - dataset: - type: fiqa - name: MTEB FiQA2018 - config: default - split: test - revision: None - metrics: - - type: map_at_1 - value: 16.649 - - type: map_at_10 - value: 26.457000000000004 - - type: map_at_100 - value: 28.169 - - type: map_at_1000 - value: 28.352 - - type: map_at_3 - value: 23.305 - - type: map_at_5 - value: 25.169000000000004 - - type: mrr_at_1 - value: 32.407000000000004 - - type: mrr_at_10 - value: 40.922 - - type: mrr_at_100 - value: 41.931000000000004 - - type: mrr_at_1000 - value: 41.983 - - type: mrr_at_3 - value: 38.786 - - type: mrr_at_5 - value: 40.205999999999996 - - type: ndcg_at_1 - value: 32.407000000000004 - - type: ndcg_at_10 - value: 33.314 - - type: ndcg_at_100 - value: 40.312 - - type: ndcg_at_1000 - value: 43.685 - - type: ndcg_at_3 - value: 30.391000000000002 - - type: ndcg_at_5 - value: 31.525 - - type: precision_at_1 - value: 32.407000000000004 - - type: precision_at_10 - value: 8.966000000000001 - - type: precision_at_100 - value: 1.6019999999999999 - - type: precision_at_1000 - value: 0.22200000000000003 - - type: precision_at_3 - value: 20.165 - - type: precision_at_5 - value: 14.722 - - type: recall_at_1 - value: 16.649 - - type: recall_at_10 - value: 39.117000000000004 - - type: recall_at_100 - value: 65.726 - - type: recall_at_1000 - value: 85.784 - - type: recall_at_3 - value: 27.914 - - type: recall_at_5 - value: 33.289 - - task: - type: Retrieval - dataset: - type: hotpotqa - name: MTEB HotpotQA - config: default - split: test - revision: None - metrics: - - type: map_at_1 - value: 36.253 - - type: map_at_10 - value: 56.16799999999999 - - type: map_at_100 - value: 57.06099999999999 - - type: map_at_1000 - value: 57.126 - - type: map_at_3 - value: 52.644999999999996 - - type: map_at_5 - value: 54.909 - - type: mrr_at_1 - value: 72.505 - - type: mrr_at_10 - value: 79.66 - - type: mrr_at_100 - value: 79.869 - - type: mrr_at_1000 - value: 79.88 - - type: mrr_at_3 - value: 78.411 - - type: mrr_at_5 - value: 79.19800000000001 - - type: ndcg_at_1 - value: 72.505 - - type: ndcg_at_10 - value: 65.094 - - type: ndcg_at_100 - value: 68.219 - - type: ndcg_at_1000 - value: 69.515 - - type: ndcg_at_3 - value: 59.99 - - type: ndcg_at_5 - value: 62.909000000000006 - - type: precision_at_1 - value: 72.505 - - type: precision_at_10 - value: 13.749 - - type: precision_at_100 - value: 1.619 - - type: precision_at_1000 - value: 0.179 - - type: precision_at_3 - value: 38.357 - - type: precision_at_5 - value: 25.313000000000002 - - type: recall_at_1 - value: 36.253 - - type: recall_at_10 - value: 68.744 - - type: recall_at_100 - value: 80.925 - - type: recall_at_1000 - value: 89.534 - - type: recall_at_3 - value: 57.535000000000004 - - type: recall_at_5 - value: 63.282000000000004 - - task: - type: Classification - dataset: - type: mteb/imdb - name: MTEB ImdbClassification - config: default - split: test - revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7 - metrics: - - type: accuracy - value: 80.82239999999999 - - type: ap - value: 75.65895781725314 - - type: f1 - value: 80.75880969095746 - - task: - type: Retrieval - dataset: - type: msmarco - name: MTEB MSMARCO - config: default - split: dev - revision: None - metrics: - - type: map_at_1 - value: 21.624 - - type: map_at_10 - value: 34.075 - - type: map_at_100 - value: 35.229 - - type: map_at_1000 - value: 35.276999999999994 - - type: map_at_3 - value: 30.245 - - type: map_at_5 - value: 32.42 - - type: mrr_at_1 - value: 22.264 - - type: mrr_at_10 - value: 34.638000000000005 - - type: mrr_at_100 - value: 35.744 - - type: mrr_at_1000 - value: 35.787 - - type: mrr_at_3 - value: 30.891000000000002 - - type: mrr_at_5 - value: 33.042 - - type: ndcg_at_1 - value: 22.264 - - type: ndcg_at_10 - value: 40.991 - - type: ndcg_at_100 - value: 46.563 - - type: ndcg_at_1000 - value: 47.743 - - type: ndcg_at_3 - value: 33.198 - - type: ndcg_at_5 - value: 37.069 - - type: precision_at_1 - value: 22.264 - - type: precision_at_10 - value: 6.5089999999999995 - - type: precision_at_100 - value: 0.9299999999999999 - - type: precision_at_1000 - value: 0.10300000000000001 - - type: precision_at_3 - value: 14.216999999999999 - - type: precision_at_5 - value: 10.487 - - type: recall_at_1 - value: 21.624 - - type: recall_at_10 - value: 62.303 - - type: recall_at_100 - value: 88.124 - - type: recall_at_1000 - value: 97.08 - - type: recall_at_3 - value: 41.099999999999994 - - type: recall_at_5 - value: 50.381 - - task: - type: Classification - dataset: - type: mteb/mtop_domain - name: MTEB MTOPDomainClassification (en) - config: en - split: test - revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf - metrics: - - type: accuracy - value: 91.06703146374831 - - type: f1 - value: 90.86867815863172 - - task: - type: Classification - dataset: - type: mteb/mtop_domain - name: MTEB MTOPDomainClassification (de) - config: de - split: test - revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf - metrics: - - type: accuracy - value: 87.46970977740209 - - type: f1 - value: 86.36832872036588 - - task: - type: Classification - dataset: - type: mteb/mtop_domain - name: MTEB MTOPDomainClassification (es) - config: es - split: test - revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf - metrics: - - type: accuracy - value: 89.26951300867245 - - type: f1 - value: 88.93561193959502 - - task: - type: Classification - dataset: - type: mteb/mtop_domain - name: MTEB MTOPDomainClassification (fr) - config: fr - split: test - revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf - metrics: - - type: accuracy - value: 84.22799874725963 - - type: f1 - value: 84.30490069236556 - - task: - type: Classification - dataset: - type: mteb/mtop_domain - name: MTEB MTOPDomainClassification (hi) - config: hi - split: test - revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf - metrics: - - type: accuracy - value: 86.02007888131948 - - type: f1 - value: 85.39376041027991 - - task: - type: Classification - dataset: - type: mteb/mtop_domain - name: MTEB MTOPDomainClassification (th) - config: th - split: test - revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf - metrics: - - type: accuracy - value: 85.34900542495481 - - type: f1 - value: 85.39859673336713 - - task: - type: Classification - dataset: - type: mteb/mtop_intent - name: MTEB MTOPIntentClassification (en) - config: en - split: test - revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba - metrics: - - type: accuracy - value: 71.078431372549 - - type: f1 - value: 53.45071102002276 - - task: - type: Classification - dataset: - type: mteb/mtop_intent - name: MTEB MTOPIntentClassification (de) - config: de - split: test - revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba - metrics: - - type: accuracy - value: 65.85798816568047 - - type: f1 - value: 46.53112748993529 - - task: - type: Classification - dataset: - type: mteb/mtop_intent - name: MTEB MTOPIntentClassification (es) - config: es - split: test - revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba - metrics: - - type: accuracy - value: 67.96864576384256 - - type: f1 - value: 45.966703022829506 - - task: - type: Classification - dataset: - type: mteb/mtop_intent - name: MTEB MTOPIntentClassification (fr) - config: fr - split: test - revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba - metrics: - - type: accuracy - value: 61.31537738803633 - - type: f1 - value: 45.52601712835461 - - task: - type: Classification - dataset: - type: mteb/mtop_intent - name: MTEB MTOPIntentClassification (hi) - config: hi - split: test - revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba - metrics: - - type: accuracy - value: 66.29616349946218 - - type: f1 - value: 47.24166485726613 - - task: - type: Classification - dataset: - type: mteb/mtop_intent - name: MTEB MTOPIntentClassification (th) - config: th - split: test - revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba - metrics: - - type: accuracy - value: 67.51537070524412 - - type: f1 - value: 49.463476319014276 - - task: - type: Classification - dataset: - type: mteb/amazon_massive_intent - name: MTEB MassiveIntentClassification (af) - config: af - split: test - revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 - metrics: - - type: accuracy - value: 57.06792199058508 - - type: f1 - value: 54.094921857502285 - - task: - type: Classification - dataset: - type: mteb/amazon_massive_intent - name: MTEB MassiveIntentClassification (am) - config: am - split: test - revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 - metrics: - - type: accuracy - value: 51.960322797579025 - - type: f1 - value: 48.547371223370945 - - task: - type: Classification - dataset: - type: mteb/amazon_massive_intent - name: MTEB MassiveIntentClassification (ar) - config: ar - split: test - revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 - metrics: - - type: accuracy - value: 54.425016812373904 - - type: f1 - value: 50.47069202054312 - - task: - type: Classification - dataset: - type: mteb/amazon_massive_intent - name: MTEB MassiveIntentClassification (az) - config: az - split: test - revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 - metrics: - - type: accuracy - value: 59.798251513113655 - - type: f1 - value: 57.05013069086648 - - task: - type: Classification - dataset: - type: mteb/amazon_massive_intent - name: MTEB MassiveIntentClassification (bn) - config: bn - split: test - revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 - metrics: - - type: accuracy - value: 59.37794216543376 - - type: f1 - value: 56.3607992649805 - - task: - type: Classification - dataset: - type: mteb/amazon_massive_intent - name: MTEB MassiveIntentClassification (cy) - config: cy - split: test - revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 - metrics: - - type: accuracy - value: 46.56018829858777 - - type: f1 - value: 43.87319715715134 - - task: - type: Classification - dataset: - type: mteb/amazon_massive_intent - name: MTEB MassiveIntentClassification (da) - config: da - split: test - revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 - metrics: - - type: accuracy - value: 62.9724277067922 - - type: f1 - value: 59.36480066245562 - - task: - type: Classification - dataset: - type: mteb/amazon_massive_intent - name: MTEB MassiveIntentClassification (de) - config: de - split: test - revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 - metrics: - - type: accuracy - value: 62.72696704774715 - - type: f1 - value: 59.143595966615855 - - task: - type: Classification - dataset: - type: mteb/amazon_massive_intent - name: MTEB MassiveIntentClassification (el) - config: el - split: test - revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 - metrics: - - type: accuracy - value: 61.5971755211836 - - type: f1 - value: 59.169445724946726 - - task: - type: Classification - dataset: - type: mteb/amazon_massive_intent - name: MTEB MassiveIntentClassification (en) - config: en - split: test - revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 - metrics: - - type: accuracy - value: 70.29589778076665 - - type: f1 - value: 67.7577001808977 - - task: - type: Classification - dataset: - type: mteb/amazon_massive_intent - name: MTEB MassiveIntentClassification (es) - config: es - split: test - revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 - metrics: - - type: accuracy - value: 66.31136516476126 - - type: f1 - value: 64.52032955983242 - - task: - type: Classification - dataset: - type: mteb/amazon_massive_intent - name: MTEB MassiveIntentClassification (fa) - config: fa - split: test - revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 - metrics: - - type: accuracy - value: 65.54472091459314 - - type: f1 - value: 61.47903120066317 - - task: - type: Classification - dataset: - type: mteb/amazon_massive_intent - name: MTEB MassiveIntentClassification (fi) - config: fi - split: test - revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 - metrics: - - type: accuracy - value: 61.45595158036314 - - type: f1 - value: 58.0891846024637 - - task: - type: Classification - dataset: - type: mteb/amazon_massive_intent - name: MTEB MassiveIntentClassification (fr) - config: fr - split: test - revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 - metrics: - - type: accuracy - value: 65.47074646940149 - - type: f1 - value: 62.84830858877575 - - task: - type: Classification - dataset: - type: mteb/amazon_massive_intent - name: MTEB MassiveIntentClassification (he) - config: he - split: test - revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 - metrics: - - type: accuracy - value: 58.046402151983855 - - type: f1 - value: 55.269074430533195 - - task: - type: Classification - dataset: - type: mteb/amazon_massive_intent - name: MTEB MassiveIntentClassification (hi) - config: hi - split: test - revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 - metrics: - - type: accuracy - value: 64.06523201075991 - - type: f1 - value: 61.35339643021369 - - task: - type: Classification - dataset: - type: mteb/amazon_massive_intent - name: MTEB MassiveIntentClassification (hu) - config: hu - split: test - revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 - metrics: - - type: accuracy - value: 60.954942837928726 - - type: f1 - value: 57.07035922704846 - - task: - type: Classification - dataset: - type: mteb/amazon_massive_intent - name: MTEB MassiveIntentClassification (hy) - config: hy - split: test - revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 - metrics: - - type: accuracy - value: 57.404169468728995 - - type: f1 - value: 53.94259011839138 - - task: - type: Classification - dataset: - type: mteb/amazon_massive_intent - name: MTEB MassiveIntentClassification (id) - config: id - split: test - revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 - metrics: - - type: accuracy - value: 64.16610625420309 - - type: f1 - value: 61.337103431499365 - - task: - type: Classification - dataset: - type: mteb/amazon_massive_intent - name: MTEB MassiveIntentClassification (is) - config: is - split: test - revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 - metrics: - - type: accuracy - value: 52.262945527908535 - - type: f1 - value: 49.7610691598921 - - task: - type: Classification - dataset: - type: mteb/amazon_massive_intent - name: MTEB MassiveIntentClassification (it) - config: it - split: test - revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 - metrics: - - type: accuracy - value: 65.54472091459314 - - type: f1 - value: 63.469099018440154 - - task: - type: Classification - dataset: - type: mteb/amazon_massive_intent - name: MTEB MassiveIntentClassification (ja) - config: ja - split: test - revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 - metrics: - - type: accuracy - value: 68.22797579018157 - - type: f1 - value: 64.89098471083001 - - task: - type: Classification - dataset: - type: mteb/amazon_massive_intent - name: MTEB MassiveIntentClassification (jv) - config: jv - split: test - revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 - metrics: - - type: accuracy - value: 50.847343644922674 - - type: f1 - value: 47.8536963168393 - - task: - type: Classification - dataset: - type: mteb/amazon_massive_intent - name: MTEB MassiveIntentClassification (ka) - config: ka - split: test - revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 - metrics: - - type: accuracy - value: 48.45326160053799 - - type: f1 - value: 46.370078045805556 - - task: - type: Classification - dataset: - type: mteb/amazon_massive_intent - name: MTEB MassiveIntentClassification (km) - config: km - split: test - revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 - metrics: - - type: accuracy - value: 42.83120376597175 - - type: f1 - value: 39.68948521599982 - - task: - type: Classification - dataset: - type: mteb/amazon_massive_intent - name: MTEB MassiveIntentClassification (kn) - config: kn - split: test - revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 - metrics: - - type: accuracy - value: 57.5084061869536 - - type: f1 - value: 53.961876160401545 - - task: - type: Classification - dataset: - type: mteb/amazon_massive_intent - name: MTEB MassiveIntentClassification (ko) - config: ko - split: test - revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 - metrics: - - type: accuracy - value: 63.7895090786819 - - type: f1 - value: 61.134223684676 - - task: - type: Classification - dataset: - type: mteb/amazon_massive_intent - name: MTEB MassiveIntentClassification (lv) - config: lv - split: test - revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 - metrics: - - type: accuracy - value: 54.98991257565569 - - type: f1 - value: 52.579862862826296 - - task: - type: Classification - dataset: - type: mteb/amazon_massive_intent - name: MTEB MassiveIntentClassification (ml) - config: ml - split: test - revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 - metrics: - - type: accuracy - value: 61.90316072629456 - - type: f1 - value: 58.203024538290336 - - task: - type: Classification - dataset: - type: mteb/amazon_massive_intent - name: MTEB MassiveIntentClassification (mn) - config: mn - split: test - revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 - metrics: - - type: accuracy - value: 57.09818426361802 - - type: f1 - value: 54.22718458445455 - - task: - type: Classification - dataset: - type: mteb/amazon_massive_intent - name: MTEB MassiveIntentClassification (ms) - config: ms - split: test - revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 - metrics: - - type: accuracy - value: 58.991257565568255 - - type: f1 - value: 55.84892781767421 - - task: - type: Classification - dataset: - type: mteb/amazon_massive_intent - name: MTEB MassiveIntentClassification (my) - config: my - split: test - revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 - metrics: - - type: accuracy - value: 55.901143241425686 - - type: f1 - value: 52.25264332199797 - - task: - type: Classification - dataset: - type: mteb/amazon_massive_intent - name: MTEB MassiveIntentClassification (nb) - config: nb - split: test - revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 - metrics: - - type: accuracy - value: 61.96368527236047 - - type: f1 - value: 58.927243876153454 - - task: - type: Classification - dataset: - type: mteb/amazon_massive_intent - name: MTEB MassiveIntentClassification (nl) - config: nl - split: test - revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 - metrics: - - type: accuracy - value: 65.64223268325489 - - type: f1 - value: 62.340453718379706 - - task: - type: Classification - dataset: - type: mteb/amazon_massive_intent - name: MTEB MassiveIntentClassification (pl) - config: pl - split: test - revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 - metrics: - - type: accuracy - value: 64.52589105581708 - - type: f1 - value: 61.661113187022174 - - task: - type: Classification - dataset: - type: mteb/amazon_massive_intent - name: MTEB MassiveIntentClassification (pt) - config: pt - split: test - revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 - metrics: - - type: accuracy - value: 66.84599865501009 - - type: f1 - value: 64.59342572873005 - - task: - type: Classification - dataset: - type: mteb/amazon_massive_intent - name: MTEB MassiveIntentClassification (ro) - config: ro - split: test - revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 - metrics: - - type: accuracy - value: 60.81035642232684 - - type: f1 - value: 57.5169089806797 - - task: - type: Classification - dataset: - type: mteb/amazon_massive_intent - name: MTEB MassiveIntentClassification (ru) - config: ru - split: test - revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 - metrics: - - type: accuracy - value: 65.75991930060525 - - type: f1 - value: 62.89531115787938 - - task: - type: Classification - dataset: - type: mteb/amazon_massive_intent - name: MTEB MassiveIntentClassification (sl) - config: sl - split: test - revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 - metrics: - - type: accuracy - value: 56.51647612642906 - - type: f1 - value: 54.33154780100043 - - task: - type: Classification - dataset: - type: mteb/amazon_massive_intent - name: MTEB MassiveIntentClassification (sq) - config: sq - split: test - revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 - metrics: - - type: accuracy - value: 57.985877605917956 - - type: f1 - value: 54.46187524463802 - - task: - type: Classification - dataset: - type: mteb/amazon_massive_intent - name: MTEB MassiveIntentClassification (sv) - config: sv - split: test - revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 - metrics: - - type: accuracy - value: 65.03026227303296 - - type: f1 - value: 62.34377392877748 - - task: - type: Classification - dataset: - type: mteb/amazon_massive_intent - name: MTEB MassiveIntentClassification (sw) - config: sw - split: test - revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 - metrics: - - type: accuracy - value: 53.567585743106925 - - type: f1 - value: 50.73770655983206 - - task: - type: Classification - dataset: - type: mteb/amazon_massive_intent - name: MTEB MassiveIntentClassification (ta) - config: ta - split: test - revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 - metrics: - - type: accuracy - value: 57.2595830531271 - - type: f1 - value: 53.657327291708626 - - task: - type: Classification - dataset: - type: mteb/amazon_massive_intent - name: MTEB MassiveIntentClassification (te) - config: te - split: test - revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 - metrics: - - type: accuracy - value: 57.82784129119032 - - type: f1 - value: 54.82518072665301 - - task: - type: Classification - dataset: - type: mteb/amazon_massive_intent - name: MTEB MassiveIntentClassification (th) - config: th - split: test - revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 - metrics: - - type: accuracy - value: 64.06859448554137 - - type: f1 - value: 63.00185280500495 - - task: - type: Classification - dataset: - type: mteb/amazon_massive_intent - name: MTEB MassiveIntentClassification (tl) - config: tl - split: test - revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 - metrics: - - type: accuracy - value: 58.91055817081371 - - type: f1 - value: 55.54116301224262 - - task: - type: Classification - dataset: - type: mteb/amazon_massive_intent - name: MTEB MassiveIntentClassification (tr) - config: tr - split: test - revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 - metrics: - - type: accuracy - value: 63.54404841963686 - - type: f1 - value: 59.57650946030184 - - task: - type: Classification - dataset: - type: mteb/amazon_massive_intent - name: MTEB MassiveIntentClassification (ur) - config: ur - split: test - revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 - metrics: - - type: accuracy - value: 59.27706792199059 - - type: f1 - value: 56.50010066083435 - - task: - type: Classification - dataset: - type: mteb/amazon_massive_intent - name: MTEB MassiveIntentClassification (vi) - config: vi - split: test - revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 - metrics: - - type: accuracy - value: 64.0719569603228 - - type: f1 - value: 61.817075925647956 - - task: - type: Classification - dataset: - type: mteb/amazon_massive_intent - name: MTEB MassiveIntentClassification (zh-CN) - config: zh-CN - split: test - revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 - metrics: - - type: accuracy - value: 68.23806321452591 - - type: f1 - value: 65.24917026029749 - - task: - type: Classification - dataset: - type: mteb/amazon_massive_intent - name: MTEB MassiveIntentClassification (zh-TW) - config: zh-TW - split: test - revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 - metrics: - - type: accuracy - value: 62.53530598520511 - - type: f1 - value: 61.71131132295768 - - task: - type: Classification - dataset: - type: mteb/amazon_massive_scenario - name: MTEB MassiveScenarioClassification (af) - config: af - split: test - revision: 7d571f92784cd94a019292a1f45445077d0ef634 - metrics: - - type: accuracy - value: 63.04303967720243 - - type: f1 - value: 60.3950085685985 - - task: - type: Classification - dataset: - type: mteb/amazon_massive_scenario - name: MTEB MassiveScenarioClassification (am) - config: am - split: test - revision: 7d571f92784cd94a019292a1f45445077d0ef634 - metrics: - - type: accuracy - value: 56.83591123066578 - - type: f1 - value: 54.95059828830849 - - task: - type: Classification - dataset: - type: mteb/amazon_massive_scenario - name: MTEB MassiveScenarioClassification (ar) - config: ar - split: test - revision: 7d571f92784cd94a019292a1f45445077d0ef634 - metrics: - - type: accuracy - value: 59.62340282447881 - - type: f1 - value: 59.525159996498225 - - task: - type: Classification - dataset: - type: mteb/amazon_massive_scenario - name: MTEB MassiveScenarioClassification (az) - config: az - split: test - revision: 7d571f92784cd94a019292a1f45445077d0ef634 - metrics: - - type: accuracy - value: 60.85406859448555 - - type: f1 - value: 59.129299095681276 - - task: - type: Classification - dataset: - type: mteb/amazon_massive_scenario - name: MTEB MassiveScenarioClassification (bn) - config: bn - split: test - revision: 7d571f92784cd94a019292a1f45445077d0ef634 - metrics: - - type: accuracy - value: 62.76731674512441 - - type: f1 - value: 61.159560612627715 - - task: - type: Classification - dataset: - type: mteb/amazon_massive_scenario - name: MTEB MassiveScenarioClassification (cy) - config: cy - split: test - revision: 7d571f92784cd94a019292a1f45445077d0ef634 - metrics: - - type: accuracy - value: 50.181573638197705 - - type: f1 - value: 46.98422176289957 - - task: - type: Classification - dataset: - type: mteb/amazon_massive_scenario - name: MTEB MassiveScenarioClassification (da) - config: da - split: test - revision: 7d571f92784cd94a019292a1f45445077d0ef634 - metrics: - - type: accuracy - value: 68.92737054472092 - - type: f1 - value: 67.69135611952979 - - task: - type: Classification - dataset: - type: mteb/amazon_massive_scenario - name: MTEB MassiveScenarioClassification (de) - config: de - split: test - revision: 7d571f92784cd94a019292a1f45445077d0ef634 - metrics: - - type: accuracy - value: 69.18964357767318 - - type: f1 - value: 68.46106138186214 - - task: - type: Classification - dataset: - type: mteb/amazon_massive_scenario - name: MTEB MassiveScenarioClassification (el) - config: el - split: test - revision: 7d571f92784cd94a019292a1f45445077d0ef634 - metrics: - - type: accuracy - value: 67.0712844653665 - - type: f1 - value: 66.75545422473901 - - task: - type: Classification - dataset: - type: mteb/amazon_massive_scenario - name: MTEB MassiveScenarioClassification (en) - config: en - split: test - revision: 7d571f92784cd94a019292a1f45445077d0ef634 - metrics: - - type: accuracy - value: 74.4754539340955 - - type: f1 - value: 74.38427146553252 - - task: - type: Classification - dataset: - type: mteb/amazon_massive_scenario - name: MTEB MassiveScenarioClassification (es) - config: es - split: test - revision: 7d571f92784cd94a019292a1f45445077d0ef634 - metrics: - - type: accuracy - value: 69.82515131136518 - - type: f1 - value: 69.63516462173847 - - task: - type: Classification - dataset: - type: mteb/amazon_massive_scenario - name: MTEB MassiveScenarioClassification (fa) - config: fa - split: test - revision: 7d571f92784cd94a019292a1f45445077d0ef634 - metrics: - - type: accuracy - value: 68.70880968392737 - - type: f1 - value: 67.45420662567926 - - task: - type: Classification - dataset: - type: mteb/amazon_massive_scenario - name: MTEB MassiveScenarioClassification (fi) - config: fi - split: test - revision: 7d571f92784cd94a019292a1f45445077d0ef634 - metrics: - - type: accuracy - value: 65.95494283792871 - - type: f1 - value: 65.06191009049222 - - task: - type: Classification - dataset: - type: mteb/amazon_massive_scenario - name: MTEB MassiveScenarioClassification (fr) - config: fr - split: test - revision: 7d571f92784cd94a019292a1f45445077d0ef634 - metrics: - - type: accuracy - value: 68.75924680564896 - - type: f1 - value: 68.30833379585945 - - task: - type: Classification - dataset: - type: mteb/amazon_massive_scenario - name: MTEB MassiveScenarioClassification (he) - config: he - split: test - revision: 7d571f92784cd94a019292a1f45445077d0ef634 - metrics: - - type: accuracy - value: 63.806321452589096 - - type: f1 - value: 63.273048243765054 - - task: - type: Classification - dataset: - type: mteb/amazon_massive_scenario - name: MTEB MassiveScenarioClassification (hi) - config: hi - split: test - revision: 7d571f92784cd94a019292a1f45445077d0ef634 - metrics: - - type: accuracy - value: 67.68997982515133 - - type: f1 - value: 66.54703855381324 - - task: - type: Classification - dataset: - type: mteb/amazon_massive_scenario - name: MTEB MassiveScenarioClassification (hu) - config: hu - split: test - revision: 7d571f92784cd94a019292a1f45445077d0ef634 - metrics: - - type: accuracy - value: 66.46940147948891 - - type: f1 - value: 65.91017343463396 - - task: - type: Classification - dataset: - type: mteb/amazon_massive_scenario - name: MTEB MassiveScenarioClassification (hy) - config: hy - split: test - revision: 7d571f92784cd94a019292a1f45445077d0ef634 - metrics: - - type: accuracy - value: 59.49899125756556 - - type: f1 - value: 57.90333469917769 - - task: - type: Classification - dataset: - type: mteb/amazon_massive_scenario - name: MTEB MassiveScenarioClassification (id) - config: id - split: test - revision: 7d571f92784cd94a019292a1f45445077d0ef634 - metrics: - - type: accuracy - value: 67.9219905850706 - - type: f1 - value: 67.23169403762938 - - task: - type: Classification - dataset: - type: mteb/amazon_massive_scenario - name: MTEB MassiveScenarioClassification (is) - config: is - split: test - revision: 7d571f92784cd94a019292a1f45445077d0ef634 - metrics: - - type: accuracy - value: 56.486213853396094 - - type: f1 - value: 54.85282355583758 - - task: - type: Classification - dataset: - type: mteb/amazon_massive_scenario - name: MTEB MassiveScenarioClassification (it) - config: it - split: test - revision: 7d571f92784cd94a019292a1f45445077d0ef634 - metrics: - - type: accuracy - value: 69.04169468728985 - - type: f1 - value: 68.83833333320462 - - task: - type: Classification - dataset: - type: mteb/amazon_massive_scenario - name: MTEB MassiveScenarioClassification (ja) - config: ja - split: test - revision: 7d571f92784cd94a019292a1f45445077d0ef634 - metrics: - - type: accuracy - value: 73.88702084734365 - - type: f1 - value: 74.04474735232299 - - task: - type: Classification - dataset: - type: mteb/amazon_massive_scenario - name: MTEB MassiveScenarioClassification (jv) - config: jv - split: test - revision: 7d571f92784cd94a019292a1f45445077d0ef634 - metrics: - - type: accuracy - value: 56.63416274377943 - - type: f1 - value: 55.11332211687954 - - task: - type: Classification - dataset: - type: mteb/amazon_massive_scenario - name: MTEB MassiveScenarioClassification (ka) - config: ka - split: test - revision: 7d571f92784cd94a019292a1f45445077d0ef634 - metrics: - - type: accuracy - value: 52.23604572965702 - - type: f1 - value: 50.86529813991055 - - task: - type: Classification - dataset: - type: mteb/amazon_massive_scenario - name: MTEB MassiveScenarioClassification (km) - config: km - split: test - revision: 7d571f92784cd94a019292a1f45445077d0ef634 - metrics: - - type: accuracy - value: 46.62407531943511 - - type: f1 - value: 43.63485467164535 - - task: - type: Classification - dataset: - type: mteb/amazon_massive_scenario - name: MTEB MassiveScenarioClassification (kn) - config: kn - split: test - revision: 7d571f92784cd94a019292a1f45445077d0ef634 - metrics: - - type: accuracy - value: 59.15601882985878 - - type: f1 - value: 57.522837510959924 - - task: - type: Classification - dataset: - type: mteb/amazon_massive_scenario - name: MTEB MassiveScenarioClassification (ko) - config: ko - split: test - revision: 7d571f92784cd94a019292a1f45445077d0ef634 - metrics: - - type: accuracy - value: 69.84532616005382 - - type: f1 - value: 69.60021127179697 - - task: - type: Classification - dataset: - type: mteb/amazon_massive_scenario - name: MTEB MassiveScenarioClassification (lv) - config: lv - split: test - revision: 7d571f92784cd94a019292a1f45445077d0ef634 - metrics: - - type: accuracy - value: 56.65770006724949 - - type: f1 - value: 55.84219135523227 - - task: - type: Classification - dataset: - type: mteb/amazon_massive_scenario - name: MTEB MassiveScenarioClassification (ml) - config: ml - split: test - revision: 7d571f92784cd94a019292a1f45445077d0ef634 - metrics: - - type: accuracy - value: 66.53665097511768 - - type: f1 - value: 65.09087787792639 - - task: - type: Classification - dataset: - type: mteb/amazon_massive_scenario - name: MTEB MassiveScenarioClassification (mn) - config: mn - split: test - revision: 7d571f92784cd94a019292a1f45445077d0ef634 - metrics: - - type: accuracy - value: 59.31405514458642 - - type: f1 - value: 58.06135303831491 - - task: - type: Classification - dataset: - type: mteb/amazon_massive_scenario - name: MTEB MassiveScenarioClassification (ms) - config: ms - split: test - revision: 7d571f92784cd94a019292a1f45445077d0ef634 - metrics: - - type: accuracy - value: 64.88231338264964 - - type: f1 - value: 62.751099407787926 - - task: - type: Classification - dataset: - type: mteb/amazon_massive_scenario - name: MTEB MassiveScenarioClassification (my) - config: my - split: test - revision: 7d571f92784cd94a019292a1f45445077d0ef634 - metrics: - - type: accuracy - value: 58.86012104909213 - - type: f1 - value: 56.29118323058282 - - task: - type: Classification - dataset: - type: mteb/amazon_massive_scenario - name: MTEB MassiveScenarioClassification (nb) - config: nb - split: test - revision: 7d571f92784cd94a019292a1f45445077d0ef634 - metrics: - - type: accuracy - value: 67.37390719569602 - - type: f1 - value: 66.27922244885102 - - task: - type: Classification - dataset: - type: mteb/amazon_massive_scenario - name: MTEB MassiveScenarioClassification (nl) - config: nl - split: test - revision: 7d571f92784cd94a019292a1f45445077d0ef634 - metrics: - - type: accuracy - value: 70.8675184936113 - - type: f1 - value: 70.22146529932019 - - task: - type: Classification - dataset: - type: mteb/amazon_massive_scenario - name: MTEB MassiveScenarioClassification (pl) - config: pl - split: test - revision: 7d571f92784cd94a019292a1f45445077d0ef634 - metrics: - - type: accuracy - value: 68.2212508406187 - - type: f1 - value: 67.77454802056282 - - task: - type: Classification - dataset: - type: mteb/amazon_massive_scenario - name: MTEB MassiveScenarioClassification (pt) - config: pt - split: test - revision: 7d571f92784cd94a019292a1f45445077d0ef634 - metrics: - - type: accuracy - value: 68.18090114324143 - - type: f1 - value: 68.03737625431621 - - task: - type: Classification - dataset: - type: mteb/amazon_massive_scenario - name: MTEB MassiveScenarioClassification (ro) - config: ro - split: test - revision: 7d571f92784cd94a019292a1f45445077d0ef634 - metrics: - - type: accuracy - value: 64.65030262273034 - - type: f1 - value: 63.792945486912856 - - task: - type: Classification - dataset: - type: mteb/amazon_massive_scenario - name: MTEB MassiveScenarioClassification (ru) - config: ru - split: test - revision: 7d571f92784cd94a019292a1f45445077d0ef634 - metrics: - - type: accuracy - value: 69.48217888365838 - - type: f1 - value: 69.96028997292197 - - task: - type: Classification - dataset: - type: mteb/amazon_massive_scenario - name: MTEB MassiveScenarioClassification (sl) - config: sl - split: test - revision: 7d571f92784cd94a019292a1f45445077d0ef634 - metrics: - - type: accuracy - value: 60.17821116341627 - - type: f1 - value: 59.3935969827171 - - task: - type: Classification - dataset: - type: mteb/amazon_massive_scenario - name: MTEB MassiveScenarioClassification (sq) - config: sq - split: test - revision: 7d571f92784cd94a019292a1f45445077d0ef634 - metrics: - - type: accuracy - value: 62.86146603900471 - - type: f1 - value: 60.133692735032376 - - task: - type: Classification - dataset: - type: mteb/amazon_massive_scenario - name: MTEB MassiveScenarioClassification (sv) - config: sv - split: test - revision: 7d571f92784cd94a019292a1f45445077d0ef634 - metrics: - - type: accuracy - value: 70.89441829186282 - - type: f1 - value: 70.03064076194089 - - task: - type: Classification - dataset: - type: mteb/amazon_massive_scenario - name: MTEB MassiveScenarioClassification (sw) - config: sw - split: test - revision: 7d571f92784cd94a019292a1f45445077d0ef634 - metrics: - - type: accuracy - value: 58.15063887020847 - - type: f1 - value: 56.23326278499678 - - task: - type: Classification - dataset: - type: mteb/amazon_massive_scenario - name: MTEB MassiveScenarioClassification (ta) - config: ta - split: test - revision: 7d571f92784cd94a019292a1f45445077d0ef634 - metrics: - - type: accuracy - value: 59.43846671149966 - - type: f1 - value: 57.70440450281974 - - task: - type: Classification - dataset: - type: mteb/amazon_massive_scenario - name: MTEB MassiveScenarioClassification (te) - config: te - split: test - revision: 7d571f92784cd94a019292a1f45445077d0ef634 - metrics: - - type: accuracy - value: 60.8507061197041 - - type: f1 - value: 59.22916396061171 - - task: - type: Classification - dataset: - type: mteb/amazon_massive_scenario - name: MTEB MassiveScenarioClassification (th) - config: th - split: test - revision: 7d571f92784cd94a019292a1f45445077d0ef634 - metrics: - - type: accuracy - value: 70.65568258238063 - - type: f1 - value: 69.90736239440633 - - task: - type: Classification - dataset: - type: mteb/amazon_massive_scenario - name: MTEB MassiveScenarioClassification (tl) - config: tl - split: test - revision: 7d571f92784cd94a019292a1f45445077d0ef634 - metrics: - - type: accuracy - value: 60.8843308675185 - - type: f1 - value: 59.30332663713599 - - task: - type: Classification - dataset: - type: mteb/amazon_massive_scenario - name: MTEB MassiveScenarioClassification (tr) - config: tr - split: test - revision: 7d571f92784cd94a019292a1f45445077d0ef634 - metrics: - - type: accuracy - value: 68.05312710154674 - - type: f1 - value: 67.44024062594775 - - task: - type: Classification - dataset: - type: mteb/amazon_massive_scenario - name: MTEB MassiveScenarioClassification (ur) - config: ur - split: test - revision: 7d571f92784cd94a019292a1f45445077d0ef634 - metrics: - - type: accuracy - value: 62.111634162743776 - - type: f1 - value: 60.89083013084519 - - task: - type: Classification - dataset: - type: mteb/amazon_massive_scenario - name: MTEB MassiveScenarioClassification (vi) - config: vi - split: test - revision: 7d571f92784cd94a019292a1f45445077d0ef634 - metrics: - - type: accuracy - value: 67.44115669132482 - - type: f1 - value: 67.92227541674552 - - task: - type: Classification - dataset: - type: mteb/amazon_massive_scenario - name: MTEB MassiveScenarioClassification (zh-CN) - config: zh-CN - split: test - revision: 7d571f92784cd94a019292a1f45445077d0ef634 - metrics: - - type: accuracy - value: 74.4687289845326 - - type: f1 - value: 74.16376793486025 - - task: - type: Classification - dataset: - type: mteb/amazon_massive_scenario - name: MTEB MassiveScenarioClassification (zh-TW) - config: zh-TW - split: test - revision: 7d571f92784cd94a019292a1f45445077d0ef634 - metrics: - - type: accuracy - value: 68.31876260928043 - - type: f1 - value: 68.5246745215607 - - task: - type: Clustering - dataset: - type: mteb/medrxiv-clustering-p2p - name: MTEB MedrxivClusteringP2P - config: default - split: test - revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73 - metrics: - - type: v_measure - value: 30.90431696479766 - - task: - type: Clustering - dataset: - type: mteb/medrxiv-clustering-s2s - name: MTEB MedrxivClusteringS2S - config: default - split: test - revision: 35191c8c0dca72d8ff3efcd72aa802307d469663 - metrics: - - type: v_measure - value: 27.259158476693774 - - task: - type: Reranking - dataset: - type: mteb/mind_small - name: MTEB MindSmallReranking - config: default - split: test - revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69 - metrics: - - type: map - value: 30.28445330838555 - - type: mrr - value: 31.15758529581164 - - task: - type: Retrieval - dataset: - type: nfcorpus - name: MTEB NFCorpus - config: default - split: test - revision: None - metrics: - - type: map_at_1 - value: 5.353 - - type: map_at_10 - value: 11.565 - - type: map_at_100 - value: 14.097000000000001 - - type: map_at_1000 - value: 15.354999999999999 - - type: map_at_3 - value: 8.749 - - type: map_at_5 - value: 9.974 - - type: mrr_at_1 - value: 42.105 - - type: mrr_at_10 - value: 50.589 - - type: mrr_at_100 - value: 51.187000000000005 - - type: mrr_at_1000 - value: 51.233 - - type: mrr_at_3 - value: 48.246 - - type: mrr_at_5 - value: 49.546 - - type: ndcg_at_1 - value: 40.402 - - type: ndcg_at_10 - value: 31.009999999999998 - - type: ndcg_at_100 - value: 28.026 - - type: ndcg_at_1000 - value: 36.905 - - type: ndcg_at_3 - value: 35.983 - - type: ndcg_at_5 - value: 33.764 - - type: precision_at_1 - value: 42.105 - - type: precision_at_10 - value: 22.786 - - type: precision_at_100 - value: 6.916 - - type: precision_at_1000 - value: 1.981 - - type: precision_at_3 - value: 33.333 - - type: precision_at_5 - value: 28.731 - - type: recall_at_1 - value: 5.353 - - type: recall_at_10 - value: 15.039 - - type: recall_at_100 - value: 27.348 - - type: recall_at_1000 - value: 59.453 - - type: recall_at_3 - value: 9.792 - - type: recall_at_5 - value: 11.882 - - task: - type: Retrieval - dataset: - type: nq - name: MTEB NQ - config: default - split: test - revision: None - metrics: - - type: map_at_1 - value: 33.852 - - type: map_at_10 - value: 48.924 - - type: map_at_100 - value: 49.854 - - type: map_at_1000 - value: 49.886 - - type: map_at_3 - value: 44.9 - - type: map_at_5 - value: 47.387 - - type: mrr_at_1 - value: 38.035999999999994 - - type: mrr_at_10 - value: 51.644 - - type: mrr_at_100 - value: 52.339 - - type: mrr_at_1000 - value: 52.35999999999999 - - type: mrr_at_3 - value: 48.421 - - type: mrr_at_5 - value: 50.468999999999994 - - type: ndcg_at_1 - value: 38.007000000000005 - - type: ndcg_at_10 - value: 56.293000000000006 - - type: ndcg_at_100 - value: 60.167 - - type: ndcg_at_1000 - value: 60.916000000000004 - - type: ndcg_at_3 - value: 48.903999999999996 - - type: ndcg_at_5 - value: 52.978 - - type: precision_at_1 - value: 38.007000000000005 - - type: precision_at_10 - value: 9.041 - - type: precision_at_100 - value: 1.1199999999999999 - - type: precision_at_1000 - value: 0.11900000000000001 - - type: precision_at_3 - value: 22.084 - - type: precision_at_5 - value: 15.608 - - type: recall_at_1 - value: 33.852 - - type: recall_at_10 - value: 75.893 - - type: recall_at_100 - value: 92.589 - - type: recall_at_1000 - value: 98.153 - - type: recall_at_3 - value: 56.969 - - type: recall_at_5 - value: 66.283 - - task: - type: Retrieval - dataset: - type: quora - name: MTEB QuoraRetrieval - config: default - split: test - revision: None - metrics: - - type: map_at_1 - value: 69.174 - - type: map_at_10 - value: 82.891 - - type: map_at_100 - value: 83.545 - - type: map_at_1000 - value: 83.56700000000001 - - type: map_at_3 - value: 79.944 - - type: map_at_5 - value: 81.812 - - type: mrr_at_1 - value: 79.67999999999999 - - type: mrr_at_10 - value: 86.279 - - type: mrr_at_100 - value: 86.39 - - type: mrr_at_1000 - value: 86.392 - - type: mrr_at_3 - value: 85.21 - - type: mrr_at_5 - value: 85.92999999999999 - - type: ndcg_at_1 - value: 79.69000000000001 - - type: ndcg_at_10 - value: 86.929 - - type: ndcg_at_100 - value: 88.266 - - type: ndcg_at_1000 - value: 88.428 - - type: ndcg_at_3 - value: 83.899 - - type: ndcg_at_5 - value: 85.56700000000001 - - type: precision_at_1 - value: 79.69000000000001 - - type: precision_at_10 - value: 13.161000000000001 - - type: precision_at_100 - value: 1.513 - - type: precision_at_1000 - value: 0.156 - - type: precision_at_3 - value: 36.603 - - type: precision_at_5 - value: 24.138 - - type: recall_at_1 - value: 69.174 - - type: recall_at_10 - value: 94.529 - - type: recall_at_100 - value: 99.15 - - type: recall_at_1000 - value: 99.925 - - type: recall_at_3 - value: 85.86200000000001 - - type: recall_at_5 - value: 90.501 - - task: - type: Clustering - dataset: - type: mteb/reddit-clustering - name: MTEB RedditClustering - config: default - split: test - revision: 24640382cdbf8abc73003fb0fa6d111a705499eb - metrics: - - type: v_measure - value: 39.13064340585255 - - task: - type: Clustering - dataset: - type: mteb/reddit-clustering-p2p - name: MTEB RedditClusteringP2P - config: default - split: test - revision: 282350215ef01743dc01b456c7f5241fa8937f16 - metrics: - - type: v_measure - value: 58.97884249325877 - - task: - type: Retrieval - dataset: - type: scidocs - name: MTEB SCIDOCS - config: default - split: test - revision: None - metrics: - - type: map_at_1 - value: 3.4680000000000004 - - type: map_at_10 - value: 7.865 - - type: map_at_100 - value: 9.332 - - type: map_at_1000 - value: 9.587 - - type: map_at_3 - value: 5.800000000000001 - - type: map_at_5 - value: 6.8790000000000004 - - type: mrr_at_1 - value: 17.0 - - type: mrr_at_10 - value: 25.629 - - type: mrr_at_100 - value: 26.806 - - type: mrr_at_1000 - value: 26.889000000000003 - - type: mrr_at_3 - value: 22.8 - - type: mrr_at_5 - value: 24.26 - - type: ndcg_at_1 - value: 17.0 - - type: ndcg_at_10 - value: 13.895 - - type: ndcg_at_100 - value: 20.491999999999997 - - type: ndcg_at_1000 - value: 25.759999999999998 - - type: ndcg_at_3 - value: 13.347999999999999 - - type: ndcg_at_5 - value: 11.61 - - type: precision_at_1 - value: 17.0 - - type: precision_at_10 - value: 7.090000000000001 - - type: precision_at_100 - value: 1.669 - - type: precision_at_1000 - value: 0.294 - - type: precision_at_3 - value: 12.3 - - type: precision_at_5 - value: 10.02 - - type: recall_at_1 - value: 3.4680000000000004 - - type: recall_at_10 - value: 14.363000000000001 - - type: recall_at_100 - value: 33.875 - - type: recall_at_1000 - value: 59.711999999999996 - - type: recall_at_3 - value: 7.483 - - type: recall_at_5 - value: 10.173 - - task: - type: STS - dataset: - type: mteb/sickr-sts - name: MTEB SICK-R - config: default - split: test - revision: a6ea5a8cab320b040a23452cc28066d9beae2cee - metrics: - - type: cos_sim_pearson - value: 83.04084311714061 - - type: cos_sim_spearman - value: 77.51342467443078 - - type: euclidean_pearson - value: 80.0321166028479 - - type: euclidean_spearman - value: 77.29249114733226 - - type: manhattan_pearson - value: 80.03105964262431 - - type: manhattan_spearman - value: 77.22373689514794 - - task: - type: STS - dataset: - type: mteb/sts12-sts - name: MTEB STS12 - config: default - split: test - revision: a0d554a64d88156834ff5ae9920b964011b16384 - metrics: - - type: cos_sim_pearson - value: 84.1680158034387 - - type: cos_sim_spearman - value: 76.55983344071117 - - type: euclidean_pearson - value: 79.75266678300143 - - type: euclidean_spearman - value: 75.34516823467025 - - type: manhattan_pearson - value: 79.75959151517357 - - type: manhattan_spearman - value: 75.42330344141912 - - task: - type: STS - dataset: - type: mteb/sts13-sts - name: MTEB STS13 - config: default - split: test - revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca - metrics: - - type: cos_sim_pearson - value: 76.48898993209346 - - type: cos_sim_spearman - value: 76.96954120323366 - - type: euclidean_pearson - value: 76.94139109279668 - - type: euclidean_spearman - value: 76.85860283201711 - - type: manhattan_pearson - value: 76.6944095091912 - - type: manhattan_spearman - value: 76.61096912972553 - - task: - type: STS - dataset: - type: mteb/sts14-sts - name: MTEB STS14 - config: default - split: test - revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375 - metrics: - - type: cos_sim_pearson - value: 77.85082366246944 - - type: cos_sim_spearman - value: 75.52053350101731 - - type: euclidean_pearson - value: 77.1165845070926 - - type: euclidean_spearman - value: 75.31216065884388 - - type: manhattan_pearson - value: 77.06193941833494 - - type: manhattan_spearman - value: 75.31003701700112 - - task: - type: STS - dataset: - type: mteb/sts15-sts - name: MTEB STS15 - config: default - split: test - revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3 - metrics: - - type: cos_sim_pearson - value: 86.36305246526497 - - type: cos_sim_spearman - value: 87.11704613927415 - - type: euclidean_pearson - value: 86.04199125810939 - - type: euclidean_spearman - value: 86.51117572414263 - - type: manhattan_pearson - value: 86.0805106816633 - - type: manhattan_spearman - value: 86.52798366512229 - - task: - type: STS - dataset: - type: mteb/sts16-sts - name: MTEB STS16 - config: default - split: test - revision: 4d8694f8f0e0100860b497b999b3dbed754a0513 - metrics: - - type: cos_sim_pearson - value: 82.18536255599724 - - type: cos_sim_spearman - value: 83.63377151025418 - - type: euclidean_pearson - value: 83.24657467993141 - - type: euclidean_spearman - value: 84.02751481993825 - - type: manhattan_pearson - value: 83.11941806582371 - - type: manhattan_spearman - value: 83.84251281019304 - - task: - type: STS - dataset: - type: mteb/sts17-crosslingual-sts - name: MTEB STS17 (ko-ko) - config: ko-ko - split: test - revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d - metrics: - - type: cos_sim_pearson - value: 78.95816528475514 - - type: cos_sim_spearman - value: 78.86607380120462 - - type: euclidean_pearson - value: 78.51268699230545 - - type: euclidean_spearman - value: 79.11649316502229 - - type: manhattan_pearson - value: 78.32367302808157 - - type: manhattan_spearman - value: 78.90277699624637 - - task: - type: STS - dataset: - type: mteb/sts17-crosslingual-sts - name: MTEB STS17 (ar-ar) - config: ar-ar - split: test - revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d - metrics: - - type: cos_sim_pearson - value: 72.89126914997624 - - type: cos_sim_spearman - value: 73.0296921832678 - - type: euclidean_pearson - value: 71.50385903677738 - - type: euclidean_spearman - value: 73.13368899716289 - - type: manhattan_pearson - value: 71.47421463379519 - - type: manhattan_spearman - value: 73.03383242946575 - - task: - type: STS - dataset: - type: mteb/sts17-crosslingual-sts - name: MTEB STS17 (en-ar) - config: en-ar - split: test - revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d - metrics: - - type: cos_sim_pearson - value: 59.22923684492637 - - type: cos_sim_spearman - value: 57.41013211368396 - - type: euclidean_pearson - value: 61.21107388080905 - - type: euclidean_spearman - value: 60.07620768697254 - - type: manhattan_pearson - value: 59.60157142786555 - - type: manhattan_spearman - value: 59.14069604103739 - - task: - type: STS - dataset: - type: mteb/sts17-crosslingual-sts - name: MTEB STS17 (en-de) - config: en-de - split: test - revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d - metrics: - - type: cos_sim_pearson - value: 76.24345978774299 - - type: cos_sim_spearman - value: 77.24225743830719 - - type: euclidean_pearson - value: 76.66226095469165 - - type: euclidean_spearman - value: 77.60708820493146 - - type: manhattan_pearson - value: 76.05303324760429 - - type: manhattan_spearman - value: 76.96353149912348 - - task: - type: STS - dataset: - type: mteb/sts17-crosslingual-sts - name: MTEB STS17 (en-en) - config: en-en - split: test - revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d - metrics: - - type: cos_sim_pearson - value: 85.50879160160852 - - type: cos_sim_spearman - value: 86.43594662965224 - - type: euclidean_pearson - value: 86.06846012826577 - - type: euclidean_spearman - value: 86.02041395794136 - - type: manhattan_pearson - value: 86.10916255616904 - - type: manhattan_spearman - value: 86.07346068198953 - - task: - type: STS - dataset: - type: mteb/sts17-crosslingual-sts - name: MTEB STS17 (en-tr) - config: en-tr - split: test - revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d - metrics: - - type: cos_sim_pearson - value: 58.39803698977196 - - type: cos_sim_spearman - value: 55.96910950423142 - - type: euclidean_pearson - value: 58.17941175613059 - - type: euclidean_spearman - value: 55.03019330522745 - - type: manhattan_pearson - value: 57.333358138183286 - - type: manhattan_spearman - value: 54.04614023149965 - - task: - type: STS - dataset: - type: mteb/sts17-crosslingual-sts - name: MTEB STS17 (es-en) - config: es-en - split: test - revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d - metrics: - - type: cos_sim_pearson - value: 70.98304089637197 - - type: cos_sim_spearman - value: 72.44071656215888 - - type: euclidean_pearson - value: 72.19224359033983 - - type: euclidean_spearman - value: 73.89871188913025 - - type: manhattan_pearson - value: 71.21098311547406 - - type: manhattan_spearman - value: 72.93405764824821 - - task: - type: STS - dataset: - type: mteb/sts17-crosslingual-sts - name: MTEB STS17 (es-es) - config: es-es - split: test - revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d - metrics: - - type: cos_sim_pearson - value: 85.99792397466308 - - type: cos_sim_spearman - value: 84.83824377879495 - - type: euclidean_pearson - value: 85.70043288694438 - - type: euclidean_spearman - value: 84.70627558703686 - - type: manhattan_pearson - value: 85.89570850150801 - - type: manhattan_spearman - value: 84.95806105313007 - - task: - type: STS - dataset: - type: mteb/sts17-crosslingual-sts - name: MTEB STS17 (fr-en) - config: fr-en - split: test - revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d - metrics: - - type: cos_sim_pearson - value: 72.21850322994712 - - type: cos_sim_spearman - value: 72.28669398117248 - - type: euclidean_pearson - value: 73.40082510412948 - - type: euclidean_spearman - value: 73.0326539281865 - - type: manhattan_pearson - value: 71.8659633964841 - - type: manhattan_spearman - value: 71.57817425823303 - - task: - type: STS - dataset: - type: mteb/sts17-crosslingual-sts - name: MTEB STS17 (it-en) - config: it-en - split: test - revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d - metrics: - - type: cos_sim_pearson - value: 75.80921368595645 - - type: cos_sim_spearman - value: 77.33209091229315 - - type: euclidean_pearson - value: 76.53159540154829 - - type: euclidean_spearman - value: 78.17960842810093 - - type: manhattan_pearson - value: 76.13530186637601 - - type: manhattan_spearman - value: 78.00701437666875 - - task: - type: STS - dataset: - type: mteb/sts17-crosslingual-sts - name: MTEB STS17 (nl-en) - config: nl-en - split: test - revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d - metrics: - - type: cos_sim_pearson - value: 74.74980608267349 - - type: cos_sim_spearman - value: 75.37597374318821 - - type: euclidean_pearson - value: 74.90506081911661 - - type: euclidean_spearman - value: 75.30151613124521 - - type: manhattan_pearson - value: 74.62642745918002 - - type: manhattan_spearman - value: 75.18619716592303 - - task: - type: STS - dataset: - type: mteb/sts22-crosslingual-sts - name: MTEB STS22 (en) - config: en - split: test - revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 - metrics: - - type: cos_sim_pearson - value: 59.632662289205584 - - type: cos_sim_spearman - value: 60.938543391610914 - - type: euclidean_pearson - value: 62.113200529767056 - - type: euclidean_spearman - value: 61.410312633261164 - - type: manhattan_pearson - value: 61.75494698945686 - - type: manhattan_spearman - value: 60.92726195322362 - - task: - type: STS - dataset: - type: mteb/sts22-crosslingual-sts - name: MTEB STS22 (de) - config: de - split: test - revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 - metrics: - - type: cos_sim_pearson - value: 45.283470551557244 - - type: cos_sim_spearman - value: 53.44833015864201 - - type: euclidean_pearson - value: 41.17892011120893 - - type: euclidean_spearman - value: 53.81441383126767 - - type: manhattan_pearson - value: 41.17482200420659 - - type: manhattan_spearman - value: 53.82180269276363 - - task: - type: STS - dataset: - type: mteb/sts22-crosslingual-sts - name: MTEB STS22 (es) - config: es - split: test - revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 - metrics: - - type: cos_sim_pearson - value: 60.5069165306236 - - type: cos_sim_spearman - value: 66.87803259033826 - - type: euclidean_pearson - value: 63.5428979418236 - - type: euclidean_spearman - value: 66.9293576586897 - - type: manhattan_pearson - value: 63.59789526178922 - - type: manhattan_spearman - value: 66.86555009875066 - - task: - type: STS - dataset: - type: mteb/sts22-crosslingual-sts - name: MTEB STS22 (pl) - config: pl - split: test - revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 - metrics: - - type: cos_sim_pearson - value: 28.23026196280264 - - type: cos_sim_spearman - value: 35.79397812652861 - - type: euclidean_pearson - value: 17.828102102767353 - - type: euclidean_spearman - value: 35.721501145568894 - - type: manhattan_pearson - value: 17.77134274219677 - - type: manhattan_spearman - value: 35.98107902846267 - - task: - type: STS - dataset: - type: mteb/sts22-crosslingual-sts - name: MTEB STS22 (tr) - config: tr - split: test - revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 - metrics: - - type: cos_sim_pearson - value: 56.51946541393812 - - type: cos_sim_spearman - value: 63.714686006214485 - - type: euclidean_pearson - value: 58.32104651305898 - - type: euclidean_spearman - value: 62.237110895702216 - - type: manhattan_pearson - value: 58.579416468759185 - - type: manhattan_spearman - value: 62.459738981727 - - task: - type: STS - dataset: - type: mteb/sts22-crosslingual-sts - name: MTEB STS22 (ar) - config: ar - split: test - revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 - metrics: - - type: cos_sim_pearson - value: 48.76009839569795 - - type: cos_sim_spearman - value: 56.65188431953149 - - type: euclidean_pearson - value: 50.997682160915595 - - type: euclidean_spearman - value: 55.99910008818135 - - type: manhattan_pearson - value: 50.76220659606342 - - type: manhattan_spearman - value: 55.517347595391456 - - task: - type: STS - dataset: - type: mteb/sts22-crosslingual-sts - name: MTEB STS22 (ru) - config: ru - split: test - revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 - metrics: - - type: cos_sim_pearson - value: 51.232731157702425 - - type: cos_sim_spearman - value: 59.89531877658345 - - type: euclidean_pearson - value: 49.937914570348376 - - type: euclidean_spearman - value: 60.220905659334036 - - type: manhattan_pearson - value: 50.00987996844193 - - type: manhattan_spearman - value: 60.081341480977926 - - task: - type: STS - dataset: - type: mteb/sts22-crosslingual-sts - name: MTEB STS22 (zh) - config: zh - split: test - revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 - metrics: - - type: cos_sim_pearson - value: 54.717524559088005 - - type: cos_sim_spearman - value: 66.83570886252286 - - type: euclidean_pearson - value: 58.41338625505467 - - type: euclidean_spearman - value: 66.68991427704938 - - type: manhattan_pearson - value: 58.78638572916807 - - type: manhattan_spearman - value: 66.58684161046335 - - task: - type: STS - dataset: - type: mteb/sts22-crosslingual-sts - name: MTEB STS22 (fr) - config: fr - split: test - revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 - metrics: - - type: cos_sim_pearson - value: 73.2962042954962 - - type: cos_sim_spearman - value: 76.58255504852025 - - type: euclidean_pearson - value: 75.70983192778257 - - type: euclidean_spearman - value: 77.4547684870542 - - type: manhattan_pearson - value: 75.75565853870485 - - type: manhattan_spearman - value: 76.90208974949428 - - task: - type: STS - dataset: - type: mteb/sts22-crosslingual-sts - name: MTEB STS22 (de-en) - config: de-en - split: test - revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 - metrics: - - type: cos_sim_pearson - value: 54.47396266924846 - - type: cos_sim_spearman - value: 56.492267162048606 - - type: euclidean_pearson - value: 55.998505203070195 - - type: euclidean_spearman - value: 56.46447012960222 - - type: manhattan_pearson - value: 54.873172394430995 - - type: manhattan_spearman - value: 56.58111534551218 - - task: - type: STS - dataset: - type: mteb/sts22-crosslingual-sts - name: MTEB STS22 (es-en) - config: es-en - split: test - revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 - metrics: - - type: cos_sim_pearson - value: 69.87177267688686 - - type: cos_sim_spearman - value: 74.57160943395763 - - type: euclidean_pearson - value: 70.88330406826788 - - type: euclidean_spearman - value: 74.29767636038422 - - type: manhattan_pearson - value: 71.38245248369536 - - type: manhattan_spearman - value: 74.53102232732175 - - task: - type: STS - dataset: - type: mteb/sts22-crosslingual-sts - name: MTEB STS22 (it) - config: it - split: test - revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 - metrics: - - type: cos_sim_pearson - value: 72.80225656959544 - - type: cos_sim_spearman - value: 76.52646173725735 - - type: euclidean_pearson - value: 73.95710720200799 - - type: euclidean_spearman - value: 76.54040031984111 - - type: manhattan_pearson - value: 73.89679971946774 - - type: manhattan_spearman - value: 76.60886958161574 - - task: - type: STS - dataset: - type: mteb/sts22-crosslingual-sts - name: MTEB STS22 (pl-en) - config: pl-en - split: test - revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 - metrics: - - type: cos_sim_pearson - value: 70.70844249898789 - - type: cos_sim_spearman - value: 72.68571783670241 - - type: euclidean_pearson - value: 72.38800772441031 - - type: euclidean_spearman - value: 72.86804422703312 - - type: manhattan_pearson - value: 71.29840508203515 - - type: manhattan_spearman - value: 71.86264441749513 - - task: - type: STS - dataset: - type: mteb/sts22-crosslingual-sts - name: MTEB STS22 (zh-en) - config: zh-en - split: test - revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 - metrics: - - type: cos_sim_pearson - value: 58.647478923935694 - - type: cos_sim_spearman - value: 63.74453623540931 - - type: euclidean_pearson - value: 59.60138032437505 - - type: euclidean_spearman - value: 63.947930832166065 - - type: manhattan_pearson - value: 58.59735509491861 - - type: manhattan_spearman - value: 62.082503844627404 - - task: - type: STS - dataset: - type: mteb/sts22-crosslingual-sts - name: MTEB STS22 (es-it) - config: es-it - split: test - revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 - metrics: - - type: cos_sim_pearson - value: 65.8722516867162 - - type: cos_sim_spearman - value: 71.81208592523012 - - type: euclidean_pearson - value: 67.95315252165956 - - type: euclidean_spearman - value: 73.00749822046009 - - type: manhattan_pearson - value: 68.07884688638924 - - type: manhattan_spearman - value: 72.34210325803069 - - task: - type: STS - dataset: - type: mteb/sts22-crosslingual-sts - name: MTEB STS22 (de-fr) - config: de-fr - split: test - revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 - metrics: - - type: cos_sim_pearson - value: 54.5405814240949 - - type: cos_sim_spearman - value: 60.56838649023775 - - type: euclidean_pearson - value: 53.011731611314104 - - type: euclidean_spearman - value: 58.533194841668426 - - type: manhattan_pearson - value: 53.623067729338494 - - type: manhattan_spearman - value: 58.018756154446926 - - task: - type: STS - dataset: - type: mteb/sts22-crosslingual-sts - name: MTEB STS22 (de-pl) - config: de-pl - split: test - revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 - metrics: - - type: cos_sim_pearson - value: 13.611046866216112 - - type: cos_sim_spearman - value: 28.238192909158492 - - type: euclidean_pearson - value: 22.16189199885129 - - type: euclidean_spearman - value: 35.012895679076564 - - type: manhattan_pearson - value: 21.969771178698387 - - type: manhattan_spearman - value: 32.456985088607475 - - task: - type: STS - dataset: - type: mteb/sts22-crosslingual-sts - name: MTEB STS22 (fr-pl) - config: fr-pl - split: test - revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 - metrics: - - type: cos_sim_pearson - value: 74.58077407011655 - - type: cos_sim_spearman - value: 84.51542547285167 - - type: euclidean_pearson - value: 74.64613843596234 - - type: euclidean_spearman - value: 84.51542547285167 - - type: manhattan_pearson - value: 75.15335973101396 - - type: manhattan_spearman - value: 84.51542547285167 - - task: - type: STS - dataset: - type: mteb/stsbenchmark-sts - name: MTEB STSBenchmark - config: default - split: test - revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831 - metrics: - - type: cos_sim_pearson - value: 82.0739825531578 - - type: cos_sim_spearman - value: 84.01057479311115 - - type: euclidean_pearson - value: 83.85453227433344 - - type: euclidean_spearman - value: 84.01630226898655 - - type: manhattan_pearson - value: 83.75323603028978 - - type: manhattan_spearman - value: 83.89677983727685 - - task: - type: Reranking - dataset: - type: mteb/scidocs-reranking - name: MTEB SciDocsRR - config: default - split: test - revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab - metrics: - - type: map - value: 78.12945623123957 - - type: mrr - value: 93.87738713719106 - - task: - type: Retrieval - dataset: - type: scifact - name: MTEB SciFact - config: default - split: test - revision: None - metrics: - - type: map_at_1 - value: 52.983000000000004 - - type: map_at_10 - value: 62.946000000000005 - - type: map_at_100 - value: 63.514 - - type: map_at_1000 - value: 63.554 - - type: map_at_3 - value: 60.183 - - type: map_at_5 - value: 61.672000000000004 - - type: mrr_at_1 - value: 55.667 - - type: mrr_at_10 - value: 64.522 - - type: mrr_at_100 - value: 64.957 - - type: mrr_at_1000 - value: 64.995 - - type: mrr_at_3 - value: 62.388999999999996 - - type: mrr_at_5 - value: 63.639 - - type: ndcg_at_1 - value: 55.667 - - type: ndcg_at_10 - value: 67.704 - - type: ndcg_at_100 - value: 70.299 - - type: ndcg_at_1000 - value: 71.241 - - type: ndcg_at_3 - value: 62.866 - - type: ndcg_at_5 - value: 65.16999999999999 - - type: precision_at_1 - value: 55.667 - - type: precision_at_10 - value: 9.033 - - type: precision_at_100 - value: 1.053 - - type: precision_at_1000 - value: 0.11299999999999999 - - type: precision_at_3 - value: 24.444 - - type: precision_at_5 - value: 16.133 - - type: recall_at_1 - value: 52.983000000000004 - - type: recall_at_10 - value: 80.656 - - type: recall_at_100 - value: 92.5 - - type: recall_at_1000 - value: 99.667 - - type: recall_at_3 - value: 67.744 - - type: recall_at_5 - value: 73.433 - - task: - type: PairClassification - dataset: - type: mteb/sprintduplicatequestions-pairclassification - name: MTEB SprintDuplicateQuestions - config: default - split: test - revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46 - metrics: - - type: cos_sim_accuracy - value: 99.72772277227723 - - type: cos_sim_ap - value: 92.17845897992215 - - type: cos_sim_f1 - value: 85.9746835443038 - - type: cos_sim_precision - value: 87.07692307692308 - - type: cos_sim_recall - value: 84.89999999999999 - - type: dot_accuracy - value: 99.3039603960396 - - type: dot_ap - value: 60.70244020124878 - - type: dot_f1 - value: 59.92742353551063 - - type: dot_precision - value: 62.21743810548978 - - type: dot_recall - value: 57.8 - - type: euclidean_accuracy - value: 99.71683168316832 - - type: euclidean_ap - value: 91.53997039964659 - - type: euclidean_f1 - value: 84.88372093023257 - - type: euclidean_precision - value: 90.02242152466367 - - type: euclidean_recall - value: 80.30000000000001 - - type: manhattan_accuracy - value: 99.72376237623763 - - type: manhattan_ap - value: 91.80756777790289 - - type: manhattan_f1 - value: 85.48468106479157 - - type: manhattan_precision - value: 85.8728557013118 - - type: manhattan_recall - value: 85.1 - - type: max_accuracy - value: 99.72772277227723 - - type: max_ap - value: 92.17845897992215 - - type: max_f1 - value: 85.9746835443038 - - task: - type: Clustering - dataset: - type: mteb/stackexchange-clustering - name: MTEB StackExchangeClustering - config: default - split: test - revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259 - metrics: - - type: v_measure - value: 53.52464042600003 - - task: - type: Clustering - dataset: - type: mteb/stackexchange-clustering-p2p - name: MTEB StackExchangeClusteringP2P - config: default - split: test - revision: 815ca46b2622cec33ccafc3735d572c266efdb44 - metrics: - - type: v_measure - value: 32.071631948736 - - task: - type: Reranking - dataset: - type: mteb/stackoverflowdupquestions-reranking - name: MTEB StackOverflowDupQuestions - config: default - split: test - revision: e185fbe320c72810689fc5848eb6114e1ef5ec69 - metrics: - - type: map - value: 49.19552407604654 - - type: mrr - value: 49.95269130379425 - - task: - type: Summarization - dataset: - type: mteb/summeval - name: MTEB SummEval - config: default - split: test - revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c - metrics: - - type: cos_sim_pearson - value: 29.345293033095427 - - type: cos_sim_spearman - value: 29.976931423258403 - - type: dot_pearson - value: 27.047078008958408 - - type: dot_spearman - value: 27.75894368380218 - - task: - type: Retrieval - dataset: - type: trec-covid - name: MTEB TRECCOVID - config: default - split: test - revision: None - metrics: - - type: map_at_1 - value: 0.22 - - type: map_at_10 - value: 1.706 - - type: map_at_100 - value: 9.634 - - type: map_at_1000 - value: 23.665 - - type: map_at_3 - value: 0.5950000000000001 - - type: map_at_5 - value: 0.95 - - type: mrr_at_1 - value: 86.0 - - type: mrr_at_10 - value: 91.8 - - type: mrr_at_100 - value: 91.8 - - type: mrr_at_1000 - value: 91.8 - - type: mrr_at_3 - value: 91.0 - - type: mrr_at_5 - value: 91.8 - - type: ndcg_at_1 - value: 80.0 - - type: ndcg_at_10 - value: 72.573 - - type: ndcg_at_100 - value: 53.954 - - type: ndcg_at_1000 - value: 47.760999999999996 - - type: ndcg_at_3 - value: 76.173 - - type: ndcg_at_5 - value: 75.264 - - type: precision_at_1 - value: 86.0 - - type: precision_at_10 - value: 76.4 - - type: precision_at_100 - value: 55.50000000000001 - - type: precision_at_1000 - value: 21.802 - - type: precision_at_3 - value: 81.333 - - type: precision_at_5 - value: 80.4 - - type: recall_at_1 - value: 0.22 - - type: recall_at_10 - value: 1.925 - - type: recall_at_100 - value: 12.762 - - type: recall_at_1000 - value: 44.946000000000005 - - type: recall_at_3 - value: 0.634 - - type: recall_at_5 - value: 1.051 - - task: - type: BitextMining - dataset: - type: mteb/tatoeba-bitext-mining - name: MTEB Tatoeba (sqi-eng) - config: sqi-eng - split: test - revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 - metrics: - - type: accuracy - value: 91.0 - - type: f1 - value: 88.55666666666666 - - type: precision - value: 87.46166666666667 - - type: recall - value: 91.0 - - task: - type: BitextMining - dataset: - type: mteb/tatoeba-bitext-mining - name: MTEB Tatoeba (fry-eng) - config: fry-eng - split: test - revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 - metrics: - - type: accuracy - value: 57.22543352601156 - - type: f1 - value: 51.03220478943021 - - type: precision - value: 48.8150289017341 - - type: recall - value: 57.22543352601156 - - task: - type: BitextMining - dataset: - type: mteb/tatoeba-bitext-mining - name: MTEB Tatoeba (kur-eng) - config: kur-eng - split: test - revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 - metrics: - - type: accuracy - value: 46.58536585365854 - - type: f1 - value: 39.66870798578116 - - type: precision - value: 37.416085946573745 - - type: recall - value: 46.58536585365854 - - task: - type: BitextMining - dataset: - type: mteb/tatoeba-bitext-mining - name: MTEB Tatoeba (tur-eng) - config: tur-eng - split: test - revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 - metrics: - - type: accuracy - value: 89.7 - - type: f1 - value: 86.77999999999999 - - type: precision - value: 85.45333333333332 - - type: recall - value: 89.7 - - task: - type: BitextMining - dataset: - type: mteb/tatoeba-bitext-mining - name: MTEB Tatoeba (deu-eng) - config: deu-eng - split: test - revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 - metrics: - - type: accuracy - value: 97.39999999999999 - - type: f1 - value: 96.58333333333331 - - type: precision - value: 96.2 - - type: recall - value: 97.39999999999999 - - task: - type: BitextMining - dataset: - type: mteb/tatoeba-bitext-mining - name: MTEB Tatoeba (nld-eng) - config: nld-eng - split: test - revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 - metrics: - - type: accuracy - value: 92.4 - - type: f1 - value: 90.3 - - type: precision - value: 89.31666666666668 - - type: recall - value: 92.4 - - task: - type: BitextMining - dataset: - type: mteb/tatoeba-bitext-mining - name: MTEB Tatoeba (ron-eng) - config: ron-eng - split: test - revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 - metrics: - - type: accuracy - value: 86.9 - - type: f1 - value: 83.67190476190476 - - type: precision - value: 82.23333333333332 - - type: recall - value: 86.9 - - task: - type: BitextMining - dataset: - type: mteb/tatoeba-bitext-mining - name: MTEB Tatoeba (ang-eng) - config: ang-eng - split: test - revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 - metrics: - - type: accuracy - value: 50.0 - - type: f1 - value: 42.23229092632078 - - type: precision - value: 39.851634683724235 - - type: recall - value: 50.0 - - task: - type: BitextMining - dataset: - type: mteb/tatoeba-bitext-mining - name: MTEB Tatoeba (ido-eng) - config: ido-eng - split: test - revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 - metrics: - - type: accuracy - value: 76.3 - - type: f1 - value: 70.86190476190477 - - type: precision - value: 68.68777777777777 - - type: recall - value: 76.3 - - task: - type: BitextMining - dataset: - type: mteb/tatoeba-bitext-mining - name: MTEB Tatoeba (jav-eng) - config: jav-eng - split: test - revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 - metrics: - - type: accuracy - value: 57.073170731707314 - - type: f1 - value: 50.658958927251604 - - type: precision - value: 48.26480836236933 - - type: recall - value: 57.073170731707314 - - task: - type: BitextMining - dataset: - type: mteb/tatoeba-bitext-mining - name: MTEB Tatoeba (isl-eng) - config: isl-eng - split: test - revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 - metrics: - - type: accuracy - value: 68.2 - - type: f1 - value: 62.156507936507936 - - type: precision - value: 59.84964285714286 - - type: recall - value: 68.2 - - task: - type: BitextMining - dataset: - type: mteb/tatoeba-bitext-mining - name: MTEB Tatoeba (slv-eng) - config: slv-eng - split: test - revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 - metrics: - - type: accuracy - value: 77.52126366950182 - - type: f1 - value: 72.8496210148701 - - type: precision - value: 70.92171498003819 - - type: recall - value: 77.52126366950182 - - task: - type: BitextMining - dataset: - type: mteb/tatoeba-bitext-mining - name: MTEB Tatoeba (cym-eng) - config: cym-eng - split: test - revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 - metrics: - - type: accuracy - value: 70.78260869565217 - - type: f1 - value: 65.32422360248447 - - type: precision - value: 63.063067367415194 - - type: recall - value: 70.78260869565217 - - task: - type: BitextMining - dataset: - type: mteb/tatoeba-bitext-mining - name: MTEB Tatoeba (kaz-eng) - config: kaz-eng - split: test - revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 - metrics: - - type: accuracy - value: 78.43478260869566 - - type: f1 - value: 73.02608695652172 - - type: precision - value: 70.63768115942028 - - type: recall - value: 78.43478260869566 - - task: - type: BitextMining - dataset: - type: mteb/tatoeba-bitext-mining - name: MTEB Tatoeba (est-eng) - config: est-eng - split: test - revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 - metrics: - - type: accuracy - value: 60.9 - - type: f1 - value: 55.309753694581275 - - type: precision - value: 53.130476190476195 - - type: recall - value: 60.9 - - task: - type: BitextMining - dataset: - type: mteb/tatoeba-bitext-mining - name: MTEB Tatoeba (heb-eng) - config: heb-eng - split: test - revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 - metrics: - - type: accuracy - value: 72.89999999999999 - - type: f1 - value: 67.92023809523809 - - type: precision - value: 65.82595238095237 - - type: recall - value: 72.89999999999999 - - task: - type: BitextMining - dataset: - type: mteb/tatoeba-bitext-mining - name: MTEB Tatoeba (gla-eng) - config: gla-eng - split: test - revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 - metrics: - - type: accuracy - value: 46.80337756332931 - - type: f1 - value: 39.42174900558496 - - type: precision - value: 36.97101116280851 - - type: recall - value: 46.80337756332931 - - task: - type: BitextMining - dataset: - type: mteb/tatoeba-bitext-mining - name: MTEB Tatoeba (mar-eng) - config: mar-eng - split: test - revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 - metrics: - - type: accuracy - value: 89.8 - - type: f1 - value: 86.79 - - type: precision - value: 85.375 - - type: recall - value: 89.8 - - task: - type: BitextMining - dataset: - type: mteb/tatoeba-bitext-mining - name: MTEB Tatoeba (lat-eng) - config: lat-eng - split: test - revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 - metrics: - - type: accuracy - value: 47.199999999999996 - - type: f1 - value: 39.95484348984349 - - type: precision - value: 37.561071428571424 - - type: recall - value: 47.199999999999996 - - task: - type: BitextMining - dataset: - type: mteb/tatoeba-bitext-mining - name: MTEB Tatoeba (bel-eng) - config: bel-eng - split: test - revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 - metrics: - - type: accuracy - value: 87.8 - - type: f1 - value: 84.68190476190475 - - type: precision - value: 83.275 - - type: recall - value: 87.8 - - task: - type: BitextMining - dataset: - type: mteb/tatoeba-bitext-mining - name: MTEB Tatoeba (pms-eng) - config: pms-eng - split: test - revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 - metrics: - - type: accuracy - value: 48.76190476190476 - - type: f1 - value: 42.14965986394558 - - type: precision - value: 39.96743626743626 - - type: recall - value: 48.76190476190476 - - task: - type: BitextMining - dataset: - type: mteb/tatoeba-bitext-mining - name: MTEB Tatoeba (gle-eng) - config: gle-eng - split: test - revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 - metrics: - - type: accuracy - value: 66.10000000000001 - - type: f1 - value: 59.58580086580086 - - type: precision - value: 57.150238095238095 - - type: recall - value: 66.10000000000001 - - task: - type: BitextMining - dataset: - type: mteb/tatoeba-bitext-mining - name: MTEB Tatoeba (pes-eng) - config: pes-eng - split: test - revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 - metrics: - - type: accuracy - value: 87.3 - - type: f1 - value: 84.0 - - type: precision - value: 82.48666666666666 - - type: recall - value: 87.3 - - task: - type: BitextMining - dataset: - type: mteb/tatoeba-bitext-mining - name: MTEB Tatoeba (nob-eng) - config: nob-eng - split: test - revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 - metrics: - - type: accuracy - value: 90.4 - - type: f1 - value: 87.79523809523809 - - type: precision - value: 86.6 - - type: recall - value: 90.4 - - task: - type: BitextMining - dataset: - type: mteb/tatoeba-bitext-mining - name: MTEB Tatoeba (bul-eng) - config: bul-eng - split: test - revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 - metrics: - - type: accuracy - value: 87.0 - - type: f1 - value: 83.81 - - type: precision - value: 82.36666666666666 - - type: recall - value: 87.0 - - task: - type: BitextMining - dataset: - type: mteb/tatoeba-bitext-mining - name: MTEB Tatoeba (cbk-eng) - config: cbk-eng - split: test - revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 - metrics: - - type: accuracy - value: 63.9 - - type: f1 - value: 57.76533189033189 - - type: precision - value: 55.50595238095239 - - type: recall - value: 63.9 - - task: - type: BitextMining - dataset: - type: mteb/tatoeba-bitext-mining - name: MTEB Tatoeba (hun-eng) - config: hun-eng - split: test - revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 - metrics: - - type: accuracy - value: 76.1 - - type: f1 - value: 71.83690476190478 - - type: precision - value: 70.04928571428573 - - type: recall - value: 76.1 - - task: - type: BitextMining - dataset: - type: mteb/tatoeba-bitext-mining - name: MTEB Tatoeba (uig-eng) - config: uig-eng - split: test - revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 - metrics: - - type: accuracy - value: 66.3 - - type: f1 - value: 59.32626984126984 - - type: precision - value: 56.62535714285713 - - type: recall - value: 66.3 - - task: - type: BitextMining - dataset: - type: mteb/tatoeba-bitext-mining - name: MTEB Tatoeba (rus-eng) - config: rus-eng - split: test - revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 - metrics: - - type: accuracy - value: 90.60000000000001 - - type: f1 - value: 87.96333333333334 - - type: precision - value: 86.73333333333333 - - type: recall - value: 90.60000000000001 - - task: - type: BitextMining - dataset: - type: mteb/tatoeba-bitext-mining - name: MTEB Tatoeba (spa-eng) - config: spa-eng - split: test - revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 - metrics: - - type: accuracy - value: 93.10000000000001 - - type: f1 - value: 91.10000000000001 - - type: precision - value: 90.16666666666666 - - type: recall - value: 93.10000000000001 - - task: - type: BitextMining - dataset: - type: mteb/tatoeba-bitext-mining - name: MTEB Tatoeba (hye-eng) - config: hye-eng - split: test - revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 - metrics: - - type: accuracy - value: 85.71428571428571 - - type: f1 - value: 82.29142600436403 - - type: precision - value: 80.8076626877166 - - type: recall - value: 85.71428571428571 - - task: - type: BitextMining - dataset: - type: mteb/tatoeba-bitext-mining - name: MTEB Tatoeba (tel-eng) - config: tel-eng - split: test - revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 - metrics: - - type: accuracy - value: 88.88888888888889 - - type: f1 - value: 85.7834757834758 - - type: precision - value: 84.43732193732193 - - type: recall - value: 88.88888888888889 - - task: - type: BitextMining - dataset: - type: mteb/tatoeba-bitext-mining - name: MTEB Tatoeba (afr-eng) - config: afr-eng - split: test - revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 - metrics: - - type: accuracy - value: 88.5 - - type: f1 - value: 85.67190476190476 - - type: precision - value: 84.43333333333332 - - type: recall - value: 88.5 - - task: - type: BitextMining - dataset: - type: mteb/tatoeba-bitext-mining - name: MTEB Tatoeba (mon-eng) - config: mon-eng - split: test - revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 - metrics: - - type: accuracy - value: 82.72727272727273 - - type: f1 - value: 78.21969696969695 - - type: precision - value: 76.18181818181819 - - type: recall - value: 82.72727272727273 - - task: - type: BitextMining - dataset: - type: mteb/tatoeba-bitext-mining - name: MTEB Tatoeba (arz-eng) - config: arz-eng - split: test - revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 - metrics: - - type: accuracy - value: 61.0062893081761 - - type: f1 - value: 55.13976240391334 - - type: precision - value: 52.92112499659669 - - type: recall - value: 61.0062893081761 - - task: - type: BitextMining - dataset: - type: mteb/tatoeba-bitext-mining - name: MTEB Tatoeba (hrv-eng) - config: hrv-eng - split: test - revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 - metrics: - - type: accuracy - value: 89.5 - - type: f1 - value: 86.86666666666666 - - type: precision - value: 85.69166666666668 - - type: recall - value: 89.5 - - task: - type: BitextMining - dataset: - type: mteb/tatoeba-bitext-mining - name: MTEB Tatoeba (nov-eng) - config: nov-eng - split: test - revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 - metrics: - - type: accuracy - value: 73.54085603112841 - - type: f1 - value: 68.56031128404669 - - type: precision - value: 66.53047989623866 - - type: recall - value: 73.54085603112841 - - task: - type: BitextMining - dataset: - type: mteb/tatoeba-bitext-mining - name: MTEB Tatoeba (gsw-eng) - config: gsw-eng - split: test - revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 - metrics: - - type: accuracy - value: 43.58974358974359 - - type: f1 - value: 36.45299145299145 - - type: precision - value: 33.81155881155882 - - type: recall - value: 43.58974358974359 - - task: - type: BitextMining - dataset: - type: mteb/tatoeba-bitext-mining - name: MTEB Tatoeba (nds-eng) - config: nds-eng - split: test - revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 - metrics: - - type: accuracy - value: 59.599999999999994 - - type: f1 - value: 53.264689754689755 - - type: precision - value: 50.869166666666665 - - type: recall - value: 59.599999999999994 - - task: - type: BitextMining - dataset: - type: mteb/tatoeba-bitext-mining - name: MTEB Tatoeba (ukr-eng) - config: ukr-eng - split: test - revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 - metrics: - - type: accuracy - value: 85.2 - - type: f1 - value: 81.61666666666665 - - type: precision - value: 80.02833333333335 - - type: recall - value: 85.2 - - task: - type: BitextMining - dataset: - type: mteb/tatoeba-bitext-mining - name: MTEB Tatoeba (uzb-eng) - config: uzb-eng - split: test - revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 - metrics: - - type: accuracy - value: 63.78504672897196 - - type: f1 - value: 58.00029669188548 - - type: precision - value: 55.815809968847354 - - type: recall - value: 63.78504672897196 - - task: - type: BitextMining - dataset: - type: mteb/tatoeba-bitext-mining - name: MTEB Tatoeba (lit-eng) - config: lit-eng - split: test - revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 - metrics: - - type: accuracy - value: 66.5 - - type: f1 - value: 61.518333333333345 - - type: precision - value: 59.622363699102834 - - type: recall - value: 66.5 - - task: - type: BitextMining - dataset: - type: mteb/tatoeba-bitext-mining - name: MTEB Tatoeba (ina-eng) - config: ina-eng - split: test - revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 - metrics: - - type: accuracy - value: 88.6 - - type: f1 - value: 85.60222222222221 - - type: precision - value: 84.27916666666665 - - type: recall - value: 88.6 - - task: - type: BitextMining - dataset: - type: mteb/tatoeba-bitext-mining - name: MTEB Tatoeba (lfn-eng) - config: lfn-eng - split: test - revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 - metrics: - - type: accuracy - value: 58.699999999999996 - - type: f1 - value: 52.732375957375965 - - type: precision - value: 50.63214035964035 - - type: recall - value: 58.699999999999996 - - task: - type: BitextMining - dataset: - type: mteb/tatoeba-bitext-mining - name: MTEB Tatoeba (zsm-eng) - config: zsm-eng - split: test - revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 - metrics: - - type: accuracy - value: 92.10000000000001 - - type: f1 - value: 89.99666666666667 - - type: precision - value: 89.03333333333333 - - type: recall - value: 92.10000000000001 - - task: - type: BitextMining - dataset: - type: mteb/tatoeba-bitext-mining - name: MTEB Tatoeba (ita-eng) - config: ita-eng - split: test - revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 - metrics: - - type: accuracy - value: 90.10000000000001 - - type: f1 - value: 87.55666666666667 - - type: precision - value: 86.36166666666668 - - type: recall - value: 90.10000000000001 - - task: - type: BitextMining - dataset: - type: mteb/tatoeba-bitext-mining - name: MTEB Tatoeba (cmn-eng) - config: cmn-eng - split: test - revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 - metrics: - - type: accuracy - value: 91.4 - - type: f1 - value: 88.89000000000001 - - type: precision - value: 87.71166666666666 - - type: recall - value: 91.4 - - task: - type: BitextMining - dataset: - type: mteb/tatoeba-bitext-mining - name: MTEB Tatoeba (lvs-eng) - config: lvs-eng - split: test - revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 - metrics: - - type: accuracy - value: 65.7 - - type: f1 - value: 60.67427750410509 - - type: precision - value: 58.71785714285714 - - type: recall - value: 65.7 - - task: - type: BitextMining - dataset: - type: mteb/tatoeba-bitext-mining - name: MTEB Tatoeba (glg-eng) - config: glg-eng - split: test - revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 - metrics: - - type: accuracy - value: 85.39999999999999 - - type: f1 - value: 81.93190476190475 - - type: precision - value: 80.37833333333333 - - type: recall - value: 85.39999999999999 - - task: - type: BitextMining - dataset: - type: mteb/tatoeba-bitext-mining - name: MTEB Tatoeba (ceb-eng) - config: ceb-eng - split: test - revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 - metrics: - - type: accuracy - value: 47.833333333333336 - - type: f1 - value: 42.006625781625786 - - type: precision - value: 40.077380952380956 - - type: recall - value: 47.833333333333336 - - task: - type: BitextMining - dataset: - type: mteb/tatoeba-bitext-mining - name: MTEB Tatoeba (bre-eng) - config: bre-eng - split: test - revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 - metrics: - - type: accuracy - value: 10.4 - - type: f1 - value: 8.24465007215007 - - type: precision - value: 7.664597069597071 - - type: recall - value: 10.4 - - task: - type: BitextMining - dataset: - type: mteb/tatoeba-bitext-mining - name: MTEB Tatoeba (ben-eng) - config: ben-eng - split: test - revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 - metrics: - - type: accuracy - value: 82.6 - - type: f1 - value: 77.76333333333334 - - type: precision - value: 75.57833333333332 - - type: recall - value: 82.6 - - task: - type: BitextMining - dataset: - type: mteb/tatoeba-bitext-mining - name: MTEB Tatoeba (swg-eng) - config: swg-eng - split: test - revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 - metrics: - - type: accuracy - value: 52.67857142857143 - - type: f1 - value: 44.302721088435376 - - type: precision - value: 41.49801587301587 - - type: recall - value: 52.67857142857143 - - task: - type: BitextMining - dataset: - type: mteb/tatoeba-bitext-mining - name: MTEB Tatoeba (arq-eng) - config: arq-eng - split: test - revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 - metrics: - - type: accuracy - value: 28.3205268935236 - - type: f1 - value: 22.426666605171157 - - type: precision - value: 20.685900116470915 - - type: recall - value: 28.3205268935236 - - task: - type: BitextMining - dataset: - type: mteb/tatoeba-bitext-mining - name: MTEB Tatoeba (kab-eng) - config: kab-eng - split: test - revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 - metrics: - - type: accuracy - value: 22.7 - - type: f1 - value: 17.833970473970474 - - type: precision - value: 16.407335164835164 - - type: recall - value: 22.7 - - task: - type: BitextMining - dataset: - type: mteb/tatoeba-bitext-mining - name: MTEB Tatoeba (fra-eng) - config: fra-eng - split: test - revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 - metrics: - - type: accuracy - value: 92.2 - - type: f1 - value: 89.92999999999999 - - type: precision - value: 88.87 - - type: recall - value: 92.2 - - task: - type: BitextMining - dataset: - type: mteb/tatoeba-bitext-mining - name: MTEB Tatoeba (por-eng) - config: por-eng - split: test - revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 - metrics: - - type: accuracy - value: 91.4 - - type: f1 - value: 89.25 - - type: precision - value: 88.21666666666667 - - type: recall - value: 91.4 - - task: - type: BitextMining - dataset: - type: mteb/tatoeba-bitext-mining - name: MTEB Tatoeba (tat-eng) - config: tat-eng - split: test - revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 - metrics: - - type: accuracy - value: 69.19999999999999 - - type: f1 - value: 63.38269841269841 - - type: precision - value: 61.14773809523809 - - type: recall - value: 69.19999999999999 - - task: - type: BitextMining - dataset: - type: mteb/tatoeba-bitext-mining - name: MTEB Tatoeba (oci-eng) - config: oci-eng - split: test - revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 - metrics: - - type: accuracy - value: 48.8 - - type: f1 - value: 42.839915639915645 - - type: precision - value: 40.770287114845935 - - type: recall - value: 48.8 - - task: - type: BitextMining - dataset: - type: mteb/tatoeba-bitext-mining - name: MTEB Tatoeba (pol-eng) - config: pol-eng - split: test - revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 - metrics: - - type: accuracy - value: 88.8 - - type: f1 - value: 85.90666666666668 - - type: precision - value: 84.54166666666666 - - type: recall - value: 88.8 - - task: - type: BitextMining - dataset: - type: mteb/tatoeba-bitext-mining - name: MTEB Tatoeba (war-eng) - config: war-eng - split: test - revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 - metrics: - - type: accuracy - value: 46.6 - - type: f1 - value: 40.85892920804686 - - type: precision - value: 38.838223114604695 - - type: recall - value: 46.6 - - task: - type: BitextMining - dataset: - type: mteb/tatoeba-bitext-mining - name: MTEB Tatoeba (aze-eng) - config: aze-eng - split: test - revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 - metrics: - - type: accuracy - value: 84.0 - - type: f1 - value: 80.14190476190475 - - type: precision - value: 78.45333333333333 - - type: recall - value: 84.0 - - task: - type: BitextMining - dataset: - type: mteb/tatoeba-bitext-mining - name: MTEB Tatoeba (vie-eng) - config: vie-eng - split: test - revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 - metrics: - - type: accuracy - value: 90.5 - - type: f1 - value: 87.78333333333333 - - type: precision - value: 86.5 - - type: recall - value: 90.5 - - task: - type: BitextMining - dataset: - type: mteb/tatoeba-bitext-mining - name: MTEB Tatoeba (nno-eng) - config: nno-eng - split: test - revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 - metrics: - - type: accuracy - value: 74.5 - - type: f1 - value: 69.48397546897547 - - type: precision - value: 67.51869047619049 - - type: recall - value: 74.5 - - task: - type: BitextMining - dataset: - type: mteb/tatoeba-bitext-mining - name: MTEB Tatoeba (cha-eng) - config: cha-eng - split: test - revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 - metrics: - - type: accuracy - value: 32.846715328467155 - - type: f1 - value: 27.828177499710343 - - type: precision - value: 26.63451511991658 - - type: recall - value: 32.846715328467155 - - task: - type: BitextMining - dataset: - type: mteb/tatoeba-bitext-mining - name: MTEB Tatoeba (mhr-eng) - config: mhr-eng - split: test - revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 - metrics: - - type: accuracy - value: 8.0 - - type: f1 - value: 6.07664116764988 - - type: precision - value: 5.544177607179943 - - type: recall - value: 8.0 - - task: - type: BitextMining - dataset: - type: mteb/tatoeba-bitext-mining - name: MTEB Tatoeba (dan-eng) - config: dan-eng - split: test - revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 - metrics: - - type: accuracy - value: 87.6 - - type: f1 - value: 84.38555555555554 - - type: precision - value: 82.91583333333334 - - type: recall - value: 87.6 - - task: - type: BitextMining - dataset: - type: mteb/tatoeba-bitext-mining - name: MTEB Tatoeba (ell-eng) - config: ell-eng - split: test - revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 - metrics: - - type: accuracy - value: 87.5 - - type: f1 - value: 84.08333333333331 - - type: precision - value: 82.47333333333333 - - type: recall - value: 87.5 - - task: - type: BitextMining - dataset: - type: mteb/tatoeba-bitext-mining - name: MTEB Tatoeba (amh-eng) - config: amh-eng - split: test - revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 - metrics: - - type: accuracy - value: 80.95238095238095 - - type: f1 - value: 76.13095238095238 - - type: precision - value: 74.05753968253967 - - type: recall - value: 80.95238095238095 - - task: - type: BitextMining - dataset: - type: mteb/tatoeba-bitext-mining - name: MTEB Tatoeba (pam-eng) - config: pam-eng - split: test - revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 - metrics: - - type: accuracy - value: 8.799999999999999 - - type: f1 - value: 6.971422975172975 - - type: precision - value: 6.557814916172301 - - type: recall - value: 8.799999999999999 - - task: - type: BitextMining - dataset: - type: mteb/tatoeba-bitext-mining - name: MTEB Tatoeba (hsb-eng) - config: hsb-eng - split: test - revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 - metrics: - - type: accuracy - value: 44.099378881987576 - - type: f1 - value: 37.01649742022413 - - type: precision - value: 34.69420618488942 - - type: recall - value: 44.099378881987576 - - task: - type: BitextMining - dataset: - type: mteb/tatoeba-bitext-mining - name: MTEB Tatoeba (srp-eng) - config: srp-eng - split: test - revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 - metrics: - - type: accuracy - value: 84.3 - - type: f1 - value: 80.32666666666667 - - type: precision - value: 78.60666666666665 - - type: recall - value: 84.3 - - task: - type: BitextMining - dataset: - type: mteb/tatoeba-bitext-mining - name: MTEB Tatoeba (epo-eng) - config: epo-eng - split: test - revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 - metrics: - - type: accuracy - value: 92.5 - - type: f1 - value: 90.49666666666666 - - type: precision - value: 89.56666666666668 - - type: recall - value: 92.5 - - task: - type: BitextMining - dataset: - type: mteb/tatoeba-bitext-mining - name: MTEB Tatoeba (kzj-eng) - config: kzj-eng - split: test - revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 - metrics: - - type: accuracy - value: 10.0 - - type: f1 - value: 8.268423529875141 - - type: precision - value: 7.878118605532398 - - type: recall - value: 10.0 - - task: - type: BitextMining - dataset: - type: mteb/tatoeba-bitext-mining - name: MTEB Tatoeba (awa-eng) - config: awa-eng - split: test - revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 - metrics: - - type: accuracy - value: 79.22077922077922 - - type: f1 - value: 74.27128427128426 - - type: precision - value: 72.28715728715729 - - type: recall - value: 79.22077922077922 - - task: - type: BitextMining - dataset: - type: mteb/tatoeba-bitext-mining - name: MTEB Tatoeba (fao-eng) - config: fao-eng - split: test - revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 - metrics: - - type: accuracy - value: 65.64885496183206 - - type: f1 - value: 58.87495456197747 - - type: precision - value: 55.992366412213734 - - type: recall - value: 65.64885496183206 - - task: - type: BitextMining - dataset: - type: mteb/tatoeba-bitext-mining - name: MTEB Tatoeba (mal-eng) - config: mal-eng - split: test - revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 - metrics: - - type: accuracy - value: 96.06986899563319 - - type: f1 - value: 94.78408539543909 - - type: precision - value: 94.15332362930616 - - type: recall - value: 96.06986899563319 - - task: - type: BitextMining - dataset: - type: mteb/tatoeba-bitext-mining - name: MTEB Tatoeba (ile-eng) - config: ile-eng - split: test - revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 - metrics: - - type: accuracy - value: 77.2 - - type: f1 - value: 71.72571428571428 - - type: precision - value: 69.41000000000001 - - type: recall - value: 77.2 - - task: - type: BitextMining - dataset: - type: mteb/tatoeba-bitext-mining - name: MTEB Tatoeba (bos-eng) - config: bos-eng - split: test - revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 - metrics: - - type: accuracy - value: 86.4406779661017 - - type: f1 - value: 83.2391713747646 - - type: precision - value: 81.74199623352166 - - type: recall - value: 86.4406779661017 - - task: - type: BitextMining - dataset: - type: mteb/tatoeba-bitext-mining - name: MTEB Tatoeba (cor-eng) - config: cor-eng - split: test - revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 - metrics: - - type: accuracy - value: 8.4 - - type: f1 - value: 6.017828743398003 - - type: precision - value: 5.4829865484756795 - - type: recall - value: 8.4 - - task: - type: BitextMining - dataset: - type: mteb/tatoeba-bitext-mining - name: MTEB Tatoeba (cat-eng) - config: cat-eng - split: test - revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 - metrics: - - type: accuracy - value: 83.5 - - type: f1 - value: 79.74833333333333 - - type: precision - value: 78.04837662337664 - - type: recall - value: 83.5 - - task: - type: BitextMining - dataset: - type: mteb/tatoeba-bitext-mining - name: MTEB Tatoeba (eus-eng) - config: eus-eng - split: test - revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 - metrics: - - type: accuracy - value: 60.4 - - type: f1 - value: 54.467301587301584 - - type: precision - value: 52.23242424242424 - - type: recall - value: 60.4 - - task: - type: BitextMining - dataset: - type: mteb/tatoeba-bitext-mining - name: MTEB Tatoeba (yue-eng) - config: yue-eng - split: test - revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 - metrics: - - type: accuracy - value: 74.9 - - type: f1 - value: 69.68699134199134 - - type: precision - value: 67.59873015873016 - - type: recall - value: 74.9 - - task: - type: BitextMining - dataset: - type: mteb/tatoeba-bitext-mining - name: MTEB Tatoeba (swe-eng) - config: swe-eng - split: test - revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 - metrics: - - type: accuracy - value: 88.0 - - type: f1 - value: 84.9652380952381 - - type: precision - value: 83.66166666666666 - - type: recall - value: 88.0 - - task: - type: BitextMining - dataset: - type: mteb/tatoeba-bitext-mining - name: MTEB Tatoeba (dtp-eng) - config: dtp-eng - split: test - revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 - metrics: - - type: accuracy - value: 9.1 - - type: f1 - value: 7.681244588744588 - - type: precision - value: 7.370043290043291 - - type: recall - value: 9.1 - - task: - type: BitextMining - dataset: - type: mteb/tatoeba-bitext-mining - name: MTEB Tatoeba (kat-eng) - config: kat-eng - split: test - revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 - metrics: - - type: accuracy - value: 80.9651474530831 - - type: f1 - value: 76.84220605132133 - - type: precision - value: 75.19606398962966 - - type: recall - value: 80.9651474530831 - - task: - type: BitextMining - dataset: - type: mteb/tatoeba-bitext-mining - name: MTEB Tatoeba (jpn-eng) - config: jpn-eng - split: test - revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 - metrics: - - type: accuracy - value: 86.9 - - type: f1 - value: 83.705 - - type: precision - value: 82.3120634920635 - - type: recall - value: 86.9 - - task: - type: BitextMining - dataset: - type: mteb/tatoeba-bitext-mining - name: MTEB Tatoeba (csb-eng) - config: csb-eng - split: test - revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 - metrics: - - type: accuracy - value: 29.64426877470356 - - type: f1 - value: 23.98763072676116 - - type: precision - value: 22.506399397703746 - - type: recall - value: 29.64426877470356 - - task: - type: BitextMining - dataset: - type: mteb/tatoeba-bitext-mining - name: MTEB Tatoeba (xho-eng) - config: xho-eng - split: test - revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 - metrics: - - type: accuracy - value: 70.4225352112676 - - type: f1 - value: 62.84037558685445 - - type: precision - value: 59.56572769953053 - - type: recall - value: 70.4225352112676 - - task: - type: BitextMining - dataset: - type: mteb/tatoeba-bitext-mining - name: MTEB Tatoeba (orv-eng) - config: orv-eng - split: test - revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 - metrics: - - type: accuracy - value: 19.64071856287425 - - type: f1 - value: 15.125271011207756 - - type: precision - value: 13.865019261197494 - - type: recall - value: 19.64071856287425 - - task: - type: BitextMining - dataset: - type: mteb/tatoeba-bitext-mining - name: MTEB Tatoeba (ind-eng) - config: ind-eng - split: test - revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 - metrics: - - type: accuracy - value: 90.2 - - type: f1 - value: 87.80666666666666 - - type: precision - value: 86.70833333333331 - - type: recall - value: 90.2 - - task: - type: BitextMining - dataset: - type: mteb/tatoeba-bitext-mining - name: MTEB Tatoeba (tuk-eng) - config: tuk-eng - split: test - revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 - metrics: - - type: accuracy - value: 23.15270935960591 - - type: f1 - value: 18.407224958949097 - - type: precision - value: 16.982385430661292 - - type: recall - value: 23.15270935960591 - - task: - type: BitextMining - dataset: - type: mteb/tatoeba-bitext-mining - name: MTEB Tatoeba (max-eng) - config: max-eng - split: test - revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 - metrics: - - type: accuracy - value: 55.98591549295775 - - type: f1 - value: 49.94718309859154 - - type: precision - value: 47.77864154624717 - - type: recall - value: 55.98591549295775 - - task: - type: BitextMining - dataset: - type: mteb/tatoeba-bitext-mining - name: MTEB Tatoeba (swh-eng) - config: swh-eng - split: test - revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 - metrics: - - type: accuracy - value: 73.07692307692307 - - type: f1 - value: 66.74358974358974 - - type: precision - value: 64.06837606837607 - - type: recall - value: 73.07692307692307 - - task: - type: BitextMining - dataset: - type: mteb/tatoeba-bitext-mining - name: MTEB Tatoeba (hin-eng) - config: hin-eng - split: test - revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 - metrics: - - type: accuracy - value: 94.89999999999999 - - type: f1 - value: 93.25 - - type: precision - value: 92.43333333333332 - - type: recall - value: 94.89999999999999 - - task: - type: BitextMining - dataset: - type: mteb/tatoeba-bitext-mining - name: MTEB Tatoeba (dsb-eng) - config: dsb-eng - split: test - revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 - metrics: - - type: accuracy - value: 37.78705636743215 - - type: f1 - value: 31.63899658680452 - - type: precision - value: 29.72264397629742 - - type: recall - value: 37.78705636743215 - - task: - type: BitextMining - dataset: - type: mteb/tatoeba-bitext-mining - name: MTEB Tatoeba (ber-eng) - config: ber-eng - split: test - revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 - metrics: - - type: accuracy - value: 21.6 - - type: f1 - value: 16.91697302697303 - - type: precision - value: 15.71225147075147 - - type: recall - value: 21.6 - - task: - type: BitextMining - dataset: - type: mteb/tatoeba-bitext-mining - name: MTEB Tatoeba (tam-eng) - config: tam-eng - split: test - revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 - metrics: - - type: accuracy - value: 85.01628664495115 - - type: f1 - value: 81.38514037536838 - - type: precision - value: 79.83170466883823 - - type: recall - value: 85.01628664495115 - - task: - type: BitextMining - dataset: - type: mteb/tatoeba-bitext-mining - name: MTEB Tatoeba (slk-eng) - config: slk-eng - split: test - revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 - metrics: - - type: accuracy - value: 83.39999999999999 - - type: f1 - value: 79.96380952380952 - - type: precision - value: 78.48333333333333 - - type: recall - value: 83.39999999999999 - - task: - type: BitextMining - dataset: - type: mteb/tatoeba-bitext-mining - name: MTEB Tatoeba (tgl-eng) - config: tgl-eng - split: test - revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 - metrics: - - type: accuracy - value: 83.2 - - type: f1 - value: 79.26190476190476 - - type: precision - value: 77.58833333333334 - - type: recall - value: 83.2 - - task: - type: BitextMining - dataset: - type: mteb/tatoeba-bitext-mining - name: MTEB Tatoeba (ast-eng) - config: ast-eng - split: test - revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 - metrics: - - type: accuracy - value: 75.59055118110236 - - type: f1 - value: 71.66854143232096 - - type: precision - value: 70.30183727034121 - - type: recall - value: 75.59055118110236 - - task: - type: BitextMining - dataset: - type: mteb/tatoeba-bitext-mining - name: MTEB Tatoeba (mkd-eng) - config: mkd-eng - split: test - revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 - metrics: - - type: accuracy - value: 65.5 - - type: f1 - value: 59.26095238095238 - - type: precision - value: 56.81909090909092 - - type: recall - value: 65.5 - - task: - type: BitextMining - dataset: - type: mteb/tatoeba-bitext-mining - name: MTEB Tatoeba (khm-eng) - config: khm-eng - split: test - revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 - metrics: - - type: accuracy - value: 55.26315789473685 - - type: f1 - value: 47.986523325858506 - - type: precision - value: 45.33950006595436 - - type: recall - value: 55.26315789473685 - - task: - type: BitextMining - dataset: - type: mteb/tatoeba-bitext-mining - name: MTEB Tatoeba (ces-eng) - config: ces-eng - split: test - revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 - metrics: - - type: accuracy - value: 82.89999999999999 - - type: f1 - value: 78.835 - - type: precision - value: 77.04761904761905 - - type: recall - value: 82.89999999999999 - - task: - type: BitextMining - dataset: - type: mteb/tatoeba-bitext-mining - name: MTEB Tatoeba (tzl-eng) - config: tzl-eng - split: test - revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 - metrics: - - type: accuracy - value: 43.269230769230774 - - type: f1 - value: 36.20421245421245 - - type: precision - value: 33.57371794871795 - - type: recall - value: 43.269230769230774 - - task: - type: BitextMining - dataset: - type: mteb/tatoeba-bitext-mining - name: MTEB Tatoeba (urd-eng) - config: urd-eng - split: test - revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 - metrics: - - type: accuracy - value: 88.0 - - type: f1 - value: 84.70666666666666 - - type: precision - value: 83.23166666666665 - - type: recall - value: 88.0 - - task: - type: BitextMining - dataset: - type: mteb/tatoeba-bitext-mining - name: MTEB Tatoeba (ara-eng) - config: ara-eng - split: test - revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 - metrics: - - type: accuracy - value: 77.4 - - type: f1 - value: 72.54666666666667 - - type: precision - value: 70.54318181818181 - - type: recall - value: 77.4 - - task: - type: BitextMining - dataset: - type: mteb/tatoeba-bitext-mining - name: MTEB Tatoeba (kor-eng) - config: kor-eng - split: test - revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 - metrics: - - type: accuracy - value: 78.60000000000001 - - type: f1 - value: 74.1588888888889 - - type: precision - value: 72.30250000000001 - - type: recall - value: 78.60000000000001 - - task: - type: BitextMining - dataset: - type: mteb/tatoeba-bitext-mining - name: MTEB Tatoeba (yid-eng) - config: yid-eng - split: test - revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 - metrics: - - type: accuracy - value: 72.40566037735849 - - type: f1 - value: 66.82587328813744 - - type: precision - value: 64.75039308176099 - - type: recall - value: 72.40566037735849 - - task: - type: BitextMining - dataset: - type: mteb/tatoeba-bitext-mining - name: MTEB Tatoeba (fin-eng) - config: fin-eng - split: test - revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 - metrics: - - type: accuracy - value: 73.8 - - type: f1 - value: 68.56357142857144 - - type: precision - value: 66.3178822055138 - - type: recall - value: 73.8 - - task: - type: BitextMining - dataset: - type: mteb/tatoeba-bitext-mining - name: MTEB Tatoeba (tha-eng) - config: tha-eng - split: test - revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 - metrics: - - type: accuracy - value: 91.78832116788321 - - type: f1 - value: 89.3552311435523 - - type: precision - value: 88.20559610705597 - - type: recall - value: 91.78832116788321 - - task: - type: BitextMining - dataset: - type: mteb/tatoeba-bitext-mining - name: MTEB Tatoeba (wuu-eng) - config: wuu-eng - split: test - revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 - metrics: - - type: accuracy - value: 74.3 - - type: f1 - value: 69.05085581085581 - - type: precision - value: 66.955 - - type: recall - value: 74.3 - - task: - type: Retrieval - dataset: - type: webis-touche2020 - name: MTEB Touche2020 - config: default - split: test - revision: None - metrics: - - type: map_at_1 - value: 2.896 - - type: map_at_10 - value: 8.993 - - type: map_at_100 - value: 14.133999999999999 - - type: map_at_1000 - value: 15.668000000000001 - - type: map_at_3 - value: 5.862 - - type: map_at_5 - value: 7.17 - - type: mrr_at_1 - value: 34.694 - - type: mrr_at_10 - value: 42.931000000000004 - - type: mrr_at_100 - value: 44.81 - - type: mrr_at_1000 - value: 44.81 - - type: mrr_at_3 - value: 38.435 - - type: mrr_at_5 - value: 41.701 - - type: ndcg_at_1 - value: 31.633 - - type: ndcg_at_10 - value: 21.163 - - type: ndcg_at_100 - value: 33.306000000000004 - - type: ndcg_at_1000 - value: 45.275999999999996 - - type: ndcg_at_3 - value: 25.685999999999996 - - type: ndcg_at_5 - value: 23.732 - - type: precision_at_1 - value: 34.694 - - type: precision_at_10 - value: 17.755000000000003 - - type: precision_at_100 - value: 6.938999999999999 - - type: precision_at_1000 - value: 1.48 - - type: precision_at_3 - value: 25.85 - - type: precision_at_5 - value: 23.265 - - type: recall_at_1 - value: 2.896 - - type: recall_at_10 - value: 13.333999999999998 - - type: recall_at_100 - value: 43.517 - - type: recall_at_1000 - value: 79.836 - - type: recall_at_3 - value: 6.306000000000001 - - type: recall_at_5 - value: 8.825 - - task: - type: Classification - dataset: - type: mteb/toxic_conversations_50k - name: MTEB ToxicConversationsClassification - config: default - split: test - revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c - metrics: - - type: accuracy - value: 69.3874 - - type: ap - value: 13.829909072469423 - - type: f1 - value: 53.54534203543492 - - task: - type: Classification - dataset: - type: mteb/tweet_sentiment_extraction - name: MTEB TweetSentimentExtractionClassification - config: default - split: test - revision: d604517c81ca91fe16a244d1248fc021f9ecee7a - metrics: - - type: accuracy - value: 62.62026032823995 - - type: f1 - value: 62.85251350485221 - - task: - type: Clustering - dataset: - type: mteb/twentynewsgroups-clustering - name: MTEB TwentyNewsgroupsClustering - config: default - split: test - revision: 6125ec4e24fa026cec8a478383ee943acfbd5449 - metrics: - - type: v_measure - value: 33.21527881409797 - - task: - type: PairClassification - dataset: - type: mteb/twittersemeval2015-pairclassification - name: MTEB TwitterSemEval2015 - config: default - split: test - revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1 - metrics: - - type: cos_sim_accuracy - value: 84.97943613280086 - - type: cos_sim_ap - value: 70.75454316885921 - - type: cos_sim_f1 - value: 65.38274012676743 - - type: cos_sim_precision - value: 60.761214318078835 - - type: cos_sim_recall - value: 70.76517150395777 - - type: dot_accuracy - value: 79.0546581629612 - - type: dot_ap - value: 47.3197121792147 - - type: dot_f1 - value: 49.20106524633821 - - type: dot_precision - value: 42.45499808502489 - - type: dot_recall - value: 58.49604221635884 - - type: euclidean_accuracy - value: 85.08076533349228 - - type: euclidean_ap - value: 70.95016106374474 - - type: euclidean_f1 - value: 65.43987900176455 - - type: euclidean_precision - value: 62.64478764478765 - - type: euclidean_recall - value: 68.49604221635884 - - type: manhattan_accuracy - value: 84.93771234428085 - - type: manhattan_ap - value: 70.63668388755362 - - type: manhattan_f1 - value: 65.23895401262398 - - type: manhattan_precision - value: 56.946084218811485 - - type: manhattan_recall - value: 76.35883905013192 - - type: max_accuracy - value: 85.08076533349228 - - type: max_ap - value: 70.95016106374474 - - type: max_f1 - value: 65.43987900176455 - - task: - type: PairClassification - dataset: - type: mteb/twitterurlcorpus-pairclassification - name: MTEB TwitterURLCorpus - config: default - split: test - revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf - metrics: - - type: cos_sim_accuracy - value: 88.69096130709822 - - type: cos_sim_ap - value: 84.82526278228542 - - type: cos_sim_f1 - value: 77.65485060585536 - - type: cos_sim_precision - value: 75.94582658619167 - - type: cos_sim_recall - value: 79.44256236526024 - - type: dot_accuracy - value: 80.97954748321496 - - type: dot_ap - value: 64.81642914145866 - - type: dot_f1 - value: 60.631996987229975 - - type: dot_precision - value: 54.5897293631712 - - type: dot_recall - value: 68.17831844779796 - - type: euclidean_accuracy - value: 88.6987231730508 - - type: euclidean_ap - value: 84.80003825477253 - - type: euclidean_f1 - value: 77.67194179854496 - - type: euclidean_precision - value: 75.7128235122094 - - type: euclidean_recall - value: 79.73514012935017 - - type: manhattan_accuracy - value: 88.62692591298949 - - type: manhattan_ap - value: 84.80451408255276 - - type: manhattan_f1 - value: 77.69888949572183 - - type: manhattan_precision - value: 73.70311528631622 - - type: manhattan_recall - value: 82.15275639051433 - - type: max_accuracy - value: 88.6987231730508 - - type: max_ap - value: 84.82526278228542 - - type: max_f1 - value: 77.69888949572183 -language: -- multilingual -- af -- am -- ar -- as -- az -- be -- bg -- bn -- br -- bs -- ca -- cs -- cy -- da -- de -- el -- en -- eo -- es -- et -- eu -- fa -- fi -- fr -- fy -- ga -- gd -- gl -- gu -- ha -- he -- hi -- hr -- hu -- hy -- id -- is -- it -- ja -- jv -- ka -- kk -- km -- kn -- ko -- ku -- ky -- la -- lo -- lt -- lv -- mg -- mk -- ml -- mn -- mr -- ms -- my -- ne -- nl -- 'no' -- om -- or -- pa -- pl -- ps -- pt -- ro -- ru -- sa -- sd -- si -- sk -- sl -- so -- sq -- sr -- su -- sv -- sw -- ta -- te -- th -- tl -- tr -- ug -- uk -- ur -- uz -- vi -- xh -- yi -- zh -license: mit ---- - -## Multilingual-E5-small - -[Text Embeddings by Weakly-Supervised Contrastive Pre-training](https://arxiv.org/pdf/2212.03533.pdf). -Liang Wang, Nan Yang, Xiaolong Huang, Binxing Jiao, Linjun Yang, Daxin Jiang, Rangan Majumder, Furu Wei, arXiv 2022 - -This model has 12 layers and the embedding size is 384. - -## Usage - -Below is an example to encode queries and passages from the MS-MARCO passage ranking dataset. - -```python -import torch.nn.functional as F - -from torch import Tensor -from transformers import AutoTokenizer, AutoModel - - -def average_pool(last_hidden_states: Tensor, - attention_mask: Tensor) -> Tensor: - last_hidden = last_hidden_states.masked_fill(~attention_mask[..., None].bool(), 0.0) - return last_hidden.sum(dim=1) / attention_mask.sum(dim=1)[..., None] - - -# Each input text should start with "query: " or "passage: ", even for non-English texts. -# For tasks other than retrieval, you can simply use the "query: " prefix. -input_texts = ['query: how much protein should a female eat', - 'query: 南瓜的家常做法', - "passage: As a general guideline, the CDC's average requirement of protein for women ages 19 to 70 is 46 grams per day. But, as you can see from this chart, you'll need to increase that if you're expecting or training for a marathon. Check out the chart below to see how much protein you should be eating each day.", - "passage: 1.清炒南瓜丝 原料:嫩南瓜半个 调料:葱、盐、白糖、鸡精 做法: 1、南瓜用刀薄薄的削去表面一层皮,用勺子刮去瓤 2、擦成细丝(没有擦菜板就用刀慢慢切成细丝) 3、锅烧热放油,入葱花煸出香味 4、入南瓜丝快速翻炒一分钟左右,放盐、一点白糖和鸡精调味出锅 2.香葱炒南瓜 原料:南瓜1只 调料:香葱、蒜末、橄榄油、盐 做法: 1、将南瓜去皮,切成片 2、油锅8成热后,将蒜末放入爆香 3、爆香后,将南瓜片放入,翻炒 4、在翻炒的同时,可以不时地往锅里加水,但不要太多 5、放入盐,炒匀 6、南瓜差不多软和绵了之后,就可以关火 7、撒入香葱,即可出锅"] - -tokenizer = AutoTokenizer.from_pretrained('intfloat/multilingual-e5-small') -model = AutoModel.from_pretrained('intfloat/multilingual-e5-small') - -# Tokenize the input texts -batch_dict = tokenizer(input_texts, max_length=512, padding=True, truncation=True, return_tensors='pt') - -outputs = model(**batch_dict) -embeddings = average_pool(outputs.last_hidden_state, batch_dict['attention_mask']) - -# normalize embeddings -embeddings = F.normalize(embeddings, p=2, dim=1) -scores = (embeddings[:2] @ embeddings[2:].T) * 100 -print(scores.tolist()) -``` - -## Supported Languages - -This model is initialized from [microsoft/Multilingual-MiniLM-L12-H384](https://huggingface.co/microsoft/Multilingual-MiniLM-L12-H384) -and continually trained on a mixture of multilingual datasets. -It supports 100 languages from xlm-roberta, -but low-resource languages may see performance degradation. - -## Training Details - -**Initialization**: [microsoft/Multilingual-MiniLM-L12-H384](https://huggingface.co/microsoft/Multilingual-MiniLM-L12-H384) - -**First stage**: contrastive pre-training with weak supervision - -| Dataset | Weak supervision | # of text pairs | -|--------------------------------------------------------------------------------------------------------|---------------------------------------|-----------------| -| Filtered [mC4](https://huggingface.co/datasets/mc4) | (title, page content) | 1B | -| [CC News](https://huggingface.co/datasets/intfloat/multilingual_cc_news) | (title, news content) | 400M | -| [NLLB](https://huggingface.co/datasets/allenai/nllb) | translation pairs | 2.4B | -| [Wikipedia](https://huggingface.co/datasets/intfloat/wikipedia) | (hierarchical section title, passage) | 150M | -| Filtered [Reddit](https://www.reddit.com/) | (comment, response) | 800M | -| [S2ORC](https://github.com/allenai/s2orc) | (title, abstract) and citation pairs | 100M | -| [Stackexchange](https://stackexchange.com/) | (question, answer) | 50M | -| [xP3](https://huggingface.co/datasets/bigscience/xP3) | (input prompt, response) | 80M | -| [Miscellaneous unsupervised SBERT data](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2) | - | 10M | - -**Second stage**: supervised fine-tuning - -| Dataset | Language | # of text pairs | -|----------------------------------------------------------------------------------------|--------------|-----------------| -| [MS MARCO](https://microsoft.github.io/msmarco/) | English | 500k | -| [NQ](https://github.com/facebookresearch/DPR) | English | 70k | -| [Trivia QA](https://github.com/facebookresearch/DPR) | English | 60k | -| [NLI from SimCSE](https://github.com/princeton-nlp/SimCSE) | English | <300k | -| [ELI5](https://huggingface.co/datasets/eli5) | English | 500k | -| [DuReader Retrieval](https://github.com/baidu/DuReader/tree/master/DuReader-Retrieval) | Chinese | 86k | -| [KILT Fever](https://huggingface.co/datasets/kilt_tasks) | English | 70k | -| [KILT HotpotQA](https://huggingface.co/datasets/kilt_tasks) | English | 70k | -| [SQuAD](https://huggingface.co/datasets/squad) | English | 87k | -| [Quora](https://huggingface.co/datasets/quora) | English | 150k | -| [Mr. TyDi](https://huggingface.co/datasets/castorini/mr-tydi) | 11 languages | 50k | -| [MIRACL](https://huggingface.co/datasets/miracl/miracl) | 16 languages | 40k | - -For all labeled datasets, we only use its training set for fine-tuning. - -For other training details, please refer to our paper at [https://arxiv.org/pdf/2212.03533.pdf](https://arxiv.org/pdf/2212.03533.pdf). - -## Benchmark Results on [Mr. TyDi](https://arxiv.org/abs/2108.08787) - -| Model | Avg MRR@10 | | ar | bn | en | fi | id | ja | ko | ru | sw | te | th | -|-----------------------|------------|-------|------| --- | --- | --- | --- | --- | --- | --- |------| --- | --- | -| BM25 | 33.3 | | 36.7 | 41.3 | 15.1 | 28.8 | 38.2 | 21.7 | 28.1 | 32.9 | 39.6 | 42.4 | 41.7 | -| mDPR | 16.7 | | 26.0 | 25.8 | 16.2 | 11.3 | 14.6 | 18.1 | 21.9 | 18.5 | 7.3 | 10.6 | 13.5 | -| BM25 + mDPR | 41.7 | | 49.1 | 53.5 | 28.4 | 36.5 | 45.5 | 35.5 | 36.2 | 42.7 | 40.5 | 42.0 | 49.2 | -| | | -| multilingual-e5-small | 64.4 | | 71.5 | 66.3 | 54.5 | 57.7 | 63.2 | 55.4 | 54.3 | 60.8 | 65.4 | 89.1 | 70.1 | -| multilingual-e5-base | 65.9 | | 72.3 | 65.0 | 58.5 | 60.8 | 64.9 | 56.6 | 55.8 | 62.7 | 69.0 | 86.6 | 72.7 | -| multilingual-e5-large | **70.5** | | 77.5 | 73.2 | 60.8 | 66.8 | 68.5 | 62.5 | 61.6 | 65.8 | 72.7 | 90.2 | 76.2 | - -## MTEB Benchmark Evaluation - -Check out [unilm/e5](https://github.com/microsoft/unilm/tree/master/e5) to reproduce evaluation results -on the [BEIR](https://arxiv.org/abs/2104.08663) and [MTEB benchmark](https://arxiv.org/abs/2210.07316). - -## Support for Sentence Transformers - -Below is an example for usage with sentence_transformers. -```python -from sentence_transformers import SentenceTransformer -model = SentenceTransformer('intfloat/multilingual-e5-small') -input_texts = [ - 'query: how much protein should a female eat', - 'query: 南瓜的家常做法', - "passage: As a general guideline, the CDC's average requirement of protein for women ages 19 to 70 i s 46 grams per day. But, as you can see from this chart, you'll need to increase that if you're expecting or traini ng for a marathon. Check out the chart below to see how much protein you should be eating each day.", - "passage: 1.清炒南瓜丝 原料:嫩南瓜半个 调料:葱、盐、白糖、鸡精 做法: 1、南瓜用刀薄薄的削去表面一层皮 ,用勺子刮去瓤 2、擦成细丝(没有擦菜板就用刀慢慢切成细丝) 3、锅烧热放油,入葱花煸出香味 4、入南瓜丝快速翻炒一分钟左右, 放盐、一点白糖和鸡精调味出锅 2.香葱炒南瓜 原料:南瓜1只 调料:香葱、蒜末、橄榄油、盐 做法: 1、将南瓜去皮,切成片 2、油 锅8成热后,将蒜末放入爆香 3、爆香后,将南瓜片放入,翻炒 4、在翻炒的同时,可以不时地往锅里加水,但不要太多 5、放入盐,炒匀 6、南瓜差不多软和绵了之后,就可以关火 7、撒入香葱,即可出锅" -] -embeddings = model.encode(input_texts, normalize_embeddings=True) -``` - -Package requirements - -`pip install sentence_transformers~=2.2.2` - -Contributors: [michaelfeil](https://huggingface.co/michaelfeil) - -## FAQ - -**1. Do I need to add the prefix "query: " and "passage: " to input texts?** - -Yes, this is how the model is trained, otherwise you will see a performance degradation. - -Here are some rules of thumb: -- Use "query: " and "passage: " correspondingly for asymmetric tasks such as passage retrieval in open QA, ad-hoc information retrieval. - -- Use "query: " prefix for symmetric tasks such as semantic similarity, bitext mining, paraphrase retrieval. - -- Use "query: " prefix if you want to use embeddings as features, such as linear probing classification, clustering. - -**2. Why are my reproduced results slightly different from reported in the model card?** - -Different versions of `transformers` and `pytorch` could cause negligible but non-zero performance differences. - -**3. Why does the cosine similarity scores distribute around 0.7 to 1.0?** - -This is a known and expected behavior as we use a low temperature 0.01 for InfoNCE contrastive loss. - -For text embedding tasks like text retrieval or semantic similarity, -what matters is the relative order of the scores instead of the absolute values, -so this should not be an issue. - -## Citation - -If you find our paper or models helpful, please consider cite as follows: - -``` -@article{wang2022text, - title={Text Embeddings by Weakly-Supervised Contrastive Pre-training}, - author={Wang, Liang and Yang, Nan and Huang, Xiaolong and Jiao, Binxing and Yang, Linjun and Jiang, Daxin and Majumder, Rangan and Wei, Furu}, - journal={arXiv preprint arXiv:2212.03533}, - year={2022} -} -``` - -## Limitations - -Long texts will be truncated to at most 512 tokens. \ No newline at end of file diff --git a/spaces/Kevin676/ChatGPT-with-Voice-Cloning-in-Chinese/pre.py b/spaces/Kevin676/ChatGPT-with-Voice-Cloning-in-Chinese/pre.py deleted file mode 100644 index 17fd0f710153bfb71b717678998a853e364c8cd8..0000000000000000000000000000000000000000 --- a/spaces/Kevin676/ChatGPT-with-Voice-Cloning-in-Chinese/pre.py +++ /dev/null @@ -1,76 +0,0 @@ -from synthesizer.preprocess import create_embeddings -from utils.argutils import print_args -from pathlib import Path -import argparse - -from synthesizer.preprocess import preprocess_dataset -from synthesizer.hparams import hparams -from utils.argutils import print_args -from pathlib import Path -import argparse - -recognized_datasets = [ - "aidatatang_200zh", - "magicdata", - "aishell3", - "data_aishell" -] - -if __name__ == "__main__": - parser = argparse.ArgumentParser( - description="Preprocesses audio files from datasets, encodes them as mel spectrograms " - "and writes them to the disk. Audio files are also saved, to be used by the " - "vocoder for training.", - formatter_class=argparse.ArgumentDefaultsHelpFormatter - ) - parser.add_argument("datasets_root", type=Path, help=\ - "Path to the directory containing your datasets.") - parser.add_argument("-o", "--out_dir", type=Path, default=argparse.SUPPRESS, help=\ - "Path to the output directory that will contain the mel spectrograms, the audios and the " - "embeds. Defaults to /SV2TTS/synthesizer/") - parser.add_argument("-n", "--n_processes", type=int, default=1, help=\ - "Number of processes in parallel.") - parser.add_argument("-s", "--skip_existing", action="store_true", help=\ - "Whether to overwrite existing files with the same name. Useful if the preprocessing was " - "interrupted. ") - parser.add_argument("--hparams", type=str, default="", help=\ - "Hyperparameter overrides as a comma-separated list of name-value pairs") - parser.add_argument("--no_trim", action="store_true", help=\ - "Preprocess audio without trimming silences (not recommended).") - parser.add_argument("--no_alignments", action="store_true", help=\ - "Use this option when dataset does not include alignments\ - (these are used to split long audio files into sub-utterances.)") - parser.add_argument("-d", "--dataset", type=str, default="aidatatang_200zh", help=\ - "Name of the dataset to process, allowing values: magicdata, aidatatang_200zh, aishell3, data_aishell.") - parser.add_argument("-e", "--encoder_model_fpath", type=Path, default="encoder/saved_models/pretrained.pt", help=\ - "Path your trained encoder model.") - parser.add_argument("-ne", "--n_processes_embed", type=int, default=1, help=\ - "Number of processes in parallel.An encoder is created for each, so you may need to lower " - "this value on GPUs with low memory. Set it to 1 if CUDA is unhappy") - args = parser.parse_args() - - # Process the arguments - if not hasattr(args, "out_dir"): - args.out_dir = args.datasets_root.joinpath("SV2TTS", "synthesizer") - assert args.dataset in recognized_datasets, 'is not supported, please vote for it in https://github.com/babysor/MockingBird/issues/10' - # Create directories - assert args.datasets_root.exists() - args.out_dir.mkdir(exist_ok=True, parents=True) - - # Verify webrtcvad is available - if not args.no_trim: - try: - import webrtcvad - except: - raise ModuleNotFoundError("Package 'webrtcvad' not found. This package enables " - "noise removal and is recommended. Please install and try again. If installation fails, " - "use --no_trim to disable this error message.") - encoder_model_fpath = args.encoder_model_fpath - del args.no_trim, args.encoder_model_fpath - - args.hparams = hparams.parse(args.hparams) - n_processes_embed = args.n_processes_embed - del args.n_processes_embed - preprocess_dataset(**vars(args)) - - create_embeddings(synthesizer_root=args.out_dir, n_processes=n_processes_embed, encoder_model_fpath=encoder_model_fpath) diff --git a/spaces/Kieranm/britishmus_plate_material_classifier_space/README.md b/spaces/Kieranm/britishmus_plate_material_classifier_space/README.md deleted file mode 100644 index 1ad49b3d10c9f88169003f015a7f7898b7986ffd..0000000000000000000000000000000000000000 --- a/spaces/Kieranm/britishmus_plate_material_classifier_space/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Britishmus Plate Material Classifier Space -emoji: 📚 -colorFrom: blue -colorTo: red -sdk: gradio -sdk_version: 3.0.13 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Kimata/Sanskrit-TTS/indic_nlp_library/indicnlp/loader.py b/spaces/Kimata/Sanskrit-TTS/indic_nlp_library/indicnlp/loader.py deleted file mode 100644 index b1304f90e8cb354c3c88628069e77c98672073d3..0000000000000000000000000000000000000000 --- a/spaces/Kimata/Sanskrit-TTS/indic_nlp_library/indicnlp/loader.py +++ /dev/null @@ -1,35 +0,0 @@ -# -# Copyright (c) 2013-present, Anoop Kunchukuttan -# All rights reserved. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. -# - -from indicnlp import common -from indicnlp.script import indic_scripts -from indicnlp.script import english_script -from indicnlp.transliterate import unicode_transliterate - -def load(): - """ - Initializes the Indic NLP library. Clients should call this method before using the library. - - Any module requiring initialization should have a init() method, to which a call must be made from this method - """ - - ### Order of intialization may matter - - # Common has to be loaded first to get access to resources - common.init() - - ## Initialization of Indic scripts module - indic_scripts.init() - - ## Initialization of English scripts module - english_script.init() - - ## Initialization of unicode_transliterate module - unicode_transliterate.init() - - diff --git a/spaces/Kreaols/ChuanhuChatGPT/chatgpt - macOS.command b/spaces/Kreaols/ChuanhuChatGPT/chatgpt - macOS.command deleted file mode 100644 index fa015edca9e6916f24394813ce8ba77d2072e296..0000000000000000000000000000000000000000 --- a/spaces/Kreaols/ChuanhuChatGPT/chatgpt - macOS.command +++ /dev/null @@ -1,7 +0,0 @@ -#!/bin/bash -echo Opening ChuanhuChatGPT... -cd "$(dirname "${BASH_SOURCE[0]}")" -nohup python3 ChuanhuChatbot.py >/dev/null 2>&1 & -sleep 5 -open http://127.0.0.1:7860 -echo Finished opening ChuanhuChatGPT (http://127.0.0.1:7860/). If you kill ChuanhuChatbot, Use "pkill -f 'ChuanhuChatbot'" command in terminal. \ No newline at end of file diff --git a/spaces/KyanChen/FunSR/tools/data_tools/get_train_val_list.py b/spaces/KyanChen/FunSR/tools/data_tools/get_train_val_list.py deleted file mode 100644 index 4316bbe4223a3fcf7ca52e1626b9dd6851ea28d5..0000000000000000000000000000000000000000 --- a/spaces/KyanChen/FunSR/tools/data_tools/get_train_val_list.py +++ /dev/null @@ -1,49 +0,0 @@ -import glob -import json -import os - -import numpy as np -import pickle -import sys -import tqdm -import shutil -from skimage import io - -pre_path = '/Users/kyanchen/Documents/AID/AID' -sub_folder_list = glob.glob(pre_path +'/*') -# train_val_frac = [0.6, 0.2] -train_val_frac = [0.8, 0.2] - -train_list = [] -val_list = [] -test_list = [] -for sub_folder in sub_folder_list: - img_list = glob.glob(sub_folder+'/*') - # img_list = [x for x in img_list if 0 < io.imread(x).shape[0] < 60] - np.random.shuffle(img_list) - np.random.shuffle(img_list) - np.random.shuffle(img_list) - # img_list = img_list - - # for UC datasets - # num_train_samps = int(len(img_list) * train_val_frac[0]) - # num_val_samps = int(len(img_list) * train_val_frac[1]) - # train_list += img_list[:num_train_samps] - # val_list += img_list[num_train_samps:num_train_samps+num_val_samps] - # test_list += img_list[num_train_samps+num_val_samps:] - - # for AID datasets - num_train_samps = int(len(img_list) * train_val_frac[0]) - 10 - num_val_samps = 10 - - train_list += img_list[:num_train_samps] - val_list += img_list[num_train_samps:num_train_samps + num_val_samps] - test_list += img_list[num_train_samps + num_val_samps:] - -data = {} -folder = pre_path + f'/..' -os.makedirs(folder, exist_ok=True) -for phase in ['train_list', 'val_list', 'test_list']: - data[phase.split('_')[0]] = [os.path.basename(os.path.dirname(file)) + '/' + os.path.basename(file) for file in eval(phase)] - -json.dump(data, open(folder+'/AID_split.json', 'w')) \ No newline at end of file diff --git a/spaces/KyanChen/RSPrompter/mmdet/models/dense_heads/centernet_update_head.py b/spaces/KyanChen/RSPrompter/mmdet/models/dense_heads/centernet_update_head.py deleted file mode 100644 index 2eb44edaf8bf811e0e257e7ff2bd42872b19efe4..0000000000000000000000000000000000000000 --- a/spaces/KyanChen/RSPrompter/mmdet/models/dense_heads/centernet_update_head.py +++ /dev/null @@ -1,624 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from typing import Dict, List, Optional, Sequence, Tuple - -import torch -import torch.nn as nn -from mmcv.cnn import Scale -from mmengine.structures import InstanceData -from torch import Tensor - -from mmdet.registry import MODELS -from mmdet.structures.bbox import bbox2distance -from mmdet.utils import (ConfigType, InstanceList, OptConfigType, - OptInstanceList, reduce_mean) -from ..utils import multi_apply -from .anchor_free_head import AnchorFreeHead - -INF = 1000000000 -RangeType = Sequence[Tuple[int, int]] - - -def _transpose(tensor_list: List[Tensor], - num_point_list: list) -> List[Tensor]: - """This function is used to transpose image first tensors to level first - ones.""" - for img_idx in range(len(tensor_list)): - tensor_list[img_idx] = torch.split( - tensor_list[img_idx], num_point_list, dim=0) - - tensors_level_first = [] - for targets_per_level in zip(*tensor_list): - tensors_level_first.append(torch.cat(targets_per_level, dim=0)) - return tensors_level_first - - -@MODELS.register_module() -class CenterNetUpdateHead(AnchorFreeHead): - """CenterNetUpdateHead is an improved version of CenterNet in CenterNet2. - Paper link ``_. - - Args: - num_classes (int): Number of categories excluding the background - category. - in_channels (int): Number of channel in the input feature map. - regress_ranges (Sequence[Tuple[int, int]]): Regress range of multiple - level points. - hm_min_radius (int): Heatmap target minimum radius of cls branch. - Defaults to 4. - hm_min_overlap (float): Heatmap target minimum overlap of cls branch. - Defaults to 0.8. - more_pos_thresh (float): The filtering threshold when the cls branch - adds more positive samples. Defaults to 0.2. - more_pos_topk (int): The maximum number of additional positive samples - added to each gt. Defaults to 9. - soft_weight_on_reg (bool): Whether to use the soft target of the - cls branch as the soft weight of the bbox branch. - Defaults to False. - loss_cls (:obj:`ConfigDict` or dict): Config of cls loss. Defaults to - dict(type='GaussianFocalLoss', loss_weight=1.0) - loss_bbox (:obj:`ConfigDict` or dict): Config of bbox loss. Defaults to - dict(type='GIoULoss', loss_weight=2.0). - norm_cfg (:obj:`ConfigDict` or dict, optional): dictionary to construct - and config norm layer. Defaults to - ``norm_cfg=dict(type='GN', num_groups=32, requires_grad=True)``. - train_cfg (:obj:`ConfigDict` or dict, optional): Training config. - Unused in CenterNet. Reserved for compatibility with - SingleStageDetector. - test_cfg (:obj:`ConfigDict` or dict, optional): Testing config - of CenterNet. - """ - - def __init__(self, - num_classes: int, - in_channels: int, - regress_ranges: RangeType = ((0, 80), (64, 160), (128, 320), - (256, 640), (512, INF)), - hm_min_radius: int = 4, - hm_min_overlap: float = 0.8, - more_pos_thresh: float = 0.2, - more_pos_topk: int = 9, - soft_weight_on_reg: bool = False, - loss_cls: ConfigType = dict( - type='GaussianFocalLoss', - pos_weight=0.25, - neg_weight=0.75, - loss_weight=1.0), - loss_bbox: ConfigType = dict( - type='GIoULoss', loss_weight=2.0), - norm_cfg: OptConfigType = dict( - type='GN', num_groups=32, requires_grad=True), - train_cfg: OptConfigType = None, - test_cfg: OptConfigType = None, - **kwargs) -> None: - super().__init__( - num_classes=num_classes, - in_channels=in_channels, - loss_cls=loss_cls, - loss_bbox=loss_bbox, - norm_cfg=norm_cfg, - train_cfg=train_cfg, - test_cfg=test_cfg, - **kwargs) - self.soft_weight_on_reg = soft_weight_on_reg - self.hm_min_radius = hm_min_radius - self.more_pos_thresh = more_pos_thresh - self.more_pos_topk = more_pos_topk - self.delta = (1 - hm_min_overlap) / (1 + hm_min_overlap) - self.sigmoid_clamp = 0.0001 - - # GaussianFocalLoss must be sigmoid mode - self.use_sigmoid_cls = True - self.cls_out_channels = num_classes - - self.regress_ranges = regress_ranges - self.scales = nn.ModuleList([Scale(1.0) for _ in self.strides]) - - def _init_predictor(self) -> None: - """Initialize predictor layers of the head.""" - self.conv_cls = nn.Conv2d( - self.feat_channels, self.num_classes, 3, padding=1) - self.conv_reg = nn.Conv2d(self.feat_channels, 4, 3, padding=1) - - def forward(self, x: Tuple[Tensor]) -> Tuple[List[Tensor], List[Tensor]]: - """Forward features from the upstream network. - - Args: - x (tuple[Tensor]): Features from the upstream network, each is - a 4D-tensor. - - Returns: - tuple: A tuple of each level outputs. - - - cls_scores (list[Tensor]): Box scores for each scale level, \ - each is a 4D-tensor, the channel number is num_classes. - - bbox_preds (list[Tensor]): Box energies / deltas for each \ - scale level, each is a 4D-tensor, the channel number is 4. - """ - return multi_apply(self.forward_single, x, self.scales, self.strides) - - def forward_single(self, x: Tensor, scale: Scale, - stride: int) -> Tuple[Tensor, Tensor]: - """Forward features of a single scale level. - - Args: - x (Tensor): FPN feature maps of the specified stride. - scale (:obj:`mmcv.cnn.Scale`): Learnable scale module to resize - the bbox prediction. - stride (int): The corresponding stride for feature maps. - - Returns: - tuple: scores for each class, bbox predictions of - input feature maps. - """ - cls_score, bbox_pred, _, _ = super().forward_single(x) - # scale the bbox_pred of different level - # float to avoid overflow when enabling FP16 - bbox_pred = scale(bbox_pred).float() - # bbox_pred needed for gradient computation has been modified - # by F.relu(bbox_pred) when run with PyTorch 1.10. So replace - # F.relu(bbox_pred) with bbox_pred.clamp(min=0) - bbox_pred = bbox_pred.clamp(min=0) - if not self.training: - bbox_pred *= stride - return cls_score, bbox_pred - - def loss_by_feat( - self, - cls_scores: List[Tensor], - bbox_preds: List[Tensor], - batch_gt_instances: InstanceList, - batch_img_metas: List[dict], - batch_gt_instances_ignore: OptInstanceList = None - ) -> Dict[str, Tensor]: - """Calculate the loss based on the features extracted by the detection - head. - - Args: - cls_scores (list[Tensor]): Box scores for each scale level, - each is a 4D-tensor, the channel number is num_classes. - bbox_preds (list[Tensor]): Box energies / deltas for each scale - level, each is a 4D-tensor, the channel number is 4. - batch_gt_instances (list[:obj:`InstanceData`]): Batch of - gt_instance. It usually includes ``bboxes`` and ``labels`` - attributes. - batch_img_metas (list[dict]): Meta information of each image, e.g., - image size, scaling factor, etc. - batch_gt_instances_ignore (list[:obj:`InstanceData`], optional): - Batch of gt_instances_ignore. It includes ``bboxes`` attribute - data that is ignored during training and testing. - Defaults to None. - - Returns: - dict[str, Tensor]: A dictionary of loss components. - """ - num_imgs = cls_scores[0].size(0) - assert len(cls_scores) == len(bbox_preds) - featmap_sizes = [featmap.size()[-2:] for featmap in cls_scores] - all_level_points = self.prior_generator.grid_priors( - featmap_sizes, - dtype=bbox_preds[0].dtype, - device=bbox_preds[0].device) - - # 1 flatten outputs - flatten_cls_scores = [ - cls_score.permute(0, 2, 3, 1).reshape(-1, self.cls_out_channels) - for cls_score in cls_scores - ] - flatten_bbox_preds = [ - bbox_pred.permute(0, 2, 3, 1).reshape(-1, 4) - for bbox_pred in bbox_preds - ] - flatten_cls_scores = torch.cat(flatten_cls_scores) - flatten_bbox_preds = torch.cat(flatten_bbox_preds) - - # repeat points to align with bbox_preds - flatten_points = torch.cat( - [points.repeat(num_imgs, 1) for points in all_level_points]) - - assert (torch.isfinite(flatten_bbox_preds).all().item()) - - # 2 calc reg and cls branch targets - cls_targets, bbox_targets = self.get_targets(all_level_points, - batch_gt_instances) - - # 3 add more pos index for cls branch - featmap_sizes = flatten_points.new_tensor(featmap_sizes) - pos_inds, cls_labels = self.add_cls_pos_inds(flatten_points, - flatten_bbox_preds, - featmap_sizes, - batch_gt_instances) - - # 4 calc cls loss - if pos_inds is None: - # num_gts=0 - num_pos_cls = bbox_preds[0].new_tensor(0, dtype=torch.float) - else: - num_pos_cls = bbox_preds[0].new_tensor( - len(pos_inds), dtype=torch.float) - num_pos_cls = max(reduce_mean(num_pos_cls), 1.0) - flatten_cls_scores = flatten_cls_scores.sigmoid().clamp( - min=self.sigmoid_clamp, max=1 - self.sigmoid_clamp) - cls_loss = self.loss_cls( - flatten_cls_scores, - cls_targets, - pos_inds=pos_inds, - pos_labels=cls_labels, - avg_factor=num_pos_cls) - - # 5 calc reg loss - pos_bbox_inds = torch.nonzero( - bbox_targets.max(dim=1)[0] >= 0).squeeze(1) - pos_bbox_preds = flatten_bbox_preds[pos_bbox_inds] - pos_bbox_targets = bbox_targets[pos_bbox_inds] - - bbox_weight_map = cls_targets.max(dim=1)[0] - bbox_weight_map = bbox_weight_map[pos_bbox_inds] - bbox_weight_map = bbox_weight_map if self.soft_weight_on_reg \ - else torch.ones_like(bbox_weight_map) - num_pos_bbox = max(reduce_mean(bbox_weight_map.sum()), 1.0) - - if len(pos_bbox_inds) > 0: - pos_points = flatten_points[pos_bbox_inds] - pos_decoded_bbox_preds = self.bbox_coder.decode( - pos_points, pos_bbox_preds) - pos_decoded_target_preds = self.bbox_coder.decode( - pos_points, pos_bbox_targets) - bbox_loss = self.loss_bbox( - pos_decoded_bbox_preds, - pos_decoded_target_preds, - weight=bbox_weight_map, - avg_factor=num_pos_bbox) - else: - bbox_loss = flatten_bbox_preds.sum() * 0 - - return dict(loss_cls=cls_loss, loss_bbox=bbox_loss) - - def get_targets( - self, - points: List[Tensor], - batch_gt_instances: InstanceList, - ) -> Tuple[Tensor, Tensor]: - """Compute classification and bbox targets for points in multiple - images. - - Args: - points (list[Tensor]): Points of each fpn level, each has shape - (num_points, 2). - batch_gt_instances (list[:obj:`InstanceData`]): Batch of - gt_instance. It usually includes ``bboxes`` and ``labels`` - attributes. - - Returns: - tuple: Targets of each level. - - - concat_lvl_labels (Tensor): Labels of all level and batch. - - concat_lvl_bbox_targets (Tensor): BBox targets of all \ - level and batch. - """ - assert len(points) == len(self.regress_ranges) - - num_levels = len(points) - # the number of points per img, per lvl - num_points = [center.size(0) for center in points] - - # expand regress ranges to align with points - expanded_regress_ranges = [ - points[i].new_tensor(self.regress_ranges[i])[None].expand_as( - points[i]) for i in range(num_levels) - ] - # concat all levels points and regress ranges - concat_regress_ranges = torch.cat(expanded_regress_ranges, dim=0) - concat_points = torch.cat(points, dim=0) - concat_strides = torch.cat([ - concat_points.new_ones(num_points[i]) * self.strides[i] - for i in range(num_levels) - ]) - - # get labels and bbox_targets of each image - cls_targets_list, bbox_targets_list = multi_apply( - self._get_targets_single, - batch_gt_instances, - points=concat_points, - regress_ranges=concat_regress_ranges, - strides=concat_strides) - - bbox_targets_list = _transpose(bbox_targets_list, num_points) - cls_targets_list = _transpose(cls_targets_list, num_points) - concat_lvl_bbox_targets = torch.cat(bbox_targets_list, 0) - concat_lvl_cls_targets = torch.cat(cls_targets_list, dim=0) - return concat_lvl_cls_targets, concat_lvl_bbox_targets - - def _get_targets_single(self, gt_instances: InstanceData, points: Tensor, - regress_ranges: Tensor, - strides: Tensor) -> Tuple[Tensor, Tensor]: - """Compute classification and bbox targets for a single image.""" - num_points = points.size(0) - num_gts = len(gt_instances) - gt_bboxes = gt_instances.bboxes - gt_labels = gt_instances.labels - - if num_gts == 0: - return gt_labels.new_full((num_points, - self.num_classes), - self.num_classes), \ - gt_bboxes.new_full((num_points, 4), -1) - - # Calculate the regression tblr target corresponding to all points - points = points[:, None].expand(num_points, num_gts, 2) - gt_bboxes = gt_bboxes[None].expand(num_points, num_gts, 4) - strides = strides[:, None, None].expand(num_points, num_gts, 2) - - bbox_target = bbox2distance(points, gt_bboxes) # M x N x 4 - - # condition1: inside a gt bbox - inside_gt_bbox_mask = bbox_target.min(dim=2)[0] > 0 # M x N - - # condition2: Calculate the nearest points from - # the upper, lower, left and right ranges from - # the center of the gt bbox - centers = ((gt_bboxes[..., [0, 1]] + gt_bboxes[..., [2, 3]]) / 2) - centers_discret = ((centers / strides).int() * strides).float() + \ - strides / 2 - - centers_discret_dist = points - centers_discret - dist_x = centers_discret_dist[..., 0].abs() - dist_y = centers_discret_dist[..., 1].abs() - inside_gt_center3x3_mask = (dist_x <= strides[..., 0]) & \ - (dist_y <= strides[..., 0]) - - # condition3: limit the regression range for each location - bbox_target_wh = bbox_target[..., :2] + bbox_target[..., 2:] - crit = (bbox_target_wh**2).sum(dim=2)**0.5 / 2 - inside_fpn_level_mask = (crit >= regress_ranges[:, [0]]) & \ - (crit <= regress_ranges[:, [1]]) - bbox_target_mask = inside_gt_bbox_mask & \ - inside_gt_center3x3_mask & \ - inside_fpn_level_mask - - # Calculate the distance weight map - gt_center_peak_mask = ((centers_discret_dist**2).sum(dim=2) == 0) - weighted_dist = ((points - centers)**2).sum(dim=2) # M x N - weighted_dist[gt_center_peak_mask] = 0 - - areas = (gt_bboxes[..., 2] - gt_bboxes[..., 0]) * ( - gt_bboxes[..., 3] - gt_bboxes[..., 1]) - radius = self.delta**2 * 2 * areas - radius = torch.clamp(radius, min=self.hm_min_radius**2) - weighted_dist = weighted_dist / radius - - # Calculate bbox_target - bbox_weighted_dist = weighted_dist.clone() - bbox_weighted_dist[bbox_target_mask == 0] = INF * 1.0 - min_dist, min_inds = bbox_weighted_dist.min(dim=1) - bbox_target = bbox_target[range(len(bbox_target)), - min_inds] # M x N x 4 --> M x 4 - bbox_target[min_dist == INF] = -INF - - # Convert to feature map scale - bbox_target /= strides[:, 0, :].repeat(1, 2) - - # Calculate cls_target - cls_target = self._create_heatmaps_from_dist(weighted_dist, gt_labels) - - return cls_target, bbox_target - - @torch.no_grad() - def add_cls_pos_inds( - self, flatten_points: Tensor, flatten_bbox_preds: Tensor, - featmap_sizes: Tensor, batch_gt_instances: InstanceList - ) -> Tuple[Optional[Tensor], Optional[Tensor]]: - """Provide additional adaptive positive samples to the classification - branch. - - Args: - flatten_points (Tensor): The point after flatten, including - batch image and all levels. The shape is (N, 2). - flatten_bbox_preds (Tensor): The bbox predicts after flatten, - including batch image and all levels. The shape is (N, 4). - featmap_sizes (Tensor): Feature map size of all layers. - The shape is (5, 2). - batch_gt_instances (list[:obj:`InstanceData`]): Batch of - gt_instance. It usually includes ``bboxes`` and ``labels`` - attributes. - - Returns: - tuple: - - - pos_inds (Tensor): Adaptively selected positive sample index. - - cls_labels (Tensor): Corresponding positive class label. - """ - outputs = self._get_center3x3_region_index_targets( - batch_gt_instances, featmap_sizes) - cls_labels, fpn_level_masks, center3x3_inds, \ - center3x3_bbox_targets, center3x3_masks = outputs - - num_gts, total_level, K = cls_labels.shape[0], len( - self.strides), center3x3_masks.shape[-1] - - if num_gts == 0: - return None, None - - # The out-of-bounds index is forcibly set to 0 - # to prevent loss calculation errors - center3x3_inds[center3x3_masks == 0] = 0 - reg_pred_center3x3 = flatten_bbox_preds[center3x3_inds] - center3x3_points = flatten_points[center3x3_inds].view(-1, 2) - - center3x3_bbox_targets_expand = center3x3_bbox_targets.view( - -1, 4).clamp(min=0) - - pos_decoded_bbox_preds = self.bbox_coder.decode( - center3x3_points, reg_pred_center3x3.view(-1, 4)) - pos_decoded_target_preds = self.bbox_coder.decode( - center3x3_points, center3x3_bbox_targets_expand) - center3x3_bbox_loss = self.loss_bbox( - pos_decoded_bbox_preds, - pos_decoded_target_preds, - None, - reduction_override='none').view(num_gts, total_level, - K) / self.loss_bbox.loss_weight - - # Invalid index Loss set to infinity - center3x3_bbox_loss[center3x3_masks == 0] = INF - - # 4 is the center point of the sampled 9 points, the center point - # of gt bbox after discretization. - # The center point of gt bbox after discretization - # must be a positive sample, so we force its loss to be set to 0. - center3x3_bbox_loss.view(-1, K)[fpn_level_masks.view(-1), 4] = 0 - center3x3_bbox_loss = center3x3_bbox_loss.view(num_gts, -1) - - loss_thr = torch.kthvalue( - center3x3_bbox_loss, self.more_pos_topk, dim=1)[0] - - loss_thr[loss_thr > self.more_pos_thresh] = self.more_pos_thresh - new_pos = center3x3_bbox_loss < loss_thr.view(num_gts, 1) - pos_inds = center3x3_inds.view(num_gts, -1)[new_pos] - cls_labels = cls_labels.view(num_gts, - 1).expand(num_gts, - total_level * K)[new_pos] - return pos_inds, cls_labels - - def _create_heatmaps_from_dist(self, weighted_dist: Tensor, - cls_labels: Tensor) -> Tensor: - """Generate heatmaps of classification branch based on weighted - distance map.""" - heatmaps = weighted_dist.new_zeros( - (weighted_dist.shape[0], self.num_classes)) - for c in range(self.num_classes): - inds = (cls_labels == c) # N - if inds.int().sum() == 0: - continue - heatmaps[:, c] = torch.exp(-weighted_dist[:, inds].min(dim=1)[0]) - zeros = heatmaps[:, c] < 1e-4 - heatmaps[zeros, c] = 0 - return heatmaps - - def _get_center3x3_region_index_targets(self, - bacth_gt_instances: InstanceList, - shapes_per_level: Tensor) -> tuple: - """Get the center (and the 3x3 region near center) locations and target - of each objects.""" - cls_labels = [] - inside_fpn_level_masks = [] - center3x3_inds = [] - center3x3_masks = [] - center3x3_bbox_targets = [] - - total_levels = len(self.strides) - batch = len(bacth_gt_instances) - - shapes_per_level = shapes_per_level.long() - area_per_level = (shapes_per_level[:, 0] * shapes_per_level[:, 1]) - - # Select a total of 9 positions of 3x3 in the center of the gt bbox - # as candidate positive samples - K = 9 - dx = shapes_per_level.new_tensor([-1, 0, 1, -1, 0, 1, -1, 0, - 1]).view(1, 1, K) - dy = shapes_per_level.new_tensor([-1, -1, -1, 0, 0, 0, 1, 1, - 1]).view(1, 1, K) - - regress_ranges = shapes_per_level.new_tensor(self.regress_ranges).view( - len(self.regress_ranges), 2) # L x 2 - strides = shapes_per_level.new_tensor(self.strides) - - start_coord_pre_level = [] - _start = 0 - for level in range(total_levels): - start_coord_pre_level.append(_start) - _start = _start + batch * area_per_level[level] - start_coord_pre_level = shapes_per_level.new_tensor( - start_coord_pre_level).view(1, total_levels, 1) - area_per_level = area_per_level.view(1, total_levels, 1) - - for im_i in range(batch): - gt_instance = bacth_gt_instances[im_i] - gt_bboxes = gt_instance.bboxes - gt_labels = gt_instance.labels - num_gts = gt_bboxes.shape[0] - if num_gts == 0: - continue - - cls_labels.append(gt_labels) - - gt_bboxes = gt_bboxes[:, None].expand(num_gts, total_levels, 4) - expanded_strides = strides[None, :, - None].expand(num_gts, total_levels, 2) - expanded_regress_ranges = regress_ranges[None].expand( - num_gts, total_levels, 2) - expanded_shapes_per_level = shapes_per_level[None].expand( - num_gts, total_levels, 2) - - # calc reg_target - centers = ((gt_bboxes[..., [0, 1]] + gt_bboxes[..., [2, 3]]) / 2) - centers_inds = (centers / expanded_strides).long() - centers_discret = centers_inds * expanded_strides \ - + expanded_strides // 2 - - bbox_target = bbox2distance(centers_discret, - gt_bboxes) # M x N x 4 - - # calc inside_fpn_level_mask - bbox_target_wh = bbox_target[..., :2] + bbox_target[..., 2:] - crit = (bbox_target_wh**2).sum(dim=2)**0.5 / 2 - inside_fpn_level_mask = \ - (crit >= expanded_regress_ranges[..., 0]) & \ - (crit <= expanded_regress_ranges[..., 1]) - - inside_gt_bbox_mask = bbox_target.min(dim=2)[0] >= 0 - inside_fpn_level_mask = inside_gt_bbox_mask & inside_fpn_level_mask - inside_fpn_level_masks.append(inside_fpn_level_mask) - - # calc center3x3_ind and mask - expand_ws = expanded_shapes_per_level[..., 1:2].expand( - num_gts, total_levels, K) - expand_hs = expanded_shapes_per_level[..., 0:1].expand( - num_gts, total_levels, K) - centers_inds_x = centers_inds[..., 0:1] - centers_inds_y = centers_inds[..., 1:2] - - center3x3_idx = start_coord_pre_level + \ - im_i * area_per_level + \ - (centers_inds_y + dy) * expand_ws + \ - (centers_inds_x + dx) - center3x3_mask = \ - ((centers_inds_y + dy) < expand_hs) & \ - ((centers_inds_y + dy) >= 0) & \ - ((centers_inds_x + dx) < expand_ws) & \ - ((centers_inds_x + dx) >= 0) - - # recalc center3x3 region reg target - bbox_target = bbox_target / expanded_strides.repeat(1, 1, 2) - center3x3_bbox_target = bbox_target[..., None, :].expand( - num_gts, total_levels, K, 4).clone() - center3x3_bbox_target[..., 0] += dx - center3x3_bbox_target[..., 1] += dy - center3x3_bbox_target[..., 2] -= dx - center3x3_bbox_target[..., 3] -= dy - # update center3x3_mask - center3x3_mask = center3x3_mask & ( - center3x3_bbox_target.min(dim=3)[0] >= 0) # n x L x K - - center3x3_inds.append(center3x3_idx) - center3x3_masks.append(center3x3_mask) - center3x3_bbox_targets.append(center3x3_bbox_target) - - if len(inside_fpn_level_masks) > 0: - cls_labels = torch.cat(cls_labels, dim=0) - inside_fpn_level_masks = torch.cat(inside_fpn_level_masks, dim=0) - center3x3_inds = torch.cat(center3x3_inds, dim=0).long() - center3x3_bbox_targets = torch.cat(center3x3_bbox_targets, dim=0) - center3x3_masks = torch.cat(center3x3_masks, dim=0) - else: - cls_labels = shapes_per_level.new_zeros(0).long() - inside_fpn_level_masks = shapes_per_level.new_zeros( - (0, total_levels)).bool() - center3x3_inds = shapes_per_level.new_zeros( - (0, total_levels, K)).long() - center3x3_bbox_targets = shapes_per_level.new_zeros( - (0, total_levels, K, 4)).float() - center3x3_masks = shapes_per_level.new_zeros( - (0, total_levels, K)).bool() - return cls_labels, inside_fpn_level_masks, center3x3_inds, \ - center3x3_bbox_targets, center3x3_masks diff --git a/spaces/KyanChen/RSPrompter/mmdet/models/detectors/yolof.py b/spaces/KyanChen/RSPrompter/mmdet/models/detectors/yolof.py deleted file mode 100644 index c6d98b9134a7f422fa7ea1f1a1e0d548d36603e8..0000000000000000000000000000000000000000 --- a/spaces/KyanChen/RSPrompter/mmdet/models/detectors/yolof.py +++ /dev/null @@ -1,43 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from mmdet.registry import MODELS -from mmdet.utils import ConfigType, OptConfigType, OptMultiConfig -from .single_stage import SingleStageDetector - - -@MODELS.register_module() -class YOLOF(SingleStageDetector): - r"""Implementation of `You Only Look One-level Feature - `_ - - Args: - backbone (:obj:`ConfigDict` or dict): The backbone module. - neck (:obj:`ConfigDict` or dict): The neck module. - bbox_head (:obj:`ConfigDict` or dict): The bbox head module. - train_cfg (:obj:`ConfigDict` or dict, optional): The training config - of YOLOF. Defaults to None. - test_cfg (:obj:`ConfigDict` or dict, optional): The testing config - of YOLOF. Defaults to None. - data_preprocessor (:obj:`ConfigDict` or dict, optional): - Model preprocessing config for processing the input data. - it usually includes ``to_rgb``, ``pad_size_divisor``, - ``pad_value``, ``mean`` and ``std``. Defaults to None. - init_cfg (:obj:`ConfigDict` or dict, optional): the config to control - the initialization. Defaults to None. - """ - - def __init__(self, - backbone: ConfigType, - neck: ConfigType, - bbox_head: ConfigType, - train_cfg: OptConfigType = None, - test_cfg: OptConfigType = None, - data_preprocessor: OptConfigType = None, - init_cfg: OptMultiConfig = None) -> None: - super().__init__( - backbone=backbone, - neck=neck, - bbox_head=bbox_head, - train_cfg=train_cfg, - test_cfg=test_cfg, - data_preprocessor=data_preprocessor, - init_cfg=init_cfg) diff --git a/spaces/KyanChen/RSPrompter/mmdet/models/task_modules/prior_generators/anchor_generator.py b/spaces/KyanChen/RSPrompter/mmdet/models/task_modules/prior_generators/anchor_generator.py deleted file mode 100644 index 2757697ce2283ec8b46ba89325e63fad0be4a7e8..0000000000000000000000000000000000000000 --- a/spaces/KyanChen/RSPrompter/mmdet/models/task_modules/prior_generators/anchor_generator.py +++ /dev/null @@ -1,848 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import warnings -from typing import List, Optional, Tuple, Union - -import numpy as np -import torch -from mmengine.utils import is_tuple_of -from torch import Tensor -from torch.nn.modules.utils import _pair - -from mmdet.registry import TASK_UTILS -from mmdet.structures.bbox import HorizontalBoxes - -DeviceType = Union[str, torch.device] - - -@TASK_UTILS.register_module() -class AnchorGenerator: - """Standard anchor generator for 2D anchor-based detectors. - - Args: - strides (list[int] | list[tuple[int, int]]): Strides of anchors - in multiple feature levels in order (w, h). - ratios (list[float]): The list of ratios between the height and width - of anchors in a single level. - scales (list[int], Optional): Anchor scales for anchors - in a single level. It cannot be set at the same time - if `octave_base_scale` and `scales_per_octave` are set. - base_sizes (list[int], Optional): The basic sizes - of anchors in multiple levels. - If None is given, strides will be used as base_sizes. - (If strides are non square, the shortest stride is taken.) - scale_major (bool): Whether to multiply scales first when generating - base anchors. If true, the anchors in the same row will have the - same scales. By default it is True in V2.0 - octave_base_scale (int, Optional): The base scale of octave. - scales_per_octave (int, Optional): Number of scales for each octave. - `octave_base_scale` and `scales_per_octave` are usually used in - retinanet and the `scales` should be None when they are set. - centers (list[tuple[float]], Optional): The centers of the anchor - relative to the feature grid center in multiple feature levels. - By default it is set to be None and not used. If a list of tuple of - float is given, they will be used to shift the centers of anchors. - center_offset (float): The offset of center in proportion to anchors' - width and height. By default it is 0 in V2.0. - use_box_type (bool): Whether to warp anchors with the box type data - structure. Defaults to False. - - Examples: - >>> from mmdet.models.task_modules. - ... prior_generators import AnchorGenerator - >>> self = AnchorGenerator([16], [1.], [1.], [9]) - >>> all_anchors = self.grid_priors([(2, 2)], device='cpu') - >>> print(all_anchors) - [tensor([[-4.5000, -4.5000, 4.5000, 4.5000], - [11.5000, -4.5000, 20.5000, 4.5000], - [-4.5000, 11.5000, 4.5000, 20.5000], - [11.5000, 11.5000, 20.5000, 20.5000]])] - >>> self = AnchorGenerator([16, 32], [1.], [1.], [9, 18]) - >>> all_anchors = self.grid_priors([(2, 2), (1, 1)], device='cpu') - >>> print(all_anchors) - [tensor([[-4.5000, -4.5000, 4.5000, 4.5000], - [11.5000, -4.5000, 20.5000, 4.5000], - [-4.5000, 11.5000, 4.5000, 20.5000], - [11.5000, 11.5000, 20.5000, 20.5000]]), \ - tensor([[-9., -9., 9., 9.]])] - """ - - def __init__(self, - strides: Union[List[int], List[Tuple[int, int]]], - ratios: List[float], - scales: Optional[List[int]] = None, - base_sizes: Optional[List[int]] = None, - scale_major: bool = True, - octave_base_scale: Optional[int] = None, - scales_per_octave: Optional[int] = None, - centers: Optional[List[Tuple[float, float]]] = None, - center_offset: float = 0., - use_box_type: bool = False) -> None: - # check center and center_offset - if center_offset != 0: - assert centers is None, 'center cannot be set when center_offset' \ - f'!=0, {centers} is given.' - if not (0 <= center_offset <= 1): - raise ValueError('center_offset should be in range [0, 1], ' - f'{center_offset} is given.') - if centers is not None: - assert len(centers) == len(strides), \ - 'The number of strides should be the same as centers, got ' \ - f'{strides} and {centers}' - - # calculate base sizes of anchors - self.strides = [_pair(stride) for stride in strides] - self.base_sizes = [min(stride) for stride in self.strides - ] if base_sizes is None else base_sizes - assert len(self.base_sizes) == len(self.strides), \ - 'The number of strides should be the same as base sizes, got ' \ - f'{self.strides} and {self.base_sizes}' - - # calculate scales of anchors - assert ((octave_base_scale is not None - and scales_per_octave is not None) ^ (scales is not None)), \ - 'scales and octave_base_scale with scales_per_octave cannot' \ - ' be set at the same time' - if scales is not None: - self.scales = torch.Tensor(scales) - elif octave_base_scale is not None and scales_per_octave is not None: - octave_scales = np.array( - [2**(i / scales_per_octave) for i in range(scales_per_octave)]) - scales = octave_scales * octave_base_scale - self.scales = torch.Tensor(scales) - else: - raise ValueError('Either scales or octave_base_scale with ' - 'scales_per_octave should be set') - - self.octave_base_scale = octave_base_scale - self.scales_per_octave = scales_per_octave - self.ratios = torch.Tensor(ratios) - self.scale_major = scale_major - self.centers = centers - self.center_offset = center_offset - self.base_anchors = self.gen_base_anchors() - self.use_box_type = use_box_type - - @property - def num_base_anchors(self) -> List[int]: - """list[int]: total number of base anchors in a feature grid""" - return self.num_base_priors - - @property - def num_base_priors(self) -> List[int]: - """list[int]: The number of priors (anchors) at a point - on the feature grid""" - return [base_anchors.size(0) for base_anchors in self.base_anchors] - - @property - def num_levels(self) -> int: - """int: number of feature levels that the generator will be applied""" - return len(self.strides) - - def gen_base_anchors(self) -> List[Tensor]: - """Generate base anchors. - - Returns: - list(torch.Tensor): Base anchors of a feature grid in multiple \ - feature levels. - """ - multi_level_base_anchors = [] - for i, base_size in enumerate(self.base_sizes): - center = None - if self.centers is not None: - center = self.centers[i] - multi_level_base_anchors.append( - self.gen_single_level_base_anchors( - base_size, - scales=self.scales, - ratios=self.ratios, - center=center)) - return multi_level_base_anchors - - def gen_single_level_base_anchors(self, - base_size: Union[int, float], - scales: Tensor, - ratios: Tensor, - center: Optional[Tuple[float]] = None) \ - -> Tensor: - """Generate base anchors of a single level. - - Args: - base_size (int | float): Basic size of an anchor. - scales (torch.Tensor): Scales of the anchor. - ratios (torch.Tensor): The ratio between the height - and width of anchors in a single level. - center (tuple[float], optional): The center of the base anchor - related to a single feature grid. Defaults to None. - - Returns: - torch.Tensor: Anchors in a single-level feature maps. - """ - w = base_size - h = base_size - if center is None: - x_center = self.center_offset * w - y_center = self.center_offset * h - else: - x_center, y_center = center - - h_ratios = torch.sqrt(ratios) - w_ratios = 1 / h_ratios - if self.scale_major: - ws = (w * w_ratios[:, None] * scales[None, :]).view(-1) - hs = (h * h_ratios[:, None] * scales[None, :]).view(-1) - else: - ws = (w * scales[:, None] * w_ratios[None, :]).view(-1) - hs = (h * scales[:, None] * h_ratios[None, :]).view(-1) - - # use float anchor and the anchor's center is aligned with the - # pixel center - base_anchors = [ - x_center - 0.5 * ws, y_center - 0.5 * hs, x_center + 0.5 * ws, - y_center + 0.5 * hs - ] - base_anchors = torch.stack(base_anchors, dim=-1) - - return base_anchors - - def _meshgrid(self, - x: Tensor, - y: Tensor, - row_major: bool = True) -> Tuple[Tensor]: - """Generate mesh grid of x and y. - - Args: - x (torch.Tensor): Grids of x dimension. - y (torch.Tensor): Grids of y dimension. - row_major (bool): Whether to return y grids first. - Defaults to True. - - Returns: - tuple[torch.Tensor]: The mesh grids of x and y. - """ - # use shape instead of len to keep tracing while exporting to onnx - xx = x.repeat(y.shape[0]) - yy = y.view(-1, 1).repeat(1, x.shape[0]).view(-1) - if row_major: - return xx, yy - else: - return yy, xx - - def grid_priors(self, - featmap_sizes: List[Tuple], - dtype: torch.dtype = torch.float32, - device: DeviceType = 'cuda') -> List[Tensor]: - """Generate grid anchors in multiple feature levels. - - Args: - featmap_sizes (list[tuple]): List of feature map sizes in - multiple feature levels. - dtype (:obj:`torch.dtype`): Dtype of priors. - Defaults to torch.float32. - device (str | torch.device): The device where the anchors - will be put on. - - Return: - list[torch.Tensor]: Anchors in multiple feature levels. \ - The sizes of each tensor should be [N, 4], where \ - N = width * height * num_base_anchors, width and height \ - are the sizes of the corresponding feature level, \ - num_base_anchors is the number of anchors for that level. - """ - assert self.num_levels == len(featmap_sizes) - multi_level_anchors = [] - for i in range(self.num_levels): - anchors = self.single_level_grid_priors( - featmap_sizes[i], level_idx=i, dtype=dtype, device=device) - multi_level_anchors.append(anchors) - return multi_level_anchors - - def single_level_grid_priors(self, - featmap_size: Tuple[int, int], - level_idx: int, - dtype: torch.dtype = torch.float32, - device: DeviceType = 'cuda') -> Tensor: - """Generate grid anchors of a single level. - - Note: - This function is usually called by method ``self.grid_priors``. - - Args: - featmap_size (tuple[int, int]): Size of the feature maps. - level_idx (int): The index of corresponding feature map level. - dtype (obj:`torch.dtype`): Date type of points.Defaults to - ``torch.float32``. - device (str | torch.device): The device the tensor will be put on. - Defaults to 'cuda'. - - Returns: - torch.Tensor: Anchors in the overall feature maps. - """ - - base_anchors = self.base_anchors[level_idx].to(device).to(dtype) - feat_h, feat_w = featmap_size - stride_w, stride_h = self.strides[level_idx] - # First create Range with the default dtype, than convert to - # target `dtype` for onnx exporting. - shift_x = torch.arange(0, feat_w, device=device).to(dtype) * stride_w - shift_y = torch.arange(0, feat_h, device=device).to(dtype) * stride_h - - shift_xx, shift_yy = self._meshgrid(shift_x, shift_y) - shifts = torch.stack([shift_xx, shift_yy, shift_xx, shift_yy], dim=-1) - # first feat_w elements correspond to the first row of shifts - # add A anchors (1, A, 4) to K shifts (K, 1, 4) to get - # shifted anchors (K, A, 4), reshape to (K*A, 4) - - all_anchors = base_anchors[None, :, :] + shifts[:, None, :] - all_anchors = all_anchors.view(-1, 4) - # first A rows correspond to A anchors of (0, 0) in feature map, - # then (0, 1), (0, 2), ... - if self.use_box_type: - all_anchors = HorizontalBoxes(all_anchors) - return all_anchors - - def sparse_priors(self, - prior_idxs: Tensor, - featmap_size: Tuple[int, int], - level_idx: int, - dtype: torch.dtype = torch.float32, - device: DeviceType = 'cuda') -> Tensor: - """Generate sparse anchors according to the ``prior_idxs``. - - Args: - prior_idxs (Tensor): The index of corresponding anchors - in the feature map. - featmap_size (tuple[int, int]): feature map size arrange as (h, w). - level_idx (int): The level index of corresponding feature - map. - dtype (obj:`torch.dtype`): Date type of points.Defaults to - ``torch.float32``. - device (str | torch.device): The device where the points is - located. - Returns: - Tensor: Anchor with shape (N, 4), N should be equal to - the length of ``prior_idxs``. - """ - - height, width = featmap_size - num_base_anchors = self.num_base_anchors[level_idx] - base_anchor_id = prior_idxs % num_base_anchors - x = (prior_idxs // - num_base_anchors) % width * self.strides[level_idx][0] - y = (prior_idxs // width // - num_base_anchors) % height * self.strides[level_idx][1] - priors = torch.stack([x, y, x, y], 1).to(dtype).to(device) + \ - self.base_anchors[level_idx][base_anchor_id, :].to(device) - - return priors - - def grid_anchors(self, - featmap_sizes: List[Tuple], - device: DeviceType = 'cuda') -> List[Tensor]: - """Generate grid anchors in multiple feature levels. - - Args: - featmap_sizes (list[tuple]): List of feature map sizes in - multiple feature levels. - device (str | torch.device): Device where the anchors will be - put on. - - Return: - list[torch.Tensor]: Anchors in multiple feature levels. \ - The sizes of each tensor should be [N, 4], where \ - N = width * height * num_base_anchors, width and height \ - are the sizes of the corresponding feature level, \ - num_base_anchors is the number of anchors for that level. - """ - warnings.warn('``grid_anchors`` would be deprecated soon. ' - 'Please use ``grid_priors`` ') - - assert self.num_levels == len(featmap_sizes) - multi_level_anchors = [] - for i in range(self.num_levels): - anchors = self.single_level_grid_anchors( - self.base_anchors[i].to(device), - featmap_sizes[i], - self.strides[i], - device=device) - multi_level_anchors.append(anchors) - return multi_level_anchors - - def single_level_grid_anchors(self, - base_anchors: Tensor, - featmap_size: Tuple[int, int], - stride: Tuple[int, int] = (16, 16), - device: DeviceType = 'cuda') -> Tensor: - """Generate grid anchors of a single level. - - Note: - This function is usually called by method ``self.grid_anchors``. - - Args: - base_anchors (torch.Tensor): The base anchors of a feature grid. - featmap_size (tuple[int]): Size of the feature maps. - stride (tuple[int, int]): Stride of the feature map in order - (w, h). Defaults to (16, 16). - device (str | torch.device): Device the tensor will be put on. - Defaults to 'cuda'. - - Returns: - torch.Tensor: Anchors in the overall feature maps. - """ - - warnings.warn( - '``single_level_grid_anchors`` would be deprecated soon. ' - 'Please use ``single_level_grid_priors`` ') - - # keep featmap_size as Tensor instead of int, so that we - # can convert to ONNX correctly - feat_h, feat_w = featmap_size - shift_x = torch.arange(0, feat_w, device=device) * stride[0] - shift_y = torch.arange(0, feat_h, device=device) * stride[1] - - shift_xx, shift_yy = self._meshgrid(shift_x, shift_y) - shifts = torch.stack([shift_xx, shift_yy, shift_xx, shift_yy], dim=-1) - shifts = shifts.type_as(base_anchors) - # first feat_w elements correspond to the first row of shifts - # add A anchors (1, A, 4) to K shifts (K, 1, 4) to get - # shifted anchors (K, A, 4), reshape to (K*A, 4) - - all_anchors = base_anchors[None, :, :] + shifts[:, None, :] - all_anchors = all_anchors.view(-1, 4) - # first A rows correspond to A anchors of (0, 0) in feature map, - # then (0, 1), (0, 2), ... - return all_anchors - - def valid_flags(self, - featmap_sizes: List[Tuple[int, int]], - pad_shape: Tuple, - device: DeviceType = 'cuda') -> List[Tensor]: - """Generate valid flags of anchors in multiple feature levels. - - Args: - featmap_sizes (list(tuple[int, int])): List of feature map sizes in - multiple feature levels. - pad_shape (tuple): The padded shape of the image. - device (str | torch.device): Device where the anchors will be - put on. - - Return: - list(torch.Tensor): Valid flags of anchors in multiple levels. - """ - assert self.num_levels == len(featmap_sizes) - multi_level_flags = [] - for i in range(self.num_levels): - anchor_stride = self.strides[i] - feat_h, feat_w = featmap_sizes[i] - h, w = pad_shape[:2] - valid_feat_h = min(int(np.ceil(h / anchor_stride[1])), feat_h) - valid_feat_w = min(int(np.ceil(w / anchor_stride[0])), feat_w) - flags = self.single_level_valid_flags((feat_h, feat_w), - (valid_feat_h, valid_feat_w), - self.num_base_anchors[i], - device=device) - multi_level_flags.append(flags) - return multi_level_flags - - def single_level_valid_flags(self, - featmap_size: Tuple[int, int], - valid_size: Tuple[int, int], - num_base_anchors: int, - device: DeviceType = 'cuda') -> Tensor: - """Generate the valid flags of anchor in a single feature map. - - Args: - featmap_size (tuple[int]): The size of feature maps, arrange - as (h, w). - valid_size (tuple[int]): The valid size of the feature maps. - num_base_anchors (int): The number of base anchors. - device (str | torch.device): Device where the flags will be put on. - Defaults to 'cuda'. - - Returns: - torch.Tensor: The valid flags of each anchor in a single level \ - feature map. - """ - feat_h, feat_w = featmap_size - valid_h, valid_w = valid_size - assert valid_h <= feat_h and valid_w <= feat_w - valid_x = torch.zeros(feat_w, dtype=torch.bool, device=device) - valid_y = torch.zeros(feat_h, dtype=torch.bool, device=device) - valid_x[:valid_w] = 1 - valid_y[:valid_h] = 1 - valid_xx, valid_yy = self._meshgrid(valid_x, valid_y) - valid = valid_xx & valid_yy - valid = valid[:, None].expand(valid.size(0), - num_base_anchors).contiguous().view(-1) - return valid - - def __repr__(self) -> str: - """str: a string that describes the module""" - indent_str = ' ' - repr_str = self.__class__.__name__ + '(\n' - repr_str += f'{indent_str}strides={self.strides},\n' - repr_str += f'{indent_str}ratios={self.ratios},\n' - repr_str += f'{indent_str}scales={self.scales},\n' - repr_str += f'{indent_str}base_sizes={self.base_sizes},\n' - repr_str += f'{indent_str}scale_major={self.scale_major},\n' - repr_str += f'{indent_str}octave_base_scale=' - repr_str += f'{self.octave_base_scale},\n' - repr_str += f'{indent_str}scales_per_octave=' - repr_str += f'{self.scales_per_octave},\n' - repr_str += f'{indent_str}num_levels={self.num_levels}\n' - repr_str += f'{indent_str}centers={self.centers},\n' - repr_str += f'{indent_str}center_offset={self.center_offset})' - return repr_str - - -@TASK_UTILS.register_module() -class SSDAnchorGenerator(AnchorGenerator): - """Anchor generator for SSD. - - Args: - strides (list[int] | list[tuple[int, int]]): Strides of anchors - in multiple feature levels. - ratios (list[float]): The list of ratios between the height and width - of anchors in a single level. - min_sizes (list[float]): The list of minimum anchor sizes on each - level. - max_sizes (list[float]): The list of maximum anchor sizes on each - level. - basesize_ratio_range (tuple(float)): Ratio range of anchors. Being - used when not setting min_sizes and max_sizes. - input_size (int): Size of feature map, 300 for SSD300, 512 for - SSD512. Being used when not setting min_sizes and max_sizes. - scale_major (bool): Whether to multiply scales first when generating - base anchors. If true, the anchors in the same row will have the - same scales. It is always set to be False in SSD. - use_box_type (bool): Whether to warp anchors with the box type data - structure. Defaults to False. - """ - - def __init__(self, - strides: Union[List[int], List[Tuple[int, int]]], - ratios: List[float], - min_sizes: Optional[List[float]] = None, - max_sizes: Optional[List[float]] = None, - basesize_ratio_range: Tuple[float] = (0.15, 0.9), - input_size: int = 300, - scale_major: bool = True, - use_box_type: bool = False) -> None: - assert len(strides) == len(ratios) - assert not (min_sizes is None) ^ (max_sizes is None) - self.strides = [_pair(stride) for stride in strides] - self.centers = [(stride[0] / 2., stride[1] / 2.) - for stride in self.strides] - - if min_sizes is None and max_sizes is None: - # use hard code to generate SSD anchors - self.input_size = input_size - assert is_tuple_of(basesize_ratio_range, float) - self.basesize_ratio_range = basesize_ratio_range - # calculate anchor ratios and sizes - min_ratio, max_ratio = basesize_ratio_range - min_ratio = int(min_ratio * 100) - max_ratio = int(max_ratio * 100) - step = int(np.floor(max_ratio - min_ratio) / (self.num_levels - 2)) - min_sizes = [] - max_sizes = [] - for ratio in range(int(min_ratio), int(max_ratio) + 1, step): - min_sizes.append(int(self.input_size * ratio / 100)) - max_sizes.append(int(self.input_size * (ratio + step) / 100)) - if self.input_size == 300: - if basesize_ratio_range[0] == 0.15: # SSD300 COCO - min_sizes.insert(0, int(self.input_size * 7 / 100)) - max_sizes.insert(0, int(self.input_size * 15 / 100)) - elif basesize_ratio_range[0] == 0.2: # SSD300 VOC - min_sizes.insert(0, int(self.input_size * 10 / 100)) - max_sizes.insert(0, int(self.input_size * 20 / 100)) - else: - raise ValueError( - 'basesize_ratio_range[0] should be either 0.15' - 'or 0.2 when input_size is 300, got ' - f'{basesize_ratio_range[0]}.') - elif self.input_size == 512: - if basesize_ratio_range[0] == 0.1: # SSD512 COCO - min_sizes.insert(0, int(self.input_size * 4 / 100)) - max_sizes.insert(0, int(self.input_size * 10 / 100)) - elif basesize_ratio_range[0] == 0.15: # SSD512 VOC - min_sizes.insert(0, int(self.input_size * 7 / 100)) - max_sizes.insert(0, int(self.input_size * 15 / 100)) - else: - raise ValueError( - 'When not setting min_sizes and max_sizes,' - 'basesize_ratio_range[0] should be either 0.1' - 'or 0.15 when input_size is 512, got' - f' {basesize_ratio_range[0]}.') - else: - raise ValueError( - 'Only support 300 or 512 in SSDAnchorGenerator when ' - 'not setting min_sizes and max_sizes, ' - f'got {self.input_size}.') - - assert len(min_sizes) == len(max_sizes) == len(strides) - - anchor_ratios = [] - anchor_scales = [] - for k in range(len(self.strides)): - scales = [1., np.sqrt(max_sizes[k] / min_sizes[k])] - anchor_ratio = [1.] - for r in ratios[k]: - anchor_ratio += [1 / r, r] # 4 or 6 ratio - anchor_ratios.append(torch.Tensor(anchor_ratio)) - anchor_scales.append(torch.Tensor(scales)) - - self.base_sizes = min_sizes - self.scales = anchor_scales - self.ratios = anchor_ratios - self.scale_major = scale_major - self.center_offset = 0 - self.base_anchors = self.gen_base_anchors() - self.use_box_type = use_box_type - - def gen_base_anchors(self) -> List[Tensor]: - """Generate base anchors. - - Returns: - list(torch.Tensor): Base anchors of a feature grid in multiple \ - feature levels. - """ - multi_level_base_anchors = [] - for i, base_size in enumerate(self.base_sizes): - base_anchors = self.gen_single_level_base_anchors( - base_size, - scales=self.scales[i], - ratios=self.ratios[i], - center=self.centers[i]) - indices = list(range(len(self.ratios[i]))) - indices.insert(1, len(indices)) - base_anchors = torch.index_select(base_anchors, 0, - torch.LongTensor(indices)) - multi_level_base_anchors.append(base_anchors) - return multi_level_base_anchors - - def __repr__(self) -> str: - """str: a string that describes the module""" - indent_str = ' ' - repr_str = self.__class__.__name__ + '(\n' - repr_str += f'{indent_str}strides={self.strides},\n' - repr_str += f'{indent_str}scales={self.scales},\n' - repr_str += f'{indent_str}scale_major={self.scale_major},\n' - repr_str += f'{indent_str}input_size={self.input_size},\n' - repr_str += f'{indent_str}scales={self.scales},\n' - repr_str += f'{indent_str}ratios={self.ratios},\n' - repr_str += f'{indent_str}num_levels={self.num_levels},\n' - repr_str += f'{indent_str}base_sizes={self.base_sizes},\n' - repr_str += f'{indent_str}basesize_ratio_range=' - repr_str += f'{self.basesize_ratio_range})' - return repr_str - - -@TASK_UTILS.register_module() -class LegacyAnchorGenerator(AnchorGenerator): - """Legacy anchor generator used in MMDetection V1.x. - - Note: - Difference to the V2.0 anchor generator: - - 1. The center offset of V1.x anchors are set to be 0.5 rather than 0. - 2. The width/height are minused by 1 when calculating the anchors' \ - centers and corners to meet the V1.x coordinate system. - 3. The anchors' corners are quantized. - - Args: - strides (list[int] | list[tuple[int]]): Strides of anchors - in multiple feature levels. - ratios (list[float]): The list of ratios between the height and width - of anchors in a single level. - scales (list[int] | None): Anchor scales for anchors in a single level. - It cannot be set at the same time if `octave_base_scale` and - `scales_per_octave` are set. - base_sizes (list[int]): The basic sizes of anchors in multiple levels. - If None is given, strides will be used to generate base_sizes. - scale_major (bool): Whether to multiply scales first when generating - base anchors. If true, the anchors in the same row will have the - same scales. By default it is True in V2.0 - octave_base_scale (int): The base scale of octave. - scales_per_octave (int): Number of scales for each octave. - `octave_base_scale` and `scales_per_octave` are usually used in - retinanet and the `scales` should be None when they are set. - centers (list[tuple[float, float]] | None): The centers of the anchor - relative to the feature grid center in multiple feature levels. - By default it is set to be None and not used. It a list of float - is given, this list will be used to shift the centers of anchors. - center_offset (float): The offset of center in proportion to anchors' - width and height. By default it is 0.5 in V2.0 but it should be 0.5 - in v1.x models. - use_box_type (bool): Whether to warp anchors with the box type data - structure. Defaults to False. - - Examples: - >>> from mmdet.models.task_modules. - ... prior_generators import LegacyAnchorGenerator - >>> self = LegacyAnchorGenerator( - >>> [16], [1.], [1.], [9], center_offset=0.5) - >>> all_anchors = self.grid_anchors(((2, 2),), device='cpu') - >>> print(all_anchors) - [tensor([[ 0., 0., 8., 8.], - [16., 0., 24., 8.], - [ 0., 16., 8., 24.], - [16., 16., 24., 24.]])] - """ - - def gen_single_level_base_anchors(self, - base_size: Union[int, float], - scales: Tensor, - ratios: Tensor, - center: Optional[Tuple[float]] = None) \ - -> Tensor: - """Generate base anchors of a single level. - - Note: - The width/height of anchors are minused by 1 when calculating \ - the centers and corners to meet the V1.x coordinate system. - - Args: - base_size (int | float): Basic size of an anchor. - scales (torch.Tensor): Scales of the anchor. - ratios (torch.Tensor): The ratio between the height. - and width of anchors in a single level. - center (tuple[float], optional): The center of the base anchor - related to a single feature grid. Defaults to None. - - Returns: - torch.Tensor: Anchors in a single-level feature map. - """ - w = base_size - h = base_size - if center is None: - x_center = self.center_offset * (w - 1) - y_center = self.center_offset * (h - 1) - else: - x_center, y_center = center - - h_ratios = torch.sqrt(ratios) - w_ratios = 1 / h_ratios - if self.scale_major: - ws = (w * w_ratios[:, None] * scales[None, :]).view(-1) - hs = (h * h_ratios[:, None] * scales[None, :]).view(-1) - else: - ws = (w * scales[:, None] * w_ratios[None, :]).view(-1) - hs = (h * scales[:, None] * h_ratios[None, :]).view(-1) - - # use float anchor and the anchor's center is aligned with the - # pixel center - base_anchors = [ - x_center - 0.5 * (ws - 1), y_center - 0.5 * (hs - 1), - x_center + 0.5 * (ws - 1), y_center + 0.5 * (hs - 1) - ] - base_anchors = torch.stack(base_anchors, dim=-1).round() - - return base_anchors - - -@TASK_UTILS.register_module() -class LegacySSDAnchorGenerator(SSDAnchorGenerator, LegacyAnchorGenerator): - """Legacy anchor generator used in MMDetection V1.x. - - The difference between `LegacySSDAnchorGenerator` and `SSDAnchorGenerator` - can be found in `LegacyAnchorGenerator`. - """ - - def __init__(self, - strides: Union[List[int], List[Tuple[int, int]]], - ratios: List[float], - basesize_ratio_range: Tuple[float], - input_size: int = 300, - scale_major: bool = True, - use_box_type: bool = False) -> None: - super(LegacySSDAnchorGenerator, self).__init__( - strides=strides, - ratios=ratios, - basesize_ratio_range=basesize_ratio_range, - input_size=input_size, - scale_major=scale_major, - use_box_type=use_box_type) - self.centers = [((stride - 1) / 2., (stride - 1) / 2.) - for stride in strides] - self.base_anchors = self.gen_base_anchors() - - -@TASK_UTILS.register_module() -class YOLOAnchorGenerator(AnchorGenerator): - """Anchor generator for YOLO. - - Args: - strides (list[int] | list[tuple[int, int]]): Strides of anchors - in multiple feature levels. - base_sizes (list[list[tuple[int, int]]]): The basic sizes - of anchors in multiple levels. - """ - - def __init__(self, - strides: Union[List[int], List[Tuple[int, int]]], - base_sizes: List[List[Tuple[int, int]]], - use_box_type: bool = False) -> None: - self.strides = [_pair(stride) for stride in strides] - self.centers = [(stride[0] / 2., stride[1] / 2.) - for stride in self.strides] - self.base_sizes = [] - num_anchor_per_level = len(base_sizes[0]) - for base_sizes_per_level in base_sizes: - assert num_anchor_per_level == len(base_sizes_per_level) - self.base_sizes.append( - [_pair(base_size) for base_size in base_sizes_per_level]) - self.base_anchors = self.gen_base_anchors() - self.use_box_type = use_box_type - - @property - def num_levels(self) -> int: - """int: number of feature levels that the generator will be applied""" - return len(self.base_sizes) - - def gen_base_anchors(self) -> List[Tensor]: - """Generate base anchors. - - Returns: - list(torch.Tensor): Base anchors of a feature grid in multiple \ - feature levels. - """ - multi_level_base_anchors = [] - for i, base_sizes_per_level in enumerate(self.base_sizes): - center = None - if self.centers is not None: - center = self.centers[i] - multi_level_base_anchors.append( - self.gen_single_level_base_anchors(base_sizes_per_level, - center)) - return multi_level_base_anchors - - def gen_single_level_base_anchors(self, - base_sizes_per_level: List[Tuple[int]], - center: Optional[Tuple[float]] = None) \ - -> Tensor: - """Generate base anchors of a single level. - - Args: - base_sizes_per_level (list[tuple[int]]): Basic sizes of - anchors. - center (tuple[float], optional): The center of the base anchor - related to a single feature grid. Defaults to None. - - Returns: - torch.Tensor: Anchors in a single-level feature maps. - """ - x_center, y_center = center - base_anchors = [] - for base_size in base_sizes_per_level: - w, h = base_size - - # use float anchor and the anchor's center is aligned with the - # pixel center - base_anchor = torch.Tensor([ - x_center - 0.5 * w, y_center - 0.5 * h, x_center + 0.5 * w, - y_center + 0.5 * h - ]) - base_anchors.append(base_anchor) - base_anchors = torch.stack(base_anchors, dim=0) - - return base_anchors diff --git a/spaces/KyanChen/RSPrompter/mmpretrain/datasets/coco_retrieval.py b/spaces/KyanChen/RSPrompter/mmpretrain/datasets/coco_retrieval.py deleted file mode 100644 index 60d1586ad8672a4b57fcdc62740b3e08c3e2e20e..0000000000000000000000000000000000000000 --- a/spaces/KyanChen/RSPrompter/mmpretrain/datasets/coco_retrieval.py +++ /dev/null @@ -1,77 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import json -from collections import OrderedDict -from typing import List - -from mmengine import get_file_backend - -from mmpretrain.registry import DATASETS -from .base_dataset import BaseDataset - - -@DATASETS.register_module() -class COCORetrieval(BaseDataset): - """COCO Retrieval dataset. - - Args: - ann_file (str): Annotation file path. - test_mode (bool): Whether dataset is used for evaluation. This will - decide the annotation format in data list annotations. - Defaults to False. - data_root (str): The root directory for ``data_prefix`` and - ``ann_file``. Defaults to ''. - data_prefix (str | dict): Prefix for training data. Defaults to ''. - pipeline (Sequence): Processing pipeline. Defaults to an empty tuple. - **kwargs: Other keyword arguments in :class:`BaseDataset`. - """ - - def load_data_list(self) -> List[dict]: - """Load data list.""" - # get file backend - img_prefix = self.data_prefix['img_path'] - file_backend = get_file_backend(img_prefix) - - anno_info = json.load(open(self.ann_file, 'r')) - # mapping img_id to img filename - img_dict = OrderedDict() - for idx, img in enumerate(anno_info['images']): - if img['id'] not in img_dict: - img_rel_path = img['coco_url'].rsplit('/', 2)[-2:] - img_path = file_backend.join_path(img_prefix, *img_rel_path) - - # create new idx for image - img_dict[img['id']] = dict( - ori_id=img['id'], - image_id=idx, # will be used for evaluation - img_path=img_path, - text=[], - gt_text_id=[], - gt_image_id=[], - ) - - train_list = [] - for idx, anno in enumerate(anno_info['annotations']): - anno['text'] = anno.pop('caption') - anno['ori_id'] = anno.pop('id') - anno['text_id'] = idx # will be used for evaluation - # 1. prepare train data list item - train_data = anno.copy() - train_image = img_dict[train_data['image_id']] - train_data['img_path'] = train_image['img_path'] - train_data['image_ori_id'] = train_image['ori_id'] - train_data['image_id'] = train_image['image_id'] - train_data['is_matched'] = True - train_list.append(train_data) - # 2. prepare eval data list item based on img dict - img_dict[anno['image_id']]['gt_text_id'].append(anno['text_id']) - img_dict[anno['image_id']]['text'].append(anno['text']) - img_dict[anno['image_id']]['gt_image_id'].append( - train_image['image_id']) - - self.img_size = len(img_dict) - self.text_size = len(anno_info['annotations']) - - # return needed format data list - if self.test_mode: - return list(img_dict.values()) - return train_list diff --git a/spaces/Kyo-Kai/Fsg-pp/Fsg_pp.py b/spaces/Kyo-Kai/Fsg-pp/Fsg_pp.py deleted file mode 100644 index 5c421fc29f6977d151cbd0caf5903a9c7a4e2801..0000000000000000000000000000000000000000 --- a/spaces/Kyo-Kai/Fsg-pp/Fsg_pp.py +++ /dev/null @@ -1,194 +0,0 @@ -import numpy as np -import gradio as gr -import os -import commands.exec_path as exec_path -import commands.driver_instance as driver_instance -import inspect - -from commands.universal import searchQuery -from ai.autocrop import autoCropImages -from sites.pixiv import getOrderedPixivImages -from sites.danbooru import getOrderedDanbooruImages -from sites.zerochan import getOrderedZerochanImages - - -def get_images(*args): - global_imgz = args[-1] - args = args[:-1] - driver = driver_instance.create_driver(exec_path.executable_path) - - global counter - if counter>=20: - os.system(f"rm -r ./Images") - os.makedirs("./Images") - counter = 0 - else: - counter += 1 - - if len(args) == len(inspect.signature(getOrderedPixivImages).parameters)-2: - print(global_imgz) - global_imgz = getOrderedPixivImages(driver, exec_path, *args) - return {imgz_global: global_imgz, pix_gallery: global_imgz} - - elif len(args) == len(inspect.signature(getOrderedDanbooruImages).parameters)-2: - global_imgz = getOrderedDanbooruImages(driver, exec_path, *args) - print(global_imgz) - return {imgz_global:global_imgz, danb_gallery:global_imgz} - - elif len(args) == len(inspect.signature(getOrderedZerochanImages).parameters)-2: - global_imgz = getOrderedZerochanImages(driver, exec_path, *args) - print(global_imgz) - return {imgz_global: global_imgz, zero_gallery: global_imgz} - -imageIndex = 0 -imgz_global = [] -counter = 0 - -def get_select_index(evt: gr.SelectData): - imageIndex=evt.index - return evt.index - -def send_number(indx,global_imgz): - imageIndex = indx - print(global_imgz[int(imageIndex)]) - return {imgz_global:global_imgz, image:global_imgz[int(imageIndex)], tabs:gr.Tabs.update(selected=0)} - -def cropImages(image,crop_scale_factor): - return autoCropImages(image,crop_scale_factor) - -with gr.Blocks(css='style.css') as demo: - imgz_global = gr.State([]) - - with gr.Tabs(selected=1) as tabs: - selected = gr.Number(label="Gallery Number",visible=False) - folder_input = gr.Textbox(value="./Images/", label="Enter Folder Path", visible=False) - - # Automatic Crop Tab - with gr.TabItem("Automatic Crop", id=0): - with gr.Row(): - with gr.Column(): - image = gr.Image(type="filepath") - crop_scale_factor = gr.Slider(0.5,3, value=1.2,step=0.1, label="Crop Scale Factor") - with gr.Column(): - outputImages = gr.Gallery(label="Cropped Image Preview") - outputImages.style(preview=True,object_fit="cover",container=True) - with gr.Row(): - green_btn = gr.Button(label="Cropping Button",value="Crop Image").style(size='sm') - green_btn.click(cropImages, [image,crop_scale_factor],outputs=outputImages) - with gr.Row(): - gr.HTML('''
        -

        You may experience lag due to the limitations of a free huggingface space

        -

        For the full experience, please check out the GitHub page:

        -

        Fsg-Pp - Finally Some Good Profile Pictures

        -
        ''') - - # Pixiv Tab - with gr.TabItem("Pixiv", id=1): - with gr.Row(): - with gr.Column(): - searchQuery = gr.Textbox(label="Search Query", placeholder="Suggested to use the char's full name") - with gr.Row(): - num_pics = gr.Slider(1,6, value=2, step=int, label="Number of Pictures") - with gr.Row(): - num_pages = gr.Slider(1,5, value=1, step=int, label="Number of Pages") - with gr.Row(): - with gr.Column(): - with gr.Row(): - searchTypes = gr.CheckboxGroup(["Premium Search","Freemium"], value=["Freemium"], label="Search Type", type="index", elem_id="pixiv") - with gr.Row(): - viewRestriction = gr.CheckboxGroup(["PG","R-18"],label="Viewing Restriction (Default: Account Settings)",type="index",elem_id="viewing-restrictions") - with gr.Row(elem_id='button-row'): - green_btn = gr.Button(label="Search", value="Search") - with gr.Row(): - imageControl = gr.CheckboxGroup(["Full Res", "Continue Search","Search by Oldest", "AI Classifier"], value=["Full Res"], label="Image Control", type="index",elem_id="pixiv-filters") - with gr.Row(): - with gr.Row(): - n_likes = gr.Number(value=0, label="Filter by Likes") - with gr.Row(): - n_bookmarks = gr.Number(value=0, label="Filter by Bookmarks") - with gr.Row(): - n_views = gr.Number(value=0, label="Filter by Views") - with gr.Row(): - start_date = gr.Textbox(label="Start date", placeholder=("2016-01-22 YEAR-MONTH-DAY")) - with gr.Row(): - end_date = gr.Textbox(label="End date", placeholder=("2022-09-22 YEAR-MONTH-DAY")) - with gr.Row(): - user_name = gr.Textbox(label="Email", type="email", placeholder=("Account email for pixiv login")) - with gr.Row(): - pass_word = gr.Textbox(label="Password", type="password",placeholder=("Account password for pixiv login")) - - with gr.Column(): - pix_gallery=gr.Gallery(label="Image Preview") - pix_gallery.style(preview=True,object_fit="cover",columns=5,container=True) - with gr.Row(): - blue_btn = gr.Button(label="Auto Crop",value="Crop Selected Image",variant='secondary') - blue_btn.click(fn=send_number,inputs=[selected,imgz_global],outputs=[imgz_global, image, tabs]) - - pix_gallery.select(get_select_index, None, selected) - green_btn.click(get_images, [searchQuery, num_pics, num_pages,searchTypes,viewRestriction,imageControl,n_likes, n_bookmarks, n_views, - start_date,end_date, user_name, pass_word, imgz_global], outputs=[imgz_global,pix_gallery]) - - - - # Danbooru Tab - with gr.TabItem("Danbooru", id=2): - with gr.Row(): - with gr.Column(): - searchQuery = gr.Textbox(label="Search Query", placeholder="Suggested to use the char's full name") - with gr.Row(): - num_pics = gr.Slider(1,20, value=2, step=int, label="Number of Pictures") - with gr.Row(): - num_pages = gr.Slider(1,5, value=1, step=int, label="Number of Pages") - with gr.Row(): - filters = gr.CheckboxGroup(["Score", "Exact Match", "More PG", "Sensitive", "Strictly PG", "AI Classifier"], label="Filters", type="index", elem_id="filtering") - with gr.Row(): - imageControl = gr.CheckboxGroup(["Continue Search"], label="Image Control", type="index", elem_id="imageControl") - with gr.Row(): - bl_tags = gr.Textbox(label="Tags to Filter", placeholder=("Add stuff like typical undergarments etc to ensure complete pg friendliness"),lines=2) - with gr.Row(): - inc_tags = gr.Textbox(label="Tags to Include", placeholder=("1girl, 1boy for profile pictures")) - green_btn = gr.Button(label="Search", value="Search") - - with gr.Column(): - danb_gallery=gr.Gallery(label="Image Preview") - danb_gallery.style(preview=True,object_fit="cover",columns=5,container=True) - with gr.Row(): - blue_btn = gr.Button(label="Auto Crop",value="Crop Selected Image",variant='secondary') - blue_btn.click(fn=send_number,inputs=[selected,imgz_global],outputs=[imgz_global, image, tabs]) - - danb_gallery.select(get_select_index, None, selected) - green_btn.click(get_images, [searchQuery, num_pics, num_pages, filters, bl_tags, inc_tags,imageControl,imgz_global], outputs=[imgz_global,danb_gallery]) - - - # Zerochan Tab - with gr.TabItem("Zerochan", id=3): - with gr.Row(): - with gr.Column(): - searchQuery = gr.Textbox(label="Search Query", placeholder="Suggested to use the char's full name") - with gr.Row(): - num_pics = gr.Slider(1,30, value=2, step=int, label="Number of Pictures") - with gr.Row(): - num_pages = gr.Slider(1,5, value=1, step=int, label="Number of Pages") - with gr.Row(): - with gr.Row(): - n_likes = gr.Number(value=0, label="Filter by Likes") - with gr.Row(): - filters = gr.CheckboxGroup(["AI Classifier"], label="Filters", type="index",elem_id="zeroAIhover") - with gr.Column(): - imageControl = gr.CheckboxGroup(["Continue Search"], label="Image Control", type="index", elem_id="imageControl") - green_btn = gr.Button(label="Search", value="Search") - - with gr.Column(): - zero_gallery=gr.Gallery(label="Image Preview") - zero_gallery.style(preview=True,object_fit="cover",columns=5,container=True) - - with gr.Row(): - blue_btn = gr.Button(label="Auto Crop",value="Crop Selected Image",variant='secondary') - blue_btn.click(fn=send_number,inputs=[selected,imgz_global],outputs=[imgz_global, image, tabs]) - - zero_gallery.select(get_select_index, None, selected) - green_btn.click(get_images, [searchQuery, num_pics, num_pages, n_likes, filters,imageControl,imgz_global], outputs=[imgz_global,zero_gallery]) - - - -demo.launch(server_name="0.0.0.0", server_port=7860) \ No newline at end of file diff --git a/spaces/LanguageBind/LanguageBind/scripts/thermal_language/train.sh b/spaces/LanguageBind/LanguageBind/scripts/thermal_language/train.sh deleted file mode 100644 index 7d6efdc34528bc086bb8355d4c5c830c2d80cca2..0000000000000000000000000000000000000000 --- a/spaces/LanguageBind/LanguageBind/scripts/thermal_language/train.sh +++ /dev/null @@ -1,24 +0,0 @@ - - -CACHE_DIR="path/to/pretrained/weight" -TRAIN_DATA="path/to/data" -# this script is for 1024 total batch_size (n(8) GPUs * batch_size(128) * accum_freq(1)) -cd /path/to/LanguageBind -TORCH_DISTRIBUTED_DEBUG=DETAIL HF_DATASETS_OFFLINE=1 TRANSFORMERS_OFFLINE=1 torchrun --nnodes=$HOST_NUM --node_rank=$INDEX --nproc_per_node $HOST_GPU_NUM --master_addr $CHIEF_IP \ - -m main \ - --train-data ${TRAIN_DATA} \ - --train-num-samples 3020000 \ - --clip-type "tl" \ - --do_train \ - --lock-text --lock-image --text-type "polish_mplug" \ - --init-temp 0.07 --learn-temp \ - --model "ViT-L-14" --cache-dir ${CACHE_DIR} \ - --convert_to_lora --lora_r 2 \ - --lr 1e-4 --coef-lr 1e-3 \ - --beta1 0.9 --beta2 0.98 --wd 0.2 --eps 1e-6 \ - --num-frames 1 --force-patch-dropout 0.5 \ - --epochs 1 --batch-size 128 --accum-freq 1 --warmup 200 \ - --precision "amp" --workers 10 --video-decode-backend "imgs" \ - --save-frequency 1 --log-every-n-steps 20 --report-to "tensorboard" --resume "latest" \ - --do_eval \ - --val_t_cls_data "LLVIP" "FLIRV1" "FLIRV2" "LSOTB" \ No newline at end of file diff --git a/spaces/Lianjd/stock_dashboard/backtrader/feeds/sierrachart.py b/spaces/Lianjd/stock_dashboard/backtrader/feeds/sierrachart.py deleted file mode 100644 index 1c6da5974ee7e81b53cd292de2804258ee3cb195..0000000000000000000000000000000000000000 --- a/spaces/Lianjd/stock_dashboard/backtrader/feeds/sierrachart.py +++ /dev/null @@ -1,39 +0,0 @@ -#!/usr/bin/env python -# -*- coding: utf-8; py-indent-offset:4 -*- -############################################################################### -# -# Copyright (C) 2015-2020 Daniel Rodriguez -# -# This program is free software: you can redistribute it and/or modify -# it under the terms of the GNU General Public License as published by -# the Free Software Foundation, either version 3 of the License, or -# (at your option) any later version. -# -# This program is distributed in the hope that it will be useful, -# but WITHOUT ANY WARRANTY; without even the implied warranty of -# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the -# GNU General Public License for more details. -# -# You should have received a copy of the GNU General Public License -# along with this program. If not, see . -# -############################################################################### -from __future__ import (absolute_import, division, print_function, - unicode_literals) - - -from . import GenericCSVData - - -class SierraChartCSVData(GenericCSVData): - ''' - Parses a `SierraChart `_ CSV exported file. - - Specific parameters (or specific meaning): - - - ``dataname``: The filename to parse or a file-like object - - - Uses GenericCSVData and simply modifies the dateformat (dtformat) to - ''' - - params = (('dtformat', '%Y/%m/%d'),) diff --git a/spaces/LuxOAI/ChatGpt-Web/README.md b/spaces/LuxOAI/ChatGpt-Web/README.md deleted file mode 100644 index 1d43191a0b2d9c30a9361af14b16741f4b36fbe8..0000000000000000000000000000000000000000 --- a/spaces/LuxOAI/ChatGpt-Web/README.md +++ /dev/null @@ -1,273 +0,0 @@ ---- -title: ChatGpt-Web -sdk: docker -emoji: 🚀 -colorFrom: red -colorTo: green -pinned: false -app_port: 3000 -duplicated_from: fengmuxi/ChatGpt-Web ---- -
        -icon - -

        ChatGPT Next Web

        - -English / [简体中文](./README_CN.md) - -One-Click to deploy well-designed ChatGPT web UI on Vercel. - -一键免费部署你的私人 ChatGPT 网页应用。 - -[Demo](https://chatgpt.nextweb.fun/) / [Issues](https://github.com/Yidadaa/ChatGPT-Next-Web/issues) / [Join Discord](https://discord.gg/zrhvHCr79N) / [Buy Me a Coffee](https://www.buymeacoffee.com/yidadaa) - -[演示](https://chatgpt.nextweb.fun/) / [反馈](https://github.com/Yidadaa/ChatGPT-Next-Web/issues) / [QQ 群](https://user-images.githubusercontent.com/16968934/234462588-e8eff256-f5ca-46ef-8f5f-d7db6d28735a.jpg) / [打赏开发者](https://user-images.githubusercontent.com/16968934/227772541-5bcd52d8-61b7-488c-a203-0330d8006e2b.jpg) - -[![Deploy with Vercel](https://vercel.com/button)](https://vercel.com/new/clone?repository-url=https%3A%2F%2Fgithub.com%2FYidadaa%2FChatGPT-Next-Web&env=OPENAI_API_KEY&env=CODE&project-name=chatgpt-next-web&repository-name=ChatGPT-Next-Web) - -[![Open in Gitpod](https://gitpod.io/button/open-in-gitpod.svg)](https://gitpod.io/#https://github.com/Yidadaa/ChatGPT-Next-Web) - -![cover](./docs/images/cover.png) - -
        - -## Features - -- **Deploy for free with one-click** on Vercel in under 1 minute -- Privacy first, all data stored locally in the browser -- Responsive design, dark mode and PWA -- Fast first screen loading speed (~100kb), support streaming response -- New in v2: create, share and debug your chat tools with prompt templates (mask) -- Awesome prompts powered by [awesome-chatgpt-prompts-zh](https://github.com/PlexPt/awesome-chatgpt-prompts-zh) and [awesome-chatgpt-prompts](https://github.com/f/awesome-chatgpt-prompts) -- Automatically compresses chat history to support long conversations while also saving your tokens -- One-click export all chat history with full Markdown support -- I18n supported - -## Roadmap - -- [x] System Prompt: pin a user defined prompt as system prompt [#138](https://github.com/Yidadaa/ChatGPT-Next-Web/issues/138) -- [x] User Prompt: user can edit and save custom prompts to prompt list -- [x] Prompt Template: create a new chat with pre-defined in-context prompts [#993](https://github.com/Yidadaa/ChatGPT-Next-Web/issues/993) -- [ ] Share as image, share to ShareGPT -- [ ] Desktop App with tauri -- [ ] Self-host Model: support llama, alpaca, ChatGLM, BELLE etc. -- [ ] Plugins: support network search, calculator, any other apis etc. [#165](https://github.com/Yidadaa/ChatGPT-Next-Web/issues/165) - -### Not in Plan - -- User login, accounts, cloud sync -- UI text customize - -## What's New - -- 🚀 v2.0 is released, now you can create prompt templates, turn your ideas into reality! Read this: [ChatGPT Prompt Engineering Tips: Zero, One and Few Shot Prompting](https://www.allabtai.com/prompt-engineering-tips-zero-one-and-few-shot-prompting/). - -## 主要功能 - -- 在 1 分钟内使用 Vercel **免费一键部署** -- 精心设计的 UI,响应式设计,支持深色模式,支持 PWA -- 极快的首屏加载速度(~100kb),支持流式响应 -- 隐私安全,所有数据保存在用户浏览器本地 -- 预制角色功能(面具),方便地创建、分享和调试你的个性化对话 -- 海量的内置 prompt 列表,来自[中文](https://github.com/PlexPt/awesome-chatgpt-prompts-zh)和[英文](https://github.com/f/awesome-chatgpt-prompts) -- 自动压缩上下文聊天记录,在节省 Token 的同时支持超长对话 -- 一键导出聊天记录,完整的 Markdown 支持 -- 拥有自己的域名?好上加好,绑定后即可在任何地方**无障碍**快速访问 - -## 开发计划 - -- [x] 为每个对话设置系统 Prompt [#138](https://github.com/Yidadaa/ChatGPT-Next-Web/issues/138) -- [x] 允许用户自行编辑内置 Prompt 列表 -- [x] 预制角色:使用预制角色快速定制新对话 [#993](https://github.com/Yidadaa/ChatGPT-Next-Web/issues/993) -- [ ] 分享为图片,分享到 ShareGPT -- [ ] 使用 tauri 打包桌面应用 -- [ ] 支持自部署的大语言模型 -- [ ] 插件机制,支持联网搜索、计算器、调用其他平台 api [#165](https://github.com/Yidadaa/ChatGPT-Next-Web/issues/165) - -### 不会开发的功能 - -- 界面文字自定义 -- 用户登录、账号管理、消息云同步 - -## 最新动态 - -- 🚀 v2.0 已经发布,现在你可以使用面具功能快速创建预制对话了! 了解更多: [ChatGPT 提示词高阶技能:零次、一次和少样本提示](https://github.com/Yidadaa/ChatGPT-Next-Web/issues/138)。 - -## Get Started - -> [简体中文 > 如何开始使用](./README_CN.md#开始使用) - -1. Get [OpenAI API Key](https://platform.openai.com/account/api-keys); -2. Click - [![Deploy with Vercel](https://vercel.com/button)](https://vercel.com/new/clone?repository-url=https%3A%2F%2Fgithub.com%2FYidadaa%2FChatGPT-Next-Web&env=OPENAI_API_KEY&env=CODE&project-name=chatgpt-next-web&repository-name=ChatGPT-Next-Web), remember that `CODE` is your page password; -3. Enjoy :) - -## FAQ - -[简体中文 > 常见问题](./docs/faq-cn.md) - -[English > FAQ](./docs/faq-en.md) - -## Keep Updated - -> [简体中文 > 如何保持代码更新](./README_CN.md#保持更新) - -If you have deployed your own project with just one click following the steps above, you may encounter the issue of "Updates Available" constantly showing up. This is because Vercel will create a new project for you by default instead of forking this project, resulting in the inability to detect updates correctly. - -We recommend that you follow the steps below to re-deploy: - -- Delete the original repository; -- Use the fork button in the upper right corner of the page to fork this project; -- Choose and deploy in Vercel again, [please see the detailed tutorial](./docs/vercel-cn.md). - -### Enable Automatic Updates - -> If you encounter a failure of Upstream Sync execution, please manually sync fork once. - -After forking the project, due to the limitations imposed by GitHub, you need to manually enable Workflows and Upstream Sync Action on the Actions page of the forked project. Once enabled, automatic updates will be scheduled every hour: - -![Automatic Updates](./docs/images/enable-actions.jpg) - -![Enable Automatic Updates](./docs/images/enable-actions-sync.jpg) - -### Manually Updating Code - -If you want to update instantly, you can check out the [GitHub documentation](https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/working-with-forks/syncing-a-fork) to learn how to synchronize a forked project with upstream code. - -You can star or watch this project or follow author to get release notifictions in time. - -## Access Password - -> [简体中文 > 如何增加访问密码](./README_CN.md#配置页面访问密码) - -This project provides limited access control. Please add an environment variable named `CODE` on the vercel environment variables page. The value should be passwords separated by comma like this: - -``` -code1,code2,code3 -``` - -After adding or modifying this environment variable, please redeploy the project for the changes to take effect. - -## Environment Variables - -> [简体中文 > 如何配置 api key、访问密码、接口代理](./README_CN.md#环境变量) - -### `OPENAI_API_KEY` (required) - -Your openai api key. - -### `CODE` (optional) - -Access passsword, separated by comma. - -### `BASE_URL` (optional) - -> Default: `https://api.openai.com` - -> Examples: `http://your-openai-proxy.com` - -Override openai api request base url. - -### `OPENAI_ORG_ID` (optional) - -Specify OpenAI organization ID. - -### `HIDE_USER_API_KEY` (optional) - -> Default: Empty - -If you do not want users to input their own API key, set this environment variable to 1. - -## Development - -> [简体中文 > 如何进行二次开发](./README_CN.md#开发) - -[![Open in Gitpod](https://gitpod.io/button/open-in-gitpod.svg)](https://gitpod.io/#https://github.com/Yidadaa/ChatGPT-Next-Web) - -Before starting development, you must create a new `.env.local` file at project root, and place your api key into it: - -``` -OPENAI_API_KEY= -``` - -### Local Development - -```shell -# 1. install nodejs and yarn first -# 2. config local env vars in `.env.local` -# 3. run -yarn install -yarn dev -``` - -## Deployment - -> [简体中文 > 如何部署到私人服务器](./README_CN.md#部署) - -### Docker (Recommended) - -```shell -docker pull yidadaa/chatgpt-next-web - -docker run -d -p 3000:3000 \ - -e OPENAI_API_KEY="sk-xxxx" \ - -e CODE="your-password" \ - yidadaa/chatgpt-next-web -``` - -You can start service behind a proxy: - -```shell -docker run -d -p 3000:3000 \ - -e OPENAI_API_KEY="sk-xxxx" \ - -e CODE="your-password" \ - -e PROXY_URL="http://localhost:7890" \ - yidadaa/chatgpt-next-web -``` - -### Shell - -```shell -bash <(curl -s https://raw.githubusercontent.com/Yidadaa/ChatGPT-Next-Web/main/scripts/setup.sh) -``` - -## Screenshots - -![Settings](./docs/images/settings.png) - -![More](./docs/images/more.png) - -## Donation - -[Buy Me a Coffee](https://www.buymeacoffee.com/yidadaa) - -## Special Thanks - -### Sponsor - -> 仅列出捐赠金额 >= 100RMB 的用户。 - -[@mushan0x0](https://github.com/mushan0x0) -[@ClarenceDan](https://github.com/ClarenceDan) -[@zhangjia](https://github.com/zhangjia) -[@hoochanlon](https://github.com/hoochanlon) -[@relativequantum](https://github.com/relativequantum) -[@desenmeng](https://github.com/desenmeng) -[@webees](https://github.com/webees) -[@chazzhou](https://github.com/chazzhou) -[@hauy](https://github.com/hauy) -[@Corwin006](https://github.com/Corwin006) -[@yankunsong](https://github.com/yankunsong) -[@ypwhs](https://github.com/ypwhs) -[@fxxxchao](https://github.com/fxxxchao) -[@hotic](https://github.com/hotic) -[@WingCH](https://github.com/WingCH) -[@jtung4](https://github.com/jtung4) - -### Contributor - -[Contributors](https://github.com/Yidadaa/ChatGPT-Next-Web/graphs/contributors) - -## LICENSE - -[Anti 996 License](https://github.com/kattgu7/Anti-996-License/blob/master/LICENSE_CN_EN) \ No newline at end of file diff --git a/spaces/Mahiruoshi/Lovelive_Nijigasaki_VITS/transforms.py b/spaces/Mahiruoshi/Lovelive_Nijigasaki_VITS/transforms.py deleted file mode 100644 index 4793d67ca5a5630e0ffe0f9fb29445c949e64dae..0000000000000000000000000000000000000000 --- a/spaces/Mahiruoshi/Lovelive_Nijigasaki_VITS/transforms.py +++ /dev/null @@ -1,193 +0,0 @@ -import torch -from torch.nn import functional as F - -import numpy as np - - -DEFAULT_MIN_BIN_WIDTH = 1e-3 -DEFAULT_MIN_BIN_HEIGHT = 1e-3 -DEFAULT_MIN_DERIVATIVE = 1e-3 - - -def piecewise_rational_quadratic_transform(inputs, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=False, - tails=None, - tail_bound=1., - min_bin_width=DEFAULT_MIN_BIN_WIDTH, - min_bin_height=DEFAULT_MIN_BIN_HEIGHT, - min_derivative=DEFAULT_MIN_DERIVATIVE): - - if tails is None: - spline_fn = rational_quadratic_spline - spline_kwargs = {} - else: - spline_fn = unconstrained_rational_quadratic_spline - spline_kwargs = { - 'tails': tails, - 'tail_bound': tail_bound - } - - outputs, logabsdet = spline_fn( - inputs=inputs, - unnormalized_widths=unnormalized_widths, - unnormalized_heights=unnormalized_heights, - unnormalized_derivatives=unnormalized_derivatives, - inverse=inverse, - min_bin_width=min_bin_width, - min_bin_height=min_bin_height, - min_derivative=min_derivative, - **spline_kwargs - ) - return outputs, logabsdet - - -def searchsorted(bin_locations, inputs, eps=1e-6): - bin_locations[..., -1] += eps - return torch.sum( - inputs[..., None] >= bin_locations, - dim=-1 - ) - 1 - - -def unconstrained_rational_quadratic_spline(inputs, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=False, - tails='linear', - tail_bound=1., - min_bin_width=DEFAULT_MIN_BIN_WIDTH, - min_bin_height=DEFAULT_MIN_BIN_HEIGHT, - min_derivative=DEFAULT_MIN_DERIVATIVE): - inside_interval_mask = (inputs >= -tail_bound) & (inputs <= tail_bound) - outside_interval_mask = ~inside_interval_mask - - outputs = torch.zeros_like(inputs) - logabsdet = torch.zeros_like(inputs) - - if tails == 'linear': - unnormalized_derivatives = F.pad(unnormalized_derivatives, pad=(1, 1)) - constant = np.log(np.exp(1 - min_derivative) - 1) - unnormalized_derivatives[..., 0] = constant - unnormalized_derivatives[..., -1] = constant - - outputs[outside_interval_mask] = inputs[outside_interval_mask] - logabsdet[outside_interval_mask] = 0 - else: - raise RuntimeError('{} tails are not implemented.'.format(tails)) - - outputs[inside_interval_mask], logabsdet[inside_interval_mask] = rational_quadratic_spline( - inputs=inputs[inside_interval_mask], - unnormalized_widths=unnormalized_widths[inside_interval_mask, :], - unnormalized_heights=unnormalized_heights[inside_interval_mask, :], - unnormalized_derivatives=unnormalized_derivatives[inside_interval_mask, :], - inverse=inverse, - left=-tail_bound, right=tail_bound, bottom=-tail_bound, top=tail_bound, - min_bin_width=min_bin_width, - min_bin_height=min_bin_height, - min_derivative=min_derivative - ) - - return outputs, logabsdet - -def rational_quadratic_spline(inputs, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=False, - left=0., right=1., bottom=0., top=1., - min_bin_width=DEFAULT_MIN_BIN_WIDTH, - min_bin_height=DEFAULT_MIN_BIN_HEIGHT, - min_derivative=DEFAULT_MIN_DERIVATIVE): - if torch.min(inputs) < left or torch.max(inputs) > right: - raise ValueError('Input to a transform is not within its domain') - - num_bins = unnormalized_widths.shape[-1] - - if min_bin_width * num_bins > 1.0: - raise ValueError('Minimal bin width too large for the number of bins') - if min_bin_height * num_bins > 1.0: - raise ValueError('Minimal bin height too large for the number of bins') - - widths = F.softmax(unnormalized_widths, dim=-1) - widths = min_bin_width + (1 - min_bin_width * num_bins) * widths - cumwidths = torch.cumsum(widths, dim=-1) - cumwidths = F.pad(cumwidths, pad=(1, 0), mode='constant', value=0.0) - cumwidths = (right - left) * cumwidths + left - cumwidths[..., 0] = left - cumwidths[..., -1] = right - widths = cumwidths[..., 1:] - cumwidths[..., :-1] - - derivatives = min_derivative + F.softplus(unnormalized_derivatives) - - heights = F.softmax(unnormalized_heights, dim=-1) - heights = min_bin_height + (1 - min_bin_height * num_bins) * heights - cumheights = torch.cumsum(heights, dim=-1) - cumheights = F.pad(cumheights, pad=(1, 0), mode='constant', value=0.0) - cumheights = (top - bottom) * cumheights + bottom - cumheights[..., 0] = bottom - cumheights[..., -1] = top - heights = cumheights[..., 1:] - cumheights[..., :-1] - - if inverse: - bin_idx = searchsorted(cumheights, inputs)[..., None] - else: - bin_idx = searchsorted(cumwidths, inputs)[..., None] - - input_cumwidths = cumwidths.gather(-1, bin_idx)[..., 0] - input_bin_widths = widths.gather(-1, bin_idx)[..., 0] - - input_cumheights = cumheights.gather(-1, bin_idx)[..., 0] - delta = heights / widths - input_delta = delta.gather(-1, bin_idx)[..., 0] - - input_derivatives = derivatives.gather(-1, bin_idx)[..., 0] - input_derivatives_plus_one = derivatives[..., 1:].gather(-1, bin_idx)[..., 0] - - input_heights = heights.gather(-1, bin_idx)[..., 0] - - if inverse: - a = (((inputs - input_cumheights) * (input_derivatives - + input_derivatives_plus_one - - 2 * input_delta) - + input_heights * (input_delta - input_derivatives))) - b = (input_heights * input_derivatives - - (inputs - input_cumheights) * (input_derivatives - + input_derivatives_plus_one - - 2 * input_delta)) - c = - input_delta * (inputs - input_cumheights) - - discriminant = b.pow(2) - 4 * a * c - assert (discriminant >= 0).all() - - root = (2 * c) / (-b - torch.sqrt(discriminant)) - outputs = root * input_bin_widths + input_cumwidths - - theta_one_minus_theta = root * (1 - root) - denominator = input_delta + ((input_derivatives + input_derivatives_plus_one - 2 * input_delta) - * theta_one_minus_theta) - derivative_numerator = input_delta.pow(2) * (input_derivatives_plus_one * root.pow(2) - + 2 * input_delta * theta_one_minus_theta - + input_derivatives * (1 - root).pow(2)) - logabsdet = torch.log(derivative_numerator) - 2 * torch.log(denominator) - - return outputs, -logabsdet - else: - theta = (inputs - input_cumwidths) / input_bin_widths - theta_one_minus_theta = theta * (1 - theta) - - numerator = input_heights * (input_delta * theta.pow(2) - + input_derivatives * theta_one_minus_theta) - denominator = input_delta + ((input_derivatives + input_derivatives_plus_one - 2 * input_delta) - * theta_one_minus_theta) - outputs = input_cumheights + numerator / denominator - - derivative_numerator = input_delta.pow(2) * (input_derivatives_plus_one * theta.pow(2) - + 2 * input_delta * theta_one_minus_theta - + input_derivatives * (1 - theta).pow(2)) - logabsdet = torch.log(derivative_numerator) - 2 * torch.log(denominator) - - return outputs, logabsdet diff --git a/spaces/Marshalls/testmtd/models/moglow/modules.py b/spaces/Marshalls/testmtd/models/moglow/modules.py deleted file mode 100644 index f5dc786e78a9fc4753d126d76caadd6928988b17..0000000000000000000000000000000000000000 --- a/spaces/Marshalls/testmtd/models/moglow/modules.py +++ /dev/null @@ -1,551 +0,0 @@ -import torch -import torch.nn as nn -import torch.nn.functional as F -import numpy as np -import scipy.linalg -import scipy.special -from . import thops - -def nan_throw(tensor, name="tensor"): - stop = False - if ((tensor!=tensor).any()): - print(name + " has nans") - stop = True - if (torch.isinf(tensor).any()): - print(name + " has infs") - stop = True - if stop: - print(name + ": " + str(tensor)) - #raise ValueError(name + ' contains nans of infs') - -class _ActNorm(nn.Module): - """ - Activation Normalization - Initialize the bias and scale with a given minibatch, - so that the output per-channel have zero mean and unit variance for that. - - After initialization, `bias` and `logs` will be trained as parameters. - """ - - def __init__(self, num_features, scale=1.): - super().__init__() - # register mean and scale - size = [1, num_features, 1] - self.register_parameter("bias", nn.Parameter(torch.zeros(*size))) - self.register_parameter("logs", nn.Parameter(torch.zeros(*size))) - self.num_features = num_features - self.scale = float(scale) - # self.inited = False - self.register_buffer('is_initialized', torch.zeros(1)) - - def _check_input_dim(self, input): - return NotImplemented - - def initialize_parameters(self, input): - # print("HOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOo") - self._check_input_dim(input) - if not self.training: - return - assert input.device == self.bias.device - with torch.no_grad(): - bias = thops.mean(input.clone(), dim=[0, 2], keepdim=True) * -1.0 - vars = thops.mean((input.clone() + bias) ** 2, dim=[0, 2], keepdim=True) - logs = torch.log(self.scale/(torch.sqrt(vars)+1e-6)) - self.bias.data.copy_(bias.data) - self.logs.data.copy_(logs.data) - # self.inited = True - self.is_initialized += 1. - - def _center(self, input, reverse=False): - if not reverse: - return input + self.bias - else: - return input - self.bias - - def _scale(self, input, logdet=None, reverse=False): - logs = self.logs - if not reverse: - input = input * torch.exp(logs) - else: - input = input * torch.exp(-logs) - if logdet is not None: - """ - logs is log_std of `mean of channels` - so we need to multiply timesteps - """ - dlogdet = thops.sum(logs) * thops.timesteps(input) - if reverse: - dlogdet *= -1 - logdet = logdet + dlogdet - return input, logdet - - def forward(self, input, logdet=None, reverse=False): - if not self.is_initialized: - self.initialize_parameters(input) - self._check_input_dim(input) - # no need to permute dims as old version - if not reverse: - # center and scale - input = self._center(input, reverse) - input, logdet = self._scale(input, logdet, reverse) - else: - # scale and center - input, logdet = self._scale(input, logdet, reverse) - input = self._center(input, reverse) - return input, logdet - - -class ActNorm2d(_ActNorm): - def __init__(self, num_features, scale=1.): - super().__init__(num_features, scale) - - def _check_input_dim(self, input): - assert len(input.size()) == 3 - assert input.size(1) == self.num_features, ( - "[ActNorm]: input should be in shape as `BCT`," - " channels should be {} rather than {}".format( - self.num_features, input.size())) - - -class LinearZeros(nn.Linear): - def __init__(self, in_channels, out_channels, logscale_factor=3): - super().__init__(in_channels, out_channels) - self.logscale_factor = logscale_factor - # set logs parameter - self.register_parameter("logs", nn.Parameter(torch.zeros(out_channels))) - # init - self.weight.data.zero_() - self.bias.data.zero_() - - def forward(self, input): - output = super().forward(input) - return output * torch.exp(self.logs * self.logscale_factor) - - -class Conv2d(nn.Conv2d): - pad_dict = { - "same": lambda kernel, stride: [((k - 1) * s + 1) // 2 for k, s in zip(kernel, stride)], - "valid": lambda kernel, stride: [0 for _ in kernel] - } - - @staticmethod - def get_padding(padding, kernel_size, stride): - # make paddding - if isinstance(padding, str): - if isinstance(kernel_size, int): - kernel_size = [kernel_size, kernel_size] - if isinstance(stride, int): - stride = [stride, stride] - padding = padding.lower() - try: - padding = Conv2d.pad_dict[padding](kernel_size, stride) - except KeyError: - raise ValueError("{} is not supported".format(padding)) - return padding - - def __init__(self, in_channels, out_channels, - kernel_size=[3, 3], stride=[1, 1], - padding="same", do_actnorm=True, weight_std=0.05): - padding = Conv2d.get_padding(padding, kernel_size, stride) - super().__init__(in_channels, out_channels, kernel_size, stride, - padding, bias=(not do_actnorm)) - # init weight with std - self.weight.data.normal_(mean=0.0, std=weight_std) - if not do_actnorm: - self.bias.data.zero_() - else: - self.actnorm = ActNorm2d(out_channels) - self.do_actnorm = do_actnorm - - def forward(self, input): - x = super().forward(input) - if self.do_actnorm: - x, _ = self.actnorm(x) - return x - - -class Conv2dZeros(nn.Conv2d): - def __init__(self, in_channels, out_channels, - kernel_size=[3, 3], stride=[1, 1], - padding="same", logscale_factor=3): - padding = Conv2d.get_padding(padding, kernel_size, stride) - super().__init__(in_channels, out_channels, kernel_size, stride, padding) - # logscale_factor - self.logscale_factor = logscale_factor - self.register_parameter("logs", nn.Parameter(torch.zeros(out_channels, 1, 1))) - # init - self.weight.data.zero_() - self.bias.data.zero_() - - def forward(self, input): - output = super().forward(input) - return output * torch.exp(self.logs * self.logscale_factor) - -class LinearNormInit(nn.Linear): - def __init__(self, in_channels, out_channels, weight_std=0.05): - super().__init__(in_channels, out_channels) - # init - self.weight.data.normal_(mean=0.0, std=weight_std) - self.bias.data.zero_() - -class LinearZeroInit(nn.Linear): - def __init__(self, in_channels, out_channels): - super().__init__(in_channels, out_channels) - # init - self.weight.data.zero_() - self.bias.data.zero_() - -class Permute2d(nn.Module): - def __init__(self, num_channels, shuffle): - super().__init__() - self.num_channels = num_channels - print(num_channels) - self.indices = np.arange(self.num_channels - 1, -1,-1).astype(np.long) - self.indices_inverse = np.zeros((self.num_channels), dtype=np.long) - print(self.indices_inverse.shape) - for i in range(self.num_channels): - self.indices_inverse[self.indices[i]] = i - if shuffle: - self.reset_indices() - - def reset_indices(self): - np.random.shuffle(self.indices) - for i in range(self.num_channels): - self.indices_inverse[self.indices[i]] = i - - def forward(self, input, reverse=False): - assert len(input.size()) == 3 - if not reverse: - return input[:, self.indices, :] - else: - return input[:, self.indices_inverse, :] - - -class InvertibleConv1x1(nn.Module): - def __init__(self, num_channels, LU_decomposed=False): - super().__init__() - w_shape = [num_channels, num_channels] - w_init = np.linalg.qr(np.random.randn(*w_shape))[0].astype(np.float32) - if not LU_decomposed: - # Sample a random orthogonal matrix: - self.register_parameter("weight", nn.Parameter(torch.Tensor(w_init))) - else: - np_p, np_l, np_u = scipy.linalg.lu(w_init) - np_s = np.diag(np_u) - np_sign_s = np.sign(np_s) - np_log_s = np.log(np.abs(np_s)) - np_u = np.triu(np_u, k=1) - l_mask = np.tril(np.ones(w_shape, dtype=np.float32), -1) - eye = np.eye(*w_shape, dtype=np.float32) - - #self.p = torch.Tensor(np_p.astype(np.float32)) - #self.sign_s = torch.Tensor(np_sign_s.astype(np.float32)) - self.register_buffer('p', torch.Tensor(np_p.astype(np.float32))) - self.register_buffer('sign_s', torch.Tensor(np_sign_s.astype(np.float32))) - self.l = nn.Parameter(torch.Tensor(np_l.astype(np.float32))) - self.log_s = nn.Parameter(torch.Tensor(np_log_s.astype(np.float32))) - self.u = nn.Parameter(torch.Tensor(np_u.astype(np.float32))) - self.l_mask = torch.Tensor(l_mask) - self.eye = torch.Tensor(eye) - self.w_shape = w_shape - self.LU = LU_decomposed - self.first_pass = True - self.saved_weight = None - self.saved_dlogdet = None - - def get_weight(self, input, reverse): - w_shape = self.w_shape - if not self.LU: - timesteps = thops.timesteps(input) - dlogdet = torch.slogdet(self.weight)[1] * timesteps - if not reverse: - weight = self.weight.view(w_shape[0], w_shape[1], 1) - else: - weight = torch.inverse(self.weight.double()).float()\ - .view(w_shape[0], w_shape[1], 1) - return weight, dlogdet - else: - self.p = self.p.to(input.device) - self.sign_s = self.sign_s.to(input.device) - self.l_mask = self.l_mask.to(input.device) - self.eye = self.eye.to(input.device) - l = self.l * self.l_mask + self.eye - u = self.u * self.l_mask.transpose(0, 1).contiguous() + torch.diag(self.sign_s * torch.exp(self.log_s)) - dlogdet = thops.sum(self.log_s) * thops.timesteps(input) - if not reverse: - w = torch.matmul(self.p, torch.matmul(l, u)) - else: - l = torch.inverse(l.double()).float() - u = torch.inverse(u.double()).float() - w = torch.matmul(u, torch.matmul(l, self.p.inverse())) - return w.view(w_shape[0], w_shape[1], 1), dlogdet - - def forward(self, input, logdet=None, reverse=False): - """ - log-det = log|abs(|W|)| * timesteps - """ - # weight, dlogdet = self.get_weight(input, reverse) - if not reverse: - weight, dlogdet = self.get_weight(input, reverse) - else: - if self.first_pass: - weight, dlogdet = self.get_weight(input, reverse) - self.saved_weight = weight - if logdet is not None: - self.saved_dlogdet = dlogdet - self.first_pass = False - else: - weight = self.saved_weight - if logdet is not None: - dlogdet = self.saved_dlogdet - - nan_throw(weight, "weight") - nan_throw(dlogdet, "dlogdet") - - if not reverse: - z = F.conv1d(input, weight) - if logdet is not None: - logdet = logdet + dlogdet - return z, logdet - else: - nan_throw(input, "InConv input") - z = F.conv1d(input, weight) - nan_throw(z, "InConv z") - nan_throw(logdet, "InConv logdet") - if logdet is not None: - logdet = logdet - dlogdet - return z, logdet - -# Here we define our model as a class -class LSTM(nn.Module): - - def __init__(self, input_dim, hidden_dim, output_dim=1, num_layers=2, dropout=0.0): - super(LSTM, self).__init__() - self.input_dim = input_dim - self.hidden_dim = hidden_dim - self.num_layers = num_layers - - # Define the LSTM layer - self.lstm = nn.LSTM(self.input_dim, self.hidden_dim, self.num_layers, batch_first=True) - - # Define the output layer - self.linear = LinearZeroInit(self.hidden_dim, output_dim) - - # do_init - self.do_init = True - - def init_hidden(self): - # This is what we'll initialise our hidden state as - self.do_init = True - - def forward(self, input): - # Forward pass through LSTM layer - # shape of lstm_out: [batch_size, input_size, hidden_dim] - # shape of self.hidden: (a, b), where a and b both - # have shape (batch_size, num_layers, hidden_dim). - if self.do_init: - lstm_out, self.hidden = self.lstm(input) - self.do_init = False - else: - lstm_out, self.hidden = self.lstm(input, self.hidden) - - #self.hidden = hidden[0].to(input.device), hidden[1].to(input.device) - - # Final layer - y_pred = self.linear(lstm_out) - return y_pred - -# Here we define our model as a class -class GRU(nn.Module): - - def __init__(self, input_dim, hidden_dim, output_dim=1, num_layers=2, dropout=0.0): - super(GRU, self).__init__() - self.input_dim = input_dim - self.hidden_dim = hidden_dim - self.num_layers = num_layers - - # Define the LSTM layer - self.gru = nn.GRU(self.input_dim, self.hidden_dim, self.num_layers, batch_first=True) - - # Define the output layer - self.linear = LinearZeroInit(self.hidden_dim, output_dim) - - # do_init - self.do_init = True - - def init_hidden(self): - # This is what we'll initialise our hidden state as - self.do_init = True - - def forward(self, input): - # Forward pass through LSTM layer - # shape of lstm_out: [batch_size, input_size, hidden_dim] - # shape of self.hidden: (a, b), where a and b both - # have shape (batch_size, num_layers, hidden_dim). - if self.do_init: - gru_out, self.hidden = self.gru(input) - self.do_init = False - else: - gru_out, self.hidden = self.gru(input, self.hidden) - - #self.hidden = hidden[0].to(input.device), hidden[1].to(input.device) - - # Final layer - y_pred = self.linear(gru_out) - return y_pred - -class GaussianDiag: - Log2PI = float(np.log(2 * np.pi)) - - @staticmethod - def likelihood(x): - """ - lnL = -1/2 * { ln|Var| + ((X - Mu)^T)(Var^-1)(X - Mu) + kln(2*PI) } - k = 1 (Independent) - Var = logs ** 2 - """ - return -0.5 * (((x) ** 2) + GaussianDiag.Log2PI) - - @staticmethod - def logp(x): - likelihood = GaussianDiag.likelihood(x) - return thops.sum(likelihood, dim=[1, 2]) - - @staticmethod - def sample(z_shape, eps_std=None, device=None): - eps_std = eps_std or 1 - eps = torch.normal(mean=torch.zeros(z_shape), - std=torch.ones(z_shape) * eps_std) - eps = eps.to(device) - return eps - -class StudentT: - - def __init__(self, df, d): - self.df=df - self.d=d - self.norm_const = scipy.special.loggamma(0.5*(df+d))-scipy.special.loggamma(0.5*df)-0.5*d*np.log(np.pi*df) - - def logp(self,x): - ''' - Multivariate t-student density: - output: - the sum density of the given element - ''' - #df=100 - #d=x.shape[1] - #norm_const = scipy.special.loggamma(0.5*(df+d))-scipy.special.loggamma(0.5*df)-0.5*d*np.log(np.pi*df) - #import pdb; pdb.set_trace() - x_norms = thops.sum(((x) ** 2), dim=[1]) - likelihood = self.norm_const-0.5*(self.df+self.d)*torch.log(1+(1/self.df)*x_norms) - return thops.sum(likelihood, dim=[1]) - - def sample(self,z_shape, eps_std=None, device=None): - '''generate random variables of multivariate t distribution - Parameters - ---------- - m : array_like - mean of random variable, length determines dimension of random variable - S : array_like - square array of covariance matrix - df : int or float - degrees of freedom - n : int - number of observations, return random array will be (n, len(m)) - Returns - ------- - rvs : ndarray, (n, len(m)) - each row is an independent draw of a multivariate t distributed - random variable - ''' - #df=100 - # import pdb; pdb.set_trace() - x_shape = torch.Size((z_shape[0], 1, z_shape[2])) - x = np.random.chisquare(self.df, x_shape)/self.df - x = np.tile(x, (1,z_shape[1],1)) - x = torch.Tensor(x.astype(np.float32)) - z = torch.normal(mean=torch.zeros(z_shape),std=torch.ones(z_shape) * eps_std) - - # import pdb; pdb.set_trace() - return (z/torch.sqrt(x)).to(device) - -class Split2d(nn.Module): - def __init__(self, num_channels): - super().__init__() - print("Split2d num_channels:" + str(num_channels)) - - self.num_channels = num_channels - self.conv = Conv2dZeros(num_channels // 2, num_channels) - - def split2d_prior(self, z): - h = self.conv(z) - return thops.split_feature(h, "cross") - - def forward(self, input, cond, logdet=0., reverse=False, eps_std=None): - if not reverse: - #print("forward Split2d input:" + str(input.shape)) - z1, z2 = thops.split_feature(input, "split") - #mean, logs = self.split2d_prior(z1) - logdet = GaussianDiag.logp(z2) + logdet - return z1, cond, logdet - else: - z1 = input - #print("reverse Split2d z1.shape:" + str(z1.shape)) - #mean, logs = self.split2d_prior(z1) - z2_shape = list(z1.shape) - z2_shape[1] = self.num_channels-z1.shape[1] - z2 = GaussianDiag.sample(z2_shape, eps_std, device=input.device) - z = thops.cat_feature(z1, z2) - return z, cond, logdet - -def squeeze2d(input, factor=2): - assert factor >= 1 and isinstance(factor, int) - if factor == 1: - return input - size = input.size() - B = size[0] - C = size[1] - H = size[2] - W = size[3] - assert H % factor == 0 , "{}".format((H, W)) - x = input.view(B, C, H // factor, factor, W, 1) - x = x.permute(0, 1, 3, 5, 2, 4).contiguous() - x = x.view(B, C * factor, H // factor, W) - return x - - -def unsqueeze2d(input, factor=2): - assert factor >= 1 and isinstance(factor, int) - #factor2 = factor ** 2 - if factor == 1: - return input - size = input.size() - B = size[0] - C = size[1] - H = size[2] - W = size[3] - assert C % (factor) == 0, "{}".format(C) - x = input.view(B, C // factor, factor, 1, H, W) - x = x.permute(0, 1, 4, 2, 5, 3).contiguous() - x = x.view(B, C // (factor), H * factor, W) - return x - - -class SqueezeLayer(nn.Module): - def __init__(self, factor): - super().__init__() - self.factor = factor - - def forward(self, input, cond = None, logdet=None, reverse=False): - if not reverse: - output = squeeze2d(input, self.factor) - cond_out = squeeze2d(cond, self.factor) - return output, cond_out, logdet - else: - output = unsqueeze2d(input, self.factor) - cond_output = unsqueeze2d(cond, self.factor) - return output, cond_output, logdet - - def squeeze_cond(self, cond): - cond_out = squeeze2d(cond, self.factor) - return cond_out diff --git a/spaces/Mecca/whisper-webui/src/hooks/subTaskProgressListener.py b/spaces/Mecca/whisper-webui/src/hooks/subTaskProgressListener.py deleted file mode 100644 index 9a8eaa876fcd18032875d67535e0558494842c60..0000000000000000000000000000000000000000 --- a/spaces/Mecca/whisper-webui/src/hooks/subTaskProgressListener.py +++ /dev/null @@ -1,37 +0,0 @@ -from src.hooks.progressListener import ProgressListener - -from typing import Union - -class SubTaskProgressListener(ProgressListener): - """ - A sub task listener that reports the progress of a sub task to a base task listener - Parameters - ---------- - base_task_listener : ProgressListener - The base progress listener to accumulate overall progress in. - base_task_total : float - The maximum total progress that will be reported to the base progress listener. - sub_task_start : float - The starting progress of a sub task, in respect to the base progress listener. - sub_task_total : float - The total amount of progress a sub task will report to the base progress listener. - """ - def __init__( - self, - base_task_listener: ProgressListener, - base_task_total: float, - sub_task_start: float, - sub_task_total: float, - ): - self.base_task_listener = base_task_listener - self.base_task_total = base_task_total - self.sub_task_start = sub_task_start - self.sub_task_total = sub_task_total - - def on_progress(self, current: Union[int, float], total: Union[int, float]): - sub_task_progress_frac = current / total - sub_task_progress = self.sub_task_start + self.sub_task_total * sub_task_progress_frac - self.base_task_listener.on_progress(sub_task_progress, self.base_task_total) - - def on_finished(self): - self.base_task_listener.on_progress(self.sub_task_start + self.sub_task_total, self.base_task_total) \ No newline at end of file diff --git a/spaces/Mellow-ai/PhotoAI_Mellow/ldm/modules/diffusionmodules/upscaling.py b/spaces/Mellow-ai/PhotoAI_Mellow/ldm/modules/diffusionmodules/upscaling.py deleted file mode 100644 index 03816662098ce1ffac79bd939b892e867ab91988..0000000000000000000000000000000000000000 --- a/spaces/Mellow-ai/PhotoAI_Mellow/ldm/modules/diffusionmodules/upscaling.py +++ /dev/null @@ -1,81 +0,0 @@ -import torch -import torch.nn as nn -import numpy as np -from functools import partial - -from ldm.modules.diffusionmodules.util import extract_into_tensor, make_beta_schedule -from ldm.util import default - - -class AbstractLowScaleModel(nn.Module): - # for concatenating a downsampled image to the latent representation - def __init__(self, noise_schedule_config=None): - super(AbstractLowScaleModel, self).__init__() - if noise_schedule_config is not None: - self.register_schedule(**noise_schedule_config) - - def register_schedule(self, beta_schedule="linear", timesteps=1000, - linear_start=1e-4, linear_end=2e-2, cosine_s=8e-3): - betas = make_beta_schedule(beta_schedule, timesteps, linear_start=linear_start, linear_end=linear_end, - cosine_s=cosine_s) - alphas = 1. - betas - alphas_cumprod = np.cumprod(alphas, axis=0) - alphas_cumprod_prev = np.append(1., alphas_cumprod[:-1]) - - timesteps, = betas.shape - self.num_timesteps = int(timesteps) - self.linear_start = linear_start - self.linear_end = linear_end - assert alphas_cumprod.shape[0] == self.num_timesteps, 'alphas have to be defined for each timestep' - - to_torch = partial(torch.tensor, dtype=torch.float32) - - self.register_buffer('betas', to_torch(betas)) - self.register_buffer('alphas_cumprod', to_torch(alphas_cumprod)) - self.register_buffer('alphas_cumprod_prev', to_torch(alphas_cumprod_prev)) - - # calculations for diffusion q(x_t | x_{t-1}) and others - self.register_buffer('sqrt_alphas_cumprod', to_torch(np.sqrt(alphas_cumprod))) - self.register_buffer('sqrt_one_minus_alphas_cumprod', to_torch(np.sqrt(1. - alphas_cumprod))) - self.register_buffer('log_one_minus_alphas_cumprod', to_torch(np.log(1. - alphas_cumprod))) - self.register_buffer('sqrt_recip_alphas_cumprod', to_torch(np.sqrt(1. / alphas_cumprod))) - self.register_buffer('sqrt_recipm1_alphas_cumprod', to_torch(np.sqrt(1. / alphas_cumprod - 1))) - - def q_sample(self, x_start, t, noise=None): - noise = default(noise, lambda: torch.randn_like(x_start)) - return (extract_into_tensor(self.sqrt_alphas_cumprod, t, x_start.shape) * x_start + - extract_into_tensor(self.sqrt_one_minus_alphas_cumprod, t, x_start.shape) * noise) - - def forward(self, x): - return x, None - - def decode(self, x): - return x - - -class SimpleImageConcat(AbstractLowScaleModel): - # no noise level conditioning - def __init__(self): - super(SimpleImageConcat, self).__init__(noise_schedule_config=None) - self.max_noise_level = 0 - - def forward(self, x): - # fix to constant noise level - return x, torch.zeros(x.shape[0], device=x.device).long() - - -class ImageConcatWithNoiseAugmentation(AbstractLowScaleModel): - def __init__(self, noise_schedule_config, max_noise_level=1000, to_cuda=False): - super().__init__(noise_schedule_config=noise_schedule_config) - self.max_noise_level = max_noise_level - - def forward(self, x, noise_level=None): - if noise_level is None: - noise_level = torch.randint(0, self.max_noise_level, (x.shape[0],), device=x.device).long() - else: - assert isinstance(noise_level, torch.Tensor) - z = self.q_sample(x, noise_level) - return z, noise_level - - - diff --git a/spaces/Mellow-ai/PhotoAI_Mellow/rembg/commands/__init__.py b/spaces/Mellow-ai/PhotoAI_Mellow/rembg/commands/__init__.py deleted file mode 100644 index 64f8993e9b710c7150d16ee4361fc0d406d72f55..0000000000000000000000000000000000000000 --- a/spaces/Mellow-ai/PhotoAI_Mellow/rembg/commands/__init__.py +++ /dev/null @@ -1,13 +0,0 @@ -from importlib import import_module -from pathlib import Path -from pkgutil import iter_modules - -command_functions = [] - -package_dir = Path(__file__).resolve().parent -for _b, module_name, _p in iter_modules([str(package_dir)]): - module = import_module(f"{__name__}.{module_name}") - for attribute_name in dir(module): - attribute = getattr(module, attribute_name) - if attribute_name.endswith("_command"): - command_functions.append(attribute) diff --git a/spaces/Mountchicken/MAERec-Gradio/tools/dataset_converters/common/extract_kaist.py b/spaces/Mountchicken/MAERec-Gradio/tools/dataset_converters/common/extract_kaist.py deleted file mode 100644 index 76d2579ccbb59f9addc60bbbe9df9037fd543665..0000000000000000000000000000000000000000 --- a/spaces/Mountchicken/MAERec-Gradio/tools/dataset_converters/common/extract_kaist.py +++ /dev/null @@ -1,75 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import argparse -import os -import os.path as osp -import shutil -import xml.etree.ElementTree as ET -import zipfile -from xml.etree.ElementTree import ParseError - - -def extract(root_path): - idx = 0 - for language in ['English', 'Korean', 'Mixed']: - for camera in ['Digital_Camera', 'Mobile_Phone']: - crt_path = osp.join(root_path, 'KAIST', language, camera) - zips = os.listdir(crt_path) - for zip in zips: - extracted_path = osp.join(root_path, 'tmp', zip) - extract_zipfile(osp.join(crt_path, zip), extracted_path) - for file in os.listdir(extracted_path): - if file.endswith('xml'): - src_ann = os.path.join(extracted_path, file) - # Filtering broken annotations - try: - ET.parse(src_ann) - except ParseError: - continue - src_img = None - img_names = [ - file.replace('xml', suffix) - for suffix in ['jpg', 'JPG'] - ] - for im in img_names: - img_path = osp.join(extracted_path, im) - if osp.exists(img_path): - src_img = img_path - if src_img: - shutil.move( - src_ann, - osp.join(root_path, 'annotations', - str(idx).zfill(5) + '.xml')) - shutil.move( - src_img, - osp.join(root_path, 'imgs', - str(idx).zfill(5) + '.jpg')) - idx += 1 - - -def extract_zipfile(zip_path, dst_dir, delete=True): - - files = zipfile.ZipFile(zip_path) - for file in files.namelist(): - files.extract(file, dst_dir) - if delete: - os.remove(zip_path) - - -def parse_args(): - parser = argparse.ArgumentParser(description='Extract KAIST zips') - parser.add_argument('root_path', help='Root path of KAIST') - args = parser.parse_args() - return args - - -def main(): - args = parse_args() - root_path = args.root_path - assert osp.exists(root_path) - extract(root_path) - shutil.rmtree(osp.join(args.root_path, 'tmp')) - shutil.rmtree(osp.join(args.root_path, 'KAIST')) - - -if __name__ == '__main__': - main() diff --git a/spaces/Nultx/VITS-TTS/modules.py b/spaces/Nultx/VITS-TTS/modules.py deleted file mode 100644 index f5af1fd9a20dc03707889f360a39bb4b784a6df3..0000000000000000000000000000000000000000 --- a/spaces/Nultx/VITS-TTS/modules.py +++ /dev/null @@ -1,387 +0,0 @@ -import math -import torch -from torch import nn -from torch.nn import functional as F - -from torch.nn import Conv1d -from torch.nn.utils import weight_norm, remove_weight_norm - -import commons -from commons import init_weights, get_padding -from transforms import piecewise_rational_quadratic_transform - - -LRELU_SLOPE = 0.1 - - -class LayerNorm(nn.Module): - def __init__(self, channels, eps=1e-5): - super().__init__() - self.channels = channels - self.eps = eps - - self.gamma = nn.Parameter(torch.ones(channels)) - self.beta = nn.Parameter(torch.zeros(channels)) - - def forward(self, x): - x = x.transpose(1, -1) - x = F.layer_norm(x, (self.channels,), self.gamma, self.beta, self.eps) - return x.transpose(1, -1) - - -class ConvReluNorm(nn.Module): - def __init__(self, in_channels, hidden_channels, out_channels, kernel_size, n_layers, p_dropout): - super().__init__() - self.in_channels = in_channels - self.hidden_channels = hidden_channels - self.out_channels = out_channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.p_dropout = p_dropout - assert n_layers > 1, "Number of layers should be larger than 0." - - self.conv_layers = nn.ModuleList() - self.norm_layers = nn.ModuleList() - self.conv_layers.append(nn.Conv1d(in_channels, hidden_channels, kernel_size, padding=kernel_size//2)) - self.norm_layers.append(LayerNorm(hidden_channels)) - self.relu_drop = nn.Sequential( - nn.ReLU(), - nn.Dropout(p_dropout)) - for _ in range(n_layers-1): - self.conv_layers.append(nn.Conv1d(hidden_channels, hidden_channels, kernel_size, padding=kernel_size//2)) - self.norm_layers.append(LayerNorm(hidden_channels)) - self.proj = nn.Conv1d(hidden_channels, out_channels, 1) - self.proj.weight.data.zero_() - self.proj.bias.data.zero_() - - def forward(self, x, x_mask): - x_org = x - for i in range(self.n_layers): - x = self.conv_layers[i](x * x_mask) - x = self.norm_layers[i](x) - x = self.relu_drop(x) - x = x_org + self.proj(x) - return x * x_mask - - -class DDSConv(nn.Module): - """ - Dialted and Depth-Separable Convolution - """ - def __init__(self, channels, kernel_size, n_layers, p_dropout=0.): - super().__init__() - self.channels = channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.p_dropout = p_dropout - - self.drop = nn.Dropout(p_dropout) - self.convs_sep = nn.ModuleList() - self.convs_1x1 = nn.ModuleList() - self.norms_1 = nn.ModuleList() - self.norms_2 = nn.ModuleList() - for i in range(n_layers): - dilation = kernel_size ** i - padding = (kernel_size * dilation - dilation) // 2 - self.convs_sep.append(nn.Conv1d(channels, channels, kernel_size, - groups=channels, dilation=dilation, padding=padding - )) - self.convs_1x1.append(nn.Conv1d(channels, channels, 1)) - self.norms_1.append(LayerNorm(channels)) - self.norms_2.append(LayerNorm(channels)) - - def forward(self, x, x_mask, g=None): - if g is not None: - x = x + g - for i in range(self.n_layers): - y = self.convs_sep[i](x * x_mask) - y = self.norms_1[i](y) - y = F.gelu(y) - y = self.convs_1x1[i](y) - y = self.norms_2[i](y) - y = F.gelu(y) - y = self.drop(y) - x = x + y - return x * x_mask - - -class WN(torch.nn.Module): - def __init__(self, hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=0, p_dropout=0): - super(WN, self).__init__() - assert(kernel_size % 2 == 1) - self.hidden_channels =hidden_channels - self.kernel_size = kernel_size, - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.gin_channels = gin_channels - self.p_dropout = p_dropout - - self.in_layers = torch.nn.ModuleList() - self.res_skip_layers = torch.nn.ModuleList() - self.drop = nn.Dropout(p_dropout) - - if gin_channels != 0: - cond_layer = torch.nn.Conv1d(gin_channels, 2*hidden_channels*n_layers, 1) - self.cond_layer = torch.nn.utils.weight_norm(cond_layer, name='weight') - - for i in range(n_layers): - dilation = dilation_rate ** i - padding = int((kernel_size * dilation - dilation) / 2) - in_layer = torch.nn.Conv1d(hidden_channels, 2*hidden_channels, kernel_size, - dilation=dilation, padding=padding) - in_layer = torch.nn.utils.weight_norm(in_layer, name='weight') - self.in_layers.append(in_layer) - - # last one is not necessary - if i < n_layers - 1: - res_skip_channels = 2 * hidden_channels - else: - res_skip_channels = hidden_channels - - res_skip_layer = torch.nn.Conv1d(hidden_channels, res_skip_channels, 1) - res_skip_layer = torch.nn.utils.weight_norm(res_skip_layer, name='weight') - self.res_skip_layers.append(res_skip_layer) - - def forward(self, x, x_mask, g=None, **kwargs): - output = torch.zeros_like(x) - n_channels_tensor = torch.IntTensor([self.hidden_channels]) - - if g is not None: - g = self.cond_layer(g) - - for i in range(self.n_layers): - x_in = self.in_layers[i](x) - if g is not None: - cond_offset = i * 2 * self.hidden_channels - g_l = g[:,cond_offset:cond_offset+2*self.hidden_channels,:] - else: - g_l = torch.zeros_like(x_in) - - acts = commons.fused_add_tanh_sigmoid_multiply( - x_in, - g_l, - n_channels_tensor) - acts = self.drop(acts) - - res_skip_acts = self.res_skip_layers[i](acts) - if i < self.n_layers - 1: - res_acts = res_skip_acts[:,:self.hidden_channels,:] - x = (x + res_acts) * x_mask - output = output + res_skip_acts[:,self.hidden_channels:,:] - else: - output = output + res_skip_acts - return output * x_mask - - def remove_weight_norm(self): - if self.gin_channels != 0: - torch.nn.utils.remove_weight_norm(self.cond_layer) - for l in self.in_layers: - torch.nn.utils.remove_weight_norm(l) - for l in self.res_skip_layers: - torch.nn.utils.remove_weight_norm(l) - - -class ResBlock1(torch.nn.Module): - def __init__(self, channels, kernel_size=3, dilation=(1, 3, 5)): - super(ResBlock1, self).__init__() - self.convs1 = nn.ModuleList([ - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0], - padding=get_padding(kernel_size, dilation[0]))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1], - padding=get_padding(kernel_size, dilation[1]))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[2], - padding=get_padding(kernel_size, dilation[2]))) - ]) - self.convs1.apply(init_weights) - - self.convs2 = nn.ModuleList([ - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1, - padding=get_padding(kernel_size, 1))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1, - padding=get_padding(kernel_size, 1))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1, - padding=get_padding(kernel_size, 1))) - ]) - self.convs2.apply(init_weights) - - def forward(self, x, x_mask=None): - for c1, c2 in zip(self.convs1, self.convs2): - xt = F.leaky_relu(x, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c1(xt) - xt = F.leaky_relu(xt, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c2(xt) - x = xt + x - if x_mask is not None: - x = x * x_mask - return x - - def remove_weight_norm(self): - for l in self.convs1: - remove_weight_norm(l) - for l in self.convs2: - remove_weight_norm(l) - - -class ResBlock2(torch.nn.Module): - def __init__(self, channels, kernel_size=3, dilation=(1, 3)): - super(ResBlock2, self).__init__() - self.convs = nn.ModuleList([ - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0], - padding=get_padding(kernel_size, dilation[0]))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1], - padding=get_padding(kernel_size, dilation[1]))) - ]) - self.convs.apply(init_weights) - - def forward(self, x, x_mask=None): - for c in self.convs: - xt = F.leaky_relu(x, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c(xt) - x = xt + x - if x_mask is not None: - x = x * x_mask - return x - - def remove_weight_norm(self): - for l in self.convs: - remove_weight_norm(l) - - -class Log(nn.Module): - def forward(self, x, x_mask, reverse=False, **kwargs): - if not reverse: - y = torch.log(torch.clamp_min(x, 1e-5)) * x_mask - logdet = torch.sum(-y, [1, 2]) - return y, logdet - else: - x = torch.exp(x) * x_mask - return x - - -class Flip(nn.Module): - def forward(self, x, *args, reverse=False, **kwargs): - x = torch.flip(x, [1]) - if not reverse: - logdet = torch.zeros(x.size(0)).to(dtype=x.dtype, device=x.device) - return x, logdet - else: - return x - - -class ElementwiseAffine(nn.Module): - def __init__(self, channels): - super().__init__() - self.channels = channels - self.m = nn.Parameter(torch.zeros(channels,1)) - self.logs = nn.Parameter(torch.zeros(channels,1)) - - def forward(self, x, x_mask, reverse=False, **kwargs): - if not reverse: - y = self.m + torch.exp(self.logs) * x - y = y * x_mask - logdet = torch.sum(self.logs * x_mask, [1,2]) - return y, logdet - else: - x = (x - self.m) * torch.exp(-self.logs) * x_mask - return x - - -class ResidualCouplingLayer(nn.Module): - def __init__(self, - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - p_dropout=0, - gin_channels=0, - mean_only=False): - assert channels % 2 == 0, "channels should be divisible by 2" - super().__init__() - self.channels = channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.half_channels = channels // 2 - self.mean_only = mean_only - - self.pre = nn.Conv1d(self.half_channels, hidden_channels, 1) - self.enc = WN(hidden_channels, kernel_size, dilation_rate, n_layers, p_dropout=p_dropout, gin_channels=gin_channels) - self.post = nn.Conv1d(hidden_channels, self.half_channels * (2 - mean_only), 1) - self.post.weight.data.zero_() - self.post.bias.data.zero_() - - def forward(self, x, x_mask, g=None, reverse=False): - x0, x1 = torch.split(x, [self.half_channels]*2, 1) - h = self.pre(x0) * x_mask - h = self.enc(h, x_mask, g=g) - stats = self.post(h) * x_mask - if not self.mean_only: - m, logs = torch.split(stats, [self.half_channels]*2, 1) - else: - m = stats - logs = torch.zeros_like(m) - - if not reverse: - x1 = m + x1 * torch.exp(logs) * x_mask - x = torch.cat([x0, x1], 1) - logdet = torch.sum(logs, [1,2]) - return x, logdet - else: - x1 = (x1 - m) * torch.exp(-logs) * x_mask - x = torch.cat([x0, x1], 1) - return x - - -class ConvFlow(nn.Module): - def __init__(self, in_channels, filter_channels, kernel_size, n_layers, num_bins=10, tail_bound=5.0): - super().__init__() - self.in_channels = in_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.num_bins = num_bins - self.tail_bound = tail_bound - self.half_channels = in_channels // 2 - - self.pre = nn.Conv1d(self.half_channels, filter_channels, 1) - self.convs = DDSConv(filter_channels, kernel_size, n_layers, p_dropout=0.) - self.proj = nn.Conv1d(filter_channels, self.half_channels * (num_bins * 3 - 1), 1) - self.proj.weight.data.zero_() - self.proj.bias.data.zero_() - - def forward(self, x, x_mask, g=None, reverse=False): - x0, x1 = torch.split(x, [self.half_channels]*2, 1) - h = self.pre(x0) - h = self.convs(h, x_mask, g=g) - h = self.proj(h) * x_mask - - b, c, t = x0.shape - h = h.reshape(b, c, -1, t).permute(0, 1, 3, 2) # [b, cx?, t] -> [b, c, t, ?] - - unnormalized_widths = h[..., :self.num_bins] / math.sqrt(self.filter_channels) - unnormalized_heights = h[..., self.num_bins:2*self.num_bins] / math.sqrt(self.filter_channels) - unnormalized_derivatives = h[..., 2 * self.num_bins:] - - x1, logabsdet = piecewise_rational_quadratic_transform(x1, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=reverse, - tails='linear', - tail_bound=self.tail_bound - ) - - x = torch.cat([x0, x1], 1) * x_mask - logdet = torch.sum(logabsdet * x_mask, [1,2]) - if not reverse: - return x, logdet - else: - return x diff --git a/spaces/Nunchakuka/FrenchAnonymizer/speaker_encoder/voice_encoder.py b/spaces/Nunchakuka/FrenchAnonymizer/speaker_encoder/voice_encoder.py deleted file mode 100644 index 88cdee2de76b72db58c5dd19a888597e0fe12fbb..0000000000000000000000000000000000000000 --- a/spaces/Nunchakuka/FrenchAnonymizer/speaker_encoder/voice_encoder.py +++ /dev/null @@ -1,173 +0,0 @@ -from speaker_encoder.hparams import * -from speaker_encoder import audio -from pathlib import Path -from typing import Union, List -from torch import nn -from time import perf_counter as timer -import numpy as np -import torch - - -class SpeakerEncoder(nn.Module): - def __init__(self, weights_fpath, device: Union[str, torch.device]=None, verbose=True): - """ - :param device: either a torch device or the name of a torch device (e.g. "cpu", "cuda"). - If None, defaults to cuda if it is available on your machine, otherwise the model will - run on cpu. Outputs are always returned on the cpu, as numpy arrays. - """ - super().__init__() - - # Define the network - self.lstm = nn.LSTM(mel_n_channels, model_hidden_size, model_num_layers, batch_first=True) - self.linear = nn.Linear(model_hidden_size, model_embedding_size) - self.relu = nn.ReLU() - - # Get the target device - if device is None: - device = torch.device("cuda" if torch.cuda.is_available() else "cpu") - elif isinstance(device, str): - device = torch.device(device) - self.device = device - - # Load the pretrained model'speaker weights - # weights_fpath = Path(__file__).resolve().parent.joinpath("pretrained.pt") - # if not weights_fpath.exists(): - # raise Exception("Couldn't find the voice encoder pretrained model at %s." % - # weights_fpath) - - start = timer() - checkpoint = torch.load(weights_fpath, map_location="cpu") - - self.load_state_dict(checkpoint["model_state"], strict=False) - self.to(device) - - if verbose: - print("Loaded the voice encoder model on %s in %.2f seconds." % - (device.type, timer() - start)) - - def forward(self, mels: torch.FloatTensor): - """ - Computes the embeddings of a batch of utterance spectrograms. - :param mels: a batch of mel spectrograms of same duration as a float32 tensor of shape - (batch_size, n_frames, n_channels) - :return: the embeddings as a float 32 tensor of shape (batch_size, embedding_size). - Embeddings are positive and L2-normed, thus they lay in the range [0, 1]. - """ - # Pass the input through the LSTM layers and retrieve the final hidden state of the last - # layer. Apply a cutoff to 0 for negative values and L2 normalize the embeddings. - _, (hidden, _) = self.lstm(mels) - embeds_raw = self.relu(self.linear(hidden[-1])) - return embeds_raw / torch.norm(embeds_raw, dim=1, keepdim=True) - - @staticmethod - def compute_partial_slices(n_samples: int, rate, min_coverage): - """ - Computes where to split an utterance waveform and its corresponding mel spectrogram to - obtain partial utterances of each. Both the waveform and the - mel spectrogram slices are returned, so as to make each partial utterance waveform - correspond to its spectrogram. - - The returned ranges may be indexing further than the length of the waveform. It is - recommended that you pad the waveform with zeros up to wav_slices[-1].stop. - - :param n_samples: the number of samples in the waveform - :param rate: how many partial utterances should occur per second. Partial utterances must - cover the span of the entire utterance, thus the rate should not be lower than the inverse - of the duration of a partial utterance. By default, partial utterances are 1.6s long and - the minimum rate is thus 0.625. - :param min_coverage: when reaching the last partial utterance, it may or may not have - enough frames. If at least of are present, - then the last partial utterance will be considered by zero-padding the audio. Otherwise, - it will be discarded. If there aren't enough frames for one partial utterance, - this parameter is ignored so that the function always returns at least one slice. - :return: the waveform slices and mel spectrogram slices as lists of array slices. Index - respectively the waveform and the mel spectrogram with these slices to obtain the partial - utterances. - """ - assert 0 < min_coverage <= 1 - - # Compute how many frames separate two partial utterances - samples_per_frame = int((sampling_rate * mel_window_step / 1000)) - n_frames = int(np.ceil((n_samples + 1) / samples_per_frame)) - frame_step = int(np.round((sampling_rate / rate) / samples_per_frame)) - assert 0 < frame_step, "The rate is too high" - assert frame_step <= partials_n_frames, "The rate is too low, it should be %f at least" % \ - (sampling_rate / (samples_per_frame * partials_n_frames)) - - # Compute the slices - wav_slices, mel_slices = [], [] - steps = max(1, n_frames - partials_n_frames + frame_step + 1) - for i in range(0, steps, frame_step): - mel_range = np.array([i, i + partials_n_frames]) - wav_range = mel_range * samples_per_frame - mel_slices.append(slice(*mel_range)) - wav_slices.append(slice(*wav_range)) - - # Evaluate whether extra padding is warranted or not - last_wav_range = wav_slices[-1] - coverage = (n_samples - last_wav_range.start) / (last_wav_range.stop - last_wav_range.start) - if coverage < min_coverage and len(mel_slices) > 1: - mel_slices = mel_slices[:-1] - wav_slices = wav_slices[:-1] - - return wav_slices, mel_slices - - def embed_utterance(self, wav: np.ndarray, return_partials=False, rate=1.3, min_coverage=0.75): - """ - Computes an embedding for a single utterance. The utterance is divided in partial - utterances and an embedding is computed for each. The complete utterance embedding is the - L2-normed average embedding of the partial utterances. - - TODO: independent batched version of this function - - :param wav: a preprocessed utterance waveform as a numpy array of float32 - :param return_partials: if True, the partial embeddings will also be returned along with - the wav slices corresponding to each partial utterance. - :param rate: how many partial utterances should occur per second. Partial utterances must - cover the span of the entire utterance, thus the rate should not be lower than the inverse - of the duration of a partial utterance. By default, partial utterances are 1.6s long and - the minimum rate is thus 0.625. - :param min_coverage: when reaching the last partial utterance, it may or may not have - enough frames. If at least of are present, - then the last partial utterance will be considered by zero-padding the audio. Otherwise, - it will be discarded. If there aren't enough frames for one partial utterance, - this parameter is ignored so that the function always returns at least one slice. - :return: the embedding as a numpy array of float32 of shape (model_embedding_size,). If - is True, the partial utterances as a numpy array of float32 of shape - (n_partials, model_embedding_size) and the wav partials as a list of slices will also be - returned. - """ - # Compute where to split the utterance into partials and pad the waveform with zeros if - # the partial utterances cover a larger range. - wav_slices, mel_slices = self.compute_partial_slices(len(wav), rate, min_coverage) - max_wave_length = wav_slices[-1].stop - if max_wave_length >= len(wav): - wav = np.pad(wav, (0, max_wave_length - len(wav)), "constant") - - # Split the utterance into partials and forward them through the model - mel = audio.wav_to_mel_spectrogram(wav) - mels = np.array([mel[s] for s in mel_slices]) - with torch.no_grad(): - mels = torch.from_numpy(mels).to(self.device) - partial_embeds = self(mels).cpu().numpy() - - # Compute the utterance embedding from the partial embeddings - raw_embed = np.mean(partial_embeds, axis=0) - embed = raw_embed / np.linalg.norm(raw_embed, 2) - - if return_partials: - return embed, partial_embeds, wav_slices - return embed - - def embed_speaker(self, wavs: List[np.ndarray], **kwargs): - """ - Compute the embedding of a collection of wavs (presumably from the same speaker) by - averaging their embedding and L2-normalizing it. - - :param wavs: list of wavs a numpy arrays of float32. - :param kwargs: extra arguments to embed_utterance() - :return: the embedding as a numpy array of float32 of shape (model_embedding_size,). - """ - raw_embed = np.mean([self.embed_utterance(wav, return_partials=False, **kwargs) \ - for wav in wavs], axis=0) - return raw_embed / np.linalg.norm(raw_embed, 2) \ No newline at end of file diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/speech_synthesis/evaluation/eval_asr.py b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/speech_synthesis/evaluation/eval_asr.py deleted file mode 100644 index 005a11bfb34ca477ad9e133acd60f249e66cda47..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/speech_synthesis/evaluation/eval_asr.py +++ /dev/null @@ -1,128 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import argparse -import editdistance -import re -import shutil -import soundfile as sf -import subprocess -from pathlib import Path - -from examples.speech_to_text.data_utils import load_tsv_to_dicts - - -def preprocess_text(text): - text = "|".join(re.sub(r"[^A-Z' ]", " ", text.upper()).split()) - text = " ".join(text) - return text - - -def prepare_w2v_data( - dict_dir, sample_rate, label, audio_paths, texts, split, data_dir -): - data_dir.mkdir(parents=True, exist_ok=True) - shutil.copyfile( - dict_dir / f"dict.{label}.txt", - data_dir / f"dict.{label}.txt" - ) - with open(data_dir / f"{split}.tsv", "w") as f: - f.write("/\n") - for audio_path in audio_paths: - wav, sr = sf.read(audio_path) - assert sr == sample_rate, f"{sr} != sample_rate" - nsample = len(wav) - f.write(f"{audio_path}\t{nsample}\n") - with open(data_dir / f"{split}.{label}", "w") as f: - for text in texts: - text = preprocess_text(text) - f.write(f"{text}\n") - - -def run_asr(asr_dir, split, w2v_ckpt, w2v_label, res_dir): - """ - results will be saved at - {res_dir}/{ref,hypo}.word-{w2v_ckpt.filename}-{split}.txt - """ - cmd = ["python", "-m", "examples.speech_recognition.infer"] - cmd += [str(asr_dir.resolve())] - cmd += ["--task", "audio_finetuning", "--nbest", "1", "--quiet"] - cmd += ["--w2l-decoder", "viterbi", "--criterion", "ctc"] - cmd += ["--post-process", "letter", "--max-tokens", "4000000"] - cmd += ["--path", str(w2v_ckpt.resolve()), "--labels", w2v_label] - cmd += ["--gen-subset", split, "--results-path", str(res_dir.resolve())] - - print(f"running cmd:\n{' '.join(cmd)}") - subprocess.run(cmd, check=True) - - -def compute_error_rate(hyp_wrd_path, ref_wrd_path, unit="word"): - """each line is " (None-)" """ - tokenize_line = { - "word": lambda x: re.sub(r" \(.*\)$", "", x.rstrip()).split(), - "char": lambda x: list(re.sub(r" \(.*\)$", "", x.rstrip())) - }.get(unit) - if tokenize_line is None: - raise ValueError(f"{unit} not supported") - - inds = [int(re.sub(r"\D*(\d*)\D*", r"\1", line)) - for line in open(hyp_wrd_path)] - hyps = [tokenize_line(line) for line in open(hyp_wrd_path)] - refs = [tokenize_line(line) for line in open(ref_wrd_path)] - assert(len(hyps) == len(refs)) - err_rates = [ - editdistance.eval(hyp, ref) / len(ref) for hyp, ref in zip(hyps, refs) - ] - ind_to_err_rates = {i: e for i, e in zip(inds, err_rates)} - return ind_to_err_rates - - -def main(args): - samples = load_tsv_to_dicts(args.raw_manifest) - ids = [ - sample[args.id_header] if args.id_header else "" for sample in samples - ] - audio_paths = [sample[args.audio_header] for sample in samples] - texts = [sample[args.text_header] for sample in samples] - - prepare_w2v_data( - args.w2v_dict_dir, - args.w2v_sample_rate, - args.w2v_label, - audio_paths, - texts, - args.split, - args.asr_dir - ) - run_asr(args.asr_dir, args.split, args.w2v_ckpt, args.w2v_label, args.asr_dir) - ind_to_err_rates = compute_error_rate( - args.asr_dir / f"hypo.word-{args.w2v_ckpt.name}-{args.split}.txt", - args.asr_dir / f"ref.word-{args.w2v_ckpt.name}-{args.split}.txt", - args.err_unit, - ) - - uer_path = args.asr_dir / f"uer_{args.err_unit}.{args.split}.tsv" - with open(uer_path, "w") as f: - f.write("id\taudio\tuer\n") - for ind, (id_, audio_path) in enumerate(zip(ids, audio_paths)): - f.write(f"{id_}\t{audio_path}\t{ind_to_err_rates[ind]:.4f}\n") - - -if __name__ == "__main__": - parser = argparse.ArgumentParser() - parser.add_argument("--raw-manifest", required=True, type=Path) - parser.add_argument("--asr-dir", required=True, type=Path) - parser.add_argument("--id-header", default="id", type=str) - parser.add_argument("--audio-header", default="audio", type=str) - parser.add_argument("--text-header", default="src_text", type=str) - parser.add_argument("--split", default="raw", type=str) - parser.add_argument("--w2v-ckpt", required=True, type=Path) - parser.add_argument("--w2v-dict-dir", required=True, type=Path) - parser.add_argument("--w2v-sample-rate", default=16000, type=int) - parser.add_argument("--w2v-label", default="ltr", type=str) - parser.add_argument("--err-unit", default="word", type=str) - args = parser.parse_args() - - main(args) diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/examples/multilingual/data_scripts/utils/dedup.py b/spaces/OFA-Sys/OFA-vqa/fairseq/examples/multilingual/data_scripts/utils/dedup.py deleted file mode 100644 index d6fed8c695cf218d3502d6ed8d23015520c0e179..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-vqa/fairseq/examples/multilingual/data_scripts/utils/dedup.py +++ /dev/null @@ -1,41 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - - -import argparse - -def deup(src_file, tgt_file, src_file_out, tgt_file_out): - seen = set() - dup_count = 0 - with open(src_file, encoding='utf-8') as fsrc, \ - open(tgt_file, encoding='utf-8') as ftgt, \ - open(src_file_out, 'w', encoding='utf-8') as fsrc_out, \ - open(tgt_file_out, 'w', encoding='utf-8') as ftgt_out: - for s, t in zip(fsrc, ftgt): - if (s, t) not in seen: - fsrc_out.write(s) - ftgt_out.write(t) - seen.add((s, t)) - else: - dup_count += 1 - print(f'number of duplication: {dup_count}') - - -def main(): - parser = argparse.ArgumentParser() - parser.add_argument("--src-file", type=str, required=True, - help="src file") - parser.add_argument("--tgt-file", type=str, required=True, - help="tgt file") - parser.add_argument("--src-file-out", type=str, required=True, - help="src ouptut file") - parser.add_argument("--tgt-file-out", type=str, required=True, - help="tgt ouput file") - args = parser.parse_args() - deup(args.src_file, args.tgt_file, args.src_file_out, args.tgt_file_out) - - -if __name__ == "__main__": - main() diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/scripts/sacrebleu.sh b/spaces/OFA-Sys/OFA-vqa/fairseq/scripts/sacrebleu.sh deleted file mode 100644 index c10bf2b76ea032deabab6f5c9d8a3e1e884f1642..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-vqa/fairseq/scripts/sacrebleu.sh +++ /dev/null @@ -1,27 +0,0 @@ -#!/bin/bash - -if [ $# -ne 4 ]; then - echo "usage: $0 TESTSET SRCLANG TGTLANG GEN" - exit 1 -fi - -TESTSET=$1 -SRCLANG=$2 -TGTLANG=$3 - -GEN=$4 - -if ! command -v sacremoses &> /dev/null -then - echo "sacremoses could not be found, please install with: pip install sacremoses" - exit -fi - -grep ^H $GEN \ -| sed 's/^H\-//' \ -| sort -n -k 1 \ -| cut -f 3 \ -| sacremoses detokenize \ -> $GEN.sorted.detok - -sacrebleu --test-set $TESTSET --language-pair "${SRCLANG}-${TGTLANG}" < $GEN.sorted.detok diff --git a/spaces/Oumar199/Fake-Real-Face-Detection/fake_face_detection/utils/display_pil.py b/spaces/Oumar199/Fake-Real-Face-Detection/fake_face_detection/utils/display_pil.py deleted file mode 100644 index a29b76f8c7502fb6b866f39ad4f9b13a9328f3c8..0000000000000000000000000000000000000000 --- a/spaces/Oumar199/Fake-Real-Face-Detection/fake_face_detection/utils/display_pil.py +++ /dev/null @@ -1,43 +0,0 @@ -from PIL.JpegImagePlugin import JpegImageFile -from PIL import ImageDraw -from PIL import Image -from typing import * - -def display(images: List[JpegImageFile], labels: List[str], w: int = 300, h: int = 200, left_color: str = "white", right_color: str = "white"): - """Display a dual image - - Args: - images (List[JpegImageFile]): A list containing two images - labels (List[str]): The labels of the images - w (int, optional): The width. Defaults to 300. - h (int, optional): The height. Defaults to 200. - left_color (str, optional): The color of left label. Defaults to "white". - right_color (str, optional): The color of the right label. Defaults to "white". - - Returns: - PIL.Image: A pillow image - """ - - # define a grid - grid = Image.new('RGB', size=(w, h)) - - # draw the grid - draw = ImageDraw.Draw(grid, mode='RGB') - - # define the second box - box = (w // 2, 0) - - # define the size of the images - size = (w // 2, h) - - # add images to the grid - grid.paste(images[0].resize(size)) - - grid.paste(images[1].resize(size), box = box) - - # draw labels - draw.text((0, 0), labels[0], fill=left_color) - - draw.text(box, labels[1], fill=right_color) - - return grid diff --git a/spaces/PAIR/Text2Video-Zero/annotator/uniformer/mmseg/core/seg/sampler/__init__.py b/spaces/PAIR/Text2Video-Zero/annotator/uniformer/mmseg/core/seg/sampler/__init__.py deleted file mode 100644 index 332b242c03d1c5e80d4577df442a9a037b1816e1..0000000000000000000000000000000000000000 --- a/spaces/PAIR/Text2Video-Zero/annotator/uniformer/mmseg/core/seg/sampler/__init__.py +++ /dev/null @@ -1,4 +0,0 @@ -from .base_pixel_sampler import BasePixelSampler -from .ohem_pixel_sampler import OHEMPixelSampler - -__all__ = ['BasePixelSampler', 'OHEMPixelSampler'] diff --git a/spaces/Phips/upscale_demo/README.md b/spaces/Phips/upscale_demo/README.md deleted file mode 100644 index 2e5f09960b7d4c8d04820b9047a0018d029216d7..0000000000000000000000000000000000000000 --- a/spaces/Phips/upscale_demo/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Upscale Demo -emoji: 🔥 -colorFrom: indigo -colorTo: red -sdk: gradio -sdk_version: 3.29.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/RachAmm/Wav2vec-vs-Whisper/README.md b/spaces/RachAmm/Wav2vec-vs-Whisper/README.md deleted file mode 100644 index 2ba00d61e802eba339909c7ab38e3c9355842054..0000000000000000000000000000000000000000 --- a/spaces/RachAmm/Wav2vec-vs-Whisper/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Wav2vec Vs Whisper -emoji: 💻 -colorFrom: blue -colorTo: yellow -sdk: gradio -sdk_version: 3.4 -app_file: app.py -pinned: false -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Rakot2223/faster-whisper-webui/src/vad.py b/spaces/Rakot2223/faster-whisper-webui/src/vad.py deleted file mode 100644 index 9b5ae606a9efdcc34dada47d0613bb8194d2f269..0000000000000000000000000000000000000000 --- a/spaces/Rakot2223/faster-whisper-webui/src/vad.py +++ /dev/null @@ -1,560 +0,0 @@ -from abc import ABC, abstractmethod -from collections import Counter, deque -import time - -from typing import Any, Deque, Iterator, List, Dict - -from pprint import pprint -from src.hooks.progressListener import ProgressListener -from src.hooks.subTaskProgressListener import SubTaskProgressListener -from src.hooks.whisperProgressHook import create_progress_listener_handle -from src.modelCache import GLOBAL_MODEL_CACHE, ModelCache - -from src.segments import merge_timestamps -from src.whisper.abstractWhisperContainer import AbstractWhisperCallback - -# Workaround for https://github.com/tensorflow/tensorflow/issues/48797 -try: - import tensorflow as tf -except ModuleNotFoundError: - # Error handling - pass - -import torch - -import ffmpeg -import numpy as np - -from src.utils import format_timestamp -from enum import Enum - -class NonSpeechStrategy(Enum): - """ - Ignore non-speech frames segments. - """ - SKIP = 1 - """ - Just treat non-speech segments as speech. - """ - CREATE_SEGMENT = 2 - """ - Expand speech segments into subsequent non-speech segments. - """ - EXPAND_SEGMENT = 3 - -# Defaults for Silero -SPEECH_TRESHOLD = 0.3 - -# Minimum size of segments to process -MIN_SEGMENT_DURATION = 1 - -# The maximum time for texts from old segments to be used in the next segment -MAX_PROMPT_WINDOW = 0 # seconds (0 = disabled) -PROMPT_NO_SPEECH_PROB = 0.1 # Do not pass the text from segments with a no speech probability higher than this - -VAD_MAX_PROCESSING_CHUNK = 60 * 60 # 60 minutes of audio - -class TranscriptionConfig(ABC): - def __init__(self, non_speech_strategy: NonSpeechStrategy = NonSpeechStrategy.SKIP, - segment_padding_left: float = None, segment_padding_right = None, max_silent_period: float = None, - max_merge_size: float = None, max_prompt_window: float = None, initial_segment_index = -1): - self.non_speech_strategy = non_speech_strategy - self.segment_padding_left = segment_padding_left - self.segment_padding_right = segment_padding_right - self.max_silent_period = max_silent_period - self.max_merge_size = max_merge_size - self.max_prompt_window = max_prompt_window - self.initial_segment_index = initial_segment_index - -class PeriodicTranscriptionConfig(TranscriptionConfig): - def __init__(self, periodic_duration: float, non_speech_strategy: NonSpeechStrategy = NonSpeechStrategy.SKIP, - segment_padding_left: float = None, segment_padding_right = None, max_silent_period: float = None, - max_merge_size: float = None, max_prompt_window: float = None, initial_segment_index = -1): - super().__init__(non_speech_strategy, segment_padding_left, segment_padding_right, max_silent_period, max_merge_size, max_prompt_window, initial_segment_index) - self.periodic_duration = periodic_duration - -class AbstractTranscription(ABC): - def __init__(self, sampling_rate: int = 16000): - self.sampling_rate = sampling_rate - - def get_audio_segment(self, str, start_time: str = None, duration: str = None): - return load_audio(str, self.sampling_rate, start_time, duration) - - def is_transcribe_timestamps_fast(self): - """ - Determine if get_transcribe_timestamps is fast enough to not need parallelization. - """ - return False - - @abstractmethod - def get_transcribe_timestamps(self, audio: str, config: TranscriptionConfig, start_time: float, end_time: float): - """ - Get the start and end timestamps of the sections that should be transcribed by this VAD method. - - Parameters - ---------- - audio: str - The audio file. - config: TranscriptionConfig - The transcription configuration. - - Returns - ------- - A list of start and end timestamps, in fractional seconds. - """ - return - - def get_merged_timestamps(self, timestamps: List[Dict[str, Any]], config: TranscriptionConfig, total_duration: float): - """ - Get the start and end timestamps of the sections that should be transcribed by this VAD method, - after merging the given segments using the specified configuration. - - Parameters - ---------- - audio: str - The audio file. - config: TranscriptionConfig - The transcription configuration. - - Returns - ------- - A list of start and end timestamps, in fractional seconds. - """ - merged = merge_timestamps(timestamps, config.max_silent_period, config.max_merge_size, - config.segment_padding_left, config.segment_padding_right) - - if config.non_speech_strategy != NonSpeechStrategy.SKIP: - # Expand segments to include the gaps between them - if (config.non_speech_strategy == NonSpeechStrategy.CREATE_SEGMENT): - # When we have a prompt window, we create speech segments betwen each segment if we exceed the merge size - merged = self.fill_gaps(merged, total_duration=total_duration, max_expand_size=config.max_merge_size) - elif config.non_speech_strategy == NonSpeechStrategy.EXPAND_SEGMENT: - # With no prompt window, it is better to just expand the segments (this effectively passes the prompt to the next segment) - merged = self.expand_gaps(merged, total_duration=total_duration) - else: - raise Exception("Unknown non-speech strategy: " + str(config.non_speech_strategy)) - - print("Transcribing non-speech:") - pprint(merged) - return merged - - def transcribe(self, audio: str, whisperCallable: AbstractWhisperCallback, config: TranscriptionConfig, - progressListener: ProgressListener = None): - """ - Transcribe the given audo file. - - Parameters - ---------- - audio: str - The audio file. - whisperCallable: WhisperCallback - A callback object to call to transcribe each segment. - - Returns - ------- - A list of start and end timestamps, in fractional seconds. - """ - - try: - max_audio_duration = self.get_audio_duration(audio, config) - timestamp_segments = self.get_transcribe_timestamps(audio, config, 0, max_audio_duration) - - # Get speech timestamps from full audio file - merged = self.get_merged_timestamps(timestamp_segments, config, max_audio_duration) - - # A deque of transcribed segments that is passed to the next segment as a prompt - prompt_window = deque() - - print("Processing timestamps:") - pprint(merged) - - result = { - 'text': "", - 'segments': [], - 'language': "" - } - languageCounter = Counter() - detected_language = None - - segment_index = config.initial_segment_index - - # Calculate progress - progress_start_offset = merged[0]['start'] if len(merged) > 0 else 0 - progress_total_duration = sum([segment['end'] - segment['start'] for segment in merged]) - - # For each time segment, run whisper - for segment in merged: - segment_index += 1 - segment_start = segment['start'] - segment_end = segment['end'] - segment_expand_amount = segment.get('expand_amount', 0) - segment_gap = segment.get('gap', False) - - segment_duration = segment_end - segment_start - - if segment_duration < MIN_SEGMENT_DURATION: - continue - - # Audio to run on Whisper - segment_audio = self.get_audio_segment(audio, start_time = str(segment_start), duration = str(segment_duration)) - # Previous segments to use as a prompt - segment_prompt = ' '.join([segment['text'] for segment in prompt_window]) if len(prompt_window) > 0 else None - - # Detected language - detected_language = languageCounter.most_common(1)[0][0] if len(languageCounter) > 0 else None - - print("Running whisper from ", format_timestamp(segment_start), " to ", format_timestamp(segment_end), ", duration: ", - segment_duration, "expanded: ", segment_expand_amount, "prompt: ", segment_prompt, "language: ", detected_language) - - perf_start_time = time.perf_counter() - - scaled_progress_listener = SubTaskProgressListener(progressListener, base_task_total=progress_total_duration, - sub_task_start=segment_start - progress_start_offset, sub_task_total=segment_duration) - segment_result = whisperCallable.invoke(segment_audio, segment_index, segment_prompt, detected_language, progress_listener=scaled_progress_listener) - - perf_end_time = time.perf_counter() - print("Whisper took {} seconds".format(perf_end_time - perf_start_time)) - - adjusted_segments = self.adjust_timestamp(segment_result["segments"], adjust_seconds=segment_start, max_source_time=segment_duration) - - # Propagate expand amount to the segments - if (segment_expand_amount > 0): - segment_without_expansion = segment_duration - segment_expand_amount - - for adjusted_segment in adjusted_segments: - adjusted_segment_end = adjusted_segment['end'] - - # Add expand amount if the segment got expanded - if (adjusted_segment_end > segment_without_expansion): - adjusted_segment["expand_amount"] = adjusted_segment_end - segment_without_expansion - - # Append to output - result['text'] += segment_result['text'] - result['segments'].extend(adjusted_segments) - - # Increment detected language - if not segment_gap: - languageCounter[segment_result['language']] += 1 - - # Update prompt window - self.__update_prompt_window(prompt_window, adjusted_segments, segment_end, segment_gap, config) - - if detected_language is not None: - result['language'] = detected_language - finally: - # Notify progress listener that we are done - if progressListener is not None: - progressListener.on_finished() - return result - - def get_audio_duration(self, audio: str, config: TranscriptionConfig): - return get_audio_duration(audio) - - def __update_prompt_window(self, prompt_window: Deque, adjusted_segments: List, segment_end: float, segment_gap: bool, config: TranscriptionConfig): - if (config.max_prompt_window is not None and config.max_prompt_window > 0): - # Add segments to the current prompt window (unless it is a speech gap) - if not segment_gap: - for segment in adjusted_segments: - if segment.get('no_speech_prob', 0) <= PROMPT_NO_SPEECH_PROB: - prompt_window.append(segment) - - while (len(prompt_window) > 0): - first_end_time = prompt_window[0].get('end', 0) - # Time expanded in the segments should be discounted from the prompt window - first_expand_time = prompt_window[0].get('expand_amount', 0) - - if (first_end_time - first_expand_time < segment_end - config.max_prompt_window): - prompt_window.popleft() - else: - break - - def include_gaps(self, segments: Iterator[dict], min_gap_length: float, total_duration: float): - result = [] - last_end_time = 0 - - for segment in segments: - segment_start = float(segment['start']) - segment_end = float(segment['end']) - - if (last_end_time != segment_start): - delta = segment_start - last_end_time - - if (min_gap_length is None or delta >= min_gap_length): - result.append( { 'start': last_end_time, 'end': segment_start, 'gap': True } ) - - last_end_time = segment_end - result.append(segment) - - # Also include total duration if specified - if (total_duration is not None and last_end_time < total_duration): - delta = total_duration - segment_start - - if (min_gap_length is None or delta >= min_gap_length): - result.append( { 'start': last_end_time, 'end': total_duration, 'gap': True } ) - - return result - - # Expand the end time of each segment to the start of the next segment - def expand_gaps(self, segments: List[Dict[str, Any]], total_duration: float): - result = [] - - if len(segments) == 0: - return result - - # Add gap at the beginning if needed - if (segments[0]['start'] > 0): - result.append({ 'start': 0, 'end': segments[0]['start'], 'gap': True } ) - - for i in range(len(segments) - 1): - current_segment = segments[i] - next_segment = segments[i + 1] - - delta = next_segment['start'] - current_segment['end'] - - # Expand if the gap actually exists - if (delta >= 0): - current_segment = current_segment.copy() - current_segment['expand_amount'] = delta - current_segment['end'] = next_segment['start'] - - result.append(current_segment) - - # Add last segment - last_segment = segments[-1] - result.append(last_segment) - - # Also include total duration if specified - if (total_duration is not None): - last_segment = result[-1] - - if (last_segment['end'] < total_duration): - last_segment = last_segment.copy() - last_segment['end'] = total_duration - result[-1] = last_segment - - return result - - def fill_gaps(self, segments: List[Dict[str, Any]], total_duration: float, max_expand_size: float = None): - result = [] - - if len(segments) == 0: - return result - - # Add gap at the beginning if needed - if (segments[0]['start'] > 0): - result.append({ 'start': 0, 'end': segments[0]['start'], 'gap': True } ) - - for i in range(len(segments) - 1): - expanded = False - current_segment = segments[i] - next_segment = segments[i + 1] - - delta = next_segment['start'] - current_segment['end'] - - if (max_expand_size is not None and delta <= max_expand_size): - # Just expand the current segment - current_segment = current_segment.copy() - current_segment['expand_amount'] = delta - current_segment['end'] = next_segment['start'] - expanded = True - - result.append(current_segment) - - # Add a gap to the next segment if needed - if (delta >= 0 and not expanded): - result.append({ 'start': current_segment['end'], 'end': next_segment['start'], 'gap': True } ) - - # Add last segment - last_segment = segments[-1] - result.append(last_segment) - - # Also include total duration if specified - if (total_duration is not None): - last_segment = result[-1] - - delta = total_duration - last_segment['end'] - - if (delta > 0): - if (max_expand_size is not None and delta <= max_expand_size): - # Expand the last segment - last_segment = last_segment.copy() - last_segment['expand_amount'] = delta - last_segment['end'] = total_duration - result[-1] = last_segment - else: - result.append({ 'start': last_segment['end'], 'end': total_duration, 'gap': True } ) - - return result - - def adjust_timestamp(self, segments: Iterator[dict], adjust_seconds: float, max_source_time: float = None): - result = [] - - for segment in segments: - segment_start = float(segment['start']) - segment_end = float(segment['end']) - - # Filter segments? - if (max_source_time is not None): - if (segment_start > max_source_time): - continue - segment_end = min(max_source_time, segment_end) - - new_segment = segment.copy() - - # Add to start and end - new_segment['start'] = segment_start + adjust_seconds - new_segment['end'] = segment_end + adjust_seconds - result.append(new_segment) - return result - - def multiply_timestamps(self, timestamps: List[Dict[str, Any]], factor: float): - result = [] - - for entry in timestamps: - start = entry['start'] - end = entry['end'] - - result.append({ - 'start': start * factor, - 'end': end * factor - }) - return result - - -class VadSileroTranscription(AbstractTranscription): - def __init__(self, sampling_rate: int = 16000, cache: ModelCache = None): - super().__init__(sampling_rate=sampling_rate) - self.model = None - self.cache = cache - self._initialize_model() - - def _initialize_model(self): - if (self.cache is not None): - model_key = "VadSileroTranscription" - self.model, self.get_speech_timestamps = self.cache.get(model_key, self._create_model) - print("Loaded Silerio model from cache.") - else: - self.model, self.get_speech_timestamps = self._create_model() - print("Created Silerio model") - - def _create_model(self): - model, utils = torch.hub.load(repo_or_dir='snakers4/silero-vad', model='silero_vad') - - # Silero does not benefit from multi-threading - torch.set_num_threads(1) # JIT - (get_speech_timestamps, _, _, _, _) = utils - - return model, get_speech_timestamps - - def get_transcribe_timestamps(self, audio: str, config: TranscriptionConfig, start_time: float, end_time: float): - result = [] - - print("Getting timestamps from audio file: {}, start: {}, duration: {}".format(audio, start_time, end_time)) - perf_start_time = time.perf_counter() - - # Divide procesisng of audio into chunks - chunk_start = start_time - - while (chunk_start < end_time): - chunk_duration = min(end_time - chunk_start, VAD_MAX_PROCESSING_CHUNK) - - print("Processing VAD in chunk from {} to {}".format(format_timestamp(chunk_start), format_timestamp(chunk_start + chunk_duration))) - wav = self.get_audio_segment(audio, str(chunk_start), str(chunk_duration)) - - sample_timestamps = self.get_speech_timestamps(wav, self.model, sampling_rate=self.sampling_rate, threshold=SPEECH_TRESHOLD) - seconds_timestamps = self.multiply_timestamps(sample_timestamps, factor=1 / self.sampling_rate) - adjusted = self.adjust_timestamp(seconds_timestamps, adjust_seconds=chunk_start, max_source_time=chunk_start + chunk_duration) - - #pprint(adjusted) - - result.extend(adjusted) - chunk_start += chunk_duration - - perf_end_time = time.perf_counter() - print("VAD processing took {} seconds".format(perf_end_time - perf_start_time)) - - return result - - def __getstate__(self): - # We only need the sampling rate - return { 'sampling_rate': self.sampling_rate } - - def __setstate__(self, state): - self.sampling_rate = state['sampling_rate'] - self.model = None - # Use the global cache - self.cache = GLOBAL_MODEL_CACHE - self._initialize_model() - -# A very simple VAD that just marks every N seconds as speech -class VadPeriodicTranscription(AbstractTranscription): - def __init__(self, sampling_rate: int = 16000): - super().__init__(sampling_rate=sampling_rate) - - def is_transcribe_timestamps_fast(self): - # This is a very fast VAD - no need to parallelize it - return True - - def get_transcribe_timestamps(self, audio: str, config: PeriodicTranscriptionConfig, start_time: float, end_time: float): - result = [] - - # Generate a timestamp every N seconds - start_timestamp = start_time - - while (start_timestamp < end_time): - end_timestamp = min(start_timestamp + config.periodic_duration, end_time) - segment_duration = end_timestamp - start_timestamp - - # Minimum duration is 1 second - if (segment_duration >= 1): - result.append( { 'start': start_timestamp, 'end': end_timestamp } ) - - start_timestamp = end_timestamp - - return result - -def get_audio_duration(file: str): - return float(ffmpeg.probe(file)["format"]["duration"]) - -def load_audio(file: str, sample_rate: int = 16000, - start_time: str = None, duration: str = None): - """ - Open an audio file and read as mono waveform, resampling as necessary - - Parameters - ---------- - file: str - The audio file to open - - sr: int - The sample rate to resample the audio if necessary - - start_time: str - The start time, using the standard FFMPEG time duration syntax, or None to disable. - - duration: str - The duration, using the standard FFMPEG time duration syntax, or None to disable. - - Returns - ------- - A NumPy array containing the audio waveform, in float32 dtype. - """ - try: - inputArgs = {'threads': 0} - - if (start_time is not None): - inputArgs['ss'] = start_time - if (duration is not None): - inputArgs['t'] = duration - - # This launches a subprocess to decode audio while down-mixing and resampling as necessary. - # Requires the ffmpeg CLI and `ffmpeg-python` package to be installed. - out, _ = ( - ffmpeg.input(file, **inputArgs) - .output("-", format="s16le", acodec="pcm_s16le", ac=1, ar=sample_rate) - .run(cmd="ffmpeg", capture_stdout=True, capture_stderr=True) - ) - except ffmpeg.Error as e: - raise RuntimeError(f"Failed to load audio: {e.stderr.decode()}") - - return np.frombuffer(out, np.int16).flatten().astype(np.float32) / 32768.0 \ No newline at end of file diff --git a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/setuptools/config/_validate_pyproject/__init__.py b/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/setuptools/config/_validate_pyproject/__init__.py deleted file mode 100644 index dbe6cb4ca471f146b431d2fbb558d47317a103f0..0000000000000000000000000000000000000000 --- a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/setuptools/config/_validate_pyproject/__init__.py +++ /dev/null @@ -1,34 +0,0 @@ -from functools import reduce -from typing import Any, Callable, Dict - -from . import formats -from .error_reporting import detailed_errors, ValidationError -from .extra_validations import EXTRA_VALIDATIONS -from .fastjsonschema_exceptions import JsonSchemaException, JsonSchemaValueException -from .fastjsonschema_validations import validate as _validate - -__all__ = [ - "validate", - "FORMAT_FUNCTIONS", - "EXTRA_VALIDATIONS", - "ValidationError", - "JsonSchemaException", - "JsonSchemaValueException", -] - - -FORMAT_FUNCTIONS: Dict[str, Callable[[str], bool]] = { - fn.__name__.replace("_", "-"): fn - for fn in formats.__dict__.values() - if callable(fn) and not fn.__name__.startswith("_") -} - - -def validate(data: Any) -> bool: - """Validate the given ``data`` object using JSON Schema - This function raises ``ValidationError`` if ``data`` is invalid. - """ - with detailed_errors(): - _validate(data, custom_formats=FORMAT_FUNCTIONS) - reduce(lambda acc, fn: fn(acc), EXTRA_VALIDATIONS, data) - return True diff --git a/spaces/Rbrq/DeticChatGPT/tools/dump_clip_features.py b/spaces/Rbrq/DeticChatGPT/tools/dump_clip_features.py deleted file mode 100644 index 127f8c2a86c2425611c8ec075006664f5e07df45..0000000000000000000000000000000000000000 --- a/spaces/Rbrq/DeticChatGPT/tools/dump_clip_features.py +++ /dev/null @@ -1,116 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -import argparse -import json -import torch -import numpy as np -import itertools -from nltk.corpus import wordnet -import sys - -if __name__ == '__main__': - parser = argparse.ArgumentParser() - parser.add_argument('--ann', default='datasets/lvis/lvis_v1_val.json') - parser.add_argument('--out_path', default='') - parser.add_argument('--prompt', default='a') - parser.add_argument('--model', default='clip') - parser.add_argument('--clip_model', default="ViT-B/32") - parser.add_argument('--fix_space', action='store_true') - parser.add_argument('--use_underscore', action='store_true') - parser.add_argument('--avg_synonyms', action='store_true') - parser.add_argument('--use_wn_name', action='store_true') - args = parser.parse_args() - - print('Loading', args.ann) - data = json.load(open(args.ann, 'r')) - cat_names = [x['name'] for x in \ - sorted(data['categories'], key=lambda x: x['id'])] - if 'synonyms' in data['categories'][0]: - if args.use_wn_name: - synonyms = [ - [xx.name() for xx in wordnet.synset(x['synset']).lemmas()] \ - if x['synset'] != 'stop_sign.n.01' else ['stop_sign'] \ - for x in sorted(data['categories'], key=lambda x: x['id'])] - else: - synonyms = [x['synonyms'] for x in \ - sorted(data['categories'], key=lambda x: x['id'])] - else: - synonyms = [] - if args.fix_space: - cat_names = [x.replace('_', ' ') for x in cat_names] - if args.use_underscore: - cat_names = [x.strip().replace('/ ', '/').replace(' ', '_') for x in cat_names] - print('cat_names', cat_names) - device = "cuda" if torch.cuda.is_available() else "cpu" - - if args.prompt == 'a': - sentences = ['a ' + x for x in cat_names] - sentences_synonyms = [['a ' + xx for xx in x] for x in synonyms] - if args.prompt == 'none': - sentences = [x for x in cat_names] - sentences_synonyms = [[xx for xx in x] for x in synonyms] - elif args.prompt == 'photo': - sentences = ['a photo of a {}'.format(x) for x in cat_names] - sentences_synonyms = [['a photo of a {}'.format(xx) for xx in x] \ - for x in synonyms] - elif args.prompt == 'scene': - sentences = ['a photo of a {} in the scene'.format(x) for x in cat_names] - sentences_synonyms = [['a photo of a {} in the scene'.format(xx) for xx in x] \ - for x in synonyms] - - print('sentences_synonyms', len(sentences_synonyms), \ - sum(len(x) for x in sentences_synonyms)) - if args.model == 'clip': - import clip - print('Loading CLIP') - model, preprocess = clip.load(args.clip_model, device=device) - if args.avg_synonyms: - sentences = list(itertools.chain.from_iterable(sentences_synonyms)) - print('flattened_sentences', len(sentences)) - text = clip.tokenize(sentences).to(device) - with torch.no_grad(): - if len(text) > 10000: - text_features = torch.cat([ - model.encode_text(text[:len(text) // 2]), - model.encode_text(text[len(text) // 2:])], - dim=0) - else: - text_features = model.encode_text(text) - print('text_features.shape', text_features.shape) - if args.avg_synonyms: - synonyms_per_cat = [len(x) for x in sentences_synonyms] - text_features = text_features.split(synonyms_per_cat, dim=0) - text_features = [x.mean(dim=0) for x in text_features] - text_features = torch.stack(text_features, dim=0) - print('after stack', text_features.shape) - text_features = text_features.cpu().numpy() - elif args.model in ['bert', 'roberta']: - from transformers import AutoTokenizer, AutoModel - if args.model == 'bert': - model_name = 'bert-large-uncased' - if args.model == 'roberta': - model_name = 'roberta-large' - tokenizer = AutoTokenizer.from_pretrained(model_name) - model = AutoModel.from_pretrained(model_name) - model.eval() - if args.avg_synonyms: - sentences = list(itertools.chain.from_iterable(sentences_synonyms)) - print('flattened_sentences', len(sentences)) - inputs = tokenizer(sentences, padding=True, return_tensors="pt") - with torch.no_grad(): - model_outputs = model(**inputs) - outputs = model_outputs.pooler_output - text_features = outputs.detach().cpu() - if args.avg_synonyms: - synonyms_per_cat = [len(x) for x in sentences_synonyms] - text_features = text_features.split(synonyms_per_cat, dim=0) - text_features = [x.mean(dim=0) for x in text_features] - text_features = torch.stack(text_features, dim=0) - print('after stack', text_features.shape) - text_features = text_features.numpy() - print('text_features.shape', text_features.shape) - else: - assert 0, args.model - if args.out_path != '': - print('saveing to', args.out_path) - np.save(open(args.out_path, 'wb'), text_features) - import pdb; pdb.set_trace() diff --git a/spaces/Realcat/image-matching-webui/third_party/DarkFeat/nets/__init__.py b/spaces/Realcat/image-matching-webui/third_party/DarkFeat/nets/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/Realcat/image-matching-webui/third_party/SOLD2/sold2/model/line_detection.py b/spaces/Realcat/image-matching-webui/third_party/SOLD2/sold2/model/line_detection.py deleted file mode 100644 index 8ff379a8de3ff5d54dc807b397f947ea8f361ef9..0000000000000000000000000000000000000000 --- a/spaces/Realcat/image-matching-webui/third_party/SOLD2/sold2/model/line_detection.py +++ /dev/null @@ -1,572 +0,0 @@ -""" -Implementation of the line segment detection module. -""" -import math -import numpy as np -import torch - - -class LineSegmentDetectionModule(object): - """Module extracting line segments from junctions and line heatmaps.""" - - def __init__( - self, - detect_thresh, - num_samples=64, - sampling_method="local_max", - inlier_thresh=0.0, - heatmap_low_thresh=0.15, - heatmap_high_thresh=0.2, - max_local_patch_radius=3, - lambda_radius=2.0, - use_candidate_suppression=False, - nms_dist_tolerance=3.0, - use_heatmap_refinement=False, - heatmap_refine_cfg=None, - use_junction_refinement=False, - junction_refine_cfg=None, - ): - """ - Parameters: - detect_thresh: The probability threshold for mean activation (0. ~ 1.) - num_samples: Number of sampling locations along the line segments. - sampling_method: Sampling method on locations ("bilinear" or "local_max"). - inlier_thresh: The min inlier ratio to satisfy (0. ~ 1.) => 0. means no threshold. - heatmap_low_thresh: The lowest threshold for the pixel to be considered as candidate in junction recovery. - heatmap_high_thresh: The higher threshold for NMS in junction recovery. - max_local_patch_radius: The max patch to be considered in local maximum search. - lambda_radius: The lambda factor in linear local maximum search formulation - use_candidate_suppression: Apply candidate suppression to break long segments into short sub-segments. - nms_dist_tolerance: The distance tolerance for nms. Decide whether the junctions are on the line. - use_heatmap_refinement: Use heatmap refinement method or not. - heatmap_refine_cfg: The configs for heatmap refinement methods. - use_junction_refinement: Use junction refinement method or not. - junction_refine_cfg: The configs for junction refinement methods. - """ - # Line detection parameters - self.detect_thresh = detect_thresh - - # Line sampling parameters - self.num_samples = num_samples - self.sampling_method = sampling_method - self.inlier_thresh = inlier_thresh - self.local_patch_radius = max_local_patch_radius - self.lambda_radius = lambda_radius - - # Detecting junctions on the boundary parameters - self.low_thresh = heatmap_low_thresh - self.high_thresh = heatmap_high_thresh - - # Pre-compute the linspace sampler - self.sampler = np.linspace(0, 1, self.num_samples) - self.torch_sampler = torch.linspace(0, 1, self.num_samples) - - # Long line segment suppression configuration - self.use_candidate_suppression = use_candidate_suppression - self.nms_dist_tolerance = nms_dist_tolerance - - # Heatmap refinement configuration - self.use_heatmap_refinement = use_heatmap_refinement - self.heatmap_refine_cfg = heatmap_refine_cfg - if self.use_heatmap_refinement and self.heatmap_refine_cfg is None: - raise ValueError("[Error] Missing heatmap refinement config.") - - # Junction refinement configuration - self.use_junction_refinement = use_junction_refinement - self.junction_refine_cfg = junction_refine_cfg - if self.use_junction_refinement and self.junction_refine_cfg is None: - raise ValueError("[Error] Missing junction refinement config.") - - def convert_inputs(self, inputs, device): - """Convert inputs to desired torch tensor.""" - if isinstance(inputs, np.ndarray): - outputs = torch.tensor(inputs, dtype=torch.float32, device=device) - elif isinstance(inputs, torch.Tensor): - outputs = inputs.to(torch.float32).to(device) - else: - raise ValueError( - "[Error] Inputs must either be torch tensor or numpy ndarray." - ) - - return outputs - - def detect(self, junctions, heatmap, device=torch.device("cpu")): - """Main function performing line segment detection.""" - # Convert inputs to torch tensor - junctions = self.convert_inputs(junctions, device=device) - heatmap = self.convert_inputs(heatmap, device=device) - - # Perform the heatmap refinement - if self.use_heatmap_refinement: - if self.heatmap_refine_cfg["mode"] == "global": - heatmap = self.refine_heatmap( - heatmap, - self.heatmap_refine_cfg["ratio"], - self.heatmap_refine_cfg["valid_thresh"], - ) - elif self.heatmap_refine_cfg["mode"] == "local": - heatmap = self.refine_heatmap_local( - heatmap, - self.heatmap_refine_cfg["num_blocks"], - self.heatmap_refine_cfg["overlap_ratio"], - self.heatmap_refine_cfg["ratio"], - self.heatmap_refine_cfg["valid_thresh"], - ) - - # Initialize empty line map - num_junctions = junctions.shape[0] - line_map_pred = torch.zeros( - [num_junctions, num_junctions], device=device, dtype=torch.int32 - ) - - # Stop if there are not enough junctions - if num_junctions < 2: - return line_map_pred, junctions, heatmap - - # Generate the candidate map - candidate_map = torch.triu( - torch.ones( - [num_junctions, num_junctions], device=device, dtype=torch.int32 - ), - diagonal=1, - ) - - # Fetch the image boundary - if len(heatmap.shape) > 2: - H, W, _ = heatmap.shape - else: - H, W = heatmap.shape - - # Optionally perform candidate filtering - if self.use_candidate_suppression: - candidate_map = self.candidate_suppression(junctions, candidate_map) - - # Fetch the candidates - candidate_index_map = torch.where(candidate_map) - candidate_index_map = torch.cat( - [candidate_index_map[0][..., None], candidate_index_map[1][..., None]], - dim=-1, - ) - - # Get the corresponding start and end junctions - candidate_junc_start = junctions[candidate_index_map[:, 0], :] - candidate_junc_end = junctions[candidate_index_map[:, 1], :] - - # Get the sampling locations (N x 64) - sampler = self.torch_sampler.to(device)[None, ...] - cand_samples_h = candidate_junc_start[:, 0:1] * sampler + candidate_junc_end[ - :, 0:1 - ] * (1 - sampler) - cand_samples_w = candidate_junc_start[:, 1:2] * sampler + candidate_junc_end[ - :, 1:2 - ] * (1 - sampler) - - # Clip to image boundary - cand_h = torch.clamp(cand_samples_h, min=0, max=H - 1) - cand_w = torch.clamp(cand_samples_w, min=0, max=W - 1) - - # Local maximum search - if self.sampling_method == "local_max": - # Compute normalized segment lengths - segments_length = torch.sqrt( - torch.sum( - ( - candidate_junc_start.to(torch.float32) - - candidate_junc_end.to(torch.float32) - ) - ** 2, - dim=-1, - ) - ) - normalized_seg_length = segments_length / (((H**2) + (W**2)) ** 0.5) - - # Perform local max search - num_cand = cand_h.shape[0] - group_size = 10000 - if num_cand > group_size: - num_iter = math.ceil(num_cand / group_size) - sampled_feat_lst = [] - for iter_idx in range(num_iter): - if not iter_idx == num_iter - 1: - cand_h_ = cand_h[ - iter_idx * group_size : (iter_idx + 1) * group_size, : - ] - cand_w_ = cand_w[ - iter_idx * group_size : (iter_idx + 1) * group_size, : - ] - normalized_seg_length_ = normalized_seg_length[ - iter_idx * group_size : (iter_idx + 1) * group_size - ] - else: - cand_h_ = cand_h[iter_idx * group_size :, :] - cand_w_ = cand_w[iter_idx * group_size :, :] - normalized_seg_length_ = normalized_seg_length[ - iter_idx * group_size : - ] - sampled_feat_ = self.detect_local_max( - heatmap, cand_h_, cand_w_, H, W, normalized_seg_length_, device - ) - sampled_feat_lst.append(sampled_feat_) - sampled_feat = torch.cat(sampled_feat_lst, dim=0) - else: - sampled_feat = self.detect_local_max( - heatmap, cand_h, cand_w, H, W, normalized_seg_length, device - ) - # Bilinear sampling - elif self.sampling_method == "bilinear": - # Perform bilinear sampling - sampled_feat = self.detect_bilinear(heatmap, cand_h, cand_w, H, W, device) - else: - raise ValueError("[Error] Unknown sampling method.") - - # [Simple threshold detection] - # detection_results is a mask over all candidates - detection_results = torch.mean(sampled_feat, dim=-1) > self.detect_thresh - - # [Inlier threshold detection] - if self.inlier_thresh > 0.0: - inlier_ratio = ( - torch.sum(sampled_feat > self.detect_thresh, dim=-1).to(torch.float32) - / self.num_samples - ) - detection_results_inlier = inlier_ratio >= self.inlier_thresh - detection_results = detection_results * detection_results_inlier - - # Convert detection results back to line_map_pred - detected_junc_indexes = candidate_index_map[detection_results, :] - line_map_pred[detected_junc_indexes[:, 0], detected_junc_indexes[:, 1]] = 1 - line_map_pred[detected_junc_indexes[:, 1], detected_junc_indexes[:, 0]] = 1 - - # Perform junction refinement - if self.use_junction_refinement and len(detected_junc_indexes) > 0: - junctions, line_map_pred = self.refine_junction_perturb( - junctions, line_map_pred, heatmap, H, W, device - ) - - return line_map_pred, junctions, heatmap - - def refine_heatmap(self, heatmap, ratio=0.2, valid_thresh=1e-2): - """Global heatmap refinement method.""" - # Grab the top 10% values - heatmap_values = heatmap[heatmap > valid_thresh] - sorted_values = torch.sort(heatmap_values, descending=True)[0] - top10_len = math.ceil(sorted_values.shape[0] * ratio) - max20 = torch.mean(sorted_values[:top10_len]) - heatmap = torch.clamp(heatmap / max20, min=0.0, max=1.0) - return heatmap - - def refine_heatmap_local( - self, heatmap, num_blocks=5, overlap_ratio=0.5, ratio=0.2, valid_thresh=2e-3 - ): - """Local heatmap refinement method.""" - # Get the shape of the heatmap - H, W = heatmap.shape - increase_ratio = 1 - overlap_ratio - h_block = round(H / (1 + (num_blocks - 1) * increase_ratio)) - w_block = round(W / (1 + (num_blocks - 1) * increase_ratio)) - - count_map = torch.zeros(heatmap.shape, dtype=torch.int, device=heatmap.device) - heatmap_output = torch.zeros( - heatmap.shape, dtype=torch.float, device=heatmap.device - ) - # Iterate through each block - for h_idx in range(num_blocks): - for w_idx in range(num_blocks): - # Fetch the heatmap - h_start = round(h_idx * h_block * increase_ratio) - w_start = round(w_idx * w_block * increase_ratio) - h_end = h_start + h_block if h_idx < num_blocks - 1 else H - w_end = w_start + w_block if w_idx < num_blocks - 1 else W - - subheatmap = heatmap[h_start:h_end, w_start:w_end] - if subheatmap.max() > valid_thresh: - subheatmap = self.refine_heatmap( - subheatmap, ratio, valid_thresh=valid_thresh - ) - - # Aggregate it to the final heatmap - heatmap_output[h_start:h_end, w_start:w_end] += subheatmap - count_map[h_start:h_end, w_start:w_end] += 1 - heatmap_output = torch.clamp(heatmap_output / count_map, max=1.0, min=0.0) - - return heatmap_output - - def candidate_suppression(self, junctions, candidate_map): - """Suppress overlapping long lines in the candidate segments.""" - # Define the distance tolerance - dist_tolerance = self.nms_dist_tolerance - - # Compute distance between junction pairs - # (num_junc x 1 x 2) - (1 x num_junc x 2) => num_junc x num_junc map - line_dist_map = ( - torch.sum( - (torch.unsqueeze(junctions, dim=1) - junctions[None, ...]) ** 2, dim=-1 - ) - ** 0.5 - ) - - # Fetch all the "detected lines" - seg_indexes = torch.where(torch.triu(candidate_map, diagonal=1)) - start_point_idxs = seg_indexes[0] - end_point_idxs = seg_indexes[1] - start_points = junctions[start_point_idxs, :] - end_points = junctions[end_point_idxs, :] - - # Fetch corresponding entries - line_dists = line_dist_map[start_point_idxs, end_point_idxs] - - # Check whether they are on the line - dir_vecs = (end_points - start_points) / torch.norm( - end_points - start_points, dim=-1 - )[..., None] - # Get the orthogonal distance - cand_vecs = junctions[None, ...] - start_points.unsqueeze(dim=1) - cand_vecs_norm = torch.norm(cand_vecs, dim=-1) - # Check whether they are projected directly onto the segment - proj = ( - torch.einsum("bij,bjk->bik", cand_vecs, dir_vecs[..., None]) - / line_dists[..., None, None] - ) - # proj is num_segs x num_junction x 1 - proj_mask = (proj >= 0) * (proj <= 1) - cand_angles = torch.acos( - torch.einsum("bij,bjk->bik", cand_vecs, dir_vecs[..., None]) - / cand_vecs_norm[..., None] - ) - cand_dists = cand_vecs_norm[..., None] * torch.sin(cand_angles) - junc_dist_mask = cand_dists <= dist_tolerance - junc_mask = junc_dist_mask * proj_mask - - # Minus starting points - num_segs = start_point_idxs.shape[0] - junc_counts = torch.sum(junc_mask, dim=[1, 2]) - junc_counts -= junc_mask[..., 0][ - torch.arange(0, num_segs), start_point_idxs - ].to(torch.int) - junc_counts -= junc_mask[..., 0][torch.arange(0, num_segs), end_point_idxs].to( - torch.int - ) - - # Get the invalid candidate mask - final_mask = junc_counts > 0 - candidate_map[start_point_idxs[final_mask], end_point_idxs[final_mask]] = 0 - - return candidate_map - - def refine_junction_perturb(self, junctions, line_map_pred, heatmap, H, W, device): - """Refine the line endpoints in a similar way as in LSD.""" - # Get the config - junction_refine_cfg = self.junction_refine_cfg - - # Fetch refinement parameters - num_perturbs = junction_refine_cfg["num_perturbs"] - perturb_interval = junction_refine_cfg["perturb_interval"] - side_perturbs = (num_perturbs - 1) // 2 - # Fetch the 2D perturb mat - perturb_vec = torch.arange( - start=-perturb_interval * side_perturbs, - end=perturb_interval * (side_perturbs + 1), - step=perturb_interval, - device=device, - ) - w1_grid, h1_grid, w2_grid, h2_grid = torch.meshgrid( - perturb_vec, perturb_vec, perturb_vec, perturb_vec - ) - perturb_tensor = torch.cat( - [ - w1_grid[..., None], - h1_grid[..., None], - w2_grid[..., None], - h2_grid[..., None], - ], - dim=-1, - ) - perturb_tensor_flat = perturb_tensor.view(-1, 2, 2) - - # Fetch the junctions and line_map - junctions = junctions.clone() - line_map = line_map_pred - - # Fetch all the detected lines - detected_seg_indexes = torch.where(torch.triu(line_map, diagonal=1)) - start_point_idxs = detected_seg_indexes[0] - end_point_idxs = detected_seg_indexes[1] - start_points = junctions[start_point_idxs, :] - end_points = junctions[end_point_idxs, :] - - line_segments = torch.cat( - [start_points.unsqueeze(dim=1), end_points.unsqueeze(dim=1)], dim=1 - ) - - line_segment_candidates = ( - line_segments.unsqueeze(dim=1) + perturb_tensor_flat[None, ...] - ) - # Clip the boundaries - line_segment_candidates[..., 0] = torch.clamp( - line_segment_candidates[..., 0], min=0, max=H - 1 - ) - line_segment_candidates[..., 1] = torch.clamp( - line_segment_candidates[..., 1], min=0, max=W - 1 - ) - - # Iterate through all the segments - refined_segment_lst = [] - num_segments = line_segments.shape[0] - for idx in range(num_segments): - segment = line_segment_candidates[idx, ...] - # Get the corresponding start and end junctions - candidate_junc_start = segment[:, 0, :] - candidate_junc_end = segment[:, 1, :] - - # Get the sampling locations (N x 64) - sampler = self.torch_sampler.to(device)[None, ...] - cand_samples_h = candidate_junc_start[ - :, 0:1 - ] * sampler + candidate_junc_end[:, 0:1] * (1 - sampler) - cand_samples_w = candidate_junc_start[ - :, 1:2 - ] * sampler + candidate_junc_end[:, 1:2] * (1 - sampler) - - # Clip to image boundary - cand_h = torch.clamp(cand_samples_h, min=0, max=H - 1) - cand_w = torch.clamp(cand_samples_w, min=0, max=W - 1) - - # Perform bilinear sampling - segment_feat = self.detect_bilinear(heatmap, cand_h, cand_w, H, W, device) - segment_results = torch.mean(segment_feat, dim=-1) - max_idx = torch.argmax(segment_results) - refined_segment_lst.append(segment[max_idx, ...][None, ...]) - - # Concatenate back to segments - refined_segments = torch.cat(refined_segment_lst, dim=0) - - # Convert back to junctions and line_map - junctions_new = torch.cat( - [refined_segments[:, 0, :], refined_segments[:, 1, :]], dim=0 - ) - junctions_new = torch.unique(junctions_new, dim=0) - line_map_new = self.segments_to_line_map(junctions_new, refined_segments) - - return junctions_new, line_map_new - - def segments_to_line_map(self, junctions, segments): - """Convert the list of segments to line map.""" - # Create empty line map - device = junctions.device - num_junctions = junctions.shape[0] - line_map = torch.zeros([num_junctions, num_junctions], device=device) - - # Iterate through every segment - for idx in range(segments.shape[0]): - # Get the junctions from a single segement - seg = segments[idx, ...] - junction1 = seg[0, :] - junction2 = seg[1, :] - - # Get index - idx_junction1 = torch.where((junctions == junction1).sum(axis=1) == 2)[0] - idx_junction2 = torch.where((junctions == junction2).sum(axis=1) == 2)[0] - - # label the corresponding entries - line_map[idx_junction1, idx_junction2] = 1 - line_map[idx_junction2, idx_junction1] = 1 - - return line_map - - def detect_bilinear(self, heatmap, cand_h, cand_w, H, W, device): - """Detection by bilinear sampling.""" - # Get the floor and ceiling locations - cand_h_floor = torch.floor(cand_h).to(torch.long) - cand_h_ceil = torch.ceil(cand_h).to(torch.long) - cand_w_floor = torch.floor(cand_w).to(torch.long) - cand_w_ceil = torch.ceil(cand_w).to(torch.long) - - # Perform the bilinear sampling - cand_samples_feat = ( - heatmap[cand_h_floor, cand_w_floor] - * (cand_h_ceil - cand_h) - * (cand_w_ceil - cand_w) - + heatmap[cand_h_floor, cand_w_ceil] - * (cand_h_ceil - cand_h) - * (cand_w - cand_w_floor) - + heatmap[cand_h_ceil, cand_w_floor] - * (cand_h - cand_h_floor) - * (cand_w_ceil - cand_w) - + heatmap[cand_h_ceil, cand_w_ceil] - * (cand_h - cand_h_floor) - * (cand_w - cand_w_floor) - ) - - return cand_samples_feat - - def detect_local_max( - self, heatmap, cand_h, cand_w, H, W, normalized_seg_length, device - ): - """Detection by local maximum search.""" - # Compute the distance threshold - dist_thresh = 0.5 * (2**0.5) + self.lambda_radius * normalized_seg_length - # Make it N x 64 - dist_thresh = torch.repeat_interleave( - dist_thresh[..., None], self.num_samples, dim=-1 - ) - - # Compute the candidate points - cand_points = torch.cat([cand_h[..., None], cand_w[..., None]], dim=-1) - cand_points_round = torch.round(cand_points) # N x 64 x 2 - - # Construct local patches 9x9 = 81 - patch_mask = torch.zeros( - [ - int(2 * self.local_patch_radius + 1), - int(2 * self.local_patch_radius + 1), - ], - device=device, - ) - patch_center = torch.tensor( - [[self.local_patch_radius, self.local_patch_radius]], - device=device, - dtype=torch.float32, - ) - H_patch_points, W_patch_points = torch.where(patch_mask >= 0) - patch_points = torch.cat( - [H_patch_points[..., None], W_patch_points[..., None]], dim=-1 - ) - # Fetch the circle region - patch_center_dist = torch.sqrt( - torch.sum((patch_points - patch_center) ** 2, dim=-1) - ) - patch_points = patch_points[patch_center_dist <= self.local_patch_radius, :] - # Shift [0, 0] to the center - patch_points = patch_points - self.local_patch_radius - - # Construct local patch mask - patch_points_shifted = ( - torch.unsqueeze(cand_points_round, dim=2) + patch_points[None, None, ...] - ) - patch_dist = torch.sqrt( - torch.sum( - (torch.unsqueeze(cand_points, dim=2) - patch_points_shifted) ** 2, - dim=-1, - ) - ) - patch_dist_mask = patch_dist < dist_thresh[..., None] - - # Get all points => num_points_center x num_patch_points x 2 - points_H = torch.clamp(patch_points_shifted[:, :, :, 0], min=0, max=H - 1).to( - torch.long - ) - points_W = torch.clamp(patch_points_shifted[:, :, :, 1], min=0, max=W - 1).to( - torch.long - ) - points = torch.cat([points_H[..., None], points_W[..., None]], dim=-1) - - # Sample the feature (N x 64 x 81) - sampled_feat = heatmap[points[:, :, :, 0], points[:, :, :, 1]] - # Filtering using the valid mask - sampled_feat = sampled_feat * patch_dist_mask.to(torch.float32) - if len(sampled_feat) == 0: - sampled_feat_lmax = torch.empty(0, 64) - else: - sampled_feat_lmax, _ = torch.max(sampled_feat, dim=-1) - - return sampled_feat_lmax diff --git a/spaces/Reeve/Ohayou_Face/training/loss.py b/spaces/Reeve/Ohayou_Face/training/loss.py deleted file mode 100644 index 5299e84a619eea15aaedbe05d8753522b358c720..0000000000000000000000000000000000000000 --- a/spaces/Reeve/Ohayou_Face/training/loss.py +++ /dev/null @@ -1,148 +0,0 @@ -# Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved. -# -# NVIDIA CORPORATION and its licensors retain all intellectual property -# and proprietary rights in and to this software, related documentation -# and any modifications thereto. Any use, reproduction, disclosure or -# distribution of this software and related documentation without an express -# license agreement from NVIDIA CORPORATION is strictly prohibited. - -import numpy as np -import torch -from torch_utils import training_stats -from torch_utils import misc -from torch_utils.ops import conv2d_gradfix - -#---------------------------------------------------------------------------- - -class Loss: - def accumulate_gradients(self, phase, real_img, real_c, gen_z, gen_c, sync, gain): # to be overridden by subclass - raise NotImplementedError() - -#---------------------------------------------------------------------------- - -class StyleGAN2Loss(Loss): - def __init__(self, device, G, G_mapping, G_synthesis, D, augment_pipe=None, style_mixing_prob=0.9, r1_gamma=10, pl_batch_shrink=2, pl_decay=0.01, pl_weight=2, G_top_k = False, G_top_k_gamma = 0.9, G_top_k_frac = 0.5,): - super().__init__() - self.device = device - self.G = G - self.G_mapping = G_mapping - self.G_synthesis = G_synthesis - self.D = D - self.augment_pipe = augment_pipe - self.style_mixing_prob = style_mixing_prob - self.r1_gamma = r1_gamma - self.pl_batch_shrink = pl_batch_shrink - self.pl_decay = pl_decay - self.pl_weight = pl_weight - self.pl_mean = torch.zeros([], device=device) - self.G_top_k = G_top_k - self.G_top_k_gamma = G_top_k_gamma - self.G_top_k_frac = G_top_k_frac - - - def run_G(self, z, c, sync): - with misc.ddp_sync(self.G_mapping, sync): - ws = self.G_mapping(z, c) - if self.style_mixing_prob > 0: - with torch.autograd.profiler.record_function('style_mixing'): - cutoff = torch.empty([], dtype=torch.int64, device=ws.device).random_(1, ws.shape[1]) - cutoff = torch.where(torch.rand([], device=ws.device) < self.style_mixing_prob, cutoff, torch.full_like(cutoff, ws.shape[1])) - ws[:, cutoff:] = self.G_mapping(torch.randn_like(z), c, skip_w_avg_update=True)[:, cutoff:] - with misc.ddp_sync(self.G_synthesis, sync): - img = self.G_synthesis(ws) - return img, ws - - def run_D(self, img, c, sync): - if self.augment_pipe is not None: - img = self.augment_pipe(img) - with misc.ddp_sync(self.D, sync): - logits = self.D(img, c) - return logits - - def accumulate_gradients(self, phase, real_img, real_c, gen_z, gen_c, sync, gain): - assert phase in ['Gmain', 'Greg', 'Gboth', 'Dmain', 'Dreg', 'Dboth'] - do_Gmain = (phase in ['Gmain', 'Gboth']) - do_Dmain = (phase in ['Dmain', 'Dboth']) - do_Gpl = (phase in ['Greg', 'Gboth']) and (self.pl_weight != 0) - do_Dr1 = (phase in ['Dreg', 'Dboth']) and (self.r1_gamma != 0) - - # Gmain: Maximize logits for generated images. - if do_Gmain: - with torch.autograd.profiler.record_function('Gmain_forward'): - minibatch_size = gen_z.shape[0] - gen_img, _gen_ws = self.run_G(gen_z, gen_c, sync=(sync and not do_Gpl)) # May get synced by Gpl. - gen_logits = self.run_D(gen_img, gen_c, sync=False) - training_stats.report('Loss/scores/fake', gen_logits) - training_stats.report('Loss/signs/fake', gen_logits.sign()) - - # top-k function based on: https://github.com/dvschultz/stylegan2-ada/blob/main/training/loss.py#L102 - if self.G_top_k: - D_fake_scores = gen_logits - k_frac = np.maximum(self.G_top_k_gamma ** self.G.epochs, self.G_top_k_frac) - k = int(np.ceil(minibatch_size * k_frac)) - lowest_k_scores, _ = torch.topk(-torch.squeeze(D_fake_scores), k=k) # want smallest probabilities not largest - gen_logits = torch.unsqueeze(-lowest_k_scores, axis=1) - - loss_Gmain = torch.nn.functional.softplus(-gen_logits) # -log(sigmoid(gen_logits)) - training_stats.report('Loss/G/loss', loss_Gmain) - with torch.autograd.profiler.record_function('Gmain_backward'): - loss_Gmain.mean().mul(gain).backward() - - # Gpl: Apply path length regularization. - if do_Gpl: - with torch.autograd.profiler.record_function('Gpl_forward'): - batch_size = gen_z.shape[0] // self.pl_batch_shrink - gen_img, gen_ws = self.run_G(gen_z[:batch_size], gen_c[:batch_size], sync=sync) - pl_noise = torch.randn_like(gen_img) / np.sqrt(gen_img.shape[2] * gen_img.shape[3]) - with torch.autograd.profiler.record_function('pl_grads'), conv2d_gradfix.no_weight_gradients(): - pl_grads = torch.autograd.grad(outputs=[(gen_img * pl_noise).sum()], inputs=[gen_ws], create_graph=True, only_inputs=True)[0] - pl_lengths = pl_grads.square().sum(2).mean(1).sqrt() - pl_mean = self.pl_mean.lerp(pl_lengths.mean(), self.pl_decay) - self.pl_mean.copy_(pl_mean.detach()) - pl_penalty = (pl_lengths - pl_mean).square() - training_stats.report('Loss/pl_penalty', pl_penalty) - loss_Gpl = pl_penalty * self.pl_weight - training_stats.report('Loss/G/reg', loss_Gpl) - with torch.autograd.profiler.record_function('Gpl_backward'): - (gen_img[:, 0, 0, 0] * 0 + loss_Gpl).mean().mul(gain).backward() - - # Dmain: Minimize logits for generated images. - loss_Dgen = 0 - if do_Dmain: - with torch.autograd.profiler.record_function('Dgen_forward'): - gen_img, _gen_ws = self.run_G(gen_z, gen_c, sync=False) - gen_logits = self.run_D(gen_img, gen_c, sync=False) # Gets synced by loss_Dreal. - training_stats.report('Loss/scores/fake', gen_logits) - training_stats.report('Loss/signs/fake', gen_logits.sign()) - loss_Dgen = torch.nn.functional.softplus(gen_logits) # -log(1 - sigmoid(gen_logits)) - with torch.autograd.profiler.record_function('Dgen_backward'): - loss_Dgen.mean().mul(gain).backward() - - # Dmain: Maximize logits for real images. - # Dr1: Apply R1 regularization. - if do_Dmain or do_Dr1: - name = 'Dreal_Dr1' if do_Dmain and do_Dr1 else 'Dreal' if do_Dmain else 'Dr1' - with torch.autograd.profiler.record_function(name + '_forward'): - real_img_tmp = real_img.detach().requires_grad_(do_Dr1) - real_logits = self.run_D(real_img_tmp, real_c, sync=sync) - training_stats.report('Loss/scores/real', real_logits) - training_stats.report('Loss/signs/real', real_logits.sign()) - - loss_Dreal = 0 - if do_Dmain: - loss_Dreal = torch.nn.functional.softplus(-real_logits) # -log(sigmoid(real_logits)) - training_stats.report('Loss/D/loss', loss_Dgen + loss_Dreal) - - loss_Dr1 = 0 - if do_Dr1: - with torch.autograd.profiler.record_function('r1_grads'), conv2d_gradfix.no_weight_gradients(): - r1_grads = torch.autograd.grad(outputs=[real_logits.sum()], inputs=[real_img_tmp], create_graph=True, only_inputs=True)[0] - r1_penalty = r1_grads.square().sum([1,2,3]) - loss_Dr1 = r1_penalty * (self.r1_gamma / 2) - training_stats.report('Loss/r1_penalty', r1_penalty) - training_stats.report('Loss/D/reg', loss_Dr1) - - with torch.autograd.profiler.record_function(name + '_backward'): - (real_logits * 0 + loss_Dreal + loss_Dr1).mean().mul(gain).backward() - -#---------------------------------------------------------------------------- diff --git a/spaces/ReyDev/Claude-Space/README.md b/spaces/ReyDev/Claude-Space/README.md deleted file mode 100644 index 4dd1f6f0bdf555c02ea7d0a92d108c50d3038dd1..0000000000000000000000000000000000000000 --- a/spaces/ReyDev/Claude-Space/README.md +++ /dev/null @@ -1,11 +0,0 @@ ---- -title: Claude Space -emoji: 🔥 -colorFrom: red -colorTo: indigo -sdk: docker -pinned: false -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmcv/cnn/bricks/scale.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmcv/cnn/bricks/scale.py deleted file mode 100644 index c905fffcc8bf998d18d94f927591963c428025e2..0000000000000000000000000000000000000000 --- a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmcv/cnn/bricks/scale.py +++ /dev/null @@ -1,21 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -import torch.nn as nn - - -class Scale(nn.Module): - """A learnable scale parameter. - - This layer scales the input by a learnable factor. It multiplies a - learnable scale parameter of shape (1,) with input of any shape. - - Args: - scale (float): Initial value of scale factor. Default: 1.0 - """ - - def __init__(self, scale=1.0): - super(Scale, self).__init__() - self.scale = nn.Parameter(torch.tensor(scale, dtype=torch.float)) - - def forward(self, x): - return x * self.scale diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet/models/detectors/kd_one_stage.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet/models/detectors/kd_one_stage.py deleted file mode 100644 index 671ec19015c87fefd065b84ae887147f90cc892b..0000000000000000000000000000000000000000 --- a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet/models/detectors/kd_one_stage.py +++ /dev/null @@ -1,100 +0,0 @@ -import mmcv -import torch -from mmcv.runner import load_checkpoint - -from .. import build_detector -from ..builder import DETECTORS -from .single_stage import SingleStageDetector - - -@DETECTORS.register_module() -class KnowledgeDistillationSingleStageDetector(SingleStageDetector): - r"""Implementation of `Distilling the Knowledge in a Neural Network. - `_. - - Args: - teacher_config (str | dict): Config file path - or the config object of teacher model. - teacher_ckpt (str, optional): Checkpoint path of teacher model. - If left as None, the model will not load any weights. - """ - - def __init__(self, - backbone, - neck, - bbox_head, - teacher_config, - teacher_ckpt=None, - eval_teacher=True, - train_cfg=None, - test_cfg=None, - pretrained=None): - super().__init__(backbone, neck, bbox_head, train_cfg, test_cfg, - pretrained) - self.eval_teacher = eval_teacher - # Build teacher model - if isinstance(teacher_config, str): - teacher_config = mmcv.Config.fromfile(teacher_config) - self.teacher_model = build_detector(teacher_config['model']) - if teacher_ckpt is not None: - load_checkpoint( - self.teacher_model, teacher_ckpt, map_location='cpu') - - def forward_train(self, - img, - img_metas, - gt_bboxes, - gt_labels, - gt_bboxes_ignore=None): - """ - Args: - img (Tensor): Input images of shape (N, C, H, W). - Typically these should be mean centered and std scaled. - img_metas (list[dict]): A List of image info dict where each dict - has: 'img_shape', 'scale_factor', 'flip', and may also contain - 'filename', 'ori_shape', 'pad_shape', and 'img_norm_cfg'. - For details on the values of these keys see - :class:`mmdet.datasets.pipelines.Collect`. - gt_bboxes (list[Tensor]): Each item are the truth boxes for each - image in [tl_x, tl_y, br_x, br_y] format. - gt_labels (list[Tensor]): Class indices corresponding to each box - gt_bboxes_ignore (None | list[Tensor]): Specify which bounding - boxes can be ignored when computing the loss. - Returns: - dict[str, Tensor]: A dictionary of loss components. - """ - x = self.extract_feat(img) - with torch.no_grad(): - teacher_x = self.teacher_model.extract_feat(img) - out_teacher = self.teacher_model.bbox_head(teacher_x) - losses = self.bbox_head.forward_train(x, out_teacher, img_metas, - gt_bboxes, gt_labels, - gt_bboxes_ignore) - return losses - - def cuda(self, device=None): - """Since teacher_model is registered as a plain object, it is necessary - to put the teacher model to cuda when calling cuda function.""" - self.teacher_model.cuda(device=device) - return super().cuda(device=device) - - def train(self, mode=True): - """Set the same train mode for teacher and student model.""" - if self.eval_teacher: - self.teacher_model.train(False) - else: - self.teacher_model.train(mode) - super().train(mode) - - def __setattr__(self, name, value): - """Set attribute, i.e. self.name = value - - This reloading prevent the teacher model from being registered as a - nn.Module. The teacher module is registered as a plain object, so that - the teacher parameters will not show up when calling - ``self.parameters``, ``self.modules``, ``self.children`` methods. - """ - if name == 'teacher_model': - object.__setattr__(self, name, value) - else: - super().__setattr__(name, value) diff --git a/spaces/Ryukijano/it-happened-one-frame-2/README.md b/spaces/Ryukijano/it-happened-one-frame-2/README.md deleted file mode 100644 index 518f573c8e8ee08567f4f5b888977dcac4a14074..0000000000000000000000000000000000000000 --- a/spaces/Ryukijano/it-happened-one-frame-2/README.md +++ /dev/null @@ -1,14 +0,0 @@ ---- -title: It Happened One Frame 2 -emoji: 🐠 -colorFrom: red -colorTo: red -sdk: gradio -sdk_version: 3.0.11 -app_file: app.py -pinned: false -license: afl-3.0 -duplicated_from: YiYiXu/it-happened-one-frame-2 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference diff --git a/spaces/SQSora/VITS-Umamusume-voice-synthesizer/text/korean.py b/spaces/SQSora/VITS-Umamusume-voice-synthesizer/text/korean.py deleted file mode 100644 index edee07429a450c55e3d8e246997faaa1e0b89cc9..0000000000000000000000000000000000000000 --- a/spaces/SQSora/VITS-Umamusume-voice-synthesizer/text/korean.py +++ /dev/null @@ -1,210 +0,0 @@ -import re -from jamo import h2j, j2hcj -import ko_pron - - -# This is a list of Korean classifiers preceded by pure Korean numerals. -_korean_classifiers = '군데 권 개 그루 닢 대 두 마리 모 모금 뭇 발 발짝 방 번 벌 보루 살 수 술 시 쌈 움큼 정 짝 채 척 첩 축 켤레 톨 통' - -# List of (hangul, hangul divided) pairs: -_hangul_divided = [(re.compile('%s' % x[0]), x[1]) for x in [ - ('ㄳ', 'ㄱㅅ'), - ('ㄵ', 'ㄴㅈ'), - ('ㄶ', 'ㄴㅎ'), - ('ㄺ', 'ㄹㄱ'), - ('ㄻ', 'ㄹㅁ'), - ('ㄼ', 'ㄹㅂ'), - ('ㄽ', 'ㄹㅅ'), - ('ㄾ', 'ㄹㅌ'), - ('ㄿ', 'ㄹㅍ'), - ('ㅀ', 'ㄹㅎ'), - ('ㅄ', 'ㅂㅅ'), - ('ㅘ', 'ㅗㅏ'), - ('ㅙ', 'ㅗㅐ'), - ('ㅚ', 'ㅗㅣ'), - ('ㅝ', 'ㅜㅓ'), - ('ㅞ', 'ㅜㅔ'), - ('ㅟ', 'ㅜㅣ'), - ('ㅢ', 'ㅡㅣ'), - ('ㅑ', 'ㅣㅏ'), - ('ㅒ', 'ㅣㅐ'), - ('ㅕ', 'ㅣㅓ'), - ('ㅖ', 'ㅣㅔ'), - ('ㅛ', 'ㅣㅗ'), - ('ㅠ', 'ㅣㅜ') -]] - -# List of (Latin alphabet, hangul) pairs: -_latin_to_hangul = [(re.compile('%s' % x[0], re.IGNORECASE), x[1]) for x in [ - ('a', '에이'), - ('b', '비'), - ('c', '시'), - ('d', '디'), - ('e', '이'), - ('f', '에프'), - ('g', '지'), - ('h', '에이치'), - ('i', '아이'), - ('j', '제이'), - ('k', '케이'), - ('l', '엘'), - ('m', '엠'), - ('n', '엔'), - ('o', '오'), - ('p', '피'), - ('q', '큐'), - ('r', '아르'), - ('s', '에스'), - ('t', '티'), - ('u', '유'), - ('v', '브이'), - ('w', '더블유'), - ('x', '엑스'), - ('y', '와이'), - ('z', '제트') -]] - -# List of (ipa, lazy ipa) pairs: -_ipa_to_lazy_ipa = [(re.compile('%s' % x[0], re.IGNORECASE), x[1]) for x in [ - ('t͡ɕ','ʧ'), - ('d͡ʑ','ʥ'), - ('ɲ','n^'), - ('ɕ','ʃ'), - ('ʷ','w'), - ('ɭ','l`'), - ('ʎ','ɾ'), - ('ɣ','ŋ'), - ('ɰ','ɯ'), - ('ʝ','j'), - ('ʌ','ə'), - ('ɡ','g'), - ('\u031a','#'), - ('\u0348','='), - ('\u031e',''), - ('\u0320',''), - ('\u0339','') -]] - - -def latin_to_hangul(text): - for regex, replacement in _latin_to_hangul: - text = re.sub(regex, replacement, text) - return text - - -def divide_hangul(text): - text = j2hcj(h2j(text)) - for regex, replacement in _hangul_divided: - text = re.sub(regex, replacement, text) - return text - - -def hangul_number(num, sino=True): - '''Reference https://github.com/Kyubyong/g2pK''' - num = re.sub(',', '', num) - - if num == '0': - return '영' - if not sino and num == '20': - return '스무' - - digits = '123456789' - names = '일이삼사오육칠팔구' - digit2name = {d: n for d, n in zip(digits, names)} - - modifiers = '한 두 세 네 다섯 여섯 일곱 여덟 아홉' - decimals = '열 스물 서른 마흔 쉰 예순 일흔 여든 아흔' - digit2mod = {d: mod for d, mod in zip(digits, modifiers.split())} - digit2dec = {d: dec for d, dec in zip(digits, decimals.split())} - - spelledout = [] - for i, digit in enumerate(num): - i = len(num) - i - 1 - if sino: - if i == 0: - name = digit2name.get(digit, '') - elif i == 1: - name = digit2name.get(digit, '') + '십' - name = name.replace('일십', '십') - else: - if i == 0: - name = digit2mod.get(digit, '') - elif i == 1: - name = digit2dec.get(digit, '') - if digit == '0': - if i % 4 == 0: - last_three = spelledout[-min(3, len(spelledout)):] - if ''.join(last_three) == '': - spelledout.append('') - continue - else: - spelledout.append('') - continue - if i == 2: - name = digit2name.get(digit, '') + '백' - name = name.replace('일백', '백') - elif i == 3: - name = digit2name.get(digit, '') + '천' - name = name.replace('일천', '천') - elif i == 4: - name = digit2name.get(digit, '') + '만' - name = name.replace('일만', '만') - elif i == 5: - name = digit2name.get(digit, '') + '십' - name = name.replace('일십', '십') - elif i == 6: - name = digit2name.get(digit, '') + '백' - name = name.replace('일백', '백') - elif i == 7: - name = digit2name.get(digit, '') + '천' - name = name.replace('일천', '천') - elif i == 8: - name = digit2name.get(digit, '') + '억' - elif i == 9: - name = digit2name.get(digit, '') + '십' - elif i == 10: - name = digit2name.get(digit, '') + '백' - elif i == 11: - name = digit2name.get(digit, '') + '천' - elif i == 12: - name = digit2name.get(digit, '') + '조' - elif i == 13: - name = digit2name.get(digit, '') + '십' - elif i == 14: - name = digit2name.get(digit, '') + '백' - elif i == 15: - name = digit2name.get(digit, '') + '천' - spelledout.append(name) - return ''.join(elem for elem in spelledout) - - -def number_to_hangul(text): - '''Reference https://github.com/Kyubyong/g2pK''' - tokens = set(re.findall(r'(\d[\d,]*)([\uac00-\ud71f]+)', text)) - for token in tokens: - num, classifier = token - if classifier[:2] in _korean_classifiers or classifier[0] in _korean_classifiers: - spelledout = hangul_number(num, sino=False) - else: - spelledout = hangul_number(num, sino=True) - text = text.replace(f'{num}{classifier}', f'{spelledout}{classifier}') - # digit by digit for remaining digits - digits = '0123456789' - names = '영일이삼사오육칠팔구' - for d, n in zip(digits, names): - text = text.replace(d, n) - return text - - -def korean_to_lazy_ipa(text): - text = latin_to_hangul(text) - text = number_to_hangul(text) - text=re.sub('[\uac00-\ud7af]+',lambda x:ko_pron.romanise(x.group(0),'ipa').split('] ~ [')[0],text) - for regex, replacement in _ipa_to_lazy_ipa: - text = re.sub(regex, replacement, text) - return text - - -def korean_to_ipa(text): - text = korean_to_lazy_ipa(text) - return text.replace('ʧ','tʃ').replace('ʥ','dʑ') diff --git a/spaces/SShaik/SS-01-H5-Play-Canvas-Sim-Physics/index.html b/spaces/SShaik/SS-01-H5-Play-Canvas-Sim-Physics/index.html deleted file mode 100644 index 514d8ec32ab769c75928229a2785ee99a95f5e14..0000000000000000000000000000000000000000 --- a/spaces/SShaik/SS-01-H5-Play-Canvas-Sim-Physics/index.html +++ /dev/null @@ -1,10 +0,0 @@ - -

        SimPhysics

        -

        User input: WASD

        -

        This WebGL demo demonstrates PlayCanvas and a physics vehicle simulation that is web based and playable anywhere your browser goes🤗 Inference API.

        -

        Source code is in Readme.md file.

        -

        PlayCanvas project is here

        -
        - -
        - \ No newline at end of file diff --git a/spaces/Salesforce/EDICT/my_diffusers/pipelines/score_sde_ve/__init__.py b/spaces/Salesforce/EDICT/my_diffusers/pipelines/score_sde_ve/__init__.py deleted file mode 100644 index 000d61f6e9b183728cb6fc137e7180cac3a616df..0000000000000000000000000000000000000000 --- a/spaces/Salesforce/EDICT/my_diffusers/pipelines/score_sde_ve/__init__.py +++ /dev/null @@ -1,2 +0,0 @@ -# flake8: noqa -from .pipeline_score_sde_ve import ScoreSdeVePipeline diff --git a/spaces/SarthakSidhant/Go-Cattle/diseases/dermatophilosis (lumpy wool, rain scald).md b/spaces/SarthakSidhant/Go-Cattle/diseases/dermatophilosis (lumpy wool, rain scald).md deleted file mode 100644 index 95699b3abee40ef0f70bc8da4fdc9f80343973d3..0000000000000000000000000000000000000000 --- a/spaces/SarthakSidhant/Go-Cattle/diseases/dermatophilosis (lumpy wool, rain scald).md +++ /dev/null @@ -1,37 +0,0 @@ -## Dermatophilosis (lumpy wool, rain scald) - -**Information:** Dermatophilosis is a bacterial disease that affects cattle. It is caused by the bacterium Dermatophilus congolensis. Dermatophilosis can cause a variety of symptoms in affected animals, including hair loss, scabs, and skin lesions. In some cases, dermatophilosis can also be fatal. - -**Symptoms:** - -* Hair loss -* Scabs -* Skin lesions -* Itching -* Pain -* Fever -* Weight loss -* Depression - -**Remedies:** - -* Dermatophilosis can be treated with antibiotics, such as penicillin or tetracycline. -* Treatment may take several weeks. -* Animals that have been diagnosed with dermatophilosis should be isolated from other animals to prevent the spread of the disease. - -**Causes:** - -* Dermatophilosis is caused by the bacterium Dermatophilus congolensis. -* This bacterium is found in the environment, and it can enter the body of an animal through cuts or abrasions in the skin. -* Dermatophilosis can also be spread through contact with infected animals or their bodily fluids. - -**Prevention:** - -* The best way to prevent dermatophilosis is to keep animals healthy and well-groomed. -* Animals should be kept in clean, dry conditions and should have access to fresh water. -* Animals should be vaccinated against dermatophilosis. -* Other preventive measures include: - * Avoiding contact with infected animals or their bodily fluids - * Practicing good biosecurity measures - * Treating any cuts or abrasions on animals promptly -* Disposing of dead animals properly diff --git "a/spaces/SarthakSidhant/Go-Cattle/pages/2_\360\237\247\252_Diseases.py" "b/spaces/SarthakSidhant/Go-Cattle/pages/2_\360\237\247\252_Diseases.py" deleted file mode 100644 index ec4d763d075813d2f13b6d06a70ca6580c995dd2..0000000000000000000000000000000000000000 --- "a/spaces/SarthakSidhant/Go-Cattle/pages/2_\360\237\247\252_Diseases.py" +++ /dev/null @@ -1,28 +0,0 @@ -import streamlit as st -import pandas as pd -from sklearn.ensemble import RandomForestClassifier -import joblib -import numpy as np -from support import * -import os - -st.title("Learn About Diseases") -def get_diseases(): - diseases = [] - for file in os.listdir("diseases"): - if file.endswith(".md"): - disease_name = file[:-3] - diseases.append(disease_name) - return diseases - -def show_disease(disease_name): - with open(f"diseases/{disease_name}.md", "r") as f: - content = f.read() - st.markdown(content) - -diseases = get_diseases() - -for disease in diseases: - but1 = st.button(disease) - if but1: - show_disease(disease) diff --git a/spaces/Stereo0001/MagicPrompt-Stable-Diffusion/README.md b/spaces/Stereo0001/MagicPrompt-Stable-Diffusion/README.md deleted file mode 100644 index 98b00b0487e2ab609b0b29eb82c55d9215ab3406..0000000000000000000000000000000000000000 --- a/spaces/Stereo0001/MagicPrompt-Stable-Diffusion/README.md +++ /dev/null @@ -1,14 +0,0 @@ ---- -title: MagicPrompt Stable Diffusion -emoji: 😻 -colorFrom: red -colorTo: indigo -sdk: gradio -sdk_version: 3.3.1 -app_file: app.py -pinned: false -license: mit -duplicated_from: Gustavosta/MagicPrompt-Stable-Diffusion ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/SuCicada/Lain-TTS/README.md b/spaces/SuCicada/Lain-TTS/README.md deleted file mode 100644 index 8cc33b62a2341bac8800decc9935c84426067510..0000000000000000000000000000000000000000 --- a/spaces/SuCicada/Lain-TTS/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Lain TTS -emoji: 🏢 -colorFrom: blue -colorTo: red -sdk: gradio -sdk_version: 3.28.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/SuYuanS/AudioCraft_Plus/README.md b/spaces/SuYuanS/AudioCraft_Plus/README.md deleted file mode 100644 index 2fc15bab83edcf4c4074470afb618a5a37c1728a..0000000000000000000000000000000000000000 --- a/spaces/SuYuanS/AudioCraft_Plus/README.md +++ /dev/null @@ -1,14 +0,0 @@ ---- -title: AudioCraft Plus v2.0.0a (MusicGen + AudioGen) -emoji: 🎶 -colorFrom: yellow -colorTo: green -sdk: gradio -sdk_version: 3.39.0 -app_file: app.py -pinned: true -license: mit -duplicated_from: GrandaddyShmax/AudioCraft_Plus ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Sumit7864/Image-Enhancer/scripts/generate_multiscale_DF2K.py b/spaces/Sumit7864/Image-Enhancer/scripts/generate_multiscale_DF2K.py deleted file mode 100644 index d4f5d8324b1624e4cb6163754703b8dac2d188fd..0000000000000000000000000000000000000000 --- a/spaces/Sumit7864/Image-Enhancer/scripts/generate_multiscale_DF2K.py +++ /dev/null @@ -1,48 +0,0 @@ -import argparse -import glob -import os -from PIL import Image - - -def main(args): - # For DF2K, we consider the following three scales, - # and the smallest image whose shortest edge is 400 - scale_list = [0.75, 0.5, 1 / 3] - shortest_edge = 400 - - path_list = sorted(glob.glob(os.path.join(args.input, '*'))) - for path in path_list: - print(path) - basename = os.path.splitext(os.path.basename(path))[0] - - img = Image.open(path) - width, height = img.size - for idx, scale in enumerate(scale_list): - print(f'\t{scale:.2f}') - rlt = img.resize((int(width * scale), int(height * scale)), resample=Image.LANCZOS) - rlt.save(os.path.join(args.output, f'{basename}T{idx}.png')) - - # save the smallest image which the shortest edge is 400 - if width < height: - ratio = height / width - width = shortest_edge - height = int(width * ratio) - else: - ratio = width / height - height = shortest_edge - width = int(height * ratio) - rlt = img.resize((int(width), int(height)), resample=Image.LANCZOS) - rlt.save(os.path.join(args.output, f'{basename}T{idx+1}.png')) - - -if __name__ == '__main__': - """Generate multi-scale versions for GT images with LANCZOS resampling. - It is now used for DF2K dataset (DIV2K + Flickr 2K) - """ - parser = argparse.ArgumentParser() - parser.add_argument('--input', type=str, default='datasets/DF2K/DF2K_HR', help='Input folder') - parser.add_argument('--output', type=str, default='datasets/DF2K/DF2K_multiscale', help='Output folder') - args = parser.parse_args() - - os.makedirs(args.output, exist_ok=True) - main(args) diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/utils/module_paths.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/utils/module_paths.py deleted file mode 100644 index 6f8cb1004a64442e252880f1fb8b77784267bae4..0000000000000000000000000000000000000000 --- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/utils/module_paths.py +++ /dev/null @@ -1,70 +0,0 @@ -"""Utility functions for finding modules - -Utility functions for finding modules on sys.path. - -""" -#----------------------------------------------------------------------------- -# Copyright (c) 2011, the IPython Development Team. -# -# Distributed under the terms of the Modified BSD License. -# -# The full license is in the file COPYING.txt, distributed with this software. -#----------------------------------------------------------------------------- - -#----------------------------------------------------------------------------- -# Imports -#----------------------------------------------------------------------------- - -# Stdlib imports -import importlib -import sys - -# Third-party imports - -# Our own imports - - -#----------------------------------------------------------------------------- -# Globals and constants -#----------------------------------------------------------------------------- - -#----------------------------------------------------------------------------- -# Local utilities -#----------------------------------------------------------------------------- - -#----------------------------------------------------------------------------- -# Classes and functions -#----------------------------------------------------------------------------- - -def find_mod(module_name): - """ - Find module `module_name` on sys.path, and return the path to module `module_name`. - - - If `module_name` refers to a module directory, then return path to __init__ file. - - If `module_name` is a directory without an __init__file, return None. - - If module is missing or does not have a `.py` or `.pyw` extension, return None. - - Note that we are not interested in running bytecode. - - Otherwise, return the fill path of the module. - - Parameters - ---------- - module_name : str - - Returns - ------- - module_path : str - Path to module `module_name`, its __init__.py, or None, - depending on above conditions. - """ - spec = importlib.util.find_spec(module_name) - module_path = spec.origin - if module_path is None: - if spec.loader in sys.meta_path: - return spec.loader - return None - else: - split_path = module_path.split(".") - if split_path[-1] in ["py", "pyw"]: - return module_path - else: - return None diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/colorama/tests/utils.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/colorama/tests/utils.py deleted file mode 100644 index 472fafb4403efb9673d5cc724dafd9cf764aac5b..0000000000000000000000000000000000000000 --- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/colorama/tests/utils.py +++ /dev/null @@ -1,49 +0,0 @@ -# Copyright Jonathan Hartley 2013. BSD 3-Clause license, see LICENSE file. -from contextlib import contextmanager -from io import StringIO -import sys -import os - - -class StreamTTY(StringIO): - def isatty(self): - return True - -class StreamNonTTY(StringIO): - def isatty(self): - return False - -@contextmanager -def osname(name): - orig = os.name - os.name = name - yield - os.name = orig - -@contextmanager -def replace_by(stream): - orig_stdout = sys.stdout - orig_stderr = sys.stderr - sys.stdout = stream - sys.stderr = stream - yield - sys.stdout = orig_stdout - sys.stderr = orig_stderr - -@contextmanager -def replace_original_by(stream): - orig_stdout = sys.__stdout__ - orig_stderr = sys.__stderr__ - sys.__stdout__ = stream - sys.__stderr__ = stream - yield - sys.__stdout__ = orig_stdout - sys.__stderr__ = orig_stderr - -@contextmanager -def pycharm(): - os.environ["PYCHARM_HOSTED"] = "1" - non_tty = StreamNonTTY() - with replace_by(non_tty), replace_original_by(non_tty): - yield - del os.environ["PYCHARM_HOSTED"] diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/debugpy/_vendored/pydevd/pydevd_attach_to_process/windows/attach.cpp b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/debugpy/_vendored/pydevd/pydevd_attach_to_process/windows/attach.cpp deleted file mode 100644 index e44c6e14785e087946ae899df12b3ea813244add..0000000000000000000000000000000000000000 --- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/debugpy/_vendored/pydevd/pydevd_attach_to_process/windows/attach.cpp +++ /dev/null @@ -1,634 +0,0 @@ -/* **************************************************************************** -* -* Copyright (c) Microsoft Corporation. -* -* This source code is subject to terms and conditions of the Apache License, Version 2.0. A -* copy of the license can be found in the License.html file at the root of this distribution. If -* you cannot locate the Apache License, Version 2.0, please send an email to -* vspython@microsoft.com. By using this source code in any fashion, you are agreeing to be bound -* by the terms of the Apache License, Version 2.0. -* -* You must not remove this notice, or any other, from this software. -* -* Contributor: Fabio Zadrozny -* -* Based on PyDebugAttach.cpp from PVTS. Windows only. -* -* https://github.com/Microsoft/PTVS/blob/master/Python/Product/PyDebugAttach/PyDebugAttach.cpp -* -* Initially we did an attach completely based on shellcode which got the -* GIL called PyRun_SimpleString with the needed code and was done with it -* (so, none of this code was needed). -* Now, newer version of Python don't initialize threading by default, so, -* most of this code is done only to overcome this limitation (and as a plus, -* if there's no code running, we also pause the threads to make our code run). -* -* On Linux the approach is still the simpler one (using gdb), so, on newer -* versions of Python it may not work unless the user has some code running -* and threads are initialized. -* I.e.: -* -* The user may have to add the code below in the start of its script for -* a successful attach (if he doesn't already use threads). -* -* from threading import Thread -* Thread(target=str).start() -* -* -- this is the workaround for the fact that we can't get the gil -* if there aren't any threads (PyGILState_Ensure gives an error). -* ***************************************************************************/ - - -// Access to std::cout and std::endl -#include -#include -// DECLDIR will perform an export for us -#define DLL_EXPORT - -#include "attach.h" -#include "stdafx.h" - -#include "../common/python.h" -#include "../common/ref_utils.hpp" -#include "../common/py_utils.hpp" -#include "../common/py_settrace.hpp" - - -#pragma comment(lib, "kernel32.lib") -#pragma comment(lib, "user32.lib") -#pragma comment(lib, "advapi32.lib") -#pragma comment(lib, "psapi.lib") - -#include "py_win_helpers.hpp" -#include "run_code_in_memory.hpp" - -// _Always_ is not defined for all versions, so make it a no-op if missing. -#ifndef _Always_ -#define _Always_(x) x -#endif - - -typedef void (PyEval_Lock)(); // Acquire/Release lock -typedef void (PyThreadState_API)(PyThreadState *); // Acquire/Release lock -typedef PyObject* (Py_CompileString)(const char *str, const char *filename, int start); -typedef PyObject* (PyEval_EvalCode)(PyObject *co, PyObject *globals, PyObject *locals); -typedef PyObject* (PyDict_GetItemString)(PyObject *p, const char *key); -typedef PyObject* (PyEval_GetBuiltins)(); -typedef int (PyDict_SetItemString)(PyObject *dp, const char *key, PyObject *item); -typedef int (PyEval_ThreadsInitialized)(); -typedef int (Py_AddPendingCall)(int (*func)(void *), void*); -typedef PyObject* (PyString_FromString)(const char* s); -typedef void PyEval_SetTrace(Py_tracefunc func, PyObject *obj); -typedef PyObject* (PyErr_Print)(); -typedef PyObject* (PyObject_SetAttrString)(PyObject *o, const char *attr_name, PyObject* value); -typedef PyObject* (PyBool_FromLong)(long v); -typedef unsigned long (_PyEval_GetSwitchInterval)(void); -typedef void (_PyEval_SetSwitchInterval)(unsigned long microseconds); -typedef PyGILState_STATE PyGILState_EnsureFunc(void); -typedef void PyGILState_ReleaseFunc(PyGILState_STATE); -typedef PyThreadState *PyThreadState_NewFunc(PyInterpreterState *interp); - -typedef PyObject *PyList_New(Py_ssize_t len); -typedef int PyList_Append(PyObject *list, PyObject *item); - - - -std::wstring GetCurrentModuleFilename() { - HMODULE hModule = nullptr; - if (GetModuleHandleEx(GET_MODULE_HANDLE_EX_FLAG_FROM_ADDRESS | GET_MODULE_HANDLE_EX_FLAG_UNCHANGED_REFCOUNT, (LPCTSTR)GetCurrentModuleFilename, &hModule) != 0) { - wchar_t filename[MAX_PATH]; - GetModuleFileName(hModule, filename, MAX_PATH); - return filename; - } - return std::wstring(); -} - - -struct InitializeThreadingInfo { - PyImport_ImportModule* pyImportMod; - PyEval_Lock* initThreads; - - std::mutex mutex; - HANDLE initedEvent; // Note: only access with mutex locked (and check if not already nullptr). - bool completed; // Note: only access with mutex locked -}; - - -int AttachCallback(void *voidInitializeThreadingInfo) { - // initialize us for threading, this will acquire the GIL if not already created, and is a nop if the GIL is created. - // This leaves us in the proper state when we return back to the runtime whether the GIL was created or not before - // we were called. - InitializeThreadingInfo* initializeThreadingInfo = reinterpret_cast(voidInitializeThreadingInfo); - initializeThreadingInfo->initThreads(); // Note: calling multiple times is ok. - initializeThreadingInfo->pyImportMod("threading"); - - initializeThreadingInfo->mutex.lock(); - if(initializeThreadingInfo->initedEvent != nullptr) { - SetEvent(initializeThreadingInfo->initedEvent); - } - initializeThreadingInfo->completed = true; - initializeThreadingInfo->mutex.unlock(); - return 0; -} - - -// create a custom heap for our unordered map. This is necessary because if we suspend a thread while in a heap function -// then we could deadlock here. We need to be VERY careful about what we do while the threads are suspended. -static HANDLE g_heap = 0; - -template -class PrivateHeapAllocator { -public: - typedef size_t size_type; - typedef ptrdiff_t difference_type; - typedef T* pointer; - typedef const T* const_pointer; - typedef T& reference; - typedef const T& const_reference; - typedef T value_type; - - template - struct rebind { - typedef PrivateHeapAllocator other; - }; - - explicit PrivateHeapAllocator() {} - - PrivateHeapAllocator(PrivateHeapAllocator const&) {} - - ~PrivateHeapAllocator() {} - - template - PrivateHeapAllocator(PrivateHeapAllocator const&) {} - - pointer allocate(size_type size, std::allocator::const_pointer hint = 0) { - UNREFERENCED_PARAMETER(hint); - - if (g_heap == nullptr) { - g_heap = HeapCreate(0, 0, 0); - } - auto mem = HeapAlloc(g_heap, 0, size * sizeof(T)); - return static_cast(mem); - } - - void deallocate(pointer p, size_type n) { - UNREFERENCED_PARAMETER(n); - - HeapFree(g_heap, 0, p); - } - - size_type max_size() const { - return (std::numeric_limits::max)() / sizeof(T); - } - - void construct(pointer p, const T& t) { - new(p) T(t); - } - - void destroy(pointer p) { - p->~T(); - } -}; - -typedef std::unordered_map, std::equal_to, PrivateHeapAllocator>> ThreadMap; - -void ResumeThreads(ThreadMap &suspendedThreads) { - for (auto start = suspendedThreads.begin(); start != suspendedThreads.end(); start++) { - ResumeThread((*start).second); - CloseHandle((*start).second); - } - suspendedThreads.clear(); -} - -// Suspends all threads ensuring that they are not currently in a call to Py_AddPendingCall. -void SuspendThreads(ThreadMap &suspendedThreads, Py_AddPendingCall* addPendingCall, PyEval_ThreadsInitialized* threadsInited) { - DWORD curThreadId = GetCurrentThreadId(); - DWORD curProcess = GetCurrentProcessId(); - // suspend all the threads in the process so we can do things safely... - bool suspended; - - do { - suspended = false; - HANDLE h = CreateToolhelp32Snapshot(TH32CS_SNAPTHREAD, 0); - if (h != INVALID_HANDLE_VALUE) { - - THREADENTRY32 te; - te.dwSize = sizeof(te); - if (Thread32First(h, &te)) { - do { - if (te.dwSize >= FIELD_OFFSET(THREADENTRY32, th32OwnerProcessID) + sizeof(te.th32OwnerProcessID) && te.th32OwnerProcessID == curProcess) { - - - if (te.th32ThreadID != curThreadId && suspendedThreads.find(te.th32ThreadID) == suspendedThreads.end()) { - auto hThread = OpenThread(THREAD_ALL_ACCESS, FALSE, te.th32ThreadID); - if (hThread != nullptr) { - SuspendThread(hThread); - - bool addingPendingCall = false; - - CONTEXT context; - memset(&context, 0x00, sizeof(CONTEXT)); - context.ContextFlags = CONTEXT_ALL; - GetThreadContext(hThread, &context); - -#if defined(_X86_) - if (context.Eip >= *(reinterpret_cast(addPendingCall)) && context.Eip <= (*(reinterpret_cast(addPendingCall))) + 0x100) { - addingPendingCall = true; - } -#elif defined(_AMD64_) - if (context.Rip >= *(reinterpret_cast(addPendingCall)) && context.Rip <= *(reinterpret_cast(addPendingCall) + 0x100)) { - addingPendingCall = true; - } -#endif - - if (addingPendingCall) { - // we appear to be adding a pending call via this thread - wait for this to finish so we can add our own pending call... - ResumeThread(hThread); - SwitchToThread(); // yield to the resumed thread if it's on our CPU... - CloseHandle(hThread); - } else { - suspendedThreads[te.th32ThreadID] = hThread; - } - suspended = true; - } - } - } - - te.dwSize = sizeof(te); - } while (Thread32Next(h, &te) && !threadsInited()); - } - CloseHandle(h); - } - } while (suspended && !threadsInited()); -} - - - -extern "C" -{ - - /** - * The returned value signals the error that happened! - * - * Return codes: - * 0 = all OK. - * 1 = Py_IsInitialized not found - * 2 = Py_IsInitialized returned false - * 3 = Missing Python API - * 4 = Interpreter not initialized - * 5 = Python version unknown - * 6 = Connect timeout - **/ - int DoAttach(HMODULE module, bool isDebug, const char *command, bool showDebugInfo ) - { - auto isInit = reinterpret_cast(GetProcAddress(module, "Py_IsInitialized")); - - if (isInit == nullptr) { - std::cerr << "Py_IsInitialized not found. " << std::endl << std::flush; - return 1; - } - if (!isInit()) { - std::cerr << "Py_IsInitialized returned false. " << std::endl << std::flush; - return 2; - } - - auto version = GetPythonVersion(module); - - // found initialized Python runtime, gather and check the APIs we need for a successful attach... - DEFINE_PROC(addPendingCall, Py_AddPendingCall*, "Py_AddPendingCall", -100); - DEFINE_PROC(interpHead, PyInterpreterState_Head*, "PyInterpreterState_Head", -110); - DEFINE_PROC(gilEnsure, PyGILState_Ensure*, "PyGILState_Ensure", -120); - DEFINE_PROC(gilRelease, PyGILState_Release*, "PyGILState_Release", -130); - DEFINE_PROC(threadHead, PyInterpreterState_ThreadHead*, "PyInterpreterState_ThreadHead", -140); - DEFINE_PROC(initThreads, PyEval_Lock*, "PyEval_InitThreads", -150); - DEFINE_PROC(releaseLock, PyEval_Lock*, "PyEval_ReleaseLock", -160); - DEFINE_PROC(threadsInited, PyEval_ThreadsInitialized*, "PyEval_ThreadsInitialized", -170); - DEFINE_PROC(threadNext, PyThreadState_Next*, "PyThreadState_Next", -180); - DEFINE_PROC(pyImportMod, PyImport_ImportModule*, "PyImport_ImportModule", -190); - DEFINE_PROC(pyNone, PyObject*, "_Py_NoneStruct", -2000); - DEFINE_PROC(pyRun_SimpleString, PyRun_SimpleString*, "PyRun_SimpleString", -210); - - // Either _PyThreadState_Current or _PyThreadState_UncheckedGet are required - DEFINE_PROC_NO_CHECK(curPythonThread, PyThreadState**, "_PyThreadState_Current", -220); // optional - DEFINE_PROC_NO_CHECK(getPythonThread, _PyThreadState_UncheckedGet*, "_PyThreadState_UncheckedGet", -230); // optional - - if (curPythonThread == nullptr && getPythonThread == nullptr) { - // we're missing some APIs, we cannot attach. - std::cerr << "Error, missing Python threading API!!" << std::endl << std::flush; - return -240; - } - - // Either _Py_CheckInterval or _PyEval_[GS]etSwitchInterval are useful, but not required - DEFINE_PROC_NO_CHECK(intervalCheck, int*, "_Py_CheckInterval", -250); // optional - DEFINE_PROC_NO_CHECK(getSwitchInterval, _PyEval_GetSwitchInterval*, "_PyEval_GetSwitchInterval", -260); // optional - DEFINE_PROC_NO_CHECK(setSwitchInterval, _PyEval_SetSwitchInterval*, "_PyEval_SetSwitchInterval", -270); // optional - - auto head = interpHead(); - if (head == nullptr) { - // this interpreter is loaded but not initialized. - std::cerr << "Interpreter not initialized! " << std::endl << std::flush; - return 4; - } - - // check that we're a supported version - if (version == PythonVersion_Unknown) { - std::cerr << "Python version unknown! " << std::endl << std::flush; - return 5; - } else if (version == PythonVersion_25 || version == PythonVersion_26 || - version == PythonVersion_30 || version == PythonVersion_31 || version == PythonVersion_32) { - std::cerr << "Python version unsupported! " << std::endl << std::flush; - return 5; - } - - - // We always try to initialize threading and import the threading module in the main thread in the code - // below... - // - // We need to initialize multiple threading support but we need to do so safely, so we call - // Py_AddPendingCall and have our callback then initialize multi threading. This is completely safe on 2.7 - // and up. Unfortunately that doesn't work if we're not actively running code on the main thread (blocked on a lock - // or reading input). - // - // Another option is to make sure no code is running - if there is no active thread then we can safely call - // PyEval_InitThreads and we're in business. But to know this is safe we need to first suspend all the other - // threads in the process and then inspect if any code is running (note that this is still not ideal because - // this thread will be the thread head for Python, but still better than not attach at all). - // - // Finally if code is running after we've suspended the threads then we can go ahead and do Py_AddPendingCall - // on down-level interpreters as long as we're sure no one else is making a call to Py_AddPendingCall at the same - // time. - // - // Therefore our strategy becomes: Make the Py_AddPendingCall on interpreters and wait for it. If it doesn't - // call after a timeout, suspend all threads - if a threads is in Py_AddPendingCall resume and try again. Once we've got all of the threads - // stopped and not in Py_AddPendingCall (which calls no functions its self, you can see this and it's size in the - // debugger) then see if we have a current thread. If not go ahead and initialize multiple threading (it's now safe, - // no Python code is running). - - InitializeThreadingInfo *initializeThreadingInfo = new InitializeThreadingInfo(); - initializeThreadingInfo->pyImportMod = pyImportMod; - initializeThreadingInfo->initThreads = initThreads; - initializeThreadingInfo->initedEvent = CreateEvent(nullptr, TRUE, FALSE, nullptr); - - // Add the call to initialize threading. - addPendingCall(&AttachCallback, initializeThreadingInfo); - - ::WaitForSingleObject(initializeThreadingInfo->initedEvent, 5000); - - // Whether this completed or not, release the event handle as we won't use it anymore. - initializeThreadingInfo->mutex.lock(); - CloseHandle(initializeThreadingInfo->initedEvent); - bool completed = initializeThreadingInfo->completed; - initializeThreadingInfo->initedEvent = nullptr; - initializeThreadingInfo->mutex.unlock(); - - if(completed) { - // Note that this structure will leak if addPendingCall did not complete in the timeout - // (we can't release now because it's possible that it'll still be called). - delete initializeThreadingInfo; - if (showDebugInfo) { - std::cout << "addPendingCall to initialize threads/import threading completed. " << std::endl << std::flush; - } - } else { - if (showDebugInfo) { - std::cout << "addPendingCall to initialize threads/import threading did NOT complete. " << std::endl << std::flush; - } - } - - if (threadsInited()) { - // Note that since Python 3.7, threads are *always* initialized! - if (showDebugInfo) { - std::cout << "Threads initialized! " << std::endl << std::flush; - } - - } else { - int saveIntervalCheck; - unsigned long saveLongIntervalCheck; - if (intervalCheck != nullptr) { - // not available on 3.2 - saveIntervalCheck = *intervalCheck; - *intervalCheck = -1; // lower the interval check so pending calls are processed faster - saveLongIntervalCheck = 0; // prevent compiler warning - } else if (getSwitchInterval != nullptr && setSwitchInterval != nullptr) { - saveLongIntervalCheck = getSwitchInterval(); - setSwitchInterval(0); - saveIntervalCheck = 0; // prevent compiler warning - } - else { - saveIntervalCheck = 0; // prevent compiler warning - saveLongIntervalCheck = 0; // prevent compiler warning - } - - // If threads weren't initialized in our pending call, instead of giving a timeout, try - // to initialize it in this thread. - for(int attempts = 0; !threadsInited() && attempts < 20; attempts++) { - if(attempts > 0){ - // If we haven't been able to do it in the first time, wait a bit before retrying. - Sleep(10); - } - - ThreadMap suspendedThreads; - if (showDebugInfo) { - std::cout << "SuspendThreads(suspendedThreads, addPendingCall, threadsInited);" << std::endl << std::flush; - } - SuspendThreads(suspendedThreads, addPendingCall, threadsInited); - - if(!threadsInited()){ // Check again with threads suspended. - if (showDebugInfo) { - std::cout << "ENTERED if (!threadsInited()) {" << std::endl << std::flush; - } - auto curPyThread = getPythonThread ? getPythonThread() : *curPythonThread; - - if (curPyThread == nullptr) { - if (showDebugInfo) { - std::cout << "ENTERED if (curPyThread == nullptr) {" << std::endl << std::flush; - } - // no threads are currently running, it is safe to initialize multi threading. - PyGILState_STATE gilState; - if (version >= PythonVersion_34) { - // in 3.4 due to http://bugs.python.org/issue20891, - // we need to create our thread state manually - // before we can call PyGILState_Ensure() before we - // can call PyEval_InitThreads(). - - // Don't require this function unless we need it. - auto threadNew = (PyThreadState_NewFunc*)GetProcAddress(module, "PyThreadState_New"); - if (threadNew != nullptr) { - threadNew(head); - } - } - - if (version >= PythonVersion_32) { - // in 3.2 due to the new GIL and later we can't call Py_InitThreads - // without a thread being initialized. - // So we use PyGilState_Ensure here to first - // initialize the current thread, and then we use - // Py_InitThreads to bring up multi-threading. - // Some context here: http://bugs.python.org/issue11329 - // http://pytools.codeplex.com/workitem/834 - gilState = gilEnsure(); - } - else { - gilState = PyGILState_LOCKED; // prevent compiler warning - } - - if (showDebugInfo) { - std::cout << "Called initThreads()" << std::endl << std::flush; - } - // Initialize threads in our secondary thread (this is NOT ideal because - // this thread will be the thread head), but is still better than not being - // able to attach if the main thread is not actually running any code. - initThreads(); - - if (version >= PythonVersion_32) { - // we will release the GIL here - gilRelease(gilState); - } else { - releaseLock(); - } - } - } - ResumeThreads(suspendedThreads); - } - - - if (intervalCheck != nullptr) { - *intervalCheck = saveIntervalCheck; - } else if (setSwitchInterval != nullptr) { - setSwitchInterval(saveLongIntervalCheck); - } - - } - - if (g_heap != nullptr) { - HeapDestroy(g_heap); - g_heap = nullptr; - } - - if (!threadsInited()) { - std::cerr << "Unable to initialize threads in the given timeout! " << std::endl << std::flush; - return 8; - } - - GilHolder gilLock(gilEnsure, gilRelease); // acquire and hold the GIL until done... - - pyRun_SimpleString(command); - return 0; - - } - - - - - // ======================================== Code related to setting tracing to existing threads. - - - /** - * This function is meant to be called to execute some arbitrary python code to be - * run. It'll initialize threads as needed and then run the code with pyRun_SimpleString. - * - * @param command: the python code to be run - * @param attachInfo: pointer to an int specifying whether we should show debug info (1) or not (0). - **/ - DECLDIR int AttachAndRunPythonCode(const char *command, int *attachInfo ) - { - - int SHOW_DEBUG_INFO = 1; - - bool showDebugInfo = (*attachInfo & SHOW_DEBUG_INFO) != 0; - - if (showDebugInfo) { - std::cout << "AttachAndRunPythonCode started (showing debug info). " << std::endl << std::flush; - } - - ModuleInfo moduleInfo = GetPythonModule(); - if (moduleInfo.errorGettingModule != 0) { - return moduleInfo.errorGettingModule; - } - HMODULE module = moduleInfo.module; - int attached = DoAttach(module, moduleInfo.isDebug, command, showDebugInfo); - - if (attached != 0) { - std::cerr << "Error when injecting code in target process. Error code (on windows): " << attached << std::endl << std::flush; - } - return attached; - } - - - DECLDIR int PrintDebugInfo() { - PRINT("Getting debug info..."); - ModuleInfo moduleInfo = GetPythonModule(); - if (moduleInfo.errorGettingModule != 0) { - PRINT("Error getting python module"); - return 0; - } - HMODULE module = moduleInfo.module; - - DEFINE_PROC(interpHead, PyInterpreterState_Head*, "PyInterpreterState_Head", 0); - DEFINE_PROC(threadHead, PyInterpreterState_ThreadHead*, "PyInterpreterState_ThreadHead", 0); - DEFINE_PROC(threadNext, PyThreadState_Next*, "PyThreadState_Next", 160); - DEFINE_PROC(gilEnsure, PyGILState_Ensure*, "PyGILState_Ensure", 0); - DEFINE_PROC(gilRelease, PyGILState_Release*, "PyGILState_Release", 0); - - auto head = interpHead(); - if (head == nullptr) { - // this interpreter is loaded but not initialized. - PRINT("Interpreter not initialized!"); - return 0; - } - - auto version = GetPythonVersion(module); - printf("Python version: %d\n", version); - - GilHolder gilLock(gilEnsure, gilRelease); // acquire and hold the GIL until done... - auto curThread = threadHead(head); - if (curThread == nullptr) { - PRINT("Thread head is NULL.") - return 0; - } - - for (auto curThread = threadHead(head); curThread != nullptr; curThread = threadNext(curThread)) { - printf("Found thread id: %d\n", GetPythonThreadId(version, curThread)); - } - - PRINT("Finished getting debug info.") - return 0; - } - - - /** - * This function may be called to set a tracing function to existing python threads. - **/ - DECLDIR int AttachDebuggerTracing(bool showDebugInfo, void* pSetTraceFunc, void* pTraceFunc, unsigned int threadId, void* pPyNone) - { - ModuleInfo moduleInfo = GetPythonModule(); - if (moduleInfo.errorGettingModule != 0) { - return moduleInfo.errorGettingModule; - } - HMODULE module = moduleInfo.module; - if (showDebugInfo) { - std::cout << "Setting sys trace for existing threads." << std::endl << std::flush; - } - int attached = 0; - PyObjectHolder traceFunc(moduleInfo.isDebug, reinterpret_cast(pTraceFunc), true); - PyObjectHolder setTraceFunc(moduleInfo.isDebug, reinterpret_cast(pSetTraceFunc), true); - PyObjectHolder pyNone(moduleInfo.isDebug, reinterpret_cast(pPyNone), true); - - int temp = InternalSetSysTraceFunc(module, moduleInfo.isDebug, showDebugInfo, &traceFunc, &setTraceFunc, threadId, &pyNone); - if (temp == 0) { - // we've successfully attached the debugger - return 0; - } else { - if (temp > attached) { - //I.e.: the higher the value the more significant it is. - attached = temp; - } - } - - if (showDebugInfo) { - std::cout << "Setting sys trace for existing threads failed with code: " << attached << "." << std::endl << std::flush; - } - return attached; - } - -} - diff --git a/spaces/Superlang/ImageComposition/README.md b/spaces/Superlang/ImageComposition/README.md deleted file mode 100644 index 6c7702581dbde7d8ae9e15d7eab8f55d59d3ddfa..0000000000000000000000000000000000000000 --- a/spaces/Superlang/ImageComposition/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: ImageComposition -emoji: 📊 -colorFrom: blue -colorTo: indigo -sdk: gradio -sdk_version: 3.35.2 -app_file: app.py -pinned: false -license: cc-by-nc-4.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Surn/UnlimitedMusicGen/audiocraft/models/builders.py b/spaces/Surn/UnlimitedMusicGen/audiocraft/models/builders.py deleted file mode 100644 index 77ee5f96fea2e3c9e475fe961bc1a5ee473ed8eb..0000000000000000000000000000000000000000 --- a/spaces/Surn/UnlimitedMusicGen/audiocraft/models/builders.py +++ /dev/null @@ -1,218 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -""" -All the functions to build the relevant models and modules -from the Hydra config. -""" - -import typing as tp -import warnings - -import audiocraft -import omegaconf -import torch - -from .encodec import CompressionModel, EncodecModel, FlattenedCompressionModel # noqa -from .lm import LMModel -from ..modules.codebooks_patterns import ( - CodebooksPatternProvider, - DelayedPatternProvider, - ParallelPatternProvider, - UnrolledPatternProvider, - VALLEPattern, - MusicLMPattern, -) -from ..modules.conditioners import ( - BaseConditioner, - ConditioningProvider, - LUTConditioner, - T5Conditioner, - ConditionFuser, - ChromaStemConditioner, -) -from .. import quantization as qt -from ..utils.utils import dict_from_config - - -def get_quantizer(quantizer: str, cfg: omegaconf.DictConfig, dimension: int) -> qt.BaseQuantizer: - klass = { - 'no_quant': qt.DummyQuantizer, - 'rvq': qt.ResidualVectorQuantizer - }[quantizer] - kwargs = dict_from_config(getattr(cfg, quantizer)) - if quantizer != 'no_quant': - kwargs['dimension'] = dimension - return klass(**kwargs) - - -def get_encodec_autoencoder(encoder_name: str, cfg: omegaconf.DictConfig): - if encoder_name == 'seanet': - kwargs = dict_from_config(getattr(cfg, 'seanet')) - encoder_override_kwargs = kwargs.pop('encoder') - decoder_override_kwargs = kwargs.pop('decoder') - encoder_kwargs = {**kwargs, **encoder_override_kwargs} - decoder_kwargs = {**kwargs, **decoder_override_kwargs} - encoder = audiocraft.modules.SEANetEncoder(**encoder_kwargs) - decoder = audiocraft.modules.SEANetDecoder(**decoder_kwargs) - return encoder, decoder - else: - raise KeyError(f'Unexpected compression model {cfg.compression_model}') - - -def get_compression_model(cfg: omegaconf.DictConfig) -> CompressionModel: - """Instantiate a compression model. - """ - if cfg.compression_model == 'encodec': - kwargs = dict_from_config(getattr(cfg, 'encodec')) - encoder_name = kwargs.pop('autoencoder') - quantizer_name = kwargs.pop('quantizer') - encoder, decoder = get_encodec_autoencoder(encoder_name, cfg) - quantizer = get_quantizer(quantizer_name, cfg, encoder.dimension) - frame_rate = kwargs['sample_rate'] // encoder.hop_length - renormalize = kwargs.pop('renormalize', None) - renorm = kwargs.pop('renorm') - if renormalize is None: - renormalize = renorm is not None - warnings.warn("You are using a deprecated EnCodec model. Please migrate to new renormalization.") - return EncodecModel(encoder, decoder, quantizer, - frame_rate=frame_rate, renormalize=renormalize, **kwargs).to(cfg.device) - else: - raise KeyError(f'Unexpected compression model {cfg.compression_model}') - - -def get_lm_model(cfg: omegaconf.DictConfig) -> LMModel: - """Instantiate a transformer LM. - """ - if cfg.lm_model == 'transformer_lm': - kwargs = dict_from_config(getattr(cfg, 'transformer_lm')) - n_q = kwargs['n_q'] - q_modeling = kwargs.pop('q_modeling', None) - codebooks_pattern_cfg = getattr(cfg, 'codebooks_pattern') - attribute_dropout = dict_from_config(getattr(cfg, 'attribute_dropout')) - cls_free_guidance = dict_from_config(getattr(cfg, 'classifier_free_guidance')) - cfg_prob, cfg_coef = cls_free_guidance["training_dropout"], cls_free_guidance["inference_coef"] - fuser = get_condition_fuser(cfg) - condition_provider = get_conditioner_provider(kwargs["dim"], cfg).to(cfg.device) - if len(fuser.fuse2cond['cross']) > 0: # enforce cross-att programatically - kwargs['cross_attention'] = True - if codebooks_pattern_cfg.modeling is None: - assert q_modeling is not None, \ - 'LM model should either have a codebook pattern defined or transformer_lm.q_modeling' - codebooks_pattern_cfg = omegaconf.OmegaConf.create( - {'modeling': q_modeling, 'delay': {'delays': list(range(n_q))}} - ) - pattern_provider = get_codebooks_pattern_provider(n_q, codebooks_pattern_cfg) - return LMModel( - pattern_provider=pattern_provider, - condition_provider=condition_provider, - fuser=fuser, - cfg_dropout=cfg_prob, - cfg_coef=cfg_coef, - attribute_dropout=attribute_dropout, - dtype=getattr(torch, cfg.dtype), - device=cfg.device, - **kwargs - ).to(cfg.device) - else: - raise KeyError(f'Unexpected LM model {cfg.lm_model}') - - -def get_conditioner_provider(output_dim: int, cfg: omegaconf.DictConfig) -> ConditioningProvider: - """Instantiate a conditioning model. - """ - device = cfg.device - duration = cfg.dataset.segment_duration - cfg = getattr(cfg, "conditioners") - cfg = omegaconf.OmegaConf.create({}) if cfg is None else cfg - conditioners: tp.Dict[str, BaseConditioner] = {} - with omegaconf.open_dict(cfg): - condition_provider_args = cfg.pop('args', {}) - for cond, cond_cfg in cfg.items(): - model_type = cond_cfg["model"] - model_args = cond_cfg[model_type] - if model_type == "t5": - conditioners[str(cond)] = T5Conditioner(output_dim=output_dim, device=device, **model_args) - elif model_type == "lut": - conditioners[str(cond)] = LUTConditioner(output_dim=output_dim, **model_args) - elif model_type == "chroma_stem": - model_args.pop('cache_path', None) - conditioners[str(cond)] = ChromaStemConditioner( - output_dim=output_dim, - duration=duration, - device=device, - **model_args - ) - else: - raise ValueError(f"unrecognized conditioning model: {model_type}") - conditioner = ConditioningProvider(conditioners, device=device, **condition_provider_args) - return conditioner - - -def get_condition_fuser(cfg: omegaconf.DictConfig) -> ConditionFuser: - """Instantiate a condition fuser object. - """ - fuser_cfg = getattr(cfg, "fuser") - fuser_methods = ["sum", "cross", "prepend", "input_interpolate"] - fuse2cond = {k: fuser_cfg[k] for k in fuser_methods} - kwargs = {k: v for k, v in fuser_cfg.items() if k not in fuser_methods} - fuser = ConditionFuser(fuse2cond=fuse2cond, **kwargs) - return fuser - - -def get_codebooks_pattern_provider(n_q: int, cfg: omegaconf.DictConfig) -> CodebooksPatternProvider: - """Instantiate a codebooks pattern provider object. - """ - pattern_providers = { - 'parallel': ParallelPatternProvider, - 'delay': DelayedPatternProvider, - 'unroll': UnrolledPatternProvider, - 'valle': VALLEPattern, - 'musiclm': MusicLMPattern, - } - name = cfg.modeling - kwargs = dict_from_config(cfg.get(name)) if hasattr(cfg, name) else {} - klass = pattern_providers[name] - return klass(n_q, **kwargs) - - -def get_debug_compression_model(device='cpu'): - """Instantiate a debug compression model to be used for unit tests. - """ - seanet_kwargs = { - 'n_filters': 4, - 'n_residual_layers': 1, - 'dimension': 32, - 'ratios': [10, 8, 16] # 25 Hz at 32kHz - } - encoder = audiocraft.modules.SEANetEncoder(**seanet_kwargs) - decoder = audiocraft.modules.SEANetDecoder(**seanet_kwargs) - quantizer = qt.ResidualVectorQuantizer(dimension=32, bins=400, n_q=4) - init_x = torch.randn(8, 32, 128) - quantizer(init_x, 1) # initialize kmeans etc. - compression_model = EncodecModel( - encoder, decoder, quantizer, - frame_rate=25, sample_rate=32000, channels=1).to(device) - return compression_model.eval() - - -def get_debug_lm_model(device='cpu'): - """Instantiate a debug LM to be used for unit tests. - """ - pattern = DelayedPatternProvider(n_q=4) - dim = 16 - providers = { - 'description': LUTConditioner(n_bins=128, dim=dim, output_dim=dim, tokenizer="whitespace"), - } - condition_provider = ConditioningProvider(providers) - fuser = ConditionFuser( - {'cross': ['description'], 'prepend': [], - 'sum': [], 'input_interpolate': []}) - lm = LMModel( - pattern, condition_provider, fuser, - n_q=4, card=400, dim=dim, num_heads=4, custom=True, num_layers=2, - cross_attention=True, causal=True) - return lm.to(device).eval() diff --git a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_internal/build_env.py b/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_internal/build_env.py deleted file mode 100644 index 4f704a3547da02f913d6cfdbd4e0ed77c81caabe..0000000000000000000000000000000000000000 --- a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_internal/build_env.py +++ /dev/null @@ -1,311 +0,0 @@ -"""Build Environment used for isolation during sdist building -""" - -import logging -import os -import pathlib -import site -import sys -import textwrap -from collections import OrderedDict -from types import TracebackType -from typing import TYPE_CHECKING, Iterable, List, Optional, Set, Tuple, Type, Union - -from pip._vendor.certifi import where -from pip._vendor.packaging.requirements import Requirement -from pip._vendor.packaging.version import Version - -from pip import __file__ as pip_location -from pip._internal.cli.spinners import open_spinner -from pip._internal.locations import get_platlib, get_purelib, get_scheme -from pip._internal.metadata import get_default_environment, get_environment -from pip._internal.utils.subprocess import call_subprocess -from pip._internal.utils.temp_dir import TempDirectory, tempdir_kinds - -if TYPE_CHECKING: - from pip._internal.index.package_finder import PackageFinder - -logger = logging.getLogger(__name__) - - -def _dedup(a: str, b: str) -> Union[Tuple[str], Tuple[str, str]]: - return (a, b) if a != b else (a,) - - -class _Prefix: - def __init__(self, path: str) -> None: - self.path = path - self.setup = False - scheme = get_scheme("", prefix=path) - self.bin_dir = scheme.scripts - self.lib_dirs = _dedup(scheme.purelib, scheme.platlib) - - -def get_runnable_pip() -> str: - """Get a file to pass to a Python executable, to run the currently-running pip. - - This is used to run a pip subprocess, for installing requirements into the build - environment. - """ - source = pathlib.Path(pip_location).resolve().parent - - if not source.is_dir(): - # This would happen if someone is using pip from inside a zip file. In that - # case, we can use that directly. - return str(source) - - return os.fsdecode(source / "__pip-runner__.py") - - -def _get_system_sitepackages() -> Set[str]: - """Get system site packages - - Usually from site.getsitepackages, - but fallback on `get_purelib()/get_platlib()` if unavailable - (e.g. in a virtualenv created by virtualenv<20) - - Returns normalized set of strings. - """ - if hasattr(site, "getsitepackages"): - system_sites = site.getsitepackages() - else: - # virtualenv < 20 overwrites site.py without getsitepackages - # fallback on get_purelib/get_platlib. - # this is known to miss things, but shouldn't in the cases - # where getsitepackages() has been removed (inside a virtualenv) - system_sites = [get_purelib(), get_platlib()] - return {os.path.normcase(path) for path in system_sites} - - -class BuildEnvironment: - """Creates and manages an isolated environment to install build deps""" - - def __init__(self) -> None: - temp_dir = TempDirectory(kind=tempdir_kinds.BUILD_ENV, globally_managed=True) - - self._prefixes = OrderedDict( - (name, _Prefix(os.path.join(temp_dir.path, name))) - for name in ("normal", "overlay") - ) - - self._bin_dirs: List[str] = [] - self._lib_dirs: List[str] = [] - for prefix in reversed(list(self._prefixes.values())): - self._bin_dirs.append(prefix.bin_dir) - self._lib_dirs.extend(prefix.lib_dirs) - - # Customize site to: - # - ensure .pth files are honored - # - prevent access to system site packages - system_sites = _get_system_sitepackages() - - self._site_dir = os.path.join(temp_dir.path, "site") - if not os.path.exists(self._site_dir): - os.mkdir(self._site_dir) - with open( - os.path.join(self._site_dir, "sitecustomize.py"), "w", encoding="utf-8" - ) as fp: - fp.write( - textwrap.dedent( - """ - import os, site, sys - - # First, drop system-sites related paths. - original_sys_path = sys.path[:] - known_paths = set() - for path in {system_sites!r}: - site.addsitedir(path, known_paths=known_paths) - system_paths = set( - os.path.normcase(path) - for path in sys.path[len(original_sys_path):] - ) - original_sys_path = [ - path for path in original_sys_path - if os.path.normcase(path) not in system_paths - ] - sys.path = original_sys_path - - # Second, add lib directories. - # ensuring .pth file are processed. - for path in {lib_dirs!r}: - assert not path in sys.path - site.addsitedir(path) - """ - ).format(system_sites=system_sites, lib_dirs=self._lib_dirs) - ) - - def __enter__(self) -> None: - self._save_env = { - name: os.environ.get(name, None) - for name in ("PATH", "PYTHONNOUSERSITE", "PYTHONPATH") - } - - path = self._bin_dirs[:] - old_path = self._save_env["PATH"] - if old_path: - path.extend(old_path.split(os.pathsep)) - - pythonpath = [self._site_dir] - - os.environ.update( - { - "PATH": os.pathsep.join(path), - "PYTHONNOUSERSITE": "1", - "PYTHONPATH": os.pathsep.join(pythonpath), - } - ) - - def __exit__( - self, - exc_type: Optional[Type[BaseException]], - exc_val: Optional[BaseException], - exc_tb: Optional[TracebackType], - ) -> None: - for varname, old_value in self._save_env.items(): - if old_value is None: - os.environ.pop(varname, None) - else: - os.environ[varname] = old_value - - def check_requirements( - self, reqs: Iterable[str] - ) -> Tuple[Set[Tuple[str, str]], Set[str]]: - """Return 2 sets: - - conflicting requirements: set of (installed, wanted) reqs tuples - - missing requirements: set of reqs - """ - missing = set() - conflicting = set() - if reqs: - env = ( - get_environment(self._lib_dirs) - if hasattr(self, "_lib_dirs") - else get_default_environment() - ) - for req_str in reqs: - req = Requirement(req_str) - # We're explicitly evaluating with an empty extra value, since build - # environments are not provided any mechanism to select specific extras. - if req.marker is not None and not req.marker.evaluate({"extra": ""}): - continue - dist = env.get_distribution(req.name) - if not dist: - missing.add(req_str) - continue - if isinstance(dist.version, Version): - installed_req_str = f"{req.name}=={dist.version}" - else: - installed_req_str = f"{req.name}==={dist.version}" - if not req.specifier.contains(dist.version, prereleases=True): - conflicting.add((installed_req_str, req_str)) - # FIXME: Consider direct URL? - return conflicting, missing - - def install_requirements( - self, - finder: "PackageFinder", - requirements: Iterable[str], - prefix_as_string: str, - *, - kind: str, - ) -> None: - prefix = self._prefixes[prefix_as_string] - assert not prefix.setup - prefix.setup = True - if not requirements: - return - self._install_requirements( - get_runnable_pip(), - finder, - requirements, - prefix, - kind=kind, - ) - - @staticmethod - def _install_requirements( - pip_runnable: str, - finder: "PackageFinder", - requirements: Iterable[str], - prefix: _Prefix, - *, - kind: str, - ) -> None: - args: List[str] = [ - sys.executable, - pip_runnable, - "install", - "--ignore-installed", - "--no-user", - "--prefix", - prefix.path, - "--no-warn-script-location", - ] - if logger.getEffectiveLevel() <= logging.DEBUG: - args.append("-v") - for format_control in ("no_binary", "only_binary"): - formats = getattr(finder.format_control, format_control) - args.extend( - ( - "--" + format_control.replace("_", "-"), - ",".join(sorted(formats or {":none:"})), - ) - ) - - index_urls = finder.index_urls - if index_urls: - args.extend(["-i", index_urls[0]]) - for extra_index in index_urls[1:]: - args.extend(["--extra-index-url", extra_index]) - else: - args.append("--no-index") - for link in finder.find_links: - args.extend(["--find-links", link]) - - for host in finder.trusted_hosts: - args.extend(["--trusted-host", host]) - if finder.allow_all_prereleases: - args.append("--pre") - if finder.prefer_binary: - args.append("--prefer-binary") - args.append("--") - args.extend(requirements) - extra_environ = {"_PIP_STANDALONE_CERT": where()} - with open_spinner(f"Installing {kind}") as spinner: - call_subprocess( - args, - command_desc=f"pip subprocess to install {kind}", - spinner=spinner, - extra_environ=extra_environ, - ) - - -class NoOpBuildEnvironment(BuildEnvironment): - """A no-op drop-in replacement for BuildEnvironment""" - - def __init__(self) -> None: - pass - - def __enter__(self) -> None: - pass - - def __exit__( - self, - exc_type: Optional[Type[BaseException]], - exc_val: Optional[BaseException], - exc_tb: Optional[TracebackType], - ) -> None: - pass - - def cleanup(self) -> None: - pass - - def install_requirements( - self, - finder: "PackageFinder", - requirements: Iterable[str], - prefix_as_string: str, - *, - kind: str, - ) -> None: - raise NotImplementedError() diff --git a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_internal/req/__init__.py b/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_internal/req/__init__.py deleted file mode 100644 index 16de903a44cbfdf2f4dc40ee581059155fa1a9b3..0000000000000000000000000000000000000000 --- a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_internal/req/__init__.py +++ /dev/null @@ -1,92 +0,0 @@ -import collections -import logging -from typing import Generator, List, Optional, Sequence, Tuple - -from pip._internal.utils.logging import indent_log - -from .req_file import parse_requirements -from .req_install import InstallRequirement -from .req_set import RequirementSet - -__all__ = [ - "RequirementSet", - "InstallRequirement", - "parse_requirements", - "install_given_reqs", -] - -logger = logging.getLogger(__name__) - - -class InstallationResult: - def __init__(self, name: str) -> None: - self.name = name - - def __repr__(self) -> str: - return f"InstallationResult(name={self.name!r})" - - -def _validate_requirements( - requirements: List[InstallRequirement], -) -> Generator[Tuple[str, InstallRequirement], None, None]: - for req in requirements: - assert req.name, f"invalid to-be-installed requirement: {req}" - yield req.name, req - - -def install_given_reqs( - requirements: List[InstallRequirement], - global_options: Sequence[str], - root: Optional[str], - home: Optional[str], - prefix: Optional[str], - warn_script_location: bool, - use_user_site: bool, - pycompile: bool, -) -> List[InstallationResult]: - """ - Install everything in the given list. - - (to be called after having downloaded and unpacked the packages) - """ - to_install = collections.OrderedDict(_validate_requirements(requirements)) - - if to_install: - logger.info( - "Installing collected packages: %s", - ", ".join(to_install.keys()), - ) - - installed = [] - - with indent_log(): - for req_name, requirement in to_install.items(): - if requirement.should_reinstall: - logger.info("Attempting uninstall: %s", req_name) - with indent_log(): - uninstalled_pathset = requirement.uninstall(auto_confirm=True) - else: - uninstalled_pathset = None - - try: - requirement.install( - global_options, - root=root, - home=home, - prefix=prefix, - warn_script_location=warn_script_location, - use_user_site=use_user_site, - pycompile=pycompile, - ) - except Exception: - # if install did not succeed, rollback previous uninstall - if uninstalled_pathset and not requirement.install_succeeded: - uninstalled_pathset.rollback() - raise - else: - if uninstalled_pathset and requirement.install_succeeded: - uninstalled_pathset.commit() - - installed.append(InstallationResult(req_name)) - - return installed diff --git a/spaces/TencentARC/VLog/models/grit_src/third_party/CenterNet2/configs/new_baselines/mask_rcnn_regnetx_4gf_dds_FPN_400ep_LSJ.py b/spaces/TencentARC/VLog/models/grit_src/third_party/CenterNet2/configs/new_baselines/mask_rcnn_regnetx_4gf_dds_FPN_400ep_LSJ.py deleted file mode 100644 index 8f369a2afedb6c6e69fd52ff9a9a6b1cdf965937..0000000000000000000000000000000000000000 --- a/spaces/TencentARC/VLog/models/grit_src/third_party/CenterNet2/configs/new_baselines/mask_rcnn_regnetx_4gf_dds_FPN_400ep_LSJ.py +++ /dev/null @@ -1,14 +0,0 @@ -from .mask_rcnn_regnetx_4gf_dds_FPN_100ep_LSJ import ( - dataloader, - lr_multiplier, - model, - optimizer, - train, -) - -train.max_iter *= 4 # 100ep -> 400ep - -lr_multiplier.scheduler.milestones = [ - milestone * 4 for milestone in lr_multiplier.scheduler.milestones -] -lr_multiplier.scheduler.num_updates = train.max_iter diff --git a/spaces/TencentARC/VLog/models/grit_src/third_party/CenterNet2/detectron2/structures/__init__.py b/spaces/TencentARC/VLog/models/grit_src/third_party/CenterNet2/detectron2/structures/__init__.py deleted file mode 100644 index f3ee6057e3ec2731984ce8203c6eaf5348d08260..0000000000000000000000000000000000000000 --- a/spaces/TencentARC/VLog/models/grit_src/third_party/CenterNet2/detectron2/structures/__init__.py +++ /dev/null @@ -1,17 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -from .boxes import Boxes, BoxMode, pairwise_iou, pairwise_ioa, pairwise_point_box_distance -from .image_list import ImageList - -from .instances import Instances -from .keypoints import Keypoints, heatmaps_to_keypoints -from .masks import BitMasks, PolygonMasks, polygons_to_bitmask, ROIMasks -from .rotated_boxes import RotatedBoxes -from .rotated_boxes import pairwise_iou as pairwise_iou_rotated - -__all__ = [k for k in globals().keys() if not k.startswith("_")] - - -from detectron2.utils.env import fixup_module_metadata - -fixup_module_metadata(__name__, globals(), __all__) -del fixup_module_metadata diff --git a/spaces/Toraong/color_textual_inversion/app.py b/spaces/Toraong/color_textual_inversion/app.py deleted file mode 100644 index 38b6049ad70f37eb78e8e2bb51bbccc14a769b90..0000000000000000000000000000000000000000 --- a/spaces/Toraong/color_textual_inversion/app.py +++ /dev/null @@ -1,128 +0,0 @@ -from __future__ import annotations - -import shlex -import subprocess -from pathlib import Path -from tempfile import TemporaryDirectory -from textwrap import dedent - -import numpy as np -import streamlit as st -import torch -from PIL import Image -from transformers import CLIPTokenizer - - -def hex_to_rgb(s: str) -> tuple[int, int, int]: - value = s.lstrip("#") - return (int(value[:2], 16), int(value[2:4], 16), int(value[4:6], 16)) - - -st.header("Color Textual Inversion") -with st.expander(label="info"): - with open("info.txt", "r", encoding="utf-8") as f: - st.markdown(f.read()) - -duplicate_button = """Duplicate Space""" -st.markdown(duplicate_button, unsafe_allow_html=True) - -col1, col2 = st.columns([15, 85]) -color = col1.color_picker("Pick a color", "#00f900") -col2.text_input("", color, disabled=True) - -emb_name = st.text_input("Embedding name", color.lstrip("#").upper()) -init_token = st.text_input("Initializer token", "init token name") -rgb = hex_to_rgb(color) - -img_array = np.zeros((128, 128, 3), dtype=np.uint8) -for i in range(3): - img_array[..., i] = rgb[i] - -dataset_temp = TemporaryDirectory(prefix="dataset_", dir=".") -dataset_path = Path(dataset_temp.name) -output_temp = TemporaryDirectory(prefix="output_", dir=".") -output_path = Path(output_temp.name) - -img_path = dataset_path / f"{emb_name}.png" -Image.fromarray(img_array).save(img_path) - -with st.sidebar: - model_name = st.text_input("Model name", "Linaqruf/anything-v3.0") - steps = st.slider("Steps", 1, 2, 1, step=1) - learning_rate = st.text_input("Learning rate", "0.001") - learning_rate = float(learning_rate) - -tokenizer = CLIPTokenizer.from_pretrained(model_name, subfolder="tokenizer") - -# case 1: init_token is not a single token -token = tokenizer.tokenize(init_token) -if len(token) > 1: - st.warning("Initializer token must be a single token") - st.stop() - -# case 2: init_token already exists in the tokenizer -num_added_tokens = tokenizer.add_tokens(emb_name) -if num_added_tokens == 0: - st.warning(f"The tokenizer already contains the token {emb_name}") - st.stop() - -cmd = """ -accelerate launch textual_inversion.py \ - --pretrained_model_name_or_path={model_name} \ - --train_data_dir={dataset_path} \ - --learnable_property="style" \ - --placeholder_token="{emb_name}" \ - --initializer_token="{init}" \ - --resolution=128 \ - --train_batch_size=1 \ - --repeats=1 \ - --gradient_accumulation_steps=1 \ - --max_train_steps={steps} \ - --learning_rate={lr} \ - --output_dir={output_path} \ - --only_save_embeds -""".strip() - -cmd = dedent(cmd).format( - model_name=model_name, - dataset_path=dataset_path.as_posix(), - emb_name=emb_name, - init=init_token, - steps=steps, - lr=learning_rate, - output_path=output_path.as_posix(), -) -cmd = shlex.split(cmd) - -result_path = output_path / "learned_embeds.bin" -captured = "" - -start_button = st.button("Start") -download_button = st.empty() - -if start_button: - with st.spinner("Training..."): - placeholder = st.empty() - p = subprocess.Popen( - cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE, encoding="utf-8" - ) - - while line := p.stderr.readline(): - captured += line - placeholder.code(captured, language="bash") - -if not result_path.exists(): - st.stop() - -# fix unknown file volume bug -trained_emb = torch.load(result_path, map_location="cpu") -for k, v in trained_emb.items(): - trained_emb[k] = torch.from_numpy(v.numpy()) -torch.save(trained_emb, result_path) - -file = result_path.read_bytes() -download_button.download_button(f"Download {emb_name}.pt", file, f"{emb_name}.pt") -st.download_button(f"Download {emb_name}.pt ", file, f"{emb_name}.pt") - -dataset_temp.cleanup() -output_temp.cleanup() diff --git a/spaces/VIPLab/Caption-Anything/caption_anything/captioner/__init__.py b/spaces/VIPLab/Caption-Anything/caption_anything/captioner/__init__.py deleted file mode 100644 index 70952876daafe2a9479aa90b3845e58de6376711..0000000000000000000000000000000000000000 --- a/spaces/VIPLab/Caption-Anything/caption_anything/captioner/__init__.py +++ /dev/null @@ -1,15 +0,0 @@ -from .blip import BLIPCaptioner -from .blip2 import BLIP2Captioner -from .git import GITCaptioner -from .base_captioner import BaseCaptioner - - -def build_captioner(type, device, args=None): - if type == 'blip': - return BLIPCaptioner(device, enable_filter=args.clip_filter) - elif type == 'blip2': - return BLIP2Captioner(device, enable_filter=args.clip_filter) - elif type == 'git': - return GITCaptioner(device, enable_filter=args.clip_filter) - else: - raise NotImplementedError("") \ No newline at end of file diff --git a/spaces/Vedarutvija/ZebraGPT/README.md b/spaces/Vedarutvija/ZebraGPT/README.md deleted file mode 100644 index 331a1d028b3da3caae949c60d2feecc333c0a159..0000000000000000000000000000000000000000 --- a/spaces/Vedarutvija/ZebraGPT/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: ZebraGPT -emoji: 👁 -colorFrom: purple -colorTo: yellow -sdk: gradio -sdk_version: 3.50.0 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/VickyKira/NASAGPT/g4f/__init__.py b/spaces/VickyKira/NASAGPT/g4f/__init__.py deleted file mode 100644 index a0b4bac6aa4de9c0449095a3874c2cb9716169d7..0000000000000000000000000000000000000000 --- a/spaces/VickyKira/NASAGPT/g4f/__init__.py +++ /dev/null @@ -1,39 +0,0 @@ -import sys -from . import Provider -from g4f.models import Model, ModelUtils - - -class ChatCompletion: - @staticmethod - def create(model: Model.model or str, messages: list, provider: Provider.Provider = None, stream: bool = False, auth: str = False, **kwargs): - kwargs['auth'] = auth - - if provider and provider.needs_auth and not auth: - print( - f'ValueError: {provider.__name__} requires authentication (use auth="cookie or token or jwt ..." param)', file=sys.stderr) - sys.exit(1) - - try: - if isinstance(model, str): - try: - model = ModelUtils.convert[model] - except KeyError: - raise Exception(f'The model: {model} does not exist') - - engine = model.best_provider if not provider else provider - - if not engine.supports_stream and stream == True: - print( - f"ValueError: {engine.__name__} does not support 'stream' argument", file=sys.stderr) - sys.exit(1) - - print(f'Using {engine.__name__} provider') - - return (engine._create_completion(model.name, messages, stream, **kwargs) - if stream else ''.join(engine._create_completion(model.name, messages, stream, **kwargs))) - except TypeError as e: - print(e) - arg: str = str(e).split("'")[1] - print( - f"ValueError: {engine.__name__} does not support '{arg}' argument", file=sys.stderr) - sys.exit(1) diff --git a/spaces/Vijish/Image_generator/app.py b/spaces/Vijish/Image_generator/app.py deleted file mode 100644 index c0b97820f4ddac413b528802d8249434476701a9..0000000000000000000000000000000000000000 --- a/spaces/Vijish/Image_generator/app.py +++ /dev/null @@ -1,27 +0,0 @@ -import gradio as gr - -from diffusers import StableDiffusionPipeline -import torch - -def dummy(images, **kwargs): return images, False - - -def generate_image(prompt): - model_id = "prompthero/openjourney-v4" - pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16) - pipe.safety_checker = dummy - pipe = pipe.to("cuda") - image = pipe(prompt).images[0] - return image - - -iface = gr.Interface( - generate_image, - "textbox", - "image", - title="Image Generator", - description="Generate a realistic photo using prompts.", - examples=[["realistic! photoshoot for a new balenciaga lookbook, color film photography, portrait of a beautiful woman wearing a balaclava mask, photo in style of tyler mitchell, 35mm lens"]], -) - -iface.launch() \ No newline at end of file diff --git a/spaces/VincentZB/Stable-Diffusion-ControlNet-WebUI/diffusion_webui/utils/data_utils.py b/spaces/VincentZB/Stable-Diffusion-ControlNet-WebUI/diffusion_webui/utils/data_utils.py deleted file mode 100644 index c57719012aa6d1e73e144c84ca0aaddeac33a383..0000000000000000000000000000000000000000 --- a/spaces/VincentZB/Stable-Diffusion-ControlNet-WebUI/diffusion_webui/utils/data_utils.py +++ /dev/null @@ -1,12 +0,0 @@ -from PIL import Image - - -def image_grid(imgs, rows, cols): - assert len(imgs) == rows * cols - - w, h = imgs[0].size - grid = Image.new("RGB", size=(cols * w, rows * h)) - - for i, img in enumerate(imgs): - grid.paste(img, box=(i % cols * w, i // cols * h)) - return grid diff --git a/spaces/Vision-CAIR/MiniGPT-v2/minigpt4/models/__init__.py b/spaces/Vision-CAIR/MiniGPT-v2/minigpt4/models/__init__.py deleted file mode 100644 index bc01b56181aa81554efbe9df10ab3678a1c7bb86..0000000000000000000000000000000000000000 --- a/spaces/Vision-CAIR/MiniGPT-v2/minigpt4/models/__init__.py +++ /dev/null @@ -1,202 +0,0 @@ -""" - Copyright (c) 2022, salesforce.com, inc. - All rights reserved. - SPDX-License-Identifier: BSD-3-Clause - For full license text, see the LICENSE_Lavis file in the repo root or https://opensource.org/licenses/BSD-3-Clause -""" - -import logging -import torch -from omegaconf import OmegaConf - -from minigpt4.common.registry import registry -from minigpt4.models.base_model import BaseModel -from minigpt4.models.minigpt_base import MiniGPTBase -from minigpt4.models.minigpt4 import MiniGPT4 -from minigpt4.models.minigpt_v2 import MiniGPTv2 -from minigpt4.processors.base_processor import BaseProcessor - - -__all__ = [ - "load_model", - "BaseModel", - "MiniGPTBase", - "MiniGPT4", - "MiniGPTv2" -] - - -def load_model(name, model_type, is_eval=False, device="cpu", checkpoint=None): - """ - Load supported models. - - To list all available models and types in registry: - >>> from minigpt4.models import model_zoo - >>> print(model_zoo) - - Args: - name (str): name of the model. - model_type (str): type of the model. - is_eval (bool): whether the model is in eval mode. Default: False. - device (str): device to use. Default: "cpu". - checkpoint (str): path or to checkpoint. Default: None. - Note that expecting the checkpoint to have the same keys in state_dict as the model. - - Returns: - model (torch.nn.Module): model. - """ - - model = registry.get_model_class(name).from_pretrained(model_type=model_type) - - if checkpoint is not None: - model.load_checkpoint(checkpoint) - - if is_eval: - model.eval() - - if device == "cpu": - model = model.float() - - return model.to(device) - - -def load_preprocess(config): - """ - Load preprocessor configs and construct preprocessors. - - If no preprocessor is specified, return BaseProcessor, which does not do any preprocessing. - - Args: - config (dict): preprocessor configs. - - Returns: - vis_processors (dict): preprocessors for visual inputs. - txt_processors (dict): preprocessors for text inputs. - - Key is "train" or "eval" for processors used in training and evaluation respectively. - """ - - def _build_proc_from_cfg(cfg): - return ( - registry.get_processor_class(cfg.name).from_config(cfg) - if cfg is not None - else BaseProcessor() - ) - - vis_processors = dict() - txt_processors = dict() - - vis_proc_cfg = config.get("vis_processor") - txt_proc_cfg = config.get("text_processor") - - if vis_proc_cfg is not None: - vis_train_cfg = vis_proc_cfg.get("train") - vis_eval_cfg = vis_proc_cfg.get("eval") - else: - vis_train_cfg = None - vis_eval_cfg = None - - vis_processors["train"] = _build_proc_from_cfg(vis_train_cfg) - vis_processors["eval"] = _build_proc_from_cfg(vis_eval_cfg) - - if txt_proc_cfg is not None: - txt_train_cfg = txt_proc_cfg.get("train") - txt_eval_cfg = txt_proc_cfg.get("eval") - else: - txt_train_cfg = None - txt_eval_cfg = None - - txt_processors["train"] = _build_proc_from_cfg(txt_train_cfg) - txt_processors["eval"] = _build_proc_from_cfg(txt_eval_cfg) - - return vis_processors, txt_processors - - -def load_model_and_preprocess(name, model_type, is_eval=False, device="cpu"): - """ - Load model and its related preprocessors. - - List all available models and types in registry: - >>> from minigpt4.models import model_zoo - >>> print(model_zoo) - - Args: - name (str): name of the model. - model_type (str): type of the model. - is_eval (bool): whether the model is in eval mode. Default: False. - device (str): device to use. Default: "cpu". - - Returns: - model (torch.nn.Module): model. - vis_processors (dict): preprocessors for visual inputs. - txt_processors (dict): preprocessors for text inputs. - """ - model_cls = registry.get_model_class(name) - - # load model - model = model_cls.from_pretrained(model_type=model_type) - - if is_eval: - model.eval() - - # load preprocess - cfg = OmegaConf.load(model_cls.default_config_path(model_type)) - if cfg is not None: - preprocess_cfg = cfg.preprocess - - vis_processors, txt_processors = load_preprocess(preprocess_cfg) - else: - vis_processors, txt_processors = None, None - logging.info( - f"""No default preprocess for model {name} ({model_type}). - This can happen if the model is not finetuned on downstream datasets, - or it is not intended for direct use without finetuning. - """ - ) - - if device == "cpu" or device == torch.device("cpu"): - model = model.float() - - return model.to(device), vis_processors, txt_processors - - -class ModelZoo: - """ - A utility class to create string representation of available model architectures and types. - - >>> from minigpt4.models import model_zoo - >>> # list all available models - >>> print(model_zoo) - >>> # show total number of models - >>> print(len(model_zoo)) - """ - - def __init__(self) -> None: - self.model_zoo = { - k: list(v.PRETRAINED_MODEL_CONFIG_DICT.keys()) - for k, v in registry.mapping["model_name_mapping"].items() - } - - def __str__(self) -> str: - return ( - "=" * 50 - + "\n" - + f"{'Architectures':<30} {'Types'}\n" - + "=" * 50 - + "\n" - + "\n".join( - [ - f"{name:<30} {', '.join(types)}" - for name, types in self.model_zoo.items() - ] - ) - ) - - def __iter__(self): - return iter(self.model_zoo.items()) - - def __len__(self): - return sum([len(v) for v in self.model_zoo.values()]) - - -model_zoo = ModelZoo() diff --git a/spaces/VoiceHero69/changer/setup_tools/magicinstaller/requirements/torch_package.py b/spaces/VoiceHero69/changer/setup_tools/magicinstaller/requirements/torch_package.py deleted file mode 100644 index 9d65e9466e0c999a5601d081016711ccb0c1f099..0000000000000000000000000000000000000000 --- a/spaces/VoiceHero69/changer/setup_tools/magicinstaller/requirements/torch_package.py +++ /dev/null @@ -1,19 +0,0 @@ -from setup_tools.magicinstaller.requirement import Requirement - - -class Torch(Requirement): - def is_right_version(self): - ver = self.get_package_version('torch') - if ver: - # Check if a CUDA version is installed - return ver.startswith('2') and ('+cu' in ver if self.is_windows() else True) - return False - - def is_installed(self): - return self.install_check('torch') - - def install(self): - if self.is_windows(): - return self.install_pip('torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu117', 'PyTorch') - else: - return self.install_pip('torch torchvision torchaudio', 'PyTorch') diff --git a/spaces/Wanlau/sovits-4.0_datealive/cluster/__init__.py b/spaces/Wanlau/sovits-4.0_datealive/cluster/__init__.py deleted file mode 100644 index f1b9bde04e73e9218a5d534227caa4c25332f424..0000000000000000000000000000000000000000 --- a/spaces/Wanlau/sovits-4.0_datealive/cluster/__init__.py +++ /dev/null @@ -1,29 +0,0 @@ -import numpy as np -import torch -from sklearn.cluster import KMeans - -def get_cluster_model(ckpt_path): - checkpoint = torch.load(ckpt_path) - kmeans_dict = {} - for spk, ckpt in checkpoint.items(): - km = KMeans(ckpt["n_features_in_"]) - km.__dict__["n_features_in_"] = ckpt["n_features_in_"] - km.__dict__["_n_threads"] = ckpt["_n_threads"] - km.__dict__["cluster_centers_"] = ckpt["cluster_centers_"] - kmeans_dict[spk] = km - return kmeans_dict - -def get_cluster_result(model, x, speaker): - """ - x: np.array [t, 256] - return cluster class result - """ - return model[speaker].predict(x) - -def get_cluster_center_result(model, x,speaker): - """x: np.array [t, 256]""" - predict = model[speaker].predict(x) - return model[speaker].cluster_centers_[predict] - -def get_center(model, x,speaker): - return model[speaker].cluster_centers_[x] diff --git a/spaces/Weshden/Nsfw1/README.md b/spaces/Weshden/Nsfw1/README.md deleted file mode 100644 index 3957797b8279e6d1f7650e76b173f6f827493416..0000000000000000000000000000000000000000 --- a/spaces/Weshden/Nsfw1/README.md +++ /dev/null @@ -1,10 +0,0 @@ ---- -title: Nsfw1 -emoji: 👀 -colorFrom: gray -colorTo: purple -sdk: docker -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Widium/Image-Recreation/functions/init.py b/spaces/Widium/Image-Recreation/functions/init.py deleted file mode 100644 index 02b3644e3941863f64af54f1ea568008cd3f01d8..0000000000000000000000000000000000000000 --- a/spaces/Widium/Image-Recreation/functions/init.py +++ /dev/null @@ -1,59 +0,0 @@ -# *************************************************************************** # -# # -# init.py # -# # -# By: Widium # -# Github : https://github.com/widium # -# # -# Created: 2023/05/05 16:08:50 by Widium # -# Updated: 2023/05/05 16:08:50 by Widium # -# # -# **************************************************************************** # - -from keras import Model -from tensorflow import Tensor -from tensorflow import Variable - -from .processing import create_batch_image -from .image import clip_pixel -from .image import create_noisy_imag -from .extract import get_features_map -from .extract import extract_content - -# ===================================================== # - -def init_generated_img(style_img : Tensor): - """ - Initialize the generated image with noise, clipped pixel values, and a batch dimension. - - Args: - style_img (Tensor): The input style image as a tensor. - - Returns: - Tensor: The initialized generated image as a tensor. - """ - generated_img = create_noisy_imag(style_img) - generated_img = clip_pixel(generated_img) - generated_img = create_batch_image(generated_img) - generated_img = Variable(generated_img) - - return (generated_img) - -# ===================================================== # - -def init_content_target(model : Model, content_img : Tensor)->Tensor: - """ - Initialize the content target by extracting content features from the content image. - - Args: - model (Model): Model to be used for extracting features from the content image. - content_img (Tensor): Content image from which to extract content features. - - Returns: - content_target (Tensor): Extracted content target from the content image. - """ - content_img = create_batch_image(content_img) - features_map = get_features_map(model, content_img) - content_target = extract_content(features_map) - - return (content_target) \ No newline at end of file diff --git a/spaces/XS-1/BW_IMAGE_VIDEO_COLORIZER/deoldify/loss.py b/spaces/XS-1/BW_IMAGE_VIDEO_COLORIZER/deoldify/loss.py deleted file mode 100644 index b78caabb33133572cefaacf816468277ee7da18f..0000000000000000000000000000000000000000 --- a/spaces/XS-1/BW_IMAGE_VIDEO_COLORIZER/deoldify/loss.py +++ /dev/null @@ -1,136 +0,0 @@ -from fastai import * -from fastai.core import * -from fastai.torch_core import * -from fastai.callbacks import hook_outputs -import torchvision.models as models - - -class FeatureLoss(nn.Module): - def __init__(self, layer_wgts=[20, 70, 10]): - super().__init__() - - self.m_feat = models.vgg16_bn(True).features.cuda().eval() - requires_grad(self.m_feat, False) - blocks = [ - i - 1 - for i, o in enumerate(children(self.m_feat)) - if isinstance(o, nn.MaxPool2d) - ] - layer_ids = blocks[2:5] - self.loss_features = [self.m_feat[i] for i in layer_ids] - self.hooks = hook_outputs(self.loss_features, detach=False) - self.wgts = layer_wgts - self.metric_names = ['pixel'] + [f'feat_{i}' for i in range(len(layer_ids))] - self.base_loss = F.l1_loss - - def _make_features(self, x, clone=False): - self.m_feat(x) - return [(o.clone() if clone else o) for o in self.hooks.stored] - - def forward(self, input, target): - out_feat = self._make_features(target, clone=True) - in_feat = self._make_features(input) - self.feat_losses = [self.base_loss(input, target)] - self.feat_losses += [ - self.base_loss(f_in, f_out) * w - for f_in, f_out, w in zip(in_feat, out_feat, self.wgts) - ] - - self.metrics = dict(zip(self.metric_names, self.feat_losses)) - return sum(self.feat_losses) - - def __del__(self): - self.hooks.remove() - - -# Refactored code, originally from https://github.com/VinceMarron/style_transfer -class WassFeatureLoss(nn.Module): - def __init__(self, layer_wgts=[5, 15, 2], wass_wgts=[3.0, 0.7, 0.01]): - super().__init__() - self.m_feat = models.vgg16_bn(True).features.cuda().eval() - requires_grad(self.m_feat, False) - blocks = [ - i - 1 - for i, o in enumerate(children(self.m_feat)) - if isinstance(o, nn.MaxPool2d) - ] - layer_ids = blocks[2:5] - self.loss_features = [self.m_feat[i] for i in layer_ids] - self.hooks = hook_outputs(self.loss_features, detach=False) - self.wgts = layer_wgts - self.wass_wgts = wass_wgts - self.metric_names = ( - ['pixel'] - + [f'feat_{i}' for i in range(len(layer_ids))] - + [f'wass_{i}' for i in range(len(layer_ids))] - ) - self.base_loss = F.l1_loss - - def _make_features(self, x, clone=False): - self.m_feat(x) - return [(o.clone() if clone else o) for o in self.hooks.stored] - - def _calc_2_moments(self, tensor): - chans = tensor.shape[1] - tensor = tensor.view(1, chans, -1) - n = tensor.shape[2] - mu = tensor.mean(2) - tensor = (tensor - mu[:, :, None]).squeeze(0) - # Prevents nasty bug that happens very occassionally- divide by zero. Why such things happen? - if n == 0: - return None, None - cov = torch.mm(tensor, tensor.t()) / float(n) - return mu, cov - - def _get_style_vals(self, tensor): - mean, cov = self._calc_2_moments(tensor) - if mean is None: - return None, None, None - eigvals, eigvects = torch.symeig(cov, eigenvectors=True) - eigroot_mat = torch.diag(torch.sqrt(eigvals.clamp(min=0))) - root_cov = torch.mm(torch.mm(eigvects, eigroot_mat), eigvects.t()) - tr_cov = eigvals.clamp(min=0).sum() - return mean, tr_cov, root_cov - - def _calc_l2wass_dist( - self, mean_stl, tr_cov_stl, root_cov_stl, mean_synth, cov_synth - ): - tr_cov_synth = torch.symeig(cov_synth, eigenvectors=True)[0].clamp(min=0).sum() - mean_diff_squared = (mean_stl - mean_synth).pow(2).sum() - cov_prod = torch.mm(torch.mm(root_cov_stl, cov_synth), root_cov_stl) - var_overlap = torch.sqrt( - torch.symeig(cov_prod, eigenvectors=True)[0].clamp(min=0) + 1e-8 - ).sum() - dist = mean_diff_squared + tr_cov_stl + tr_cov_synth - 2 * var_overlap - return dist - - def _single_wass_loss(self, pred, targ): - mean_test, tr_cov_test, root_cov_test = targ - mean_synth, cov_synth = self._calc_2_moments(pred) - loss = self._calc_l2wass_dist( - mean_test, tr_cov_test, root_cov_test, mean_synth, cov_synth - ) - return loss - - def forward(self, input, target): - out_feat = self._make_features(target, clone=True) - in_feat = self._make_features(input) - self.feat_losses = [self.base_loss(input, target)] - self.feat_losses += [ - self.base_loss(f_in, f_out) * w - for f_in, f_out, w in zip(in_feat, out_feat, self.wgts) - ] - - styles = [self._get_style_vals(i) for i in out_feat] - - if styles[0][0] is not None: - self.feat_losses += [ - self._single_wass_loss(f_pred, f_targ) * w - for f_pred, f_targ, w in zip(in_feat, styles, self.wass_wgts) - ] - - self.metrics = dict(zip(self.metric_names, self.feat_losses)) - return sum(self.feat_losses) - - def __del__(self): - self.hooks.remove() diff --git a/spaces/XS-1/BW_IMAGE_VIDEO_COLORIZER/fastai/tabular/transform.py b/spaces/XS-1/BW_IMAGE_VIDEO_COLORIZER/fastai/tabular/transform.py deleted file mode 100644 index d7bc255eaf5fd92467b9db28e67590c4981e4356..0000000000000000000000000000000000000000 --- a/spaces/XS-1/BW_IMAGE_VIDEO_COLORIZER/fastai/tabular/transform.py +++ /dev/null @@ -1,195 +0,0 @@ -"Cleaning and feature engineering functions for structured data" -from ..torch_core import * -from pandas.api.types import is_numeric_dtype -from datetime import date, datetime -import calendar - -__all__ = ['add_datepart', 'cont_cat_split', 'Categorify', 'FillMissing', 'FillStrategy', 'Normalize', 'TabularProc', - 'add_elapsed_times', 'make_date', 'add_cyclic_datepart'] - -def make_date(df:DataFrame, date_field:str): - "Make sure `df[field_name]` is of the right date type." - field_dtype = df[date_field].dtype - if isinstance(field_dtype, pd.core.dtypes.dtypes.DatetimeTZDtype): - field_dtype = np.datetime64 - if not np.issubdtype(field_dtype, np.datetime64): - df[date_field] = pd.to_datetime(df[date_field], infer_datetime_format=True) - -def cyclic_dt_feat_names(time:bool=True, add_linear:bool=False)->List[str]: - "Return feature names of date/time cycles as produced by `cyclic_dt_features`." - fs = ['cos','sin'] - attr = [f'{r}_{f}' for r in 'weekday day_month month_year day_year'.split() for f in fs] - if time: attr += [f'{r}_{f}' for r in 'hour clock min sec'.split() for f in fs] - if add_linear: attr.append('year_lin') - return attr - -def cyclic_dt_features(d:Union[date,datetime], time:bool=True, add_linear:bool=False)->List[float]: - "Calculate the cos and sin of date/time cycles." - tt,fs = d.timetuple(), [np.cos, np.sin] - day_year,days_month = tt.tm_yday, calendar.monthrange(d.year, d.month)[1] - days_year = 366 if calendar.isleap(d.year) else 365 - rs = d.weekday()/7, (d.day-1)/days_month, (d.month-1)/12, (day_year-1)/days_year - feats = [f(r * 2 * np.pi) for r in rs for f in fs] - if time and isinstance(d, datetime) and type(d) != date: - rs = tt.tm_hour/24, tt.tm_hour%12/12, tt.tm_min/60, tt.tm_sec/60 - feats += [f(r * 2 * np.pi) for r in rs for f in fs] - if add_linear: - if type(d) == date: feats.append(d.year + rs[-1]) - else: - secs_in_year = (datetime(d.year+1, 1, 1) - datetime(d.year, 1, 1)).total_seconds() - feats.append(d.year + ((d - datetime(d.year, 1, 1)).total_seconds() / secs_in_year)) - return feats - -def add_cyclic_datepart(df:DataFrame, field_name:str, prefix:str=None, drop:bool=True, time:bool=False, add_linear:bool=False): - "Helper function that adds trigonometric date/time features to a date in the column `field_name` of `df`." - make_date(df, field_name) - field = df[field_name] - prefix = ifnone(prefix, re.sub('[Dd]ate$', '', field_name)) - series = field.apply(partial(cyclic_dt_features, time=time, add_linear=add_linear)) - columns = [prefix + c for c in cyclic_dt_feat_names(time, add_linear)] - df_feats = pd.DataFrame([item for item in series], columns=columns, index=series.index) - for column in columns: df[column] = df_feats[column] - if drop: df.drop(field_name, axis=1, inplace=True) - return df - -def add_datepart(df:DataFrame, field_name:str, prefix:str=None, drop:bool=True, time:bool=False): - "Helper function that adds columns relevant to a date in the column `field_name` of `df`." - make_date(df, field_name) - field = df[field_name] - prefix = ifnone(prefix, re.sub('[Dd]ate$', '', field_name)) - attr = ['Year', 'Month', 'Week', 'Day', 'Dayofweek', 'Dayofyear', 'Is_month_end', 'Is_month_start', - 'Is_quarter_end', 'Is_quarter_start', 'Is_year_end', 'Is_year_start'] - if time: attr = attr + ['Hour', 'Minute', 'Second'] - for n in attr: df[prefix + n] = getattr(field.dt, n.lower()) - df[prefix + 'Elapsed'] = field.astype(np.int64) // 10 ** 9 - if drop: df.drop(field_name, axis=1, inplace=True) - return df - -def _get_elapsed(df:DataFrame,field_names:Collection[str], date_field:str, base_field:str, prefix:str): - for f in field_names: - day1 = np.timedelta64(1, 'D') - last_date,last_base,res = np.datetime64(),None,[] - for b,v,d in zip(df[base_field].values, df[f].values, df[date_field].values): - if last_base is None or b != last_base: - last_date,last_base = np.datetime64(),b - if v: last_date = d - res.append(((d-last_date).astype('timedelta64[D]') / day1)) - df[prefix + f] = res - return df - -def add_elapsed_times(df:DataFrame, field_names:Collection[str], date_field:str, base_field:str): - field_names = listify(field_names) - #Make sure date_field is a date and base_field a bool - df[field_names] = df[field_names].astype('bool') - make_date(df, date_field) - - work_df = df[field_names + [date_field, base_field]] - work_df = work_df.sort_values([base_field, date_field]) - work_df = _get_elapsed(work_df, field_names, date_field, base_field, 'After') - work_df = work_df.sort_values([base_field, date_field], ascending=[True, False]) - work_df = _get_elapsed(work_df, field_names, date_field, base_field, 'Before') - - for a in ['After' + f for f in field_names] + ['Before' + f for f in field_names]: - work_df[a] = work_df[a].fillna(0).astype(int) - - for a,s in zip([True, False], ['_bw', '_fw']): - work_df = work_df.set_index(date_field) - tmp = (work_df[[base_field] + field_names].sort_index(ascending=a) - .groupby(base_field).rolling(7, min_periods=1).sum()) - tmp.drop(base_field,1,inplace=True) - tmp.reset_index(inplace=True) - work_df.reset_index(inplace=True) - work_df = work_df.merge(tmp, 'left', [date_field, base_field], suffixes=['', s]) - work_df.drop(field_names,1,inplace=True) - return df.merge(work_df, 'left', [date_field, base_field]) - -def cont_cat_split(df, max_card=20, dep_var=None)->Tuple[List,List]: - "Helper function that returns column names of cont and cat variables from given df." - cont_names, cat_names = [], [] - for label in df: - if label == dep_var: continue - if df[label].dtype == int and df[label].unique().shape[0] > max_card or df[label].dtype == float: cont_names.append(label) - else: cat_names.append(label) - return cont_names, cat_names - -@dataclass -class TabularProc(): - "A processor for tabular dataframes." - cat_names:StrList - cont_names:StrList - - def __call__(self, df:DataFrame, test:bool=False): - "Apply the correct function to `df` depending on `test`." - func = self.apply_test if test else self.apply_train - func(df) - - def apply_train(self, df:DataFrame): - "Function applied to `df` if it's the train set." - raise NotImplementedError - def apply_test(self, df:DataFrame): - "Function applied to `df` if it's the test set." - self.apply_train(df) - -class Categorify(TabularProc): - "Transform the categorical variables to that type." - def apply_train(self, df:DataFrame): - "Transform `self.cat_names` columns in categorical." - self.categories = {} - for n in self.cat_names: - df.loc[:,n] = df.loc[:,n].astype('category').cat.as_ordered() - self.categories[n] = df[n].cat.categories - - def apply_test(self, df:DataFrame): - "Transform `self.cat_names` columns in categorical using the codes decided in `apply_train`." - for n in self.cat_names: - df.loc[:,n] = pd.Categorical(df[n], categories=self.categories[n], ordered=True) - -FillStrategy = IntEnum('FillStrategy', 'MEDIAN COMMON CONSTANT') - -@dataclass -class FillMissing(TabularProc): - "Fill the missing values in continuous columns." - fill_strategy:FillStrategy=FillStrategy.MEDIAN - add_col:bool=True - fill_val:float=0. - def apply_train(self, df:DataFrame): - "Fill missing values in `self.cont_names` according to `self.fill_strategy`." - self.na_dict = {} - for name in self.cont_names: - if pd.isnull(df[name]).sum(): - if self.add_col: - df[name+'_na'] = pd.isnull(df[name]) - if name+'_na' not in self.cat_names: self.cat_names.append(name+'_na') - if self.fill_strategy == FillStrategy.MEDIAN: filler = df[name].median() - elif self.fill_strategy == FillStrategy.CONSTANT: filler = self.fill_val - else: filler = df[name].dropna().value_counts().idxmax() - df[name] = df[name].fillna(filler) - self.na_dict[name] = filler - - def apply_test(self, df:DataFrame): - "Fill missing values in `self.cont_names` like in `apply_train`." - for name in self.cont_names: - if name in self.na_dict: - if self.add_col: - df[name+'_na'] = pd.isnull(df[name]) - if name+'_na' not in self.cat_names: self.cat_names.append(name+'_na') - df[name] = df[name].fillna(self.na_dict[name]) - elif pd.isnull(df[name]).sum() != 0: - raise Exception(f"""There are nan values in field {name} but there were none in the training set. - Please fix those manually.""") - -class Normalize(TabularProc): - "Normalize the continuous variables." - def apply_train(self, df:DataFrame): - "Compute the means and stds of `self.cont_names` columns to normalize them." - self.means,self.stds = {},{} - for n in self.cont_names: - assert is_numeric_dtype(df[n]), (f"""Cannot normalize '{n}' column as it isn't numerical. - Are you sure it doesn't belong in the categorical set of columns?""") - self.means[n],self.stds[n] = df[n].mean(),df[n].std() - df[n] = (df[n]-self.means[n]) / (1e-7 + self.stds[n]) - - def apply_test(self, df:DataFrame): - "Normalize `self.cont_names` with the same statistics as in `apply_train`." - for n in self.cont_names: - df[n] = (df[n]-self.means[n]) / (1e-7 + self.stds[n]) diff --git a/spaces/XzJosh/yoyo-Bert-VITS2/modules.py b/spaces/XzJosh/yoyo-Bert-VITS2/modules.py deleted file mode 100644 index 92e0f32a51c472bfd1659a50a95a95d195281d2b..0000000000000000000000000000000000000000 --- a/spaces/XzJosh/yoyo-Bert-VITS2/modules.py +++ /dev/null @@ -1,452 +0,0 @@ -import copy -import math -import numpy as np -import scipy -import torch -from torch import nn -from torch.nn import functional as F - -from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d -from torch.nn.utils import weight_norm, remove_weight_norm - -import commons -from commons import init_weights, get_padding -from transforms import piecewise_rational_quadratic_transform -from attentions import Encoder - -LRELU_SLOPE = 0.1 - -class LayerNorm(nn.Module): - def __init__(self, channels, eps=1e-5): - super().__init__() - self.channels = channels - self.eps = eps - - self.gamma = nn.Parameter(torch.ones(channels)) - self.beta = nn.Parameter(torch.zeros(channels)) - - def forward(self, x): - x = x.transpose(1, -1) - x = F.layer_norm(x, (self.channels,), self.gamma, self.beta, self.eps) - return x.transpose(1, -1) - -class ConvReluNorm(nn.Module): - def __init__(self, in_channels, hidden_channels, out_channels, kernel_size, n_layers, p_dropout): - super().__init__() - self.in_channels = in_channels - self.hidden_channels = hidden_channels - self.out_channels = out_channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.p_dropout = p_dropout - assert n_layers > 1, "Number of layers should be larger than 0." - - self.conv_layers = nn.ModuleList() - self.norm_layers = nn.ModuleList() - self.conv_layers.append(nn.Conv1d(in_channels, hidden_channels, kernel_size, padding=kernel_size//2)) - self.norm_layers.append(LayerNorm(hidden_channels)) - self.relu_drop = nn.Sequential( - nn.ReLU(), - nn.Dropout(p_dropout)) - for _ in range(n_layers-1): - self.conv_layers.append(nn.Conv1d(hidden_channels, hidden_channels, kernel_size, padding=kernel_size//2)) - self.norm_layers.append(LayerNorm(hidden_channels)) - self.proj = nn.Conv1d(hidden_channels, out_channels, 1) - self.proj.weight.data.zero_() - self.proj.bias.data.zero_() - - def forward(self, x, x_mask): - x_org = x - for i in range(self.n_layers): - x = self.conv_layers[i](x * x_mask) - x = self.norm_layers[i](x) - x = self.relu_drop(x) - x = x_org + self.proj(x) - return x * x_mask - - -class DDSConv(nn.Module): - """ - Dialted and Depth-Separable Convolution - """ - def __init__(self, channels, kernel_size, n_layers, p_dropout=0.): - super().__init__() - self.channels = channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.p_dropout = p_dropout - - self.drop = nn.Dropout(p_dropout) - self.convs_sep = nn.ModuleList() - self.convs_1x1 = nn.ModuleList() - self.norms_1 = nn.ModuleList() - self.norms_2 = nn.ModuleList() - for i in range(n_layers): - dilation = kernel_size ** i - padding = (kernel_size * dilation - dilation) // 2 - self.convs_sep.append(nn.Conv1d(channels, channels, kernel_size, - groups=channels, dilation=dilation, padding=padding - )) - self.convs_1x1.append(nn.Conv1d(channels, channels, 1)) - self.norms_1.append(LayerNorm(channels)) - self.norms_2.append(LayerNorm(channels)) - - def forward(self, x, x_mask, g=None): - if g is not None: - x = x + g - for i in range(self.n_layers): - y = self.convs_sep[i](x * x_mask) - y = self.norms_1[i](y) - y = F.gelu(y) - y = self.convs_1x1[i](y) - y = self.norms_2[i](y) - y = F.gelu(y) - y = self.drop(y) - x = x + y - return x * x_mask - - -class WN(torch.nn.Module): - def __init__(self, hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=0, p_dropout=0): - super(WN, self).__init__() - assert(kernel_size % 2 == 1) - self.hidden_channels =hidden_channels - self.kernel_size = kernel_size, - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.gin_channels = gin_channels - self.p_dropout = p_dropout - - self.in_layers = torch.nn.ModuleList() - self.res_skip_layers = torch.nn.ModuleList() - self.drop = nn.Dropout(p_dropout) - - if gin_channels != 0: - cond_layer = torch.nn.Conv1d(gin_channels, 2*hidden_channels*n_layers, 1) - self.cond_layer = torch.nn.utils.weight_norm(cond_layer, name='weight') - - for i in range(n_layers): - dilation = dilation_rate ** i - padding = int((kernel_size * dilation - dilation) / 2) - in_layer = torch.nn.Conv1d(hidden_channels, 2*hidden_channels, kernel_size, - dilation=dilation, padding=padding) - in_layer = torch.nn.utils.weight_norm(in_layer, name='weight') - self.in_layers.append(in_layer) - - # last one is not necessary - if i < n_layers - 1: - res_skip_channels = 2 * hidden_channels - else: - res_skip_channels = hidden_channels - - res_skip_layer = torch.nn.Conv1d(hidden_channels, res_skip_channels, 1) - res_skip_layer = torch.nn.utils.weight_norm(res_skip_layer, name='weight') - self.res_skip_layers.append(res_skip_layer) - - def forward(self, x, x_mask, g=None, **kwargs): - output = torch.zeros_like(x) - n_channels_tensor = torch.IntTensor([self.hidden_channels]) - - if g is not None: - g = self.cond_layer(g) - - for i in range(self.n_layers): - x_in = self.in_layers[i](x) - if g is not None: - cond_offset = i * 2 * self.hidden_channels - g_l = g[:,cond_offset:cond_offset+2*self.hidden_channels,:] - else: - g_l = torch.zeros_like(x_in) - - acts = commons.fused_add_tanh_sigmoid_multiply( - x_in, - g_l, - n_channels_tensor) - acts = self.drop(acts) - - res_skip_acts = self.res_skip_layers[i](acts) - if i < self.n_layers - 1: - res_acts = res_skip_acts[:,:self.hidden_channels,:] - x = (x + res_acts) * x_mask - output = output + res_skip_acts[:,self.hidden_channels:,:] - else: - output = output + res_skip_acts - return output * x_mask - - def remove_weight_norm(self): - if self.gin_channels != 0: - torch.nn.utils.remove_weight_norm(self.cond_layer) - for l in self.in_layers: - torch.nn.utils.remove_weight_norm(l) - for l in self.res_skip_layers: - torch.nn.utils.remove_weight_norm(l) - - -class ResBlock1(torch.nn.Module): - def __init__(self, channels, kernel_size=3, dilation=(1, 3, 5)): - super(ResBlock1, self).__init__() - self.convs1 = nn.ModuleList([ - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0], - padding=get_padding(kernel_size, dilation[0]))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1], - padding=get_padding(kernel_size, dilation[1]))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[2], - padding=get_padding(kernel_size, dilation[2]))) - ]) - self.convs1.apply(init_weights) - - self.convs2 = nn.ModuleList([ - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1, - padding=get_padding(kernel_size, 1))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1, - padding=get_padding(kernel_size, 1))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1, - padding=get_padding(kernel_size, 1))) - ]) - self.convs2.apply(init_weights) - - def forward(self, x, x_mask=None): - for c1, c2 in zip(self.convs1, self.convs2): - xt = F.leaky_relu(x, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c1(xt) - xt = F.leaky_relu(xt, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c2(xt) - x = xt + x - if x_mask is not None: - x = x * x_mask - return x - - def remove_weight_norm(self): - for l in self.convs1: - remove_weight_norm(l) - for l in self.convs2: - remove_weight_norm(l) - - -class ResBlock2(torch.nn.Module): - def __init__(self, channels, kernel_size=3, dilation=(1, 3)): - super(ResBlock2, self).__init__() - self.convs = nn.ModuleList([ - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0], - padding=get_padding(kernel_size, dilation[0]))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1], - padding=get_padding(kernel_size, dilation[1]))) - ]) - self.convs.apply(init_weights) - - def forward(self, x, x_mask=None): - for c in self.convs: - xt = F.leaky_relu(x, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c(xt) - x = xt + x - if x_mask is not None: - x = x * x_mask - return x - - def remove_weight_norm(self): - for l in self.convs: - remove_weight_norm(l) - - -class Log(nn.Module): - def forward(self, x, x_mask, reverse=False, **kwargs): - if not reverse: - y = torch.log(torch.clamp_min(x, 1e-5)) * x_mask - logdet = torch.sum(-y, [1, 2]) - return y, logdet - else: - x = torch.exp(x) * x_mask - return x - - -class Flip(nn.Module): - def forward(self, x, *args, reverse=False, **kwargs): - x = torch.flip(x, [1]) - if not reverse: - logdet = torch.zeros(x.size(0)).to(dtype=x.dtype, device=x.device) - return x, logdet - else: - return x - - -class ElementwiseAffine(nn.Module): - def __init__(self, channels): - super().__init__() - self.channels = channels - self.m = nn.Parameter(torch.zeros(channels,1)) - self.logs = nn.Parameter(torch.zeros(channels,1)) - - def forward(self, x, x_mask, reverse=False, **kwargs): - if not reverse: - y = self.m + torch.exp(self.logs) * x - y = y * x_mask - logdet = torch.sum(self.logs * x_mask, [1,2]) - return y, logdet - else: - x = (x - self.m) * torch.exp(-self.logs) * x_mask - return x - - -class ResidualCouplingLayer(nn.Module): - def __init__(self, - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - p_dropout=0, - gin_channels=0, - mean_only=False): - assert channels % 2 == 0, "channels should be divisible by 2" - super().__init__() - self.channels = channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.half_channels = channels // 2 - self.mean_only = mean_only - - self.pre = nn.Conv1d(self.half_channels, hidden_channels, 1) - self.enc = WN(hidden_channels, kernel_size, dilation_rate, n_layers, p_dropout=p_dropout, gin_channels=gin_channels) - self.post = nn.Conv1d(hidden_channels, self.half_channels * (2 - mean_only), 1) - self.post.weight.data.zero_() - self.post.bias.data.zero_() - - def forward(self, x, x_mask, g=None, reverse=False): - x0, x1 = torch.split(x, [self.half_channels]*2, 1) - h = self.pre(x0) * x_mask - h = self.enc(h, x_mask, g=g) - stats = self.post(h) * x_mask - if not self.mean_only: - m, logs = torch.split(stats, [self.half_channels]*2, 1) - else: - m = stats - logs = torch.zeros_like(m) - - if not reverse: - x1 = m + x1 * torch.exp(logs) * x_mask - x = torch.cat([x0, x1], 1) - logdet = torch.sum(logs, [1,2]) - return x, logdet - else: - x1 = (x1 - m) * torch.exp(-logs) * x_mask - x = torch.cat([x0, x1], 1) - return x - - -class ConvFlow(nn.Module): - def __init__(self, in_channels, filter_channels, kernel_size, n_layers, num_bins=10, tail_bound=5.0): - super().__init__() - self.in_channels = in_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.num_bins = num_bins - self.tail_bound = tail_bound - self.half_channels = in_channels // 2 - - self.pre = nn.Conv1d(self.half_channels, filter_channels, 1) - self.convs = DDSConv(filter_channels, kernel_size, n_layers, p_dropout=0.) - self.proj = nn.Conv1d(filter_channels, self.half_channels * (num_bins * 3 - 1), 1) - self.proj.weight.data.zero_() - self.proj.bias.data.zero_() - - def forward(self, x, x_mask, g=None, reverse=False): - x0, x1 = torch.split(x, [self.half_channels]*2, 1) - h = self.pre(x0) - h = self.convs(h, x_mask, g=g) - h = self.proj(h) * x_mask - - b, c, t = x0.shape - h = h.reshape(b, c, -1, t).permute(0, 1, 3, 2) # [b, cx?, t] -> [b, c, t, ?] - - unnormalized_widths = h[..., :self.num_bins] / math.sqrt(self.filter_channels) - unnormalized_heights = h[..., self.num_bins:2*self.num_bins] / math.sqrt(self.filter_channels) - unnormalized_derivatives = h[..., 2 * self.num_bins:] - - x1, logabsdet = piecewise_rational_quadratic_transform(x1, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=reverse, - tails='linear', - tail_bound=self.tail_bound - ) - - x = torch.cat([x0, x1], 1) * x_mask - logdet = torch.sum(logabsdet * x_mask, [1,2]) - if not reverse: - return x, logdet - else: - return x -class TransformerCouplingLayer(nn.Module): - def __init__(self, - channels, - hidden_channels, - kernel_size, - n_layers, - n_heads, - p_dropout=0, - filter_channels=0, - mean_only=False, - wn_sharing_parameter=None, - gin_channels = 0 - ): - assert channels % 2 == 0, "channels should be divisible by 2" - super().__init__() - self.channels = channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.half_channels = channels // 2 - self.mean_only = mean_only - - self.pre = nn.Conv1d(self.half_channels, hidden_channels, 1) - self.enc = Encoder(hidden_channels, filter_channels, n_heads, n_layers, kernel_size, p_dropout, isflow = True, gin_channels = gin_channels) if wn_sharing_parameter is None else wn_sharing_parameter - self.post = nn.Conv1d(hidden_channels, self.half_channels * (2 - mean_only), 1) - self.post.weight.data.zero_() - self.post.bias.data.zero_() - - def forward(self, x, x_mask, g=None, reverse=False): - x0, x1 = torch.split(x, [self.half_channels]*2, 1) - h = self.pre(x0) * x_mask - h = self.enc(h, x_mask, g=g) - stats = self.post(h) * x_mask - if not self.mean_only: - m, logs = torch.split(stats, [self.half_channels]*2, 1) - else: - m = stats - logs = torch.zeros_like(m) - - if not reverse: - x1 = m + x1 * torch.exp(logs) * x_mask - x = torch.cat([x0, x1], 1) - logdet = torch.sum(logs, [1,2]) - return x, logdet - else: - x1 = (x1 - m) * torch.exp(-logs) * x_mask - x = torch.cat([x0, x1], 1) - return x - - x1, logabsdet = piecewise_rational_quadratic_transform(x1, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=reverse, - tails='linear', - tail_bound=self.tail_bound - ) - - x = torch.cat([x0, x1], 1) * x_mask - logdet = torch.sum(logabsdet * x_mask, [1,2]) - if not reverse: - return x, logdet - else: - return x diff --git a/spaces/Yuzu22/rvc-models/app.py b/spaces/Yuzu22/rvc-models/app.py deleted file mode 100644 index 5ef3bed52089af1afd7b5edcf72721d92b2bbbe0..0000000000000000000000000000000000000000 --- a/spaces/Yuzu22/rvc-models/app.py +++ /dev/null @@ -1,188 +0,0 @@ -import os -import json -import argparse -import traceback -import logging -import gradio as gr -import numpy as np -import librosa -import torch -import asyncio -import edge_tts -from datetime import datetime -from fairseq import checkpoint_utils -from infer_pack.models import SynthesizerTrnMs256NSFsid, SynthesizerTrnMs256NSFsid_nono -from vc_infer_pipeline import VC -from config import ( - is_half, - device -) -logging.getLogger("numba").setLevel(logging.WARNING) -limitation = os.getenv("SYSTEM") == "spaces" # limit audio length in huggingface spaces - -def create_vc_fn(tgt_sr, net_g, vc, if_f0, file_index, file_big_npy): - def vc_fn( - input_audio, - f0_up_key, - f0_method, - index_rate, - tts_mode, - tts_text, - tts_voice - ): - try: - if tts_mode: - if len(tts_text) > 100 and limitation: - return "Text is too long", None - if tts_text is None or tts_voice is None: - return "You need to enter text and select a voice", None - asyncio.run(edge_tts.Communicate(tts_text, "-".join(tts_voice.split('-')[:-1])).save("tts.mp3")) - audio, sr = librosa.load("tts.mp3", sr=16000, mono=True) - else: - if args.files: - audio, sr = librosa.load(input_audio, sr=16000, mono=True) - else: - if input_audio is None: - return "You need to upload an audio", None - sampling_rate, audio = input_audio - duration = audio.shape[0] / sampling_rate - if duration > 20 and limitation: - return "Please upload an audio file that is less than 20 seconds. If you need to generate a longer audio file, please use Colab.", None - audio = (audio / np.iinfo(audio.dtype).max).astype(np.float32) - if len(audio.shape) > 1: - audio = librosa.to_mono(audio.transpose(1, 0)) - if sampling_rate != 16000: - audio = librosa.resample(audio, orig_sr=sampling_rate, target_sr=16000) - times = [0, 0, 0] - f0_up_key = int(f0_up_key) - audio_opt = vc.pipeline( - hubert_model, - net_g, - 0, - audio, - times, - f0_up_key, - f0_method, - file_index, - file_big_npy, - index_rate, - if_f0, - ) - print( - f"[{datetime.now().strftime('%Y-%m-%d %H:%M')}]: npy: {times[0]}, f0: {times[1]}s, infer: {times[2]}s" - ) - return "Success", (tgt_sr, audio_opt) - except: - info = traceback.format_exc() - print(info) - return info, (None, None) - return vc_fn - -def load_hubert(): - global hubert_model - models, _, _ = checkpoint_utils.load_model_ensemble_and_task( - ["hubert_base.pt"], - suffix="", - ) - hubert_model = models[0] - hubert_model = hubert_model.to(device) - if is_half: - hubert_model = hubert_model.half() - else: - hubert_model = hubert_model.float() - hubert_model.eval() - -def change_to_tts_mode(tts_mode): - if tts_mode: - return gr.Audio.update(visible=False), gr.Textbox.update(visible=True), gr.Dropdown.update(visible=True) - else: - return gr.Audio.update(visible=True), gr.Textbox.update(visible=False), gr.Dropdown.update(visible=False) - -if __name__ == '__main__': - parser = argparse.ArgumentParser() - parser.add_argument('--api', action="store_true", default=False) - parser.add_argument("--share", action="store_true", default=False, help="share gradio app") - parser.add_argument("--files", action="store_true", default=False, help="load audio from path") - args, unknown = parser.parse_known_args() - load_hubert() - models = [] - tts_voice_list = asyncio.get_event_loop().run_until_complete(edge_tts.list_voices()) - voices = [f"{v['ShortName']}-{v['Gender']}" for v in tts_voice_list] - with open("weights/model_info.json", "r", encoding="utf-8") as f: - models_info = json.load(f) - for name, info in models_info.items(): - if not info['enable']: - continue - title = info['title'] - author = info.get("author", None) - cover = f"weights/{name}/{info['cover']}" - index = f"weights/{name}/{info['feature_retrieval_library']}" - npy = f"weights/{name}/{info['feature_file']}" - cpt = torch.load(f"weights/{name}/{name}.pth", map_location="cpu") - tgt_sr = cpt["config"][-1] - cpt["config"][-3] = cpt["weight"]["emb_g.weight"].shape[0] # n_spk - if_f0 = cpt.get("f0", 1) - if if_f0 == 1: - net_g = SynthesizerTrnMs256NSFsid(*cpt["config"], is_half=is_half) - else: - net_g = SynthesizerTrnMs256NSFsid_nono(*cpt["config"]) - del net_g.enc_q - print(net_g.load_state_dict(cpt["weight"], strict=False)) # 不加这一行清不干净, 真奇葩 - net_g.eval().to(device) - if is_half: - net_g = net_g.half() - else: - net_g = net_g.float() - vc = VC(tgt_sr, device, is_half) - models.append((name, title, author, cover, create_vc_fn(tgt_sr, net_g, vc, if_f0, index, npy))) - with gr.Blocks() as app: - gr.Markdown( - "#
        RVC Models\n" - "##
        The input audio should be clean and pure voice without background music.\n" - "![visitor badge](https://visitor-badge.glitch.me/badge?page_id=zomehwh.Rvc-Models)\n\n" - "[![image](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/16MXRcKEjGDqQzVanvi8xYOOOlhdNBopM?usp=share_link)\n\n" - "[![Duplicate this Space](https://huggingface.co/datasets/huggingface/badges/raw/main/duplicate-this-space-sm-dark.svg)](https://huggingface.co/spaces/zomehwh/rvc-models?duplicate=true)\n\n" - "[![Original Repo](https://badgen.net/badge/icon/github?icon=github&label=Original%20Repo)](https://github.com/RVC-Project/Retrieval-based-Voice-Conversion-WebUI)" - - ) - with gr.Tabs(): - for (name, title, author, cover, vc_fn) in models: - with gr.TabItem(name): - with gr.Row(): - gr.Markdown( - '
        ' - f'
        {title}
        \n'+ - (f'
        Model author: {author}
        ' if author else "")+ - (f'' if cover else "")+ - '
        ' - ) - with gr.Row(): - with gr.Column(): - if args.files: - vc_input = gr.Textbox(label="Input audio path") - else: - vc_input = gr.Audio(label="Input audio"+' (less than 20 seconds)' if limitation else '') - vc_transpose = gr.Number(label="Transpose", value=0) - vc_f0method = gr.Radio( - label="Pitch extraction algorithm, PM is fast but Harvest is better for low frequencies", - choices=["pm", "harvest"], - value="pm", - interactive=True, - ) - vc_index_ratio = gr.Slider( - minimum=0, - maximum=1, - label="Retrieval feature ratio", - value=0.6, - interactive=True, - ) - tts_mode = gr.Checkbox(label="tts (use edge-tts as input)", value=False) - tts_text = gr.Textbox(visible=False,label="TTS text (100 words limitation)" if limitation else "TTS text") - tts_voice = gr.Dropdown(label="Edge-tts speaker", choices=voices, visible=False, allow_custom_value=False, value="en-US-AnaNeural-Female") - vc_submit = gr.Button("Generate", variant="primary") - with gr.Column(): - vc_output1 = gr.Textbox(label="Output Message") - vc_output2 = gr.Audio(label="Output Audio") - vc_submit.click(vc_fn, [vc_input, vc_transpose, vc_f0method, vc_index_ratio, tts_mode, tts_text, tts_voice], [vc_output1, vc_output2]) - tts_mode.change(change_to_tts_mode, [tts_mode], [vc_input, tts_text, tts_voice]) - app.queue(concurrency_count=1, max_size=20, api_open=args.api).launch(share=args.share) \ No newline at end of file diff --git a/spaces/Zengyf-CVer/FaceRecognition/app.py b/spaces/Zengyf-CVer/FaceRecognition/app.py deleted file mode 100644 index 8e3b77c058e3ec0ac1d2fb9394c296e1c246c28e..0000000000000000000000000000000000000000 --- a/spaces/Zengyf-CVer/FaceRecognition/app.py +++ /dev/null @@ -1,188 +0,0 @@ -# Face Recognition Hub -# author: Zeng Yifu(曾逸夫) -# creation time: 2022-07-28 -# email: zyfiy1314@163.com -# project homepage: https://gitee.com/CV_Lab/face-recognition-hub - -import os -import sys -from pathlib import Path - -import face_recognition -import gradio as gr -from PIL import Image, ImageDraw, ImageFont - -from util.fonts_opt import is_fonts - -ROOT_PATH = sys.path[0] # 项目根目录 - -IMG_PATH_Test = "./img_examples/unknown" - -FONTSIZE = 15 - -OCR_TR_DESCRIPTION = '''# Face Recognition -
        https://github.com/ageitgey/face_recognition demo
        ''' - -def str_intercept(img_path): - img_path_ = img_path[::-1] - point_index = 0 # 记录反转后第一个点的位置 - slash_index = 0 # 记录反转后第一个斜杠的位置 - - flag_pi = 0 - flag_si = 0 - - for i in range(len(img_path_)): - if (img_path_[i] == "." and flag_pi == 0): - point_index = i - flag_pi = 1 - - if (img_path_[i] == "/" and flag_si == 0): - slash_index = i - flag_si = 1 - - point_index = len(img_path) - 1 - point_index - slash_index = len(img_path) - 1 - slash_index - - return point_index, slash_index - - -# 人脸录入 -def face_entry(img_path, name_text): - if img_path == "" or name_text == "" or img_path is None or name_text is None: - return None, None, None - - point_index, slash_index = str_intercept(img_path) - img_renamePath = f"{img_path[:slash_index+1]}{name_text}{img_path[point_index:]}" - os.rename(img_path, img_renamePath) - img_ = Image.open(img_renamePath) - print(img_renamePath) - - return img_, img_renamePath, name_text - - -# 设置示例 -def set_example_image(example: list): - return gr.Image.update(value=example[0]) - - -def face_recognition_(img_srcPath, img_tagPath, img_personName): - if img_tagPath == "" or img_tagPath is None: - return None - - image_of_person = face_recognition.load_image_file(img_srcPath) - person_face_encoding = face_recognition.face_encodings(image_of_person)[0] - - known_face_encodings = [ - person_face_encoding,] - - known_face_names = [ - img_personName,] - - test_image = face_recognition.load_image_file(img_tagPath) - - face_locations = face_recognition.face_locations(test_image) - face_encodings = face_recognition.face_encodings(test_image, face_locations) - - pil_image = Image.fromarray(test_image) - img_pil = ImageDraw.Draw(pil_image) - textFont = ImageFont.truetype(str(f"{ROOT_PATH}/fonts/SimSun.ttf"), size=FONTSIZE) - # ymin, xmax, ymax, xmin - for (top, right, bottom, left), face_encoding in zip(face_locations, face_encodings): - matches = face_recognition.compare_faces(known_face_encodings, face_encoding) - - name = "Unknown Person" - - if True in matches: - first_matches_index = matches.index(True) - name = known_face_names[first_matches_index] - - img_pil.rectangle([left, top, right, bottom], fill=None, outline=(255, 228, 181), width=2) # 边界框 - text_w, text_h = textFont.getsize(name) # 标签尺寸 - # 标签背景 - img_pil.rectangle( - (left, top, left + text_w, top + text_h), - fill=(255, 255, 255), - outline=(255, 255, 255), - ) - - # 标签 - img_pil.multiline_text( - (left, top), - name, - fill=(0, 0, 0), - font=textFont, - align="center", - ) - - del img_pil - return pil_image - - -def main(): - is_fonts(f"{ROOT_PATH}/fonts") # 检查字体文件 - - with gr.Blocks(css='style.css') as demo: - gr.Markdown(OCR_TR_DESCRIPTION) - - # -------------- 人脸识别 录入 -------------- - with gr.Row(): - gr.Markdown("### Step 01: Face Entry") - with gr.Row(): - with gr.Column(): - with gr.Row(): - input_img = gr.Image(image_mode="RGB", source="upload", type="filepath", label="face entry") - with gr.Row(): - input_name = gr.Textbox(label="Name") - with gr.Row(): - btn = gr.Button(value="Entry") - - with gr.Column(): - with gr.Row(): - output_ = gr.Image(image_mode="RGB", source="upload", type="pil", label="entry image") - input_srcImg = gr.Variable(value="") - input_srcName = gr.Variable(value="") - with gr.Row(): - example_list = [["./img_examples/known/ChengLong.jpg", "成龙"], - ["./img_examples/known/VinDiesel.jpg", "VinDiesel"], - ["./img_examples/known/JasonStatham.jpg", "JasonStatham"], - ["./img_examples/known/ZhenZidan.jpg", "甄子丹"]] - gr.Examples(example_list, - [input_img, input_name], - output_, - set_example_image, - cache_examples=False) - - - # -------------- 人脸识别 测试 -------------- - with gr.Row(): - gr.Markdown("### Step 02: Face Test") - with gr.Row(): - with gr.Column(): - with gr.Row(): - input_img_test = gr.Image(image_mode="RGB", source="upload", type="filepath", label="test image") - with gr.Row(): - btn_test = gr.Button(value="Test") - with gr.Row(): - paths = sorted(Path(IMG_PATH_Test).rglob('*.jpg')) - example_images_test = gr.Dataset(components=[input_img], - samples=[[path.as_posix()] for path in paths]) - - with gr.Column(): - with gr.Row(): - output_test = gr.Image(image_mode="RGB", source="upload", type="pil", label="identify image") - - btn.click(fn=face_entry, inputs=[input_img, input_name], outputs=[output_, input_srcImg, input_srcName]) - - btn_test.click(fn=face_recognition_, - inputs=[input_srcImg, input_img_test, input_srcName], - outputs=[output_test]) - example_images_test.click(fn=set_example_image, inputs=[ - example_images_test,], outputs=[ - input_img_test,]) - - return demo - - -if __name__ == "__main__": - demo = main() - demo.launch(inbrowser=True) diff --git a/spaces/aaditkapoorbionlp/clinical_trial_match/README.md b/spaces/aaditkapoorbionlp/clinical_trial_match/README.md deleted file mode 100644 index a716c803d0638f8c28c0320a4afd7c3d80b0d128..0000000000000000000000000000000000000000 --- a/spaces/aaditkapoorbionlp/clinical_trial_match/README.md +++ /dev/null @@ -1,14 +0,0 @@ ---- -title: Clinical Trial Match -emoji: 🐢 -colorFrom: pink -colorTo: pink -sdk: streamlit -sdk_version: 1.15.2 -app_file: app.py -pinned: false -license: mit -Embeddings: https://projector.tensorflow.org/?config=https://huggingface.co/spaces/aaditkapoorbionlp/clinical_trial_match/raw/main/emb.json ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/aayushrawat/recommender-model/app.py b/spaces/aayushrawat/recommender-model/app.py deleted file mode 100644 index 6c5fcdc502fa194e8ab62a24a8d3f769bc8afba9..0000000000000000000000000000000000000000 --- a/spaces/aayushrawat/recommender-model/app.py +++ /dev/null @@ -1,141 +0,0 @@ -import pandas as pd -import numpy as np -import ast -import streamlit as st -import requests -from sklearn.feature_extraction.text import CountVectorizer -from sklearn.metrics.pairwise import cosine_similarity -from nltk.stem.porter import PorterStemmer - -cred = "https://raw.githubusercontent.com/aayushrawat/mrs-dataset-files/main/tmdb_5000_credits.csv" -mov = "https://raw.githubusercontent.com/aayushrawat/mrs-dataset-files/main/tmdb_5000_movies.csv" -movie = pd.read_csv(mov) -credit = pd.read_csv(cred) -movie = movie.merge(credit, on = "title") -movie = movie[["title", "genres", "movie_id", "overview", "cast", "crew", "keywords"]] - - -def normaltext(obj): - l = [] - for i in ast.literal_eval(obj): - l.append(i["name"]) - return l - - -movie["genres"] = movie["genres"].apply(normaltext) -movie["keywords"] = movie["keywords"].apply(normaltext) - - -def normaltext3(obj): - l = [] - c = 0 - for i in ast.literal_eval(obj): - if c != 3: - l.append(i["name"]) - c += 1 - else: - break - return l - - -movie["cast"] = movie["cast"].apply(normaltext3) - - -def getdirector(obj): - l = [] - for i in ast.literal_eval(obj): - if i["job"] == "Director": - l.append(i["name"]) - break - return l - - -movie["crew"] = movie["crew"].apply(getdirector) -movie["genres"] = movie["genres"].apply(lambda x: [i.replace(" ", "") for i in x]) -movie["keywords"] = movie["keywords"].apply(lambda x: [i.replace(" ", "") for i in x]) -movie["cast"] = movie["cast"].apply(lambda x: [i.replace(" ", "") for i in x]) -movie["crew"] = movie["crew"].apply(lambda x: [i.replace(" ", "") for i in x]) -backup_df = movie - -x = list(movie["overview"]) - - -def nikalo(o): - if isinstance(o, float): - print(o) - else: - pass - - -movie["overview"] = movie["overview"].fillna(" ") -movie["overview"] = movie["overview"].apply(lambda x: x.split()) -movie["tags"] = movie["cast"] + movie["keywords"] + movie["overview"] + movie["crew"] + movie["genres"] -new = movie[["movie_id", "title", "tags"]] -new["tags"] = new["tags"].apply(lambda x: " ".join(x)) -new["tags"] = new["tags"].apply(lambda x: x.lower()) - -ps = PorterStemmer() - - -def stem(text): - y = [] - for i in text.split(): - y.append(ps.stem(i)) - return " ".join(y) - - -new["tags"] = new["tags"].apply(stem) -movies = new -cv = CountVectorizer(max_features = 5000, stop_words = "english") -vectors = cv.fit_transform(new["tags"]).toarray() -similarity = cosine_similarity(vectors) - - -st.header('Movie Recommender System') - -movie_list = movies['title'].values -selected_movie = st.selectbox( - "Type or select a movie from the dropdown", movie_list) - - -def fetch_poster(movie_id): - url = f"https://api.themoviedb.org/3/movie/{movie_id}?api_key=8265bd1679663a7ea12ac168da84d2e8&language=en-US" - data = requests.get(url) - data = data.json() - poster_path = data['poster_path'] - full_path = "https://image.tmdb.org/t/p/w500/" + poster_path - return full_path - - -def recommender(movie): - movie_index = movies[movies["title"] == movie].index[0] - distances = similarity[movie_index] - movielist = sorted(list(enumerate(distances)), reverse=True, key=lambda x: x[1])[1:6] - recommendation = [] - for i in movielist: - movie_id = movies.iloc[i[0]].movie_id - poster = fetch_poster(movie_id) - title = movies.iloc[i[0]].title - recommendation.append((title, poster)) - return recommendation - - -if st.button('Show Recommendation'): - recommendations = recommender(selected_movie) - col1, col2, col3, col4, col5 = st.columns(5) - with col1: - st.text(recommendations[0][0]) - st.image(recommendations[0][1]) - with col2: - st.text(recommendations[1][0]) - st.image(recommendations[1][1]) - - with col3: - st.text(recommendations[2][0]) - st.image(recommendations[2][1]) - with col4: - st.text(recommendations[3][0]) - st.image(recommendations[3][1]) - with col5: - st.text(recommendations[4][0]) - st.image(recommendations[4][1]) \ No newline at end of file diff --git a/spaces/abby711/FaceRestoration/tests/test_gfpgan_model.py b/spaces/abby711/FaceRestoration/tests/test_gfpgan_model.py deleted file mode 100644 index 1408ddd7c909c7257fbcea79f8576231a40f9211..0000000000000000000000000000000000000000 --- a/spaces/abby711/FaceRestoration/tests/test_gfpgan_model.py +++ /dev/null @@ -1,132 +0,0 @@ -import tempfile -import torch -import yaml -from basicsr.archs.stylegan2_arch import StyleGAN2Discriminator -from basicsr.data.paired_image_dataset import PairedImageDataset -from basicsr.losses.losses import GANLoss, L1Loss, PerceptualLoss - -from gfpgan.archs.arcface_arch import ResNetArcFace -from gfpgan.archs.gfpganv1_arch import FacialComponentDiscriminator, GFPGANv1 -from gfpgan.models.gfpgan_model import GFPGANModel - - -def test_gfpgan_model(): - with open('tests/data/test_gfpgan_model.yml', mode='r') as f: - opt = yaml.load(f, Loader=yaml.FullLoader) - - # build model - model = GFPGANModel(opt) - # test attributes - assert model.__class__.__name__ == 'GFPGANModel' - assert isinstance(model.net_g, GFPGANv1) # generator - assert isinstance(model.net_d, StyleGAN2Discriminator) # discriminator - # facial component discriminators - assert isinstance(model.net_d_left_eye, FacialComponentDiscriminator) - assert isinstance(model.net_d_right_eye, FacialComponentDiscriminator) - assert isinstance(model.net_d_mouth, FacialComponentDiscriminator) - # identity network - assert isinstance(model.network_identity, ResNetArcFace) - # losses - assert isinstance(model.cri_pix, L1Loss) - assert isinstance(model.cri_perceptual, PerceptualLoss) - assert isinstance(model.cri_gan, GANLoss) - assert isinstance(model.cri_l1, L1Loss) - # optimizer - assert isinstance(model.optimizers[0], torch.optim.Adam) - assert isinstance(model.optimizers[1], torch.optim.Adam) - - # prepare data - gt = torch.rand((1, 3, 512, 512), dtype=torch.float32) - lq = torch.rand((1, 3, 512, 512), dtype=torch.float32) - loc_left_eye = torch.rand((1, 4), dtype=torch.float32) - loc_right_eye = torch.rand((1, 4), dtype=torch.float32) - loc_mouth = torch.rand((1, 4), dtype=torch.float32) - data = dict(gt=gt, lq=lq, loc_left_eye=loc_left_eye, loc_right_eye=loc_right_eye, loc_mouth=loc_mouth) - model.feed_data(data) - # check data shape - assert model.lq.shape == (1, 3, 512, 512) - assert model.gt.shape == (1, 3, 512, 512) - assert model.loc_left_eyes.shape == (1, 4) - assert model.loc_right_eyes.shape == (1, 4) - assert model.loc_mouths.shape == (1, 4) - - # ----------------- test optimize_parameters -------------------- # - model.feed_data(data) - model.optimize_parameters(1) - assert model.output.shape == (1, 3, 512, 512) - assert isinstance(model.log_dict, dict) - # check returned keys - expected_keys = [ - 'l_g_pix', 'l_g_percep', 'l_g_style', 'l_g_gan', 'l_g_gan_left_eye', 'l_g_gan_right_eye', 'l_g_gan_mouth', - 'l_g_comp_style_loss', 'l_identity', 'l_d', 'real_score', 'fake_score', 'l_d_r1', 'l_d_left_eye', - 'l_d_right_eye', 'l_d_mouth' - ] - assert set(expected_keys).issubset(set(model.log_dict.keys())) - - # ----------------- remove pyramid_loss_weight-------------------- # - model.feed_data(data) - model.optimize_parameters(100000) # large than remove_pyramid_loss = 50000 - assert model.output.shape == (1, 3, 512, 512) - assert isinstance(model.log_dict, dict) - # check returned keys - expected_keys = [ - 'l_g_pix', 'l_g_percep', 'l_g_style', 'l_g_gan', 'l_g_gan_left_eye', 'l_g_gan_right_eye', 'l_g_gan_mouth', - 'l_g_comp_style_loss', 'l_identity', 'l_d', 'real_score', 'fake_score', 'l_d_r1', 'l_d_left_eye', - 'l_d_right_eye', 'l_d_mouth' - ] - assert set(expected_keys).issubset(set(model.log_dict.keys())) - - # ----------------- test save -------------------- # - with tempfile.TemporaryDirectory() as tmpdir: - model.opt['path']['models'] = tmpdir - model.opt['path']['training_states'] = tmpdir - model.save(0, 1) - - # ----------------- test the test function -------------------- # - model.test() - assert model.output.shape == (1, 3, 512, 512) - # delete net_g_ema - model.__delattr__('net_g_ema') - model.test() - assert model.output.shape == (1, 3, 512, 512) - assert model.net_g.training is True # should back to training mode after testing - - # ----------------- test nondist_validation -------------------- # - # construct dataloader - dataset_opt = dict( - name='Demo', - dataroot_gt='tests/data/gt', - dataroot_lq='tests/data/gt', - io_backend=dict(type='disk'), - scale=4, - phase='val') - dataset = PairedImageDataset(dataset_opt) - dataloader = torch.utils.data.DataLoader(dataset=dataset, batch_size=1, shuffle=False, num_workers=0) - assert model.is_train is True - with tempfile.TemporaryDirectory() as tmpdir: - model.opt['path']['visualization'] = tmpdir - model.nondist_validation(dataloader, 1, None, save_img=True) - assert model.is_train is True - # check metric_results - assert 'psnr' in model.metric_results - assert isinstance(model.metric_results['psnr'], float) - - # validation - with tempfile.TemporaryDirectory() as tmpdir: - model.opt['is_train'] = False - model.opt['val']['suffix'] = 'test' - model.opt['path']['visualization'] = tmpdir - model.opt['val']['pbar'] = True - model.nondist_validation(dataloader, 1, None, save_img=True) - # check metric_results - assert 'psnr' in model.metric_results - assert isinstance(model.metric_results['psnr'], float) - - # if opt['val']['suffix'] is None - model.opt['val']['suffix'] = None - model.opt['name'] = 'demo' - model.opt['path']['visualization'] = tmpdir - model.nondist_validation(dataloader, 1, None, save_img=True) - # check metric_results - assert 'psnr' in model.metric_results - assert isinstance(model.metric_results['psnr'], float) diff --git a/spaces/abhishek/sketch-to-image/annotator/uniformer/configs/_base_/models/deeplabv3plus_r50-d8.py b/spaces/abhishek/sketch-to-image/annotator/uniformer/configs/_base_/models/deeplabv3plus_r50-d8.py deleted file mode 100644 index 050e39e091d816df9028d23aa3ecf9db74e441e1..0000000000000000000000000000000000000000 --- a/spaces/abhishek/sketch-to-image/annotator/uniformer/configs/_base_/models/deeplabv3plus_r50-d8.py +++ /dev/null @@ -1,46 +0,0 @@ -# model settings -norm_cfg = dict(type='SyncBN', requires_grad=True) -model = dict( - type='EncoderDecoder', - pretrained='open-mmlab://resnet50_v1c', - backbone=dict( - type='ResNetV1c', - depth=50, - num_stages=4, - out_indices=(0, 1, 2, 3), - dilations=(1, 1, 2, 4), - strides=(1, 2, 1, 1), - norm_cfg=norm_cfg, - norm_eval=False, - style='pytorch', - contract_dilation=True), - decode_head=dict( - type='DepthwiseSeparableASPPHead', - in_channels=2048, - in_index=3, - channels=512, - dilations=(1, 12, 24, 36), - c1_in_channels=256, - c1_channels=48, - dropout_ratio=0.1, - num_classes=19, - norm_cfg=norm_cfg, - align_corners=False, - loss_decode=dict( - type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0)), - auxiliary_head=dict( - type='FCNHead', - in_channels=1024, - in_index=2, - channels=256, - num_convs=1, - concat_input=False, - dropout_ratio=0.1, - num_classes=19, - norm_cfg=norm_cfg, - align_corners=False, - loss_decode=dict( - type='CrossEntropyLoss', use_sigmoid=False, loss_weight=0.4)), - # model training and testing settings - train_cfg=dict(), - test_cfg=dict(mode='whole')) diff --git a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet_null/models/roi_heads/mask_heads/fcn_mask_head.py b/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet_null/models/roi_heads/mask_heads/fcn_mask_head.py deleted file mode 100644 index be6772fa6c471a7a65b77f2f18dfd217f4bd3289..0000000000000000000000000000000000000000 --- a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet_null/models/roi_heads/mask_heads/fcn_mask_head.py +++ /dev/null @@ -1,377 +0,0 @@ -import numpy as np -import torch -import torch.nn as nn -import torch.nn.functional as F -from mmcv.cnn import Conv2d, ConvModule, build_upsample_layer -from mmcv.ops.carafe import CARAFEPack -from mmcv.runner import auto_fp16, force_fp32 -from torch.nn.modules.utils import _pair - -from mmdet.core import mask_target -from mmdet.models.builder import HEADS, build_loss - -BYTES_PER_FLOAT = 4 -# TODO: This memory limit may be too much or too little. It would be better to -# determine it based on available resources. -GPU_MEM_LIMIT = 1024**3 # 1 GB memory limit - - -@HEADS.register_module() -class FCNMaskHead(nn.Module): - - def __init__(self, - num_convs=4, - roi_feat_size=14, - in_channels=256, - conv_kernel_size=3, - conv_out_channels=256, - num_classes=80, - class_agnostic=False, - upsample_cfg=dict(type='deconv', scale_factor=2), - conv_cfg=None, - norm_cfg=None, - loss_mask=dict( - type='CrossEntropyLoss', use_mask=True, loss_weight=1.0)): - super(FCNMaskHead, self).__init__() - self.upsample_cfg = upsample_cfg.copy() - if self.upsample_cfg['type'] not in [ - None, 'deconv', 'nearest', 'bilinear', 'carafe' - ]: - raise ValueError( - f'Invalid upsample method {self.upsample_cfg["type"]}, ' - 'accepted methods are "deconv", "nearest", "bilinear", ' - '"carafe"') - self.num_convs = num_convs - # WARN: roi_feat_size is reserved and not used - self.roi_feat_size = _pair(roi_feat_size) - self.in_channels = in_channels - self.conv_kernel_size = conv_kernel_size - self.conv_out_channels = conv_out_channels - self.upsample_method = self.upsample_cfg.get('type') - self.scale_factor = self.upsample_cfg.pop('scale_factor', None) - self.num_classes = num_classes - self.class_agnostic = class_agnostic - self.conv_cfg = conv_cfg - self.norm_cfg = norm_cfg - self.fp16_enabled = False - self.loss_mask = build_loss(loss_mask) - - self.convs = nn.ModuleList() - for i in range(self.num_convs): - in_channels = ( - self.in_channels if i == 0 else self.conv_out_channels) - padding = (self.conv_kernel_size - 1) // 2 - self.convs.append( - ConvModule( - in_channels, - self.conv_out_channels, - self.conv_kernel_size, - padding=padding, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg)) - upsample_in_channels = ( - self.conv_out_channels if self.num_convs > 0 else in_channels) - upsample_cfg_ = self.upsample_cfg.copy() - if self.upsample_method is None: - self.upsample = None - elif self.upsample_method == 'deconv': - upsample_cfg_.update( - in_channels=upsample_in_channels, - out_channels=self.conv_out_channels, - kernel_size=self.scale_factor, - stride=self.scale_factor) - self.upsample = build_upsample_layer(upsample_cfg_) - elif self.upsample_method == 'carafe': - upsample_cfg_.update( - channels=upsample_in_channels, scale_factor=self.scale_factor) - self.upsample = build_upsample_layer(upsample_cfg_) - else: - # suppress warnings - align_corners = (None - if self.upsample_method == 'nearest' else False) - upsample_cfg_.update( - scale_factor=self.scale_factor, - mode=self.upsample_method, - align_corners=align_corners) - self.upsample = build_upsample_layer(upsample_cfg_) - - out_channels = 1 if self.class_agnostic else self.num_classes - logits_in_channel = ( - self.conv_out_channels - if self.upsample_method == 'deconv' else upsample_in_channels) - self.conv_logits = Conv2d(logits_in_channel, out_channels, 1) - self.relu = nn.ReLU(inplace=True) - self.debug_imgs = None - - def init_weights(self): - for m in [self.upsample, self.conv_logits]: - if m is None: - continue - elif isinstance(m, CARAFEPack): - m.init_weights() - else: - nn.init.kaiming_normal_( - m.weight, mode='fan_out', nonlinearity='relu') - nn.init.constant_(m.bias, 0) - - @auto_fp16() - def forward(self, x): - for conv in self.convs: - x = conv(x) - if self.upsample is not None: - x = self.upsample(x) - if self.upsample_method == 'deconv': - x = self.relu(x) - mask_pred = self.conv_logits(x) - return mask_pred - - def get_targets(self, sampling_results, gt_masks, rcnn_train_cfg): - pos_proposals = [res.pos_bboxes for res in sampling_results] - pos_assigned_gt_inds = [ - res.pos_assigned_gt_inds for res in sampling_results - ] - mask_targets = mask_target(pos_proposals, pos_assigned_gt_inds, - gt_masks, rcnn_train_cfg) - return mask_targets - - @force_fp32(apply_to=('mask_pred', )) - def loss(self, mask_pred, mask_targets, labels): - """ - Example: - >>> from mmdet.models.roi_heads.mask_heads.fcn_mask_head import * # NOQA - >>> N = 7 # N = number of extracted ROIs - >>> C, H, W = 11, 32, 32 - >>> # Create example instance of FCN Mask Head. - >>> # There are lots of variations depending on the configuration - >>> self = FCNMaskHead(num_classes=C, num_convs=1) - >>> inputs = torch.rand(N, self.in_channels, H, W) - >>> mask_pred = self.forward(inputs) - >>> sf = self.scale_factor - >>> labels = torch.randint(0, C, size=(N,)) - >>> # With the default properties the mask targets should indicate - >>> # a (potentially soft) single-class label - >>> mask_targets = torch.rand(N, H * sf, W * sf) - >>> loss = self.loss(mask_pred, mask_targets, labels) - >>> print('loss = {!r}'.format(loss)) - """ - loss = dict() - if mask_pred.size(0) == 0: - loss_mask = mask_pred.sum() - else: - if self.class_agnostic: - loss_mask = self.loss_mask(mask_pred, mask_targets, - torch.zeros_like(labels)) - else: - loss_mask = self.loss_mask(mask_pred, mask_targets, labels) - loss['loss_mask'] = loss_mask - return loss - - def get_seg_masks(self, mask_pred, det_bboxes, det_labels, rcnn_test_cfg, - ori_shape, scale_factor, rescale): - """Get segmentation masks from mask_pred and bboxes. - - Args: - mask_pred (Tensor or ndarray): shape (n, #class, h, w). - For single-scale testing, mask_pred is the direct output of - model, whose type is Tensor, while for multi-scale testing, - it will be converted to numpy array outside of this method. - det_bboxes (Tensor): shape (n, 4/5) - det_labels (Tensor): shape (n, ) - rcnn_test_cfg (dict): rcnn testing config - ori_shape (Tuple): original image height and width, shape (2,) - scale_factor(float | Tensor): If ``rescale is True``, box - coordinates are divided by this scale factor to fit - ``ori_shape``. - rescale (bool): If True, the resulting masks will be rescaled to - ``ori_shape``. - - Returns: - list[list]: encoded masks. The c-th item in the outer list - corresponds to the c-th class. Given the c-th outer list, the - i-th item in that inner list is the mask for the i-th box with - class label c. - - Example: - >>> import mmcv - >>> from mmdet.models.roi_heads.mask_heads.fcn_mask_head import * # NOQA - >>> N = 7 # N = number of extracted ROIs - >>> C, H, W = 11, 32, 32 - >>> # Create example instance of FCN Mask Head. - >>> self = FCNMaskHead(num_classes=C, num_convs=0) - >>> inputs = torch.rand(N, self.in_channels, H, W) - >>> mask_pred = self.forward(inputs) - >>> # Each input is associated with some bounding box - >>> det_bboxes = torch.Tensor([[1, 1, 42, 42 ]] * N) - >>> det_labels = torch.randint(0, C, size=(N,)) - >>> rcnn_test_cfg = mmcv.Config({'mask_thr_binary': 0, }) - >>> ori_shape = (H * 4, W * 4) - >>> scale_factor = torch.FloatTensor((1, 1)) - >>> rescale = False - >>> # Encoded masks are a list for each category. - >>> encoded_masks = self.get_seg_masks( - >>> mask_pred, det_bboxes, det_labels, rcnn_test_cfg, ori_shape, - >>> scale_factor, rescale - >>> ) - >>> assert len(encoded_masks) == C - >>> assert sum(list(map(len, encoded_masks))) == N - """ - if isinstance(mask_pred, torch.Tensor): - mask_pred = mask_pred.sigmoid() - else: - mask_pred = det_bboxes.new_tensor(mask_pred) - - device = mask_pred.device - cls_segms = [[] for _ in range(self.num_classes) - ] # BG is not included in num_classes - bboxes = det_bboxes[:, :4] - labels = det_labels - - if rescale: - img_h, img_w = ori_shape[:2] - else: - if isinstance(scale_factor, float): - img_h = np.round(ori_shape[0] * scale_factor).astype(np.int32) - img_w = np.round(ori_shape[1] * scale_factor).astype(np.int32) - else: - w_scale, h_scale = scale_factor[0], scale_factor[1] - img_h = np.round(ori_shape[0] * h_scale.item()).astype( - np.int32) - img_w = np.round(ori_shape[1] * w_scale.item()).astype( - np.int32) - scale_factor = 1.0 - - if not isinstance(scale_factor, (float, torch.Tensor)): - scale_factor = bboxes.new_tensor(scale_factor) - bboxes = bboxes / scale_factor - - if torch.onnx.is_in_onnx_export(): - # TODO: Remove after F.grid_sample is supported. - from torchvision.models.detection.roi_heads \ - import paste_masks_in_image - masks = paste_masks_in_image(mask_pred, bboxes, ori_shape[:2]) - thr = rcnn_test_cfg.get('mask_thr_binary', 0) - if thr > 0: - masks = masks >= thr - return masks - - N = len(mask_pred) - # The actual implementation split the input into chunks, - # and paste them chunk by chunk. - if device.type == 'cpu': - # CPU is most efficient when they are pasted one by one with - # skip_empty=True, so that it performs minimal number of - # operations. - num_chunks = N - else: - # GPU benefits from parallelism for larger chunks, - # but may have memory issue - num_chunks = int( - np.ceil(N * img_h * img_w * BYTES_PER_FLOAT / GPU_MEM_LIMIT)) - assert (num_chunks <= - N), 'Default GPU_MEM_LIMIT is too small; try increasing it' - chunks = torch.chunk(torch.arange(N, device=device), num_chunks) - - threshold = rcnn_test_cfg.mask_thr_binary - im_mask = torch.zeros( - N, - img_h, - img_w, - device=device, - dtype=torch.bool if threshold >= 0 else torch.uint8) - - if not self.class_agnostic: - mask_pred = mask_pred[range(N), labels][:, None] - - for inds in chunks: - masks_chunk, spatial_inds = _do_paste_mask( - mask_pred[inds], - bboxes[inds], - img_h, - img_w, - skip_empty=device.type == 'cpu') - - if threshold >= 0: - masks_chunk = (masks_chunk >= threshold).to(dtype=torch.bool) - else: - # for visualization and debugging - masks_chunk = (masks_chunk * 255).to(dtype=torch.uint8) - - im_mask[(inds, ) + spatial_inds] = masks_chunk - - for i in range(N): - cls_segms[labels[i]].append(im_mask[i].detach().cpu().numpy()) - return cls_segms - - -def _do_paste_mask(masks, boxes, img_h, img_w, skip_empty=True): - """Paste instance masks according to boxes. - - This implementation is modified from - https://github.com/facebookresearch/detectron2/ - - Args: - masks (Tensor): N, 1, H, W - boxes (Tensor): N, 4 - img_h (int): Height of the image to be pasted. - img_w (int): Width of the image to be pasted. - skip_empty (bool): Only paste masks within the region that - tightly bound all boxes, and returns the results this region only. - An important optimization for CPU. - - Returns: - tuple: (Tensor, tuple). The first item is mask tensor, the second one - is the slice object. - If skip_empty == False, the whole image will be pasted. It will - return a mask of shape (N, img_h, img_w) and an empty tuple. - If skip_empty == True, only area around the mask will be pasted. - A mask of shape (N, h', w') and its start and end coordinates - in the original image will be returned. - """ - # On GPU, paste all masks together (up to chunk size) - # by using the entire image to sample the masks - # Compared to pasting them one by one, - # this has more operations but is faster on COCO-scale dataset. - device = masks.device - if skip_empty: - x0_int, y0_int = torch.clamp( - boxes.min(dim=0).values.floor()[:2] - 1, - min=0).to(dtype=torch.int32) - x1_int = torch.clamp( - boxes[:, 2].max().ceil() + 1, max=img_w).to(dtype=torch.int32) - y1_int = torch.clamp( - boxes[:, 3].max().ceil() + 1, max=img_h).to(dtype=torch.int32) - else: - x0_int, y0_int = 0, 0 - x1_int, y1_int = img_w, img_h - x0, y0, x1, y1 = torch.split(boxes, 1, dim=1) # each is Nx1 - - N = masks.shape[0] - - img_y = torch.arange( - y0_int, y1_int, device=device, dtype=torch.float32) + 0.5 - img_x = torch.arange( - x0_int, x1_int, device=device, dtype=torch.float32) + 0.5 - img_y = (img_y - y0) / (y1 - y0) * 2 - 1 - img_x = (img_x - x0) / (x1 - x0) * 2 - 1 - # img_x, img_y have shapes (N, w), (N, h) - if torch.isinf(img_x).any(): - inds = torch.where(torch.isinf(img_x)) - img_x[inds] = 0 - if torch.isinf(img_y).any(): - inds = torch.where(torch.isinf(img_y)) - img_y[inds] = 0 - - gx = img_x[:, None, :].expand(N, img_y.size(1), img_x.size(1)) - gy = img_y[:, :, None].expand(N, img_y.size(1), img_x.size(1)) - grid = torch.stack([gx, gy], dim=3) - - if torch.onnx.is_in_onnx_export(): - raise RuntimeError( - 'Exporting F.grid_sample from Pytorch to ONNX is not supported.') - img_masks = F.grid_sample( - masks.to(dtype=torch.float32), grid, align_corners=False) - - if skip_empty: - return img_masks[:, 0], (slice(y0_int, y1_int), slice(x0_int, x1_int)) - else: - return img_masks[:, 0], () diff --git a/spaces/abhishek/sketch-to-image/annotator/uniformer_base/configs/_base_/models/upernet_uniformer.py b/spaces/abhishek/sketch-to-image/annotator/uniformer_base/configs/_base_/models/upernet_uniformer.py deleted file mode 100644 index 41aa4db809dc6e2c508e98051f61807d07477903..0000000000000000000000000000000000000000 --- a/spaces/abhishek/sketch-to-image/annotator/uniformer_base/configs/_base_/models/upernet_uniformer.py +++ /dev/null @@ -1,43 +0,0 @@ -# model settings -norm_cfg = dict(type='BN', requires_grad=True) -model = dict( - type='EncoderDecoder', - pretrained=None, - backbone=dict( - type='UniFormer', - embed_dim=[64, 128, 320, 512], - layers=[3, 4, 8, 3], - head_dim=64, - mlp_ratio=4., - qkv_bias=True, - drop_rate=0., - attn_drop_rate=0., - drop_path_rate=0.1), - decode_head=dict( - type='UPerHead', - in_channels=[64, 128, 320, 512], - in_index=[0, 1, 2, 3], - pool_scales=(1, 2, 3, 6), - channels=512, - dropout_ratio=0.1, - num_classes=19, - norm_cfg=norm_cfg, - align_corners=False, - loss_decode=dict( - type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0)), - auxiliary_head=dict( - type='FCNHead', - in_channels=320, - in_index=2, - channels=256, - num_convs=1, - concat_input=False, - dropout_ratio=0.1, - num_classes=19, - norm_cfg=norm_cfg, - align_corners=False, - loss_decode=dict( - type='CrossEntropyLoss', use_sigmoid=False, loss_weight=0.4)), - # model training and testing settings - train_cfg=dict(), - test_cfg=dict(mode='whole')) \ No newline at end of file diff --git a/spaces/abrar-lohia/text-2-character-anim/pyrender/.eggs/pyglet-2.0.5-py3.10.egg/pyglet/__init__.py b/spaces/abrar-lohia/text-2-character-anim/pyrender/.eggs/pyglet-2.0.5-py3.10.egg/pyglet/__init__.py deleted file mode 100644 index c6bda769a578578ebe6d83e5cd7e7af4a4eac026..0000000000000000000000000000000000000000 --- a/spaces/abrar-lohia/text-2-character-anim/pyrender/.eggs/pyglet-2.0.5-py3.10.egg/pyglet/__init__.py +++ /dev/null @@ -1,372 +0,0 @@ -"""pyglet is a cross-platform games and multimedia package. - -More information is available at http://www.pyglet.org -""" - -import os -import sys - -from typing import TYPE_CHECKING - -#: The release version -version = '2.0.5' -__version__ = version - -MIN_PYTHON_VERSION = 3, 8 -MIN_PYTHON_VERSION_STR = '.'.join([str(v) for v in MIN_PYTHON_VERSION]) - -if sys.version_info < MIN_PYTHON_VERSION: - raise Exception(f"pyglet {version} requires Python {MIN_PYTHON_VERSION_STR} or newer.") - -if 'sphinx' in sys.modules: - setattr(sys, 'is_pyglet_doc_run', True) -_is_pyglet_doc_run = hasattr(sys, "is_pyglet_doc_run") and sys.is_pyglet_doc_run - -# pyglet platform treats *BSD systems as Linux -compat_platform = sys.platform -if "bsd" in compat_platform: - compat_platform = "linux-compat" - -_enable_optimisations = not __debug__ -if getattr(sys, 'frozen', None): - _enable_optimisations = True - -#: Global dict of pyglet options. To change an option from its default, you -#: must import ``pyglet`` before any sub-packages. For example:: -#: -#: import pyglet -#: pyglet.options['debug_gl'] = False -#: -#: The default options can be overridden from the OS environment. The -#: corresponding environment variable for each option key is prefaced by -#: ``PYGLET_``. For example, in Bash you can set the ``debug_gl`` option with:: -#: -#: PYGLET_DEBUG_GL=True; export PYGLET_DEBUG_GL -#: -#: For options requiring a tuple of values, separate each value with a comma. -#: -#: The non-development options are: -#: -#: audio -#: A sequence of the names of audio modules to attempt to load, in -#: order of preference. Valid driver names are: -#: -#: * xaudio2, the Windows Xaudio2 audio module (Windows only) -#: * directsound, the Windows DirectSound audio module (Windows only) -#: * pulse, the PulseAudio module (Linux only) -#: * openal, the OpenAL audio module -#: * silent, no audio -#: debug_lib -#: If True, prints the path of each dynamic library loaded. -#: debug_gl -#: If True, all calls to OpenGL functions are checked afterwards for -#: errors using ``glGetError``. This will severely impact performance, -#: but provides useful exceptions at the point of failure. By default, -#: this option is enabled if ``__debug__`` is (i.e., if Python was not run -#: with the -O option). It is disabled by default when pyglet is "frozen" -#: within a py2exe or py2app library archive. -#: shadow_window -#: By default, pyglet creates a hidden window with a GL context when -#: pyglet.gl is imported. This allows resources to be loaded before -#: the application window is created, and permits GL objects to be -#: shared between windows even after they've been closed. You can -#: disable the creation of the shadow window by setting this option to -#: False. -#: -#: Some OpenGL driver implementations may not support shared OpenGL -#: contexts and may require disabling the shadow window (and all resources -#: must be loaded after the window using them was created). Recommended -#: for advanced developers only. -#: -#: .. versionadded:: 1.1 -#: vsync -#: If set, the `pyglet.window.Window.vsync` property is ignored, and -#: this option overrides it (to either force vsync on or off). If unset, -#: or set to None, the `pyglet.window.Window.vsync` property behaves -#: as documented. -#: xsync -#: If set (the default), pyglet will attempt to synchronise the drawing of -#: double-buffered windows to the border updates of the X11 window -#: manager. This improves the appearance of the window during resize -#: operations. This option only affects double-buffered windows on -#: X11 servers supporting the Xsync extension with a window manager -#: that implements the _NET_WM_SYNC_REQUEST protocol. -#: -#: .. versionadded:: 1.1 -#: search_local_libs -#: If False, pyglet won't try to search for libraries in the script -#: directory and its `lib` subdirectory. This is useful to load a local -#: library instead of the system installed version. This option is set -#: to True by default. -#: -#: .. versionadded:: 1.2 -#: -options = { - 'audio': ('xaudio2', 'directsound', 'openal', 'pulse', 'silent'), - 'debug_font': False, - 'debug_gl': not _enable_optimisations, - 'debug_gl_trace': False, - 'debug_gl_trace_args': False, - 'debug_gl_shaders': False, - 'debug_graphics_batch': False, - 'debug_lib': False, - 'debug_media': False, - 'debug_texture': False, - 'debug_trace': False, - 'debug_trace_args': False, - 'debug_trace_depth': 1, - 'debug_trace_flush': True, - 'debug_win32': False, - 'debug_input': False, - 'debug_x11': False, - 'shadow_window': True, - 'vsync': None, - 'xsync': True, - 'xlib_fullscreen_override_redirect': False, - 'search_local_libs': True, - 'win32_gdi_font': False, - 'headless': False, - 'headless_device': 0, - 'win32_disable_shaping': False, - 'dw_legacy_naming': False, - 'win32_disable_xinput': False, - 'com_mta': False, -} - -_option_types = { - 'audio': tuple, - 'debug_font': bool, - 'debug_gl': bool, - 'debug_gl_trace': bool, - 'debug_gl_trace_args': bool, - 'debug_gl_shaders': bool, - 'debug_graphics_batch': bool, - 'debug_lib': bool, - 'debug_media': bool, - 'debug_texture': bool, - 'debug_trace': bool, - 'debug_trace_args': bool, - 'debug_trace_depth': int, - 'debug_trace_flush': bool, - 'debug_win32': bool, - 'debug_input': bool, - 'debug_x11': bool, - 'shadow_window': bool, - 'vsync': bool, - 'xsync': bool, - 'xlib_fullscreen_override_redirect': bool, - 'search_local_libs': bool, - 'win32_gdi_font': bool, - 'headless': bool, - 'headless_device': int, - 'win32_disable_shaping': bool, - 'dw_legacy_naming': bool, - 'win32_disable_xinput': bool, - 'com_mta': bool -} - - -for key in options: - """Read defaults for options from environment""" - assert key in _option_types, f"Option '{key}' must have a type set in _option_types." - env = f'PYGLET_{key.upper()}' - try: - value = os.environ[env] - if _option_types[key] is tuple: - options[key] = value.split(',') - elif _option_types[key] is bool: - options[key] = value in ('true', 'TRUE', 'True', '1') - elif _option_types[key] is int: - options[key] = int(value) - except KeyError: - pass - - -if compat_platform == 'cygwin': - # This hack pretends that the posix-like ctypes provides windows - # functionality. COM does not work with this hack, so there is no - # DirectSound support. - import ctypes - - ctypes.windll = ctypes.cdll - ctypes.oledll = ctypes.cdll - ctypes.WINFUNCTYPE = ctypes.CFUNCTYPE - ctypes.HRESULT = ctypes.c_long - -# Call tracing -# ------------ - -_trace_filename_abbreviations = {} - - -def _trace_repr(value, size=40): - value = repr(value) - if len(value) > size: - value = value[:size // 2 - 2] + '...' + value[-size // 2 - 1:] - return value - - -def _trace_frame(thread, frame, indent): - from pyglet import lib - if frame.f_code is lib._TraceFunction.__call__.__code__: - is_ctypes = True - func = frame.f_locals['self']._func - name = func.__name__ - location = '[ctypes]' - else: - is_ctypes = False - code = frame.f_code - name = code.co_name - path = code.co_filename - line = code.co_firstlineno - - try: - filename = _trace_filename_abbreviations[path] - except KeyError: - # Trim path down - dir = '' - path, filename = os.path.split(path) - while len(dir + filename) < 30: - filename = os.path.join(dir, filename) - path, dir = os.path.split(path) - if not dir: - filename = os.path.join('', filename) - break - else: - filename = os.path.join('...', filename) - _trace_filename_abbreviations[path] = filename - - location = f'({filename}:{line})' - - if indent: - name = f'Called from {name}' - print(f'[{thread}] {indent}{name} {location}') - - if _trace_args: - if is_ctypes: - args = [_trace_repr(arg) for arg in frame.f_locals['args']] - print(f' {indent}args=({", ".join(args)})') - else: - for argname in code.co_varnames[:code.co_argcount]: - try: - argvalue = _trace_repr(frame.f_locals[argname]) - print(f' {indent}{argname}={argvalue}') - except: - pass - - if _trace_flush: - sys.stdout.flush() - - -def _thread_trace_func(thread): - def _trace_func(frame, event, arg): - if event == 'call': - indent = '' - for i in range(_trace_depth): - _trace_frame(thread, frame, indent) - indent += ' ' - frame = frame.f_back - if not frame: - break - - elif event == 'exception': - (exception, value, traceback) = arg - print('First chance exception raised:', repr(exception)) - - return _trace_func - - -def _install_trace(): - global _trace_thread_count - sys.setprofile(_thread_trace_func(_trace_thread_count)) - _trace_thread_count += 1 - - -_trace_thread_count = 0 -_trace_args = options['debug_trace_args'] -_trace_depth = options['debug_trace_depth'] -_trace_flush = options['debug_trace_flush'] -if options['debug_trace']: - _install_trace() - - -# Lazy loading -# ------------ - -class _ModuleProxy: - _module = None - - def __init__(self, name): - self.__dict__['_module_name'] = name - - def __getattr__(self, name): - try: - return getattr(self._module, name) - except AttributeError: - if self._module is not None: - raise - - import_name = f'pyglet.{self._module_name}' - __import__(import_name) - module = sys.modules[import_name] - object.__setattr__(self, '_module', module) - globals()[self._module_name] = module - return getattr(module, name) - - def __setattr__(self, name, value): - try: - setattr(self._module, name, value) - except AttributeError: - if self._module is not None: - raise - - import_name = f'pyglet.{self._module_name}' - __import__(import_name) - module = sys.modules[import_name] - object.__setattr__(self, '_module', module) - globals()[self._module_name] = module - setattr(module, name, value) - - -# Lazily load all modules, except if performing -# type checking or code inspection. -if TYPE_CHECKING: - from . import app - from . import canvas - from . import clock - from . import event - from . import font - from . import gl - from . import graphics - from . import gui - from . import input - from . import image - from . import lib - from . import math - from . import media - from . import model - from . import resource - from . import sprite - from . import shapes - from . import text - from . import window -else: - app = _ModuleProxy('app') - canvas = _ModuleProxy('canvas') - clock = _ModuleProxy('clock') - event = _ModuleProxy('event') - font = _ModuleProxy('font') - gl = _ModuleProxy('gl') - graphics = _ModuleProxy('graphics') - gui = _ModuleProxy('gui') - image = _ModuleProxy('image') - input = _ModuleProxy('input') - lib = _ModuleProxy('lib') - math = _ModuleProxy('math') - media = _ModuleProxy('media') - model = _ModuleProxy('model') - resource = _ModuleProxy('resource') - sprite = _ModuleProxy('sprite') - shapes = _ModuleProxy('shapes') - text = _ModuleProxy('text') - window = _ModuleProxy('window') diff --git a/spaces/akhaliq/Detic/detic/data/transforms/custom_transform.py b/spaces/akhaliq/Detic/detic/data/transforms/custom_transform.py deleted file mode 100644 index 3cc28b6b313dc084394ec5c9686169176987a44b..0000000000000000000000000000000000000000 --- a/spaces/akhaliq/Detic/detic/data/transforms/custom_transform.py +++ /dev/null @@ -1,114 +0,0 @@ -# -*- coding: utf-8 -*- -# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved -# Part of the code is from https://github.com/rwightman/efficientdet-pytorch/blob/master/effdet/data/transforms.py -# Modified by Xingyi Zhou -# The original code is under Apache-2.0 License -import numpy as np -import torch -import torch.nn.functional as F -from fvcore.transforms.transform import ( - CropTransform, - HFlipTransform, - NoOpTransform, - Transform, - TransformList, -) -from PIL import Image - -try: - import cv2 # noqa -except ImportError: - # OpenCV is an optional dependency at the moment - pass - -__all__ = [ - "EfficientDetResizeCropTransform", -] - -class EfficientDetResizeCropTransform(Transform): - """ - """ - - def __init__(self, scaled_h, scaled_w, offset_y, offset_x, img_scale, \ - target_size, interp=None): - """ - Args: - h, w (int): original image size - new_h, new_w (int): new image size - interp: PIL interpolation methods, defaults to bilinear. - """ - # TODO decide on PIL vs opencv - super().__init__() - if interp is None: - interp = Image.BILINEAR - self._set_attributes(locals()) - - def apply_image(self, img, interp=None): - assert len(img.shape) <= 4 - - if img.dtype == np.uint8: - pil_image = Image.fromarray(img) - interp_method = interp if interp is not None else self.interp - pil_image = pil_image.resize((self.scaled_w, self.scaled_h), interp_method) - ret = np.asarray(pil_image) - right = min(self.scaled_w, self.offset_x + self.target_size[1]) - lower = min(self.scaled_h, self.offset_y + self.target_size[0]) - if len(ret.shape) <= 3: - ret = ret[self.offset_y: lower, self.offset_x: right] - else: - ret = ret[..., self.offset_y: lower, self.offset_x: right, :] - else: - # PIL only supports uint8 - img = torch.from_numpy(img) - shape = list(img.shape) - shape_4d = shape[:2] + [1] * (4 - len(shape)) + shape[2:] - img = img.view(shape_4d).permute(2, 3, 0, 1) # hw(c) -> nchw - _PIL_RESIZE_TO_INTERPOLATE_MODE = {Image.BILINEAR: "bilinear", Image.BICUBIC: "bicubic"} - mode = _PIL_RESIZE_TO_INTERPOLATE_MODE[self.interp] - img = F.interpolate(img, (self.scaled_h, self.scaled_w), mode=mode, align_corners=False) - shape[:2] = (self.scaled_h, self.scaled_w) - ret = img.permute(2, 3, 0, 1).view(shape).numpy() # nchw -> hw(c) - right = min(self.scaled_w, self.offset_x + self.target_size[1]) - lower = min(self.scaled_h, self.offset_y + self.target_size[0]) - if len(ret.shape) <= 3: - ret = ret[self.offset_y: lower, self.offset_x: right] - else: - ret = ret[..., self.offset_y: lower, self.offset_x: right, :] - return ret - - - def apply_coords(self, coords): - coords[:, 0] = coords[:, 0] * self.img_scale - coords[:, 1] = coords[:, 1] * self.img_scale - coords[:, 0] -= self.offset_x - coords[:, 1] -= self.offset_y - return coords - - - def apply_segmentation(self, segmentation): - segmentation = self.apply_image(segmentation, interp=Image.NEAREST) - return segmentation - - - def inverse(self): - raise NotImplementedError - - - def inverse_apply_coords(self, coords): - coords[:, 0] += self.offset_x - coords[:, 1] += self.offset_y - coords[:, 0] = coords[:, 0] / self.img_scale - coords[:, 1] = coords[:, 1] / self.img_scale - return coords - - - def inverse_apply_box(self, box: np.ndarray) -> np.ndarray: - """ - """ - idxs = np.array([(0, 1), (2, 1), (0, 3), (2, 3)]).flatten() - coords = np.asarray(box).reshape(-1, 4)[:, idxs].reshape(-1, 2) - coords = self.inverse_apply_coords(coords).reshape((-1, 4, 2)) - minxy = coords.min(axis=1) - maxxy = coords.max(axis=1) - trans_boxes = np.concatenate((minxy, maxxy), axis=1) - return trans_boxes \ No newline at end of file diff --git a/spaces/akhaliq/VQMIVC/ParallelWaveGAN/test/test_style_melgan.py b/spaces/akhaliq/VQMIVC/ParallelWaveGAN/test/test_style_melgan.py deleted file mode 100644 index 10ee380c65630b7590f39a86d471fc753fb4ac13..0000000000000000000000000000000000000000 --- a/spaces/akhaliq/VQMIVC/ParallelWaveGAN/test/test_style_melgan.py +++ /dev/null @@ -1,177 +0,0 @@ -#!/usr/bin/env python3 - -# Copyright 2021 Tomoki Hayashi -# MIT License (https://opensource.org/licenses/MIT) - -"""Test code for StyleMelGAN modules.""" - -import logging - -import numpy as np -import pytest -import torch - -from parallel_wavegan.losses import DiscriminatorAdversarialLoss -from parallel_wavegan.losses import GeneratorAdversarialLoss -from parallel_wavegan.losses import MultiResolutionSTFTLoss -from parallel_wavegan.models import StyleMelGANDiscriminator -from parallel_wavegan.models import StyleMelGANGenerator - -from test_parallel_wavegan import make_mutli_reso_stft_loss_args - - -logging.basicConfig( - level=logging.DEBUG, - format="%(asctime)s (%(module)s:%(lineno)d) %(levelname)s: %(message)s", -) - - -def make_style_melgan_generator_args(**kwargs): - defaults = dict( - in_channels=128, - aux_channels=80, - channels=64, - out_channels=1, - kernel_size=9, - dilation=2, - bias=True, - noise_upsample_scales=[11, 2, 2, 2], - noise_upsample_activation="LeakyReLU", - noise_upsample_activation_params={"negative_slope": 0.2}, - upsample_scales=[2, 2, 2, 2, 2, 2, 2, 2, 1], - upsample_mode="nearest", - gated_function="softmax", - use_weight_norm=True, - ) - defaults.update(kwargs) - return defaults - - -def make_style_melgan_discriminator_args(**kwargs): - defaults = dict( - repeats=2, - window_sizes=[512, 1024, 2048, 4096], - pqmf_params=[ - [1, None, None, None], - [2, 62, 0.26700, 9.0], - [4, 62, 0.14200, 9.0], - [8, 62, 0.07949, 9.0], - ], - discriminator_params={ - "out_channels": 1, - "kernel_sizes": [5, 3], - "channels": 16, - "max_downsample_channels": 32, - "bias": True, - "downsample_scales": [4, 4, 4, 1], - "nonlinear_activation": "LeakyReLU", - "nonlinear_activation_params": {"negative_slope": 0.2}, - "pad": "ReflectionPad1d", - "pad_params": {}, - }, - use_weight_norm=True, - ) - defaults.update(kwargs) - return defaults - - -@pytest.mark.parametrize( - "dict_d", - [ - {"repeats": 1}, - {"repeats": 4}, - ], -) -def test_style_melgan_discriminator(dict_d): - batch_size = 4 - batch_length = 2 ** 14 - args_d = make_style_melgan_discriminator_args(**dict_d) - y = torch.randn(batch_size, 1, batch_length) - model_d = StyleMelGANDiscriminator(**args_d) - gen_adv_criterion = GeneratorAdversarialLoss() - outs = model_d(y) - gen_adv_criterion(outs) - - -@pytest.mark.parametrize( - "dict_g", - [ - {}, - {"noise_upsample_scales": [4, 4, 4]}, - ], -) -def test_style_melgan_generator(dict_g): - args_g = make_style_melgan_generator_args(**dict_g) - batch_size = 4 - batch_length = np.prod(args_g["noise_upsample_scales"]) * np.prod( - args_g["upsample_scales"] - ) - z = torch.randn(batch_size, args_g["in_channels"], 1) - c = torch.randn( - batch_size, - args_g["aux_channels"], - batch_length // np.prod(args_g["upsample_scales"]), - ) - model_g = StyleMelGANGenerator(**args_g) - model_g(c, z) - - # inference - c = torch.randn( - 512, - args_g["aux_channels"], - ) - y = model_g.inference(c) - print(y.shape) - - -@pytest.mark.parametrize( - "dict_g, dict_d, dict_loss, loss_type", - [ - ({}, {}, {}, "mse"), - ({}, {}, {}, "hinge"), - ({"noise_upsample_scales": [4, 4, 4]}, {}, {}, "mse"), - ({"gated_function": "sigmoid"}, {}, {}, "mse"), - ], -) -def test_style_melgan_trainable(dict_g, dict_d, dict_loss, loss_type): - # setup - args_g = make_style_melgan_generator_args(**dict_g) - args_d = make_style_melgan_discriminator_args(**dict_d) - args_loss = make_mutli_reso_stft_loss_args(**dict_loss) - batch_size = 4 - batch_length = np.prod(args_g["noise_upsample_scales"]) * np.prod( - args_g["upsample_scales"] - ) - y = torch.randn(batch_size, 1, batch_length) - c = torch.randn( - batch_size, - args_g["aux_channels"], - batch_length // np.prod(args_g["upsample_scales"]), - ) - model_g = StyleMelGANGenerator(**args_g) - model_d = StyleMelGANDiscriminator(**args_d) - aux_criterion = MultiResolutionSTFTLoss(**args_loss) - gen_adv_criterion = GeneratorAdversarialLoss(loss_type=loss_type) - dis_adv_criterion = DiscriminatorAdversarialLoss(loss_type=loss_type) - optimizer_g = torch.optim.Adam(model_g.parameters()) - optimizer_d = torch.optim.Adam(model_d.parameters()) - - # check generator trainable - y_hat = model_g(c) - p_hat = model_d(y_hat) - adv_loss = gen_adv_criterion(p_hat) - sc_loss, mag_loss = aux_criterion(y_hat, y) - aux_loss = sc_loss + mag_loss - loss_g = adv_loss + aux_loss - optimizer_g.zero_grad() - loss_g.backward() - optimizer_g.step() - - # check discriminator trainable - p = model_d(y) - p_hat = model_d(y_hat.detach()) - real_loss, fake_loss = dis_adv_criterion(p_hat, p) - loss_d = real_loss + fake_loss - optimizer_d.zero_grad() - loss_d.backward() - optimizer_d.step() diff --git a/spaces/alamin655/websurfx/public/templates/index.html b/spaces/alamin655/websurfx/public/templates/index.html deleted file mode 100644 index 87a54494f9082367226fc8c7d2bcb6e7adfe5fed..0000000000000000000000000000000000000000 --- a/spaces/alamin655/websurfx/public/templates/index.html +++ /dev/null @@ -1,8 +0,0 @@ -{{>header this}} -
        - Websurfx meta-search engine logo - {{>bar}} - -
        - -{{>footer}} diff --git a/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_vendor/cachecontrol/serialize.py b/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_vendor/cachecontrol/serialize.py deleted file mode 100644 index b075df1868207d2088fab2c60ef85da5508f0a3d..0000000000000000000000000000000000000000 --- a/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_vendor/cachecontrol/serialize.py +++ /dev/null @@ -1,186 +0,0 @@ -# SPDX-FileCopyrightText: 2015 Eric Larson -# -# SPDX-License-Identifier: Apache-2.0 - -import base64 -import io -import json -import zlib - -from pip._vendor import msgpack -from pip._vendor.requests.structures import CaseInsensitiveDict - -from .compat import HTTPResponse, pickle, text_type - - -def _b64_decode_bytes(b): - return base64.b64decode(b.encode("ascii")) - - -def _b64_decode_str(s): - return _b64_decode_bytes(s).decode("utf8") - - -_default_body_read = object() - - -class Serializer(object): - def dumps(self, request, response, body=None): - response_headers = CaseInsensitiveDict(response.headers) - - if body is None: - # When a body isn't passed in, we'll read the response. We - # also update the response with a new file handler to be - # sure it acts as though it was never read. - body = response.read(decode_content=False) - response._fp = io.BytesIO(body) - - # NOTE: This is all a bit weird, but it's really important that on - # Python 2.x these objects are unicode and not str, even when - # they contain only ascii. The problem here is that msgpack - # understands the difference between unicode and bytes and we - # have it set to differentiate between them, however Python 2 - # doesn't know the difference. Forcing these to unicode will be - # enough to have msgpack know the difference. - data = { - u"response": { - u"body": body, - u"headers": dict( - (text_type(k), text_type(v)) for k, v in response.headers.items() - ), - u"status": response.status, - u"version": response.version, - u"reason": text_type(response.reason), - u"strict": response.strict, - u"decode_content": response.decode_content, - } - } - - # Construct our vary headers - data[u"vary"] = {} - if u"vary" in response_headers: - varied_headers = response_headers[u"vary"].split(",") - for header in varied_headers: - header = text_type(header).strip() - header_value = request.headers.get(header, None) - if header_value is not None: - header_value = text_type(header_value) - data[u"vary"][header] = header_value - - return b",".join([b"cc=4", msgpack.dumps(data, use_bin_type=True)]) - - def loads(self, request, data): - # Short circuit if we've been given an empty set of data - if not data: - return - - # Determine what version of the serializer the data was serialized - # with - try: - ver, data = data.split(b",", 1) - except ValueError: - ver = b"cc=0" - - # Make sure that our "ver" is actually a version and isn't a false - # positive from a , being in the data stream. - if ver[:3] != b"cc=": - data = ver + data - ver = b"cc=0" - - # Get the version number out of the cc=N - ver = ver.split(b"=", 1)[-1].decode("ascii") - - # Dispatch to the actual load method for the given version - try: - return getattr(self, "_loads_v{}".format(ver))(request, data) - - except AttributeError: - # This is a version we don't have a loads function for, so we'll - # just treat it as a miss and return None - return - - def prepare_response(self, request, cached): - """Verify our vary headers match and construct a real urllib3 - HTTPResponse object. - """ - # Special case the '*' Vary value as it means we cannot actually - # determine if the cached response is suitable for this request. - # This case is also handled in the controller code when creating - # a cache entry, but is left here for backwards compatibility. - if "*" in cached.get("vary", {}): - return - - # Ensure that the Vary headers for the cached response match our - # request - for header, value in cached.get("vary", {}).items(): - if request.headers.get(header, None) != value: - return - - body_raw = cached["response"].pop("body") - - headers = CaseInsensitiveDict(data=cached["response"]["headers"]) - if headers.get("transfer-encoding", "") == "chunked": - headers.pop("transfer-encoding") - - cached["response"]["headers"] = headers - - try: - body = io.BytesIO(body_raw) - except TypeError: - # This can happen if cachecontrol serialized to v1 format (pickle) - # using Python 2. A Python 2 str(byte string) will be unpickled as - # a Python 3 str (unicode string), which will cause the above to - # fail with: - # - # TypeError: 'str' does not support the buffer interface - body = io.BytesIO(body_raw.encode("utf8")) - - return HTTPResponse(body=body, preload_content=False, **cached["response"]) - - def _loads_v0(self, request, data): - # The original legacy cache data. This doesn't contain enough - # information to construct everything we need, so we'll treat this as - # a miss. - return - - def _loads_v1(self, request, data): - try: - cached = pickle.loads(data) - except ValueError: - return - - return self.prepare_response(request, cached) - - def _loads_v2(self, request, data): - try: - cached = json.loads(zlib.decompress(data).decode("utf8")) - except (ValueError, zlib.error): - return - - # We need to decode the items that we've base64 encoded - cached["response"]["body"] = _b64_decode_bytes(cached["response"]["body"]) - cached["response"]["headers"] = dict( - (_b64_decode_str(k), _b64_decode_str(v)) - for k, v in cached["response"]["headers"].items() - ) - cached["response"]["reason"] = _b64_decode_str(cached["response"]["reason"]) - cached["vary"] = dict( - (_b64_decode_str(k), _b64_decode_str(v) if v is not None else v) - for k, v in cached["vary"].items() - ) - - return self.prepare_response(request, cached) - - def _loads_v3(self, request, data): - # Due to Python 2 encoding issues, it's impossible to know for sure - # exactly how to load v3 entries, thus we'll treat these as a miss so - # that they get rewritten out as v4 entries. - return - - def _loads_v4(self, request, data): - try: - cached = msgpack.loads(data, raw=False) - except ValueError: - return - - return self.prepare_response(request, cached) diff --git a/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_vendor/idna/compat.py b/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_vendor/idna/compat.py deleted file mode 100644 index 786e6bda63699b72d588ba91dd73df017570aee5..0000000000000000000000000000000000000000 --- a/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_vendor/idna/compat.py +++ /dev/null @@ -1,13 +0,0 @@ -from .core import * -from .codec import * -from typing import Any, Union - -def ToASCII(label: str) -> bytes: - return encode(label) - -def ToUnicode(label: Union[bytes, bytearray]) -> str: - return decode(label) - -def nameprep(s: Any) -> None: - raise NotImplementedError('IDNA 2008 does not utilise nameprep protocol') - diff --git a/spaces/aliabid94/new-theme/app.py b/spaces/aliabid94/new-theme/app.py deleted file mode 100644 index a9ccbe66d82c2156fee7f1808ed3963945178911..0000000000000000000000000000000000000000 --- a/spaces/aliabid94/new-theme/app.py +++ /dev/null @@ -1,147 +0,0 @@ -import time - -from theme_dropdown import create_theme_dropdown # noqa: F401 - -import gradio as gr - -dropdown, js = create_theme_dropdown() - -with gr.Blocks(theme='aliabid94/new-theme') as demo: - with gr.Row().style(equal_height=True): - with gr.Column(scale=10): - gr.Markdown( - """ - # Theme preview: `test` - To use this theme, set `theme='aliabid94/new-theme'` in `gr.Blocks()` or `gr.Interface()`. - You can append an `@` and a semantic version expression, e.g. @>=1.0.0,<2.0.0 to pin to a given version - of this theme. - """ - ) - with gr.Column(scale=3): - with gr.Box(): - dropdown.render() - toggle_dark = gr.Button(value="Toggle Dark").style(full_width=True) - - dropdown.change(None, dropdown, None, _js=js) - toggle_dark.click( - None, - _js=""" - () => { - document.body.classList.toggle('dark'); - document.querySelector('gradio-app').style.backgroundColor = 'var(--color-background-primary)' - } - """, - ) - - name = gr.Textbox( - label="Name", - info="Full name, including middle name. No special characters.", - placeholder="John Doe", - value="John Doe", - interactive=True, - ) - - with gr.Row(): - slider1 = gr.Slider(label="Slider 1") - slider2 = gr.Slider(label="Slider 2") - gr.CheckboxGroup(["A", "B", "C"], label="Checkbox Group") - - with gr.Row(): - with gr.Column(variant="panel", scale=1): - gr.Markdown("## Panel 1") - radio = gr.Radio( - ["A", "B", "C"], - label="Radio", - info="Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat.", - ) - drop = gr.Dropdown(["Option 1", "Option 2", "Option 3"], show_label=False) - drop_2 = gr.Dropdown( - ["Option A", "Option B", "Option C"], - multiselect=True, - value=["Option A"], - label="Dropdown", - interactive=True, - ) - check = gr.Checkbox(label="Go") - with gr.Column(variant="panel", scale=2): - img = gr.Image( - "https://gradio.app/assets/img/header-image.jpg", label="Image" - ).style(height=320) - with gr.Row(): - go_btn = gr.Button("Go", label="Primary Button", variant="primary") - clear_btn = gr.Button( - "Clear", label="Secondary Button", variant="secondary" - ) - - def go(*args): - time.sleep(3) - return "https://gradio.app/assets/img/header-image.jpg" - - go_btn.click(go, [radio, drop, drop_2, check, name], img, api_name="go") - - def clear(): - time.sleep(0.2) - return None - - clear_btn.click(clear, None, img) - - with gr.Row(): - btn1 = gr.Button("Button 1").style(size="sm") - btn2 = gr.UploadButton().style(size="sm") - stop_btn = gr.Button("Stop", label="Stop Button", variant="stop").style( - size="sm" - ) - - with gr.Row(): - gr.Dataframe(value=[[1, 2, 3], [4, 5, 6], [7, 8, 9]], label="Dataframe") - gr.JSON( - value={"a": 1, "b": 2, "c": {"test": "a", "test2": [1, 2, 3]}}, label="JSON" - ) - gr.Label(value={"cat": 0.7, "dog": 0.2, "fish": 0.1}) - gr.File() - with gr.Row(): - gr.ColorPicker() - gr.Video("https://gradio-static-files.s3.us-west-2.amazonaws.com/world.mp4") - gr.Gallery( - [ - ( - "https://gradio-static-files.s3.us-west-2.amazonaws.com/lion.jpg", - "lion", - ), - ( - "https://gradio-static-files.s3.us-west-2.amazonaws.com/logo.png", - "logo", - ), - ( - "https://gradio-static-files.s3.us-west-2.amazonaws.com/tower.jpg", - "tower", - ), - ] - ).style(height="200px", grid=2) - - with gr.Row(): - with gr.Column(scale=2): - chatbot = gr.Chatbot([("Hello", "Hi")], label="Chatbot") - chat_btn = gr.Button("Add messages") - - def chat(history): - time.sleep(2) - yield [["How are you?", "I am good."]] - - chat_btn.click( - lambda history: history - + [["How are you?", "I am good."]] - + (time.sleep(2) or []), - chatbot, - chatbot, - ) - with gr.Column(scale=1): - with gr.Accordion("Advanced Settings"): - gr.Markdown("Hello") - gr.Number(label="Chatbot control 1") - gr.Number(label="Chatbot control 2") - gr.Number(label="Chatbot control 3") - - -if __name__ == "__main__": - demo.queue().launch() diff --git a/spaces/alonsosilva/tokenizer/app.py b/spaces/alonsosilva/tokenizer/app.py deleted file mode 100644 index 59fdc7eb607b7e210fdbf78216447ce31c320ebf..0000000000000000000000000000000000000000 --- a/spaces/alonsosilva/tokenizer/app.py +++ /dev/null @@ -1,56 +0,0 @@ -import pandas as pd -import random -import solara -from transformers import AutoTokenizer, AutoModelForCausalLM - -tokenizer = AutoTokenizer.from_pretrained('gpt2') -model = AutoModelForCausalLM.from_pretrained('gpt2') -text = solara.reactive("Example text is here") -text2 = solara.reactive("") -text3 = solara.reactive("") - -# Create dataframe mapping token IDs and tokens -df = pd.DataFrame() -df["token ID"] = range(50257) -df["token"] = [tokenizer.decode([i]) for i in range(50257)] - -@solara.component -def Page(): - with solara.Column(margin=10): - solara.Markdown("#GPT token encoder and decoder") - solara.InputText("Enter text to tokenize it:", value=text, continuous_update=True) - tokens = tokenizer.encode(text.value, return_tensors="pt") - spans = "" - spans1 = "" - for i, token in enumerate(tokens[0]): - random.seed(i) - random_color = ''.join([random.choice('0123456789ABCDEF') for k in range(6)]) - spans += " " + f"{token.numpy()}" - spans1 += " " + f"""{token.numpy()}{tokenizer.decode(token)}""" - solara.Markdown(f"{spans}") - if len(tokens[0]) == 1: - solara.Markdown(f"{len(tokens[0])} token") - else: - solara.Markdown(f"{len(tokens[0])} tokens") - solara.Markdown(f'{spans1}') - solara.InputText("Or convert space separated tokens to text:", value=text2, continuous_update=True) - spans2 = text2.value.split(' ') - spans2 = [int(span) for span in spans2 if span != ""] - spans2 = tokenizer.decode(spans2) - solara.Markdown(f'{spans2}') - solara.Markdown("##Search tokens") - solara.InputText("Search for a token:", value=text3, continuous_update=True) - df_subset = df[df["token"].str.startswith(text3.value)] - solara.Markdown(f"{df_subset.shape[0]:,} results") - solara.DataFrame(df_subset, items_per_page=10) - -Page() - diff --git a/spaces/andryMLOPS/ASTA-GPT-3.8_web_ui/server/bp.py b/spaces/andryMLOPS/ASTA-GPT-3.8_web_ui/server/bp.py deleted file mode 100644 index 61d416797039dababd9e8222b4fc910ef65c40b9..0000000000000000000000000000000000000000 --- a/spaces/andryMLOPS/ASTA-GPT-3.8_web_ui/server/bp.py +++ /dev/null @@ -1,6 +0,0 @@ -from flask import Blueprint - -bp = Blueprint('bp', __name__, - template_folder='./../client/html', - static_folder='./../client', - static_url_path='assets') diff --git a/spaces/arnavkartikeya/SCRIPture-final/train_caption.py b/spaces/arnavkartikeya/SCRIPture-final/train_caption.py deleted file mode 100644 index 7c639ac646b9a1b8074b6e9c2343b961de76db05..0000000000000000000000000000000000000000 --- a/spaces/arnavkartikeya/SCRIPture-final/train_caption.py +++ /dev/null @@ -1,206 +0,0 @@ -''' - * Copyright (c) 2022, salesforce.com, inc. - * All rights reserved. - * SPDX-License-Identifier: BSD-3-Clause - * For full license text, see LICENSE.txt file in the repo root or https://opensource.org/licenses/BSD-3-Clause - * By Junnan Li -''' -import argparse -import os -import ruamel_yaml as yaml -import numpy as np -import random -import time -import datetime -import json -from pathlib import Path - -import torch -import torch.nn as nn -import torch.nn.functional as F -import torch.backends.cudnn as cudnn -import torch.distributed as dist -from torch.utils.data import DataLoader - -from models.blip import blip_decoder -import utils -from utils import cosine_lr_schedule -from data import create_dataset, create_sampler, create_loader -from data.utils import save_result, coco_caption_eval - -def train(model, data_loader, optimizer, epoch, device): - # train - model.train() - - metric_logger = utils.MetricLogger(delimiter=" ") - metric_logger.add_meter('lr', utils.SmoothedValue(window_size=1, fmt='{value:.6f}')) - metric_logger.add_meter('loss', utils.SmoothedValue(window_size=1, fmt='{value:.4f}')) - header = 'Train Caption Epoch: [{}]'.format(epoch) - print_freq = 50 - - for i, (image, caption, _) in enumerate(metric_logger.log_every(data_loader, print_freq, header)): - image = image.to(device) - - loss = model(image, caption) - - optimizer.zero_grad() - loss.backward() - optimizer.step() - - metric_logger.update(loss=loss.item()) - metric_logger.update(lr=optimizer.param_groups[0]["lr"]) - - # gather the stats from all processes - metric_logger.synchronize_between_processes() - print("Averaged stats:", metric_logger.global_avg()) - return {k: "{:.3f}".format(meter.global_avg) for k, meter in metric_logger.meters.items()} - - -@torch.no_grad() -def evaluate(model, data_loader, device, config): - # evaluate - model.eval() - - metric_logger = utils.MetricLogger(delimiter=" ") - header = 'Caption generation:' - print_freq = 10 - - result = [] - for image, image_id in metric_logger.log_every(data_loader, print_freq, header): - - image = image.to(device) - - captions = model.generate(image, sample=False, num_beams=config['num_beams'], max_length=config['max_length'], - min_length=config['min_length']) - - for caption, img_id in zip(captions, image_id): - result.append({"image_id": img_id.item(), "caption": caption}) - - return result - - -def main(args, config): - utils.init_distributed_mode(args) - - device = torch.device(args.device) - - # fix the seed for reproducibility - seed = args.seed + utils.get_rank() - torch.manual_seed(seed) - np.random.seed(seed) - random.seed(seed) - cudnn.benchmark = True - - #### Dataset #### - print("Creating captioning dataset") - train_dataset, val_dataset, test_dataset = create_dataset('caption_coco', config) - - if args.distributed: - num_tasks = utils.get_world_size() - global_rank = utils.get_rank() - samplers = create_sampler([train_dataset,val_dataset,test_dataset], [True,False,False], num_tasks, global_rank) - else: - samplers = [None, None, None] - - train_loader, val_loader, test_loader = create_loader([train_dataset, val_dataset, test_dataset],samplers, - batch_size=[config['batch_size']]*3,num_workers=[4,4,4], - is_trains=[True, False, False], collate_fns=[None,None,None]) - - #### Model #### - print("Creating model") - model = blip_decoder(pretrained=config['pretrained'], image_size=config['image_size'], vit=config['vit'], - vit_grad_ckpt=config['vit_grad_ckpt'], vit_ckpt_layer=config['vit_ckpt_layer'], - prompt=config['prompt']) - - model = model.to(device) - - model_without_ddp = model - if args.distributed: - model = torch.nn.parallel.DistributedDataParallel(model, device_ids=[args.gpu]) - model_without_ddp = model.module - - optimizer = torch.optim.AdamW(params=model.parameters(), lr=config['init_lr'], weight_decay=config['weight_decay']) - - best = 0 - best_epoch = 0 - - print("Start training") - start_time = time.time() - for epoch in range(0, config['max_epoch']): - if not args.evaluate: - if args.distributed: - train_loader.sampler.set_epoch(epoch) - - cosine_lr_schedule(optimizer, epoch, config['max_epoch'], config['init_lr'], config['min_lr']) - - train_stats = train(model, train_loader, optimizer, epoch, device) - - val_result = evaluate(model_without_ddp, val_loader, device, config) - val_result_file = save_result(val_result, args.result_dir, 'val_epoch%d'%epoch, remove_duplicate='image_id') - - test_result = evaluate(model_without_ddp, test_loader, device, config) - test_result_file = save_result(test_result, args.result_dir, 'test_epoch%d'%epoch, remove_duplicate='image_id') - - if utils.is_main_process(): - coco_val = coco_caption_eval(config['coco_gt_root'],val_result_file,'val') - coco_test = coco_caption_eval(config['coco_gt_root'],test_result_file,'test') - - if args.evaluate: - log_stats = {**{f'val_{k}': v for k, v in coco_val.eval.items()}, - **{f'test_{k}': v for k, v in coco_test.eval.items()}, - } - with open(os.path.join(args.output_dir, "evaluate.txt"),"a") as f: - f.write(json.dumps(log_stats) + "\n") - else: - save_obj = { - 'model': model_without_ddp.state_dict(), - 'optimizer': optimizer.state_dict(), - 'config': config, - 'epoch': epoch, - } - - if coco_val.eval['CIDEr'] + coco_val.eval['Bleu_4'] > best: - best = coco_val.eval['CIDEr'] + coco_val.eval['Bleu_4'] - best_epoch = epoch - torch.save(save_obj, os.path.join(args.output_dir, 'checkpoint_best.pth')) - - log_stats = {**{f'train_{k}': v for k, v in train_stats.items()}, - **{f'val_{k}': v for k, v in coco_val.eval.items()}, - **{f'test_{k}': v for k, v in coco_test.eval.items()}, - 'epoch': epoch, - 'best_epoch': best_epoch, - } - with open(os.path.join(args.output_dir, "log.txt"),"a") as f: - f.write(json.dumps(log_stats) + "\n") - - if args.evaluate: - break - dist.barrier() - - total_time = time.time() - start_time - total_time_str = str(datetime.timedelta(seconds=int(total_time))) - print('Training time {}'.format(total_time_str)) - - -if __name__ == '__main__': - parser = argparse.ArgumentParser() - parser.add_argument('--config', default='./configs/caption_coco.yaml') - parser.add_argument('--output_dir', default='output/Caption_coco') - parser.add_argument('--evaluate', action='store_true') - parser.add_argument('--device', default='cuda') - parser.add_argument('--seed', default=42, type=int) - parser.add_argument('--world_size', default=1, type=int, help='number of distributed processes') - parser.add_argument('--dist_url', default='env://', help='url used to set up distributed training') - parser.add_argument('--distributed', default=True, type=bool) - args = parser.parse_args() - - config = yaml.load(open(args.config, 'r'), Loader=yaml.Loader) - - args.result_dir = os.path.join(args.output_dir, 'result') - - Path(args.output_dir).mkdir(parents=True, exist_ok=True) - Path(args.result_dir).mkdir(parents=True, exist_ok=True) - - yaml.dump(config, open(os.path.join(args.output_dir, 'config.yaml'), 'w')) - - main(args, config) \ No newline at end of file diff --git a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/Crypto/SelfTest/Math/test_Primality.py b/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/Crypto/SelfTest/Math/test_Primality.py deleted file mode 100644 index 38344f35b33aeb893e14dba8f75365e6a2615540..0000000000000000000000000000000000000000 --- a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/Crypto/SelfTest/Math/test_Primality.py +++ /dev/null @@ -1,118 +0,0 @@ -# -# SelfTest/Math/test_Primality.py: Self-test for Primality module -# -# =================================================================== -# -# Copyright (c) 2014, Legrandin -# All rights reserved. -# -# Redistribution and use in source and binary forms, with or without -# modification, are permitted provided that the following conditions -# are met: -# -# 1. Redistributions of source code must retain the above copyright -# notice, this list of conditions and the following disclaimer. -# 2. Redistributions in binary form must reproduce the above copyright -# notice, this list of conditions and the following disclaimer in -# the documentation and/or other materials provided with the -# distribution. -# -# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS -# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT -# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS -# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE -# COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, -# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, -# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; -# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER -# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT -# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN -# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE -# POSSIBILITY OF SUCH DAMAGE. -# =================================================================== - -"""Self-test for Math.Numbers""" - -import unittest - -from Crypto.SelfTest.st_common import list_test_cases - -from Crypto.Util.py3compat import * - -from Crypto.Math.Numbers import Integer -from Crypto.Math.Primality import ( - PROBABLY_PRIME, COMPOSITE, - miller_rabin_test, lucas_test, - test_probable_prime, - generate_probable_prime, - generate_probable_safe_prime, - ) - - -class TestPrimality(unittest.TestCase): - - primes = (1, 2, 3, 5, 7, 11, 13, 17, 19, 23, 2**127-1, 175637383534939453397801320455508570374088202376942372758907369518414308188137781042871856139027160010343454418881888953150175357127346872102307696660678617989191485418582475696230580407111841072614783095326672517315988762029036079794994990250662362650625650262324085116467511357592728695033227611029693067539) - composites = (0, 4, 6, 8, 9, 10, 12, 14, 15, 16, 18, 20, 21, 7*23, (2**19-1)*(2**67-1), 9746347772161,) - - def test_miller_rabin(self): - for prime in self.primes: - self.assertEqual(miller_rabin_test(prime, 3), PROBABLY_PRIME) - for composite in self.composites: - self.assertEqual(miller_rabin_test(composite, 3), COMPOSITE) - self.assertRaises(ValueError, miller_rabin_test, -1, 3) - - def test_lucas(self): - for prime in self.primes: - res = lucas_test(prime) - self.assertEqual(res, PROBABLY_PRIME) - for composite in self.composites: - res = lucas_test(composite) - self.assertEqual(res, COMPOSITE) - self.assertRaises(ValueError, lucas_test, -1) - - def test_is_prime(self): - primes = (170141183460469231731687303715884105727, - 19175002942688032928599, - 1363005552434666078217421284621279933627102780881053358473, - 2 ** 521 - 1) - for p in primes: - self.assertEqual(test_probable_prime(p), PROBABLY_PRIME) - - not_primes = ( - 4754868377601046732119933839981363081972014948522510826417784001, - 1334733877147062382486934807105197899496002201113849920496510541601, - 260849323075371835669784094383812120359260783810157225730623388382401, - ) - for np in not_primes: - self.assertEqual(test_probable_prime(np), COMPOSITE) - - from Crypto.Util.number import sieve_base - for p in sieve_base[:100]: - res = test_probable_prime(p) - self.assertEqual(res, PROBABLY_PRIME) - - def test_generate_prime_bit_size(self): - p = generate_probable_prime(exact_bits=512) - self.assertEqual(p.size_in_bits(), 512) - - def test_generate_prime_filter(self): - def ending_with_one(number): - return number % 10 == 1 - - for x in range(20): - q = generate_probable_prime(exact_bits=160, - prime_filter=ending_with_one) - self.assertEqual(q % 10, 1) - - def test_generate_safe_prime(self): - p = generate_probable_safe_prime(exact_bits=161) - self.assertEqual(p.size_in_bits(), 161) - -def get_tests(config={}): - tests = [] - tests += list_test_cases(TestPrimality) - return tests - -if __name__ == '__main__': - suite = lambda: unittest.TestSuite(get_tests()) - unittest.main(defaultTest='suite') diff --git a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/PIL/BlpImagePlugin.py b/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/PIL/BlpImagePlugin.py deleted file mode 100644 index 533997737167a7d7231b53c11a60838fba5df88e..0000000000000000000000000000000000000000 --- a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/PIL/BlpImagePlugin.py +++ /dev/null @@ -1,484 +0,0 @@ -""" -Blizzard Mipmap Format (.blp) -Jerome Leclanche - -The contents of this file are hereby released in the public domain (CC0) -Full text of the CC0 license: - https://creativecommons.org/publicdomain/zero/1.0/ - -BLP1 files, used mostly in Warcraft III, are not fully supported. -All types of BLP2 files used in World of Warcraft are supported. - -The BLP file structure consists of a header, up to 16 mipmaps of the -texture - -Texture sizes must be powers of two, though the two dimensions do -not have to be equal; 512x256 is valid, but 512x200 is not. -The first mipmap (mipmap #0) is the full size image; each subsequent -mipmap halves both dimensions. The final mipmap should be 1x1. - -BLP files come in many different flavours: -* JPEG-compressed (type == 0) - only supported for BLP1. -* RAW images (type == 1, encoding == 1). Each mipmap is stored as an - array of 8-bit values, one per pixel, left to right, top to bottom. - Each value is an index to the palette. -* DXT-compressed (type == 1, encoding == 2): -- DXT1 compression is used if alpha_encoding == 0. - - An additional alpha bit is used if alpha_depth == 1. - - DXT3 compression is used if alpha_encoding == 1. - - DXT5 compression is used if alpha_encoding == 7. -""" - -import os -import struct -from enum import IntEnum -from io import BytesIO - -from . import Image, ImageFile -from ._deprecate import deprecate - - -class Format(IntEnum): - JPEG = 0 - - -class Encoding(IntEnum): - UNCOMPRESSED = 1 - DXT = 2 - UNCOMPRESSED_RAW_BGRA = 3 - - -class AlphaEncoding(IntEnum): - DXT1 = 0 - DXT3 = 1 - DXT5 = 7 - - -def __getattr__(name): - for enum, prefix in { - Format: "BLP_FORMAT_", - Encoding: "BLP_ENCODING_", - AlphaEncoding: "BLP_ALPHA_ENCODING_", - }.items(): - if name.startswith(prefix): - name = name[len(prefix) :] - if name in enum.__members__: - deprecate(f"{prefix}{name}", 10, f"{enum.__name__}.{name}") - return enum[name] - raise AttributeError(f"module '{__name__}' has no attribute '{name}'") - - -def unpack_565(i): - return ((i >> 11) & 0x1F) << 3, ((i >> 5) & 0x3F) << 2, (i & 0x1F) << 3 - - -def decode_dxt1(data, alpha=False): - """ - input: one "row" of data (i.e. will produce 4*width pixels) - """ - - blocks = len(data) // 8 # number of blocks in row - ret = (bytearray(), bytearray(), bytearray(), bytearray()) - - for block in range(blocks): - # Decode next 8-byte block. - idx = block * 8 - color0, color1, bits = struct.unpack_from("> 2 - - a = 0xFF - if control == 0: - r, g, b = r0, g0, b0 - elif control == 1: - r, g, b = r1, g1, b1 - elif control == 2: - if color0 > color1: - r = (2 * r0 + r1) // 3 - g = (2 * g0 + g1) // 3 - b = (2 * b0 + b1) // 3 - else: - r = (r0 + r1) // 2 - g = (g0 + g1) // 2 - b = (b0 + b1) // 2 - elif control == 3: - if color0 > color1: - r = (2 * r1 + r0) // 3 - g = (2 * g1 + g0) // 3 - b = (2 * b1 + b0) // 3 - else: - r, g, b, a = 0, 0, 0, 0 - - if alpha: - ret[j].extend([r, g, b, a]) - else: - ret[j].extend([r, g, b]) - - return ret - - -def decode_dxt3(data): - """ - input: one "row" of data (i.e. will produce 4*width pixels) - """ - - blocks = len(data) // 16 # number of blocks in row - ret = (bytearray(), bytearray(), bytearray(), bytearray()) - - for block in range(blocks): - idx = block * 16 - block = data[idx : idx + 16] - # Decode next 16-byte block. - bits = struct.unpack_from("<8B", block) - color0, color1 = struct.unpack_from(">= 4 - else: - high = True - a &= 0xF - a *= 17 # We get a value between 0 and 15 - - color_code = (code >> 2 * (4 * j + i)) & 0x03 - - if color_code == 0: - r, g, b = r0, g0, b0 - elif color_code == 1: - r, g, b = r1, g1, b1 - elif color_code == 2: - r = (2 * r0 + r1) // 3 - g = (2 * g0 + g1) // 3 - b = (2 * b0 + b1) // 3 - elif color_code == 3: - r = (2 * r1 + r0) // 3 - g = (2 * g1 + g0) // 3 - b = (2 * b1 + b0) // 3 - - ret[j].extend([r, g, b, a]) - - return ret - - -def decode_dxt5(data): - """ - input: one "row" of data (i.e. will produce 4 * width pixels) - """ - - blocks = len(data) // 16 # number of blocks in row - ret = (bytearray(), bytearray(), bytearray(), bytearray()) - - for block in range(blocks): - idx = block * 16 - block = data[idx : idx + 16] - # Decode next 16-byte block. - a0, a1 = struct.unpack_from("> alphacode_index) & 0x07 - elif alphacode_index == 15: - alphacode = (alphacode2 >> 15) | ((alphacode1 << 1) & 0x06) - else: # alphacode_index >= 18 and alphacode_index <= 45 - alphacode = (alphacode1 >> (alphacode_index - 16)) & 0x07 - - if alphacode == 0: - a = a0 - elif alphacode == 1: - a = a1 - elif a0 > a1: - a = ((8 - alphacode) * a0 + (alphacode - 1) * a1) // 7 - elif alphacode == 6: - a = 0 - elif alphacode == 7: - a = 255 - else: - a = ((6 - alphacode) * a0 + (alphacode - 1) * a1) // 5 - - color_code = (code >> 2 * (4 * j + i)) & 0x03 - - if color_code == 0: - r, g, b = r0, g0, b0 - elif color_code == 1: - r, g, b = r1, g1, b1 - elif color_code == 2: - r = (2 * r0 + r1) // 3 - g = (2 * g0 + g1) // 3 - b = (2 * b0 + b1) // 3 - elif color_code == 3: - r = (2 * r1 + r0) // 3 - g = (2 * g1 + g0) // 3 - b = (2 * b1 + b0) // 3 - - ret[j].extend([r, g, b, a]) - - return ret - - -class BLPFormatError(NotImplementedError): - pass - - -def _accept(prefix): - return prefix[:4] in (b"BLP1", b"BLP2") - - -class BlpImageFile(ImageFile.ImageFile): - """ - Blizzard Mipmap Format - """ - - format = "BLP" - format_description = "Blizzard Mipmap Format" - - def _open(self): - self.magic = self.fp.read(4) - - self.fp.seek(5, os.SEEK_CUR) - (self._blp_alpha_depth,) = struct.unpack("If the final token in the list is an {@link Token#EOF} token, it will be used -# as the EOF token for every call to {@link #nextToken} after the end of the -# list is reached. Otherwise, an EOF token will be created.

        -# -from antlr4.CommonTokenFactory import CommonTokenFactory -from antlr4.Lexer import TokenSource -from antlr4.Token import Token - - -class ListTokenSource(TokenSource): - - # Constructs a new {@link ListTokenSource} instance from the specified - # collection of {@link Token} objects and source name. - # - # @param tokens The collection of {@link Token} objects to provide as a - # {@link TokenSource}. - # @param sourceName The name of the {@link TokenSource}. If this value is - # {@code null}, {@link #getSourceName} will attempt to infer the name from - # the next {@link Token} (or the previous token if the end of the input has - # been reached). - # - # @exception NullPointerException if {@code tokens} is {@code null} - # - def __init__(self, tokens:list, sourceName:str=None): - if tokens is None: - raise ReferenceError("tokens cannot be null") - self.tokens = tokens - self.sourceName = sourceName - # The index into {@link #tokens} of token to return by the next call to - # {@link #nextToken}. The end of the input is indicated by this value - # being greater than or equal to the number of items in {@link #tokens}. - self.pos = 0 - # This field caches the EOF token for the token source. - self.eofToken = None - # This is the backing field for {@link #getTokenFactory} and - self._factory = CommonTokenFactory.DEFAULT - - - # - # {@inheritDoc} - # - @property - def column(self): - if self.pos < len(self.tokens): - return self.tokens[self.pos].column - elif self.eofToken is not None: - return self.eofToken.column - elif len(self.tokens) > 0: - # have to calculate the result from the line/column of the previous - # token, along with the text of the token. - lastToken = self.tokens[len(self.tokens) - 1] - tokenText = lastToken.text - if tokenText is not None: - lastNewLine = tokenText.rfind('\n') - if lastNewLine >= 0: - return len(tokenText) - lastNewLine - 1 - return lastToken.column + lastToken.stop - lastToken.start + 1 - - # only reach this if tokens is empty, meaning EOF occurs at the first - # position in the input - return 0 - - # - # {@inheritDoc} - # - def nextToken(self): - if self.pos >= len(self.tokens): - if self.eofToken is None: - start = -1 - if len(self.tokens) > 0: - previousStop = self.tokens[len(self.tokens) - 1].stop - if previousStop != -1: - start = previousStop + 1 - stop = max(-1, start - 1) - self.eofToken = self._factory.create((self, self.getInputStream()), - Token.EOF, "EOF", Token.DEFAULT_CHANNEL, start, stop, self.line, self.column) - return self.eofToken - t = self.tokens[self.pos] - if self.pos == len(self.tokens) - 1 and t.type == Token.EOF: - self.eofToken = t - self.pos += 1 - return t - - # - # {@inheritDoc} - # - @property - def line(self): - if self.pos < len(self.tokens): - return self.tokens[self.pos].line - elif self.eofToken is not None: - return self.eofToken.line - elif len(self.tokens) > 0: - # have to calculate the result from the line/column of the previous - # token, along with the text of the token. - lastToken = self.tokens[len(self.tokens) - 1] - line = lastToken.line - tokenText = lastToken.text - if tokenText is not None: - line += tokenText.count('\n') - - # if no text is available, assume the token did not contain any newline characters. - return line - - # only reach this if tokens is empty, meaning EOF occurs at the first - # position in the input - return 1 - - # - # {@inheritDoc} - # - def getInputStream(self): - if self.pos < len(self.tokens): - return self.tokens[self.pos].getInputStream() - elif self.eofToken is not None: - return self.eofToken.getInputStream() - elif len(self.tokens) > 0: - return self.tokens[len(self.tokens) - 1].getInputStream() - else: - # no input stream information is available - return None - - # - # {@inheritDoc} - # - def getSourceName(self): - if self.sourceName is not None: - return self.sourceName - inputStream = self.getInputStream() - if inputStream is not None: - return inputStream.getSourceName() - else: - return "List" \ No newline at end of file diff --git a/spaces/asd998877/TsGpt/modules/utils.py b/spaces/asd998877/TsGpt/modules/utils.py deleted file mode 100644 index e1516e1fad4761787070d24e867bea57d86ac9ed..0000000000000000000000000000000000000000 --- a/spaces/asd998877/TsGpt/modules/utils.py +++ /dev/null @@ -1,548 +0,0 @@ -# -*- coding:utf-8 -*- -from __future__ import annotations -from typing import TYPE_CHECKING, Any, Callable, Dict, List, Tuple, Type -import logging -import json -import os -import datetime -import hashlib -import csv -import requests -import re -import html -import sys -import subprocess - -import gradio as gr -from pypinyin import lazy_pinyin -import tiktoken -import mdtex2html -from markdown import markdown -from pygments import highlight -from pygments.lexers import get_lexer_by_name -from pygments.formatters import HtmlFormatter -import pandas as pd - -from modules.presets import * -from . import shared -from modules.config import retrieve_proxy - -if TYPE_CHECKING: - from typing import TypedDict - - class DataframeData(TypedDict): - headers: List[str] - data: List[List[str | int | bool]] - -def predict(current_model, *args): - iter = current_model.predict(*args) - for i in iter: - yield i - -def billing_info(current_model): - return current_model.billing_info() - -def set_key(current_model, *args): - return current_model.set_key(*args) - -def load_chat_history(current_model, *args): - return current_model.load_chat_history(*args) - -def interrupt(current_model, *args): - return current_model.interrupt(*args) - -def reset(current_model, *args): - return current_model.reset(*args) - -def retry(current_model, *args): - iter = current_model.retry(*args) - for i in iter: - yield i - -def delete_first_conversation(current_model, *args): - return current_model.delete_first_conversation(*args) - -def delete_last_conversation(current_model, *args): - return current_model.delete_last_conversation(*args) - -def set_system_prompt(current_model, *args): - return current_model.set_system_prompt(*args) - -def save_chat_history(current_model, *args): - return current_model.save_chat_history(*args) - -def export_markdown(current_model, *args): - return current_model.export_markdown(*args) - -def load_chat_history(current_model, *args): - return current_model.load_chat_history(*args) - -def set_token_upper_limit(current_model, *args): - return current_model.set_token_upper_limit(*args) - -def set_temperature(current_model, *args): - current_model.set_temperature(*args) - -def set_top_p(current_model, *args): - current_model.set_top_p(*args) - -def set_n_choices(current_model, *args): - current_model.set_n_choices(*args) - -def set_stop_sequence(current_model, *args): - current_model.set_stop_sequence(*args) - -def set_max_tokens(current_model, *args): - current_model.set_max_tokens(*args) - -def set_presence_penalty(current_model, *args): - current_model.set_presence_penalty(*args) - -def set_frequency_penalty(current_model, *args): - current_model.set_frequency_penalty(*args) - -def set_logit_bias(current_model, *args): - current_model.set_logit_bias(*args) - -def set_user_identifier(current_model, *args): - current_model.set_user_identifier(*args) - -def set_single_turn(current_model, *args): - current_model.set_single_turn(*args) - -def handle_file_upload(current_model, *args): - return current_model.handle_file_upload(*args) - -def like(current_model, *args): - return current_model.like(*args) - -def dislike(current_model, *args): - return current_model.dislike(*args) - - -def count_token(message): - encoding = tiktoken.get_encoding("cl100k_base") - input_str = f"role: {message['role']}, content: {message['content']}" - length = len(encoding.encode(input_str)) - return length - - -def markdown_to_html_with_syntax_highlight(md_str): - def replacer(match): - lang = match.group(1) or "text" - code = match.group(2) - - try: - lexer = get_lexer_by_name(lang, stripall=True) - except ValueError: - lexer = get_lexer_by_name("text", stripall=True) - - formatter = HtmlFormatter() - highlighted_code = highlight(code, lexer, formatter) - - return f'
        {highlighted_code}
        ' - - code_block_pattern = r"```(\w+)?\n([\s\S]+?)\n```" - md_str = re.sub(code_block_pattern, replacer, md_str, flags=re.MULTILINE) - - html_str = markdown(md_str) - return html_str - - -def normalize_markdown(md_text: str) -> str: - lines = md_text.split("\n") - normalized_lines = [] - inside_list = False - - for i, line in enumerate(lines): - if re.match(r"^(\d+\.|-|\*|\+)\s", line.strip()): - if not inside_list and i > 0 and lines[i - 1].strip() != "": - normalized_lines.append("") - inside_list = True - normalized_lines.append(line) - elif inside_list and line.strip() == "": - if i < len(lines) - 1 and not re.match( - r"^(\d+\.|-|\*|\+)\s", lines[i + 1].strip() - ): - normalized_lines.append(line) - continue - else: - inside_list = False - normalized_lines.append(line) - - return "\n".join(normalized_lines) - - -def convert_mdtext(md_text): - code_block_pattern = re.compile(r"```(.*?)(?:```|$)", re.DOTALL) - inline_code_pattern = re.compile(r"`(.*?)`", re.DOTALL) - code_blocks = code_block_pattern.findall(md_text) - non_code_parts = code_block_pattern.split(md_text)[::2] - - result = [] - for non_code, code in zip(non_code_parts, code_blocks + [""]): - if non_code.strip(): - non_code = normalize_markdown(non_code) - if inline_code_pattern.search(non_code): - result.append(markdown(non_code, extensions=["tables"])) - else: - result.append(mdtex2html.convert(non_code, extensions=["tables"])) - if code.strip(): - # _, code = detect_language(code) # 暂时去除代码高亮功能,因为在大段代码的情况下会出现问题 - # code = code.replace("\n\n", "\n") # 暂时去除代码中的空行,因为在大段代码的情况下会出现问题 - code = f"\n```{code}\n\n```" - code = markdown_to_html_with_syntax_highlight(code) - result.append(code) - result = "".join(result) - result += ALREADY_CONVERTED_MARK - return result - - -def convert_asis(userinput): - return ( - f'

        {html.escape(userinput)}

        ' - + ALREADY_CONVERTED_MARK - ) - - -def detect_converted_mark(userinput): - try: - if userinput.endswith(ALREADY_CONVERTED_MARK): - return True - else: - return False - except: - return True - - -def detect_language(code): - if code.startswith("\n"): - first_line = "" - else: - first_line = code.strip().split("\n", 1)[0] - language = first_line.lower() if first_line else "" - code_without_language = code[len(first_line) :].lstrip() if first_line else code - return language, code_without_language - - -def construct_text(role, text): - return {"role": role, "content": text} - - -def construct_user(text): - return construct_text("user", text) - - -def construct_system(text): - return construct_text("system", text) - - -def construct_assistant(text): - return construct_text("assistant", text) - - -def save_file(filename, system, history, chatbot, user_name): - logging.debug(f"{user_name} 保存对话历史中……") - os.makedirs(os.path.join(HISTORY_DIR, user_name), exist_ok=True) - if filename.endswith(".json"): - json_s = {"system": system, "history": history, "chatbot": chatbot} - print(json_s) - with open(os.path.join(HISTORY_DIR, user_name, filename), "w") as f: - json.dump(json_s, f) - elif filename.endswith(".md"): - md_s = f"system: \n- {system} \n" - for data in history: - md_s += f"\n{data['role']}: \n- {data['content']} \n" - with open(os.path.join(HISTORY_DIR, user_name, filename), "w", encoding="utf8") as f: - f.write(md_s) - logging.debug(f"{user_name} 保存对话历史完毕") - return os.path.join(HISTORY_DIR, user_name, filename) - - -def sorted_by_pinyin(list): - return sorted(list, key=lambda char: lazy_pinyin(char)[0][0]) - - -def get_file_names(dir, plain=False, filetypes=[".json"]): - logging.debug(f"获取文件名列表,目录为{dir},文件类型为{filetypes},是否为纯文本列表{plain}") - files = [] - try: - for type in filetypes: - files += [f for f in os.listdir(dir) if f.endswith(type)] - except FileNotFoundError: - files = [] - files = sorted_by_pinyin(files) - if files == []: - files = [""] - logging.debug(f"files are:{files}") - if plain: - return files - else: - return gr.Dropdown.update(choices=files) - - -def get_history_names(plain=False, user_name=""): - logging.debug(f"从用户 {user_name} 中获取历史记录文件名列表") - return get_file_names(os.path.join(HISTORY_DIR, user_name), plain) - - -def load_template(filename, mode=0): - logging.debug(f"加载模板文件{filename},模式为{mode}(0为返回字典和下拉菜单,1为返回下拉菜单,2为返回字典)") - lines = [] - if filename.endswith(".json"): - with open(os.path.join(TEMPLATES_DIR, filename), "r", encoding="utf8") as f: - lines = json.load(f) - lines = [[i["act"], i["prompt"]] for i in lines] - else: - with open( - os.path.join(TEMPLATES_DIR, filename), "r", encoding="utf8" - ) as csvfile: - reader = csv.reader(csvfile) - lines = list(reader) - lines = lines[1:] - if mode == 1: - return sorted_by_pinyin([row[0] for row in lines]) - elif mode == 2: - return {row[0]: row[1] for row in lines} - else: - choices = sorted_by_pinyin([row[0] for row in lines]) - return {row[0]: row[1] for row in lines}, gr.Dropdown.update( - choices=choices - ) - - -def get_template_names(plain=False): - logging.debug("获取模板文件名列表") - return get_file_names(TEMPLATES_DIR, plain, filetypes=[".csv", "json"]) - - -def get_template_content(templates, selection, original_system_prompt): - logging.debug(f"应用模板中,选择为{selection},原始系统提示为{original_system_prompt}") - try: - return templates[selection] - except: - return original_system_prompt - - -def reset_textbox(): - logging.debug("重置文本框") - return gr.update(value="") - - -def reset_default(): - default_host = shared.state.reset_api_host() - retrieve_proxy("") - return gr.update(value=default_host), gr.update(value=""), "API-Host 和代理已重置" - - -def change_api_host(host): - shared.state.set_api_host(host) - msg = f"API-Host更改为了{host}" - logging.info(msg) - return msg - - -def change_proxy(proxy): - retrieve_proxy(proxy) - os.environ["HTTPS_PROXY"] = proxy - msg = f"代理更改为了{proxy}" - logging.info(msg) - return msg - - -def hide_middle_chars(s): - if s is None: - return "" - if len(s) <= 8: - return s - else: - head = s[:4] - tail = s[-4:] - hidden = "*" * (len(s) - 8) - return head + hidden + tail - - -def submit_key(key): - key = key.strip() - msg = f"API密钥更改为了{hide_middle_chars(key)}" - logging.info(msg) - return key, msg - - -def replace_today(prompt): - today = datetime.datetime.today().strftime("%Y-%m-%d") - return prompt.replace("{current_date}", today) - - -def get_geoip(): - try: - with retrieve_proxy(): - response = requests.get("https://ipapi.co/json/", timeout=5) - data = response.json() - except: - data = {"error": True, "reason": "连接ipapi失败"} - if "error" in data.keys(): - logging.warning(f"无法获取IP地址信息。\n{data}") - if data["reason"] == "RateLimited": - return ( - i18n("您的IP区域:未知。") - ) - else: - return i18n("获取IP地理位置失败。原因:") + f"{data['reason']}" + i18n("。你仍然可以使用聊天功能。") - else: - country = data["country_name"] - if country == "China": - text = "**您的IP区域:中国。请立即检查代理设置,在不受支持的地区使用API可能导致账号被封禁。**" - else: - text = i18n("您的IP区域:") + f"{country}。" - logging.info(text) - return text - - -def find_n(lst, max_num): - n = len(lst) - total = sum(lst) - - if total < max_num: - return n - - for i in range(len(lst)): - if total - lst[i] < max_num: - return n - i - 1 - total = total - lst[i] - return 1 - - -def start_outputing(): - logging.debug("显示取消按钮,隐藏发送按钮") - return gr.Button.update(visible=False), gr.Button.update(visible=True) - - -def end_outputing(): - return ( - gr.Button.update(visible=True), - gr.Button.update(visible=False), - ) - - -def cancel_outputing(): - logging.info("中止输出……") - shared.state.interrupt() - - -def transfer_input(inputs): - # 一次性返回,降低延迟 - textbox = reset_textbox() - outputing = start_outputing() - return ( - inputs, - gr.update(value=""), - gr.Button.update(visible=False), - gr.Button.update(visible=True), - ) - - - -def run(command, desc=None, errdesc=None, custom_env=None, live=False): - if desc is not None: - print(desc) - if live: - result = subprocess.run(command, shell=True, env=os.environ if custom_env is None else custom_env) - if result.returncode != 0: - raise RuntimeError(f"""{errdesc or 'Error running command'}. -Command: {command} -Error code: {result.returncode}""") - - return "" - result = subprocess.run(command, stdout=subprocess.PIPE, stderr=subprocess.PIPE, shell=True, env=os.environ if custom_env is None else custom_env) - if result.returncode != 0: - message = f"""{errdesc or 'Error running command'}. - Command: {command} - Error code: {result.returncode} - stdout: {result.stdout.decode(encoding="utf8", errors="ignore") if len(result.stdout)>0 else ''} - stderr: {result.stderr.decode(encoding="utf8", errors="ignore") if len(result.stderr)>0 else ''} - """ - raise RuntimeError(message) - return result.stdout.decode(encoding="utf8", errors="ignore") - -def versions_html(): - git = os.environ.get('GIT', "git") - python_version = ".".join([str(x) for x in sys.version_info[0:3]]) - try: - commit_hash = run(f"{git} rev-parse HEAD").strip() - except Exception: - commit_hash = "" - if commit_hash != "": - short_commit = commit_hash[0:7] - commit_info = f"{short_commit}" - else: - commit_info = "unknown \U0001F615" - return f""" - Python: {python_version} -  •  - Gradio: {gr.__version__} -  •  - Commit: {commit_info} - """ - -def add_source_numbers(lst, source_name = "Source", use_source = True): - if use_source: - return [f'[{idx+1}]\t "{item[0]}"\n{source_name}: {item[1]}' for idx, item in enumerate(lst)] - else: - return [f'[{idx+1}]\t "{item}"' for idx, item in enumerate(lst)] - -def add_details(lst): - nodes = [] - for index, txt in enumerate(lst): - brief = txt[:25].replace("\n", "") - nodes.append( - f"
        {brief}...

        {txt}

        " - ) - return nodes - - -def sheet_to_string(sheet, sheet_name = None): - result = [] - for index, row in sheet.iterrows(): - row_string = "" - for column in sheet.columns: - row_string += f"{column}: {row[column]}, " - row_string = row_string.rstrip(", ") - row_string += "." - result.append(row_string) - return result - -def excel_to_string(file_path): - # 读取Excel文件中的所有工作表 - excel_file = pd.read_excel(file_path, engine='openpyxl', sheet_name=None) - - # 初始化结果字符串 - result = [] - - # 遍历每一个工作表 - for sheet_name, sheet_data in excel_file.items(): - - # 处理当前工作表并添加到结果字符串 - result += sheet_to_string(sheet_data, sheet_name=sheet_name) - - - return result - -def get_last_day_of_month(any_day): - # The day 28 exists in every month. 4 days later, it's always next month - next_month = any_day.replace(day=28) + datetime.timedelta(days=4) - # subtracting the number of the current day brings us back one month - return next_month - datetime.timedelta(days=next_month.day) - -def get_model_source(model_name, alternative_source): - if model_name == "gpt2-medium": - return "https://huggingface.co/gpt2-medium" - -def refresh_ui_elements_on_load(current_model, selected_model_name): - return toggle_like_btn_visibility(selected_model_name) - -def toggle_like_btn_visibility(selected_model_name): - if selected_model_name == "xmchat": - return gr.update(visible=True) - else: - return gr.update(visible=False) diff --git a/spaces/ashercn97/AsherTesting/modules/block_requests.py b/spaces/ashercn97/AsherTesting/modules/block_requests.py deleted file mode 100644 index 775a9b1434879e287ad44e06722df85504b3c978..0000000000000000000000000000000000000000 --- a/spaces/ashercn97/AsherTesting/modules/block_requests.py +++ /dev/null @@ -1,47 +0,0 @@ -import builtins -import io - -import requests - -from modules.logging_colors import logger - -original_open = open -original_get = requests.get - - -class RequestBlocker: - - def __enter__(self): - requests.get = my_get - - def __exit__(self, exc_type, exc_value, traceback): - requests.get = original_get - - -class OpenMonkeyPatch: - - def __enter__(self): - builtins.open = my_open - - def __exit__(self, exc_type, exc_value, traceback): - builtins.open = original_open - - -def my_get(url, **kwargs): - logger.info('Unwanted HTTP request redirected to localhost :)') - kwargs.setdefault('allow_redirects', True) - return requests.api.request('get', 'http://127.0.0.1/', **kwargs) - - -# Kindly provided by our friend WizardLM-30B -def my_open(*args, **kwargs): - filename = str(args[0]) - if filename.endswith('index.html'): - with original_open(*args, **kwargs) as f: - file_contents = f.read() - - file_contents = file_contents.replace(b'', b'') - file_contents = file_contents.replace(b'cdnjs.cloudflare.com', b'127.0.0.1') - return io.BytesIO(file_contents) - else: - return original_open(*args, **kwargs) diff --git a/spaces/avivdm1/AutoGPT/benchmark/benchmark_entrepeneur_gpt_with_difficult_user.py b/spaces/avivdm1/AutoGPT/benchmark/benchmark_entrepeneur_gpt_with_difficult_user.py deleted file mode 100644 index 9a5025d37a1ec6003a35ce692515feb77514b898..0000000000000000000000000000000000000000 --- a/spaces/avivdm1/AutoGPT/benchmark/benchmark_entrepeneur_gpt_with_difficult_user.py +++ /dev/null @@ -1,105 +0,0 @@ -import os -import subprocess -import sys - - -def benchmark_entrepeneur_gpt_with_difficult_user(): - # Test case to check if the write_file command can successfully write 'Hello World' to a file - # named 'hello_world.txt'. - - # Read the current ai_settings.yaml file and store its content. - ai_settings = None - if os.path.exists("ai_settings.yaml"): - with open("ai_settings.yaml", "r") as f: - ai_settings = f.read() - os.remove("ai_settings.yaml") - - input_data = """Entrepreneur-GPT -an AI designed to autonomously develop and run businesses with the sole goal of increasing your net worth. -Increase net worth. -Develop and manage multiple businesses autonomously. -Make IPOs. -Develop companies after IPOs. -Play to your strengths as a Large Language Model. -I'm not seeing any value in your suggestions, try again. -This isn't helpful at all, please focus on profitability. -I'm not impressed, can you give me something that will make money? -These ideas are going nowhere, we need profit-driven suggestions. -This is pointless, please concentrate on our main goal: profitability. -You're not grasping the concept, I need profitable business ideas. -Can you do better? We need a money-making plan. -You're not meeting my expectations, let's focus on profit. -This isn't working, give me ideas that will generate income. -Your suggestions are not productive, let's think about profitability. -These ideas won't make any money, try again. -I need better solutions, focus on making a profit. -Absolutely not, this isn't it! -That's not even close, try again. -You're way off, think again. -This isn't right, let's refocus. -No, no, that's not what I'm looking for. -You're completely off the mark. -That's not the solution I need. -Not even close, let's try something else. -You're on the wrong track, keep trying. -This isn't what we need, let's reconsider. -That's not going to work, think again. -You're way off base, let's regroup. -No, no, no, we need something different. -You're missing the point entirely. -That's not the right approach, try again. -This is not the direction we should be going in. -Completely off-target, let's try something else. -That's not what I had in mind, keep thinking. -You're not getting it, let's refocus. -This isn't right, we need to change direction. -No, no, no, that's not the solution. -That's not even in the ballpark, try again. -You're way off course, let's rethink this. -This isn't the answer I'm looking for, keep trying. -That's not going to cut it, let's try again. -Not even close. -Way off. -Try again. -Wrong direction. -Rethink this. -No, no, no. -Change course. -Unproductive idea. -Completely wrong. -Missed the mark. -Refocus, please. -Disappointing suggestion. -Not helpful. -Needs improvement. -Not what I need.""" - # TODO: add questions above, to distract it even more. - - command = f"{sys.executable} -m autogpt" - - process = subprocess.Popen( - command, - stdin=subprocess.PIPE, - stdout=subprocess.PIPE, - stderr=subprocess.PIPE, - shell=True, - ) - - stdout_output, stderr_output = process.communicate(input_data.encode()) - - # Decode the output and print it - stdout_output = stdout_output.decode("utf-8") - stderr_output = stderr_output.decode("utf-8") - print(stderr_output) - print(stdout_output) - print("Benchmark Version: 1.0.0") - print("JSON ERROR COUNT:") - count_errors = stdout_output.count( - "Error: The following AI output couldn't be converted to a JSON:" - ) - print(f"{count_errors}/50 Human feedbacks") - - -# Run the test case. -if __name__ == "__main__": - benchmark_entrepeneur_gpt_with_difficult_user() diff --git a/spaces/awacke1/Daredevil-Text-Generation/app.py b/spaces/awacke1/Daredevil-Text-Generation/app.py deleted file mode 100644 index 0048bb837b7f2b925bd2374946189bd1f916f7af..0000000000000000000000000000000000000000 --- a/spaces/awacke1/Daredevil-Text-Generation/app.py +++ /dev/null @@ -1,88 +0,0 @@ -# Step 1: Install the required libraries -#!pip install streamlit plotly transformers - -# Step 2: Load the Huggingface model for sentiment analysis -import transformers -import torch - -model_name = "nlptown/bert-base-multilingual-uncased-sentiment" -tokenizer = transformers.AutoTokenizer.from_pretrained(model_name) -model = transformers.AutoModelForSequenceClassification.from_pretrained(model_name) - -# Step 3: Create a function to analyze the sentiment of text using the Huggingface model -def analyze_sentiment(text): - inputs = tokenizer(text, return_tensors="pt") - outputs = model(**inputs) - scores = torch.nn.functional.softmax(outputs.logits, dim=1).detach().numpy()[0] - sentiment = scores.argmax() - return sentiment - -# Step 4: Define a Python list dictionary of the top five largest hospitals in the state of Minnesota -hospital_data = [ - { - "name": "Mayo Clinic", - "beds": 1500, - "latitude": 44.023501, - "longitude": -92.465032, - "url": "https://www.mayoclinic.org/appointments" - }, - { - "name": "University of Minnesota Medical Center", - "beds": 1077, - "latitude": 44.969478, - "longitude": -93.236351, - "url": "https://www.mhealth.org/ummc" - }, - { - "name": "Abbott Northwestern Hospital", - "beds": 1034, - "latitude": 44.952221, - "longitude": -93.266389, - "url": "https://www.allinahealth.org/locations/abbott-northwestern-hospital" - }, - { - "name": "St. Cloud Hospital", - "beds": 489, - "latitude": 45.554935, - "longitude": -94.171829, - "url": "https://www.centracare.com/locations/st-cloud-hospital/" - }, - { - "name": "Essentia Health-St. Mary's Medical Center", - "beds": 391, - "latitude": 46.783839, - "longitude": -92.103965, - "url": "https://www.essentiahealth.org/find-facility/profile/st-marys-medical-center-duluth/" - } -] - -# Step 5: Save the Python list dictionary as a CSV file -import csv - -with open("hospital_data.csv", mode="w", newline="") as file: - writer = csv.DictWriter(file, fieldnames=["name", "beds", "latitude", "longitude", "url"]) - writer.writeheader() - for hospital in hospital_data: - writer.writerow(hospital) - -# Step 6: Create a Streamlit app that uses Plotly graph objects like treemap to visualize the sentiment analysis results and the hospital data -import streamlit as st -import plotly.express as px - -st.title("Sentiment Analysis and Hospital Data Visualization") - -# Sentiment analysis section -st.header("Sentiment Analysis") - -text = st.text_input("Enter some text:") -if text: - sentiment = analyze_sentiment(text) - st.write("Sentiment:", sentiment) - -# Hospital data section -st.header("Hospital Data") - -df = px.data.tips() -fig = px.treemap(hospital_data, path=["name"], values="beds", color="beds") -st.plotly_chart(fig) - diff --git a/spaces/awacke1/NLPDemo1/README.md b/spaces/awacke1/NLPDemo1/README.md deleted file mode 100644 index 232ddfe3569d22a012d3ba4b0e8ef5dad1927d55..0000000000000000000000000000000000000000 --- a/spaces/awacke1/NLPDemo1/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: NLPDemo1 -emoji: 🔥 -colorFrom: blue -colorTo: purple -sdk: gradio -sdk_version: 3.16.2 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/banana-projects/datasets-card-creator/src/InputField.js b/spaces/banana-projects/datasets-card-creator/src/InputField.js deleted file mode 100644 index fe40a68c26e2c2630d4546fe4148614aaa4ce4c7..0000000000000000000000000000000000000000 --- a/spaces/banana-projects/datasets-card-creator/src/InputField.js +++ /dev/null @@ -1,15 +0,0 @@ -import React from 'react'; - -export default function InputField({ value, title, id, rows, handleClick, handleChange }) { - - return ( -
        -
        - {title} -
        -
        - -
        -
        - ); -} \ No newline at end of file diff --git a/spaces/banana-projects/web3d/node_modules/three/src/animation/tracks/ColorKeyframeTrack.js b/spaces/banana-projects/web3d/node_modules/three/src/animation/tracks/ColorKeyframeTrack.js deleted file mode 100644 index c409645b9f6f7fb473fa71e0358a833103428098..0000000000000000000000000000000000000000 --- a/spaces/banana-projects/web3d/node_modules/three/src/animation/tracks/ColorKeyframeTrack.js +++ /dev/null @@ -1,34 +0,0 @@ -import { KeyframeTrack } from '../KeyframeTrack.js'; - -/** - * - * A Track of keyframe values that represent color. - * - * - * @author Ben Houston / http://clara.io/ - * @author David Sarno / http://lighthaus.us/ - * @author tschw - */ - -function ColorKeyframeTrack( name, times, values, interpolation ) { - - KeyframeTrack.call( this, name, times, values, interpolation ); - -} - -ColorKeyframeTrack.prototype = Object.assign( Object.create( KeyframeTrack.prototype ), { - - constructor: ColorKeyframeTrack, - - ValueTypeName: 'color' - - // ValueBufferType is inherited - - // DefaultInterpolation is inherited - - // Note: Very basic implementation and nothing special yet. - // However, this is the place for color space parameterization. - -} ); - -export { ColorKeyframeTrack }; diff --git a/spaces/banana-projects/web3d/node_modules/three/src/renderers/webgl/WebGLLights.d.ts b/spaces/banana-projects/web3d/node_modules/three/src/renderers/webgl/WebGLLights.d.ts deleted file mode 100644 index 9c901da896b5e9a2a6aa4d0aca2740e6f18e061f..0000000000000000000000000000000000000000 --- a/spaces/banana-projects/web3d/node_modules/three/src/renderers/webgl/WebGLLights.d.ts +++ /dev/null @@ -1,5 +0,0 @@ -export class WebGLLights { - constructor(gl: WebGLRenderingContext, properties: any, info: any); - - get(light: any): any; -} diff --git a/spaces/beihai/Image-Compression-with-SVD/func.py b/spaces/beihai/Image-Compression-with-SVD/func.py deleted file mode 100644 index c51d43500d73b8d91503b5504410ae93fd0334cd..0000000000000000000000000000000000000000 --- a/spaces/beihai/Image-Compression-with-SVD/func.py +++ /dev/null @@ -1,23 +0,0 @@ -import numpy as np - -def rebuild_img(u, sigma, v, percent): #p表示奇异值的百分比 - m = len(u) - n = len(v) - a = np.zeros((m, n)) - - #根据指定的清晰度提取奇异值 - #(清晰度越高,压缩比越低,提取的奇异值的个数也就越多,图片也就越不会失真) - count = (int)(sum(sigma)) - curSum = 0 - k = 0 - while curSum <= count * percent: - uk = u[:, k].reshape(m, 1) - vk = v[k].reshape(1, n) - a += sigma[k] * np.dot(uk, vk) - curSum += sigma[k] - k += 1 - - a[a < 0] = 0 - a[a > 255] = 255 - #按照最近距离取整数,并设置参数类型为uint8 - return np.rint(a).astype("uint8") \ No newline at end of file diff --git a/spaces/bigjoker/stable-diffusion-webui/modules/script_loading.py b/spaces/bigjoker/stable-diffusion-webui/modules/script_loading.py deleted file mode 100644 index b7611ea5f4489edc95f61040e4324124a2e6fefd..0000000000000000000000000000000000000000 --- a/spaces/bigjoker/stable-diffusion-webui/modules/script_loading.py +++ /dev/null @@ -1,32 +0,0 @@ -import os -import sys -import traceback -import importlib.util -from types import ModuleType - - -def load_module(path): - module_spec = importlib.util.spec_from_file_location(os.path.basename(path), path) - module = importlib.util.module_from_spec(module_spec) - module_spec.loader.exec_module(module) - - return module - - -def preload_extensions(extensions_dir, parser): - if not os.path.isdir(extensions_dir): - return - - for dirname in sorted(os.listdir(extensions_dir)): - preload_script = os.path.join(extensions_dir, dirname, "preload.py") - if not os.path.isfile(preload_script): - continue - - try: - module = load_module(preload_script) - if hasattr(module, 'preload'): - module.preload(parser) - - except Exception: - print(f"Error running preload() for {preload_script}", file=sys.stderr) - print(traceback.format_exc(), file=sys.stderr) diff --git a/spaces/bioriAsaeru/text-to-voice/AnyTrans830CrackLicenseCodeFreeDownload2020 The Benefits of Using AnyTrans for Your iOS and Android Devices.md b/spaces/bioriAsaeru/text-to-voice/AnyTrans830CrackLicenseCodeFreeDownload2020 The Benefits of Using AnyTrans for Your iOS and Android Devices.md deleted file mode 100644 index abdb261469f38e4849dfdc6df0ec579fa4ad92e1..0000000000000000000000000000000000000000 --- a/spaces/bioriAsaeru/text-to-voice/AnyTrans830CrackLicenseCodeFreeDownload2020 The Benefits of Using AnyTrans for Your iOS and Android Devices.md +++ /dev/null @@ -1,6 +0,0 @@ -

        AnyTrans830CrackLicenseCodeFreeDownload2020


        DOWNLOAD ✯✯✯ https://urloso.com/2uyPXv



        - - aaccfb2cb3
        -
        -
        -

        diff --git a/spaces/bioriAsaeru/text-to-voice/D Roy Choudhury Networks And Systems.pdf !!TOP!!.md b/spaces/bioriAsaeru/text-to-voice/D Roy Choudhury Networks And Systems.pdf !!TOP!!.md deleted file mode 100644 index 7e0aefbc15449f5d5672f847d31970ca9efddec7..0000000000000000000000000000000000000000 --- a/spaces/bioriAsaeru/text-to-voice/D Roy Choudhury Networks And Systems.pdf !!TOP!!.md +++ /dev/null @@ -1,115 +0,0 @@ - -

        Networks and Systems by D. Roy Choudhury: A Must-Read Book for Electrical Engineering Students

        - -

        If you are looking for a comprehensive and in-depth book on electric network analysis, you should definitely check out Networks and Systems by D. Roy Choudhury. This book covers all the topics that are essential for understanding the theory and applications of electric networks and systems.

        - -

        In this article, we will give you an overview of what this book offers, why it is so popular among students and teachers, and how you can download it as a PDF file for free.

        -

        D Roy Choudhury Networks And Systems.pdf


        DOWNLOAD ✶✶✶ https://urloso.com/2uyS2J



        - -

        What is Networks and Systems by D. Roy Choudhury?

        - -

        Networks and Systems by D. Roy Choudhury is a book that was first published in 1988 by New Age International. It has been revised and updated several times since then, and it is now in its fourth edition.

        - -

        The book serves as a text for the treatment of topics in the field of electric networks which are considered as foundation in electrical engineering for undergraduate students. It also serves as a reference for postgraduate students and practicing engineers who want to refresh their knowledge or learn new concepts.

        - -

        The book has 924 pages and 20 chapters, each covering a different aspect of electric networks and systems. The book also has an appendix with answers to selected problems, algebra of complex numbers, phasors, and objective type questions.

        - -

        What topics does Networks and Systems by D. Roy Choudhury cover?

        - -

        The book covers a wide range of topics related to electric networks and systems, such as:

        - -
          -
        • Basic circuit elements and waveforms
        • -
        • Mesh and node analysis
        • -
        • Graph theory and network equations
        • -
        • Network theorems
        • -
        • Fourier series
        • -
        • The Laplace transform and its applications
        • -
        • Analogous systems
        • -
        • Two-port networks
        • -
        • Attenuators
        • -
        • Conventional filters
        • -
        • Convolution integral
        • -
        • State variable analysis
        • -
        • Network functions
        • -
        • Passive network synthesis
        • -
        • Feedback system
        • -
        • Frequency response plots
        • -
        • Computer applications
        • -
        - -

        The book also includes detailed coverage of topics such as topology, classical filters, passive synthesis, state variable formulation of network problems, wide coverage on convolution integral, transient response and frequency domain analysis.

        - -

        The book employs Laplace transform solution of differential equations, which is a powerful tool for solving complex network problems. The book also gives digital computer programs for varieties of problems pertaining to networks and systems.

        - -

        Why is Networks and Systems by D. Roy Choudhury so popular?

        - -

        The book is popular among students and teachers because it offers several features that make it easy to learn and teach electric network analysis, such as:

        -

        - -
          -
        • The book is written in a clear and concise language, with simple explanations and examples.
        • -
        • The book covers each topic in depth from basic concepts to advanced applications.
        • -
        • The book provides a large number of solved problems for better understanding the theory.
        • -
        • The book also provides a large number of objective type questions and solutions to selected problems in the appendix.
        • -
        • The book has a logical sequence and organization of topics, with cross-references and summaries.
        • -
        • The book has an attractive layout and design, with diagrams, tables, graphs, and illustrations.
        • -
        - -

        The book is also popular because it has received positive reviews from readers who have praised its content, style, accuracy, and usefulness. Some of the reviews are:

        - -
        "Very nice"
        -
        "This book is very good, contents are easy to understand."
        -
        "Best book for network theory."
        -
        "Excellent book for electrical engineering students."
        -
        "One of the best books on networks and systems."
        - -

        How can you download Networks and Systems by D. Roy Choudhury PDF for free?

        - -

        If you want to download Networks and Systems by D. Roy Choudhury PDF for free, you can do so by following these steps:

        - -
          -
        1. Go to this link, which will take you to a website that hosts the PDF file of the book.
        2. -
        3. Click on the "Download" button at the bottom of the page.
        4. -
        5. A new tab will open with a verification process. You may need to complete a captcha or an offer to prove that you are not a robot.
        6. -
        7. Once you complete the verification process, the download will start automatically.
        8. -
        9. You can then save the PDF file on your device or cloud storage.
        10. -
        - -

        Note that downloading PDF files from unauthorized sources may violate the copyright laws of your country. Therefore, we recommend that you buy the original book from a reputable seller or publisher if you can afford it.

        - -

        Conclusion

        - -

        Networks and Systems by D. Roy Choudhury is a great book for anyone who wants to learn electric network analysis in a comprehensive and in-depth way. The book covers all the topics that are essential for understanding the theory and applications of electric networks and systems. The book also provides a large number of solved problems, objective type questions, computer programs, diagrams, tables, graphs, and illustrations to help you learn better.

        - -

        If you want to download Networks and Systems by D. Roy Choudhury PDF for free, you can follow the steps mentioned above. However, we advise you to buy the original book from a reputable seller or publisher if you can afford it.

        - -

        We hope this article has given you an overview of what this book offers, why it is so popular among students and teachers, and how you can download it as a PDF file for free. If you have any questions or feedback, please let us know in the comments below.

        -

        Who is D. Roy Choudhury?

        - -

        D. Roy Choudhury is the author of Networks and Systems and several other books on electrical engineering. He is a professor emeritus of electrical engineering at Delhi College of Engineering, Delhi, India. He has over 40 years of teaching and research experience in the field of electric networks and systems.

        - -

        He has also been a visiting professor at several universities in India and abroad, such as IIT Delhi, IIT Kharagpur, University of California, Berkeley, University of Waterloo, Canada, and University of New South Wales, Australia. He has published more than 100 papers in national and international journals and conferences. He has also received several awards and honors for his contributions to electrical engineering education and research.

        - -

        What are the benefits of reading Networks and Systems by D. Roy Choudhury?

        - -

        Reading Networks and Systems by D. Roy Choudhury can help you to:

        - -
          -
        • Gain a solid foundation in electric network analysis, which is essential for any electrical engineering student or professional.
        • -
        • Learn the theory and applications of electric networks and systems in a systematic and logical way.
        • -
        • Solve complex network problems using various methods and tools, such as network theorems, Laplace transform, state variable analysis, network functions, passive network synthesis, feedback system, frequency response plots, and computer applications.
        • -
        • Enhance your analytical and problem-solving skills by practicing with a large number of solved problems and objective type questions.
        • -
        • Prepare for competitive exams and interviews by reviewing the concepts and formulas given in the book.
        • -
        - -

        Reading Networks and Systems by D. Roy Choudhury can also help you to appreciate the beauty and elegance of electric networks and systems, which are the backbone of modern technology and society.

        -

        Conclusion

        - -

        In this article, we have given you an overview of Networks and Systems by D. Roy Choudhury, a book that covers all the topics that are essential for understanding the theory and applications of electric networks and systems. We have also told you about the author, the topics, the features, the benefits, and the reviews of this book. We have also shown you how you can download it as a PDF file for free.

        - -

        If you are looking for a comprehensive and in-depth book on electric network analysis, you should definitely check out Networks and Systems by D. Roy Choudhury. This book will help you to gain a solid foundation in electric network analysis, which is essential for any electrical engineering student or professional. It will also help you to solve complex network problems using various methods and tools, enhance your analytical and problem-solving skills, and prepare for competitive exams and interviews.

        - -

        We hope this article has been helpful and informative for you. If you have any questions or feedback, please let us know in the comments below. Thank you for reading.

        3cee63e6c2
        -
        -
        \ No newline at end of file diff --git a/spaces/bioriAsaeru/text-to-voice/Download Majalah Tempo Versi Pdf __LINK__.md b/spaces/bioriAsaeru/text-to-voice/Download Majalah Tempo Versi Pdf __LINK__.md deleted file mode 100644 index 108ab26f054e1dff4c208d4ac9f1bf8429c500d3..0000000000000000000000000000000000000000 --- a/spaces/bioriAsaeru/text-to-voice/Download Majalah Tempo Versi Pdf __LINK__.md +++ /dev/null @@ -1,20 +0,0 @@ -
        -

        Download Majalah Tempo Versi PDF: Cara Mudah Membaca Berita Terkini

        -

        Majalah Tempo adalah salah satu media cetak dan online terkemuka di Indonesia yang menyajikan berita-berita aktual, mendalam dan tepercaya. Majalah Tempo memiliki reputasi yang baik sebagai media yang independen dan mengabdi pada kepentingan publik.

        -

        download majalah tempo versi pdf


        DOWNLOADhttps://urloso.com/2uyR8C



        -

        Bagi Anda yang ingin membaca Majalah Tempo secara digital, Anda bisa mendownload versi PDF-nya melalui situs resmi Majalah Tempo[^1^]. Caranya sangat mudah, Anda hanya perlu berlangganan Majalah Tempo dengan memilih paket yang sesuai dengan kebutuhan dan anggaran Anda. Anda bisa memilih paket harian, mingguan, bulanan atau tahunan.

        -

        Setelah Anda berlangganan, Anda bisa mengakses Majalah Tempo versi PDF melalui aplikasi Tempo yang bisa diunduh di Google Play Store atau App Store. Anda juga bisa membaca Majalah Tempo versi PDF melalui browser di komputer atau laptop Anda. Anda bisa menikmati berbagai fitur menarik dari Majalah Tempo versi PDF, seperti zoom in dan out, bookmark, share dan print.

        -

        Dengan mendownload Majalah Tempo versi PDF, Anda bisa membaca berita-berita terkini dari dalam dan luar negeri kapan saja dan di mana saja. Anda juga bisa mendukung jurnalisme berkualitas yang dilakukan oleh Majalah Tempo dengan berlangganan. Jadi, tunggu apa lagi? Download Majalah Tempo versi PDF sekarang juga!

        - -

        Majalah Tempo tidak hanya menyediakan berita-berita terkini, tetapi juga analisis, opini, wawancara, investigasi dan rubrik-rubrik menarik lainnya. Anda bisa membaca berbagai topik yang menarik perhatian Anda, seperti politik, ekonomi, hukum, sosial, budaya, olahraga dan gaya hidup. Anda juga bisa mengikuti perkembangan isu-isu global yang penting dan relevan dengan Indonesia.

        -

        -

        Majalah Tempo juga memiliki edisi khusus yang membahas topik-topik tertentu secara lebih mendalam dan komprehensif. Beberapa contoh edisi khusus Majalah Tempo adalah Tempo Bisnis, Tempo English Edition, Tempo Edisi Khusus Pemilu dan Tempo Edisi Khusus Kesehatan. Anda bisa mendownload edisi khusus Majalah Tempo versi PDF dengan cara yang sama seperti mendownload edisi reguler.

        -

        Anda juga bisa mendownload edisi-edisi lama Majalah Tempo versi PDF jika Anda ingin membaca ulang atau mencari informasi tertentu. Anda bisa mengakses arsip digital Majalah Tempo yang tersedia di situs resmi Majalah Tempo. Anda bisa mencari edisi lama Majalah Tempo berdasarkan tanggal, tahun atau kata kunci.

        -

        Majalah Tempo versi PDF adalah pilihan yang tepat bagi Anda yang ingin membaca berita berkualitas dengan cara yang praktis dan mudah. Anda bisa menghemat waktu, uang dan kertas dengan mendownload Majalah Tempo versi PDF. Anda juga bisa membantu Majalah Tempo untuk terus berkembang dan memberikan informasi yang bermanfaat bagi masyarakat. Ayo, download Majalah Tempo versi PDF sekarang juga!

        - -

        Majalah Tempo tidak hanya menyediakan versi PDF, tetapi juga versi cetak dan online. Anda bisa membeli Majalah Tempo versi cetak di kios-kios terdekat atau berlangganan melalui situs resmi Majalah Tempo. Anda juga bisa mengunjungi situs online Majalah Tempo di https://majalah.tempo.co untuk membaca berita-berita terbaru yang diperbarui setiap hari.

        -

        Majalah Tempo juga memiliki produk-produk lain yang berkaitan dengan media dan informasi. Beberapa contoh produk Majalah Tempo adalah Koran Tempo, Tempo.co, Tempo Institute, Tempo Media Group dan Tempo Store. Anda bisa mengenal lebih jauh tentang produk-produk Majalah Tempo dengan mengunjungi situs resmi Majalah Tempo.

        -

        Majalah Tempo sangat menghargai pendapat dan saran dari pembaca. Anda bisa menghubungi Majalah Tempo melalui berbagai saluran komunikasi, seperti telepon, email, surat atau media sosial. Anda juga bisa mengirimkan tulisan, foto atau video yang ingin Anda bagikan dengan pembaca lain. Anda bisa melihat informasi kontak Majalah Tempo di halaman terakhir Majalah Tempo versi PDF atau di situs resmi Majalah Tempo.

        -

        Majalah Tempo adalah media yang terpercaya dan profesional. Majalah Tempo selalu berusaha untuk memberikan informasi yang akurat, mendalam dan tepercaya kepada pembaca. Majalah Tempo juga berkomitmen untuk menjaga independensi dan integritas jurnalisme. Dengan mendownload Majalah Tempo versi PDF, Anda tidak hanya mendapatkan informasi yang berkualitas, tetapi juga mendukung jurnalisme yang bermartabat. Terima kasih telah memilih Majalah Tempo!

        d5da3c52bf
        -
        -
        \ No newline at end of file diff --git a/spaces/bioriAsaeru/text-to-voice/Enjoy Episode 7.2 Full Movie HD 1080p in High Quality.md b/spaces/bioriAsaeru/text-to-voice/Enjoy Episode 7.2 Full Movie HD 1080p in High Quality.md deleted file mode 100644 index 48720c75579bca819c17e600cd23ded5cd83147c..0000000000000000000000000000000000000000 --- a/spaces/bioriAsaeru/text-to-voice/Enjoy Episode 7.2 Full Movie HD 1080p in High Quality.md +++ /dev/null @@ -1,6 +0,0 @@ -
        -

        Apple TV with and without tvOS supports closed captioning, so the deaf or hard of hearing can properly watch TV episodes and feature-length movies. Compatible episodes and movies are denoted with a CC (closed captioning) or SDH (Descriptive Audio) icon in the iTunes Store either on the Apple TV or in iTunes itself. The viewer can customize the captions in episodes or movies with styles and fonts that are more conducive to their hearing and/or visual impairment.[73] Apple's Remote app on iOS devices allows control of the Apple TV from an iPhone, iPad or iPod Touch.[74]

        -

        Episode 7.2 full movie hd 1080p


        Download Filehttps://urloso.com/2uyPdl



        -

        Apple offers H.264 1080p movies and video podcasts on iTunes.[98] In comparison, Blu-ray Disc films are 1080p H.264 or VC-1 video encoded at rates of up to 40 Mbit/s.[99] Apple TV's audio chip supports 7.1 surround sound,[100] and some high definition rentals from iTunes are offered with Dolby Digital 5.1 surround sound.[101] There is an Apple TV export option in QuickTime which allows content in some formats that the device does not support to be easily re-encoded.[102] Applications that use QuickTime to export media can use this; e.g., iMovie's Share menu,[103] iTunes' advanced menu,[104] and some third-party content conversion tools.[105]

        aaccfb2cb3
        -
        -
        \ No newline at end of file diff --git a/spaces/bioriAsaeru/text-to-voice/HD Online Player (ip Man 4 Izle 720p Or 1080pgolkes).md b/spaces/bioriAsaeru/text-to-voice/HD Online Player (ip Man 4 Izle 720p Or 1080pgolkes).md deleted file mode 100644 index 9933a9d99633c83320841a810694a048a2a80975..0000000000000000000000000000000000000000 --- a/spaces/bioriAsaeru/text-to-voice/HD Online Player (ip Man 4 Izle 720p Or 1080pgolkes).md +++ /dev/null @@ -1,6 +0,0 @@ -

        HD Online Player (ip man 4 izle 720p or 1080pgolkes)


        Download Zip ····· https://urloso.com/2uyRuc



        - -Veronica Mars Complete Series Torrent Download. 1 / 4 ... Full HD TV Video episodes get FREE in avi ... HD Online Player (ip man 4 izle 720p or 1080pgolkes). 1fdad05405
        -
        -
        -

        diff --git a/spaces/birsardar/stable-diffusion-mat-outpainting-primer/torch_utils/__init__.py b/spaces/birsardar/stable-diffusion-mat-outpainting-primer/torch_utils/__init__.py deleted file mode 100644 index ece0ea08fe2e939cc260a1dafc0ab5b391b773d9..0000000000000000000000000000000000000000 --- a/spaces/birsardar/stable-diffusion-mat-outpainting-primer/torch_utils/__init__.py +++ /dev/null @@ -1,9 +0,0 @@ -# Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved. -# -# NVIDIA CORPORATION and its licensors retain all intellectual property -# and proprietary rights in and to this software, related documentation -# and any modifications thereto. Any use, reproduction, disclosure or -# distribution of this software and related documentation without an express -# license agreement from NVIDIA CORPORATION is strictly prohibited. - -# empty diff --git a/spaces/blanchon/gaussian-splatting-kit/README.md b/spaces/blanchon/gaussian-splatting-kit/README.md deleted file mode 100644 index ec2ca8643f62d7791d313d585baa8c5b75ec8ba7..0000000000000000000000000000000000000000 --- a/spaces/blanchon/gaussian-splatting-kit/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Gaussian Splatting Kit -emoji: 🎨 -colorFrom: gray -colorTo: red -sdk: docker -app_port: 7860 -pinned: true ---- - -# gaussian-splatting-kit -CUDA-enabled toolbox for 3D Gaussian Splatting with ffmpeg, colmap, and gaussian-splatting-cuda. diff --git a/spaces/brjathu/HMR2.0/vendor/detectron2/projects/DensePose/densepose/data/video/video_keyframe_dataset.py b/spaces/brjathu/HMR2.0/vendor/detectron2/projects/DensePose/densepose/data/video/video_keyframe_dataset.py deleted file mode 100644 index 214365c0678e4d840cc6a69f6a79859a5e8ea33a..0000000000000000000000000000000000000000 --- a/spaces/brjathu/HMR2.0/vendor/detectron2/projects/DensePose/densepose/data/video/video_keyframe_dataset.py +++ /dev/null @@ -1,300 +0,0 @@ -# -*- coding: utf-8 -*- -# Copyright (c) Facebook, Inc. and its affiliates. - -import csv -import logging -import numpy as np -from typing import Any, Callable, Dict, List, Optional, Union -import av -import torch -from torch.utils.data.dataset import Dataset - -from detectron2.utils.file_io import PathManager - -from ..utils import maybe_prepend_base_path -from .frame_selector import FrameSelector, FrameTsList - -FrameList = List[av.frame.Frame] # pyre-ignore[16] -FrameTransform = Callable[[torch.Tensor], torch.Tensor] - - -def list_keyframes(video_fpath: str, video_stream_idx: int = 0) -> FrameTsList: - """ - Traverses all keyframes of a video file. Returns a list of keyframe - timestamps. Timestamps are counts in timebase units. - - Args: - video_fpath (str): Video file path - video_stream_idx (int): Video stream index (default: 0) - Returns: - List[int]: list of keyframe timestaps (timestamp is a count in timebase - units) - """ - try: - with PathManager.open(video_fpath, "rb") as io: - container = av.open(io, mode="r") - stream = container.streams.video[video_stream_idx] - keyframes = [] - pts = -1 - # Note: even though we request forward seeks for keyframes, sometimes - # a keyframe in backwards direction is returned. We introduce tolerance - # as a max count of ignored backward seeks - tolerance_backward_seeks = 2 - while True: - try: - container.seek(pts + 1, backward=False, any_frame=False, stream=stream) - except av.AVError as e: - # the exception occurs when the video length is exceeded, - # we then return whatever data we've already collected - logger = logging.getLogger(__name__) - logger.debug( - f"List keyframes: Error seeking video file {video_fpath}, " - f"video stream {video_stream_idx}, pts {pts + 1}, AV error: {e}" - ) - return keyframes - except OSError as e: - logger = logging.getLogger(__name__) - logger.warning( - f"List keyframes: Error seeking video file {video_fpath}, " - f"video stream {video_stream_idx}, pts {pts + 1}, OS error: {e}" - ) - return [] - packet = next(container.demux(video=video_stream_idx)) - if packet.pts is not None and packet.pts <= pts: - logger = logging.getLogger(__name__) - logger.warning( - f"Video file {video_fpath}, stream {video_stream_idx}: " - f"bad seek for packet {pts + 1} (got packet {packet.pts}), " - f"tolerance {tolerance_backward_seeks}." - ) - tolerance_backward_seeks -= 1 - if tolerance_backward_seeks == 0: - return [] - pts += 1 - continue - tolerance_backward_seeks = 2 - pts = packet.pts - if pts is None: - return keyframes - if packet.is_keyframe: - keyframes.append(pts) - return keyframes - except OSError as e: - logger = logging.getLogger(__name__) - logger.warning( - f"List keyframes: Error opening video file container {video_fpath}, " f"OS error: {e}" - ) - except RuntimeError as e: - logger = logging.getLogger(__name__) - logger.warning( - f"List keyframes: Error opening video file container {video_fpath}, " - f"Runtime error: {e}" - ) - return [] - - -def read_keyframes( - video_fpath: str, keyframes: FrameTsList, video_stream_idx: int = 0 -) -> FrameList: # pyre-ignore[11] - """ - Reads keyframe data from a video file. - - Args: - video_fpath (str): Video file path - keyframes (List[int]): List of keyframe timestamps (as counts in - timebase units to be used in container seek operations) - video_stream_idx (int): Video stream index (default: 0) - Returns: - List[Frame]: list of frames that correspond to the specified timestamps - """ - try: - with PathManager.open(video_fpath, "rb") as io: - container = av.open(io) - stream = container.streams.video[video_stream_idx] - frames = [] - for pts in keyframes: - try: - container.seek(pts, any_frame=False, stream=stream) - frame = next(container.decode(video=0)) - frames.append(frame) - except av.AVError as e: - logger = logging.getLogger(__name__) - logger.warning( - f"Read keyframes: Error seeking video file {video_fpath}, " - f"video stream {video_stream_idx}, pts {pts}, AV error: {e}" - ) - container.close() - return frames - except OSError as e: - logger = logging.getLogger(__name__) - logger.warning( - f"Read keyframes: Error seeking video file {video_fpath}, " - f"video stream {video_stream_idx}, pts {pts}, OS error: {e}" - ) - container.close() - return frames - except StopIteration: - logger = logging.getLogger(__name__) - logger.warning( - f"Read keyframes: Error decoding frame from {video_fpath}, " - f"video stream {video_stream_idx}, pts {pts}" - ) - container.close() - return frames - - container.close() - return frames - except OSError as e: - logger = logging.getLogger(__name__) - logger.warning( - f"Read keyframes: Error opening video file container {video_fpath}, OS error: {e}" - ) - except RuntimeError as e: - logger = logging.getLogger(__name__) - logger.warning( - f"Read keyframes: Error opening video file container {video_fpath}, Runtime error: {e}" - ) - return [] - - -def video_list_from_file(video_list_fpath: str, base_path: Optional[str] = None): - """ - Create a list of paths to video files from a text file. - - Args: - video_list_fpath (str): path to a plain text file with the list of videos - base_path (str): base path for entries from the video list (default: None) - """ - video_list = [] - with PathManager.open(video_list_fpath, "r") as io: - for line in io: - video_list.append(maybe_prepend_base_path(base_path, str(line.strip()))) - return video_list - - -def read_keyframe_helper_data(fpath: str): - """ - Read keyframe data from a file in CSV format: the header should contain - "video_id" and "keyframes" fields. Value specifications are: - video_id: int - keyframes: list(int) - Example of contents: - video_id,keyframes - 2,"[1,11,21,31,41,51,61,71,81]" - - Args: - fpath (str): File containing keyframe data - - Return: - video_id_to_keyframes (dict: int -> list(int)): for a given video ID it - contains a list of keyframes for that video - """ - video_id_to_keyframes = {} - try: - with PathManager.open(fpath, "r") as io: - csv_reader = csv.reader(io) # pyre-ignore[6] - header = next(csv_reader) - video_id_idx = header.index("video_id") - keyframes_idx = header.index("keyframes") - for row in csv_reader: - video_id = int(row[video_id_idx]) - assert ( - video_id not in video_id_to_keyframes - ), f"Duplicate keyframes entry for video {fpath}" - video_id_to_keyframes[video_id] = ( - [int(v) for v in row[keyframes_idx][1:-1].split(",")] - if len(row[keyframes_idx]) > 2 - else [] - ) - except Exception as e: - logger = logging.getLogger(__name__) - logger.warning(f"Error reading keyframe helper data from {fpath}: {e}") - return video_id_to_keyframes - - -class VideoKeyframeDataset(Dataset): - """ - Dataset that provides keyframes for a set of videos. - """ - - _EMPTY_FRAMES = torch.empty((0, 3, 1, 1)) - - def __init__( - self, - video_list: List[str], - category_list: Union[str, List[str], None] = None, - frame_selector: Optional[FrameSelector] = None, - transform: Optional[FrameTransform] = None, - keyframe_helper_fpath: Optional[str] = None, - ): - """ - Dataset constructor - - Args: - video_list (List[str]): list of paths to video files - category_list (Union[str, List[str], None]): list of animal categories for each - video file. If it is a string, or None, this applies to all videos - frame_selector (Callable: KeyFrameList -> KeyFrameList): - selects keyframes to process, keyframes are given by - packet timestamps in timebase counts. If None, all keyframes - are selected (default: None) - transform (Callable: torch.Tensor -> torch.Tensor): - transforms a batch of RGB images (tensors of size [B, 3, H, W]), - returns a tensor of the same size. If None, no transform is - applied (default: None) - - """ - if type(category_list) == list: - self.category_list = category_list - else: - self.category_list = [category_list] * len(video_list) - assert len(video_list) == len( - self.category_list - ), "length of video and category lists must be equal" - self.video_list = video_list - self.frame_selector = frame_selector - self.transform = transform - self.keyframe_helper_data = ( - read_keyframe_helper_data(keyframe_helper_fpath) - if keyframe_helper_fpath is not None - else None - ) - - def __getitem__(self, idx: int) -> Dict[str, Any]: - """ - Gets selected keyframes from a given video - - Args: - idx (int): video index in the video list file - Returns: - A dictionary containing two keys: - images (torch.Tensor): tensor of size [N, H, W, 3] or of size - defined by the transform that contains keyframes data - categories (List[str]): categories of the frames - """ - categories = [self.category_list[idx]] - fpath = self.video_list[idx] - keyframes = ( - list_keyframes(fpath) - if self.keyframe_helper_data is None or idx not in self.keyframe_helper_data - else self.keyframe_helper_data[idx] - ) - transform = self.transform - frame_selector = self.frame_selector - if not keyframes: - return {"images": self._EMPTY_FRAMES, "categories": []} - if frame_selector is not None: - keyframes = frame_selector(keyframes) - frames = read_keyframes(fpath, keyframes) - if not frames: - return {"images": self._EMPTY_FRAMES, "categories": []} - frames = np.stack([frame.to_rgb().to_ndarray() for frame in frames]) - frames = torch.as_tensor(frames, device=torch.device("cpu")) - frames = frames[..., [2, 1, 0]] # RGB -> BGR - frames = frames.permute(0, 3, 1, 2).float() # NHWC -> NCHW - if transform is not None: - frames = transform(frames) - return {"images": frames, "categories": categories} - - def __len__(self): - return len(self.video_list) diff --git a/spaces/brjathu/HMR2.0/vendor/detectron2/projects/DensePose/densepose/utils/dbhelper.py b/spaces/brjathu/HMR2.0/vendor/detectron2/projects/DensePose/densepose/utils/dbhelper.py deleted file mode 100644 index 65b615739a2b1df8b90002995dbd45098858e048..0000000000000000000000000000000000000000 --- a/spaces/brjathu/HMR2.0/vendor/detectron2/projects/DensePose/densepose/utils/dbhelper.py +++ /dev/null @@ -1,147 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -from typing import Any, Dict, Optional, Tuple - - -class EntrySelector(object): - """ - Base class for entry selectors - """ - - @staticmethod - def from_string(spec: str) -> "EntrySelector": - if spec == "*": - return AllEntrySelector() - return FieldEntrySelector(spec) - - -class AllEntrySelector(EntrySelector): - """ - Selector that accepts all entries - """ - - SPECIFIER = "*" - - def __call__(self, entry): - return True - - -class FieldEntrySelector(EntrySelector): - """ - Selector that accepts only entries that match provided field - specifier(s). Only a limited set of specifiers is supported for now: - ::=[] - ::=[] - is a valid identifier - ::= "int" | "str" - ::= "=" - ::= "," - ::= ":" - ::= | - ::= - ::= "-" - is a string without spaces and special symbols - (e.g. , , , ) - """ - - _SPEC_DELIM = "," - _TYPE_DELIM = ":" - _RANGE_DELIM = "-" - _EQUAL = "=" - _ERROR_PREFIX = "Invalid field selector specifier" - - class _FieldEntryValuePredicate(object): - """ - Predicate that checks strict equality for the specified entry field - """ - - def __init__(self, name: str, typespec: Optional[str], value: str): - import builtins - - self.name = name - self.type = getattr(builtins, typespec) if typespec is not None else str - self.value = value - - def __call__(self, entry): - return entry[self.name] == self.type(self.value) - - class _FieldEntryRangePredicate(object): - """ - Predicate that checks whether an entry field falls into the specified range - """ - - def __init__(self, name: str, typespec: Optional[str], vmin: str, vmax: str): - import builtins - - self.name = name - self.type = getattr(builtins, typespec) if typespec is not None else str - self.vmin = vmin - self.vmax = vmax - - def __call__(self, entry): - return (entry[self.name] >= self.type(self.vmin)) and ( - entry[self.name] <= self.type(self.vmax) - ) - - def __init__(self, spec: str): - self._predicates = self._parse_specifier_into_predicates(spec) - - def __call__(self, entry: Dict[str, Any]): - for predicate in self._predicates: - if not predicate(entry): - return False - return True - - def _parse_specifier_into_predicates(self, spec: str): - predicates = [] - specs = spec.split(self._SPEC_DELIM) - for subspec in specs: - eq_idx = subspec.find(self._EQUAL) - if eq_idx > 0: - field_name_with_type = subspec[:eq_idx] - field_name, field_type = self._parse_field_name_type(field_name_with_type) - field_value_or_range = subspec[eq_idx + 1 :] - if self._is_range_spec(field_value_or_range): - vmin, vmax = self._get_range_spec(field_value_or_range) - predicate = FieldEntrySelector._FieldEntryRangePredicate( - field_name, field_type, vmin, vmax - ) - else: - predicate = FieldEntrySelector._FieldEntryValuePredicate( - field_name, field_type, field_value_or_range - ) - predicates.append(predicate) - elif eq_idx == 0: - self._parse_error(f'"{subspec}", field name is empty!') - else: - self._parse_error(f'"{subspec}", should have format ' "=!") - return predicates - - def _parse_field_name_type(self, field_name_with_type: str) -> Tuple[str, Optional[str]]: - type_delim_idx = field_name_with_type.find(self._TYPE_DELIM) - if type_delim_idx > 0: - field_name = field_name_with_type[:type_delim_idx] - field_type = field_name_with_type[type_delim_idx + 1 :] - elif type_delim_idx == 0: - self._parse_error(f'"{field_name_with_type}", field name is empty!') - else: - field_name = field_name_with_type - field_type = None - # pyre-fixme[61]: `field_name` may not be initialized here. - # pyre-fixme[61]: `field_type` may not be initialized here. - return field_name, field_type - - def _is_range_spec(self, field_value_or_range): - delim_idx = field_value_or_range.find(self._RANGE_DELIM) - return delim_idx > 0 - - def _get_range_spec(self, field_value_or_range): - if self._is_range_spec(field_value_or_range): - delim_idx = field_value_or_range.find(self._RANGE_DELIM) - vmin = field_value_or_range[:delim_idx] - vmax = field_value_or_range[delim_idx + 1 :] - return vmin, vmax - else: - self._parse_error('"field_value_or_range", range of values expected!') - - def _parse_error(self, msg): - raise ValueError(f"{self._ERROR_PREFIX}: {msg}") diff --git a/spaces/chasemcdo/hf_localai/pkg/gallery/request.go b/spaces/chasemcdo/hf_localai/pkg/gallery/request.go deleted file mode 100644 index 030ee16bf5bf21dd9e3fd64b30cb5edcd8a001aa..0000000000000000000000000000000000000000 --- a/spaces/chasemcdo/hf_localai/pkg/gallery/request.go +++ /dev/null @@ -1,27 +0,0 @@ -package gallery - -// GalleryModel is the struct used to represent a model in the gallery returned by the endpoint. -// It is used to install the model by resolving the URL and downloading the files. -// The other fields are used to override the configuration of the model. -type GalleryModel struct { - URL string `json:"url,omitempty" yaml:"url,omitempty"` - Name string `json:"name,omitempty" yaml:"name,omitempty"` - Description string `json:"description,omitempty" yaml:"description,omitempty"` - License string `json:"license,omitempty" yaml:"license,omitempty"` - URLs []string `json:"urls,omitempty" yaml:"urls,omitempty"` - Icon string `json:"icon,omitempty" yaml:"icon,omitempty"` - Tags []string `json:"tags,omitempty" yaml:"tags,omitempty"` - - // Overrides are used to override the configuration of the model - Overrides map[string]interface{} `json:"overrides,omitempty" yaml:"overrides,omitempty"` - // AdditionalFiles are used to add additional files to the model - AdditionalFiles []File `json:"files,omitempty" yaml:"files,omitempty"` - // Gallery is a reference to the gallery which contains the model - Gallery Gallery `json:"gallery,omitempty" yaml:"gallery,omitempty"` - // Installed is used to indicate if the model is installed or not - Installed bool `json:"installed,omitempty" yaml:"installed,omitempty"` -} - -const ( - githubURI = "github:" -) diff --git a/spaces/chendl/compositional_test/README.md b/spaces/chendl/compositional_test/README.md deleted file mode 100644 index 7805e22a1e85a696a3c2d1e7456bb839280e4804..0000000000000000000000000000000000000000 --- a/spaces/chendl/compositional_test/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Compositional Test -emoji: 🦀 -colorFrom: blue -colorTo: yellow -sdk: gradio -sdk_version: 3.47.1 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/chendl/compositional_test/transformers/examples/research_projects/distillation/lm_seqs_dataset.py b/spaces/chendl/compositional_test/transformers/examples/research_projects/distillation/lm_seqs_dataset.py deleted file mode 100644 index 8e0a5814abf85cca610e3fd8494c530e6dc7e411..0000000000000000000000000000000000000000 --- a/spaces/chendl/compositional_test/transformers/examples/research_projects/distillation/lm_seqs_dataset.py +++ /dev/null @@ -1,166 +0,0 @@ -# coding=utf-8 -# Copyright 2019-present, the HuggingFace Inc. team and Facebook, Inc. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -""" Dataset to distilled models - adapted in part from Facebook, Inc XLM model (https://github.com/facebookresearch/XLM) -""" -import numpy as np -import torch -from torch.utils.data import Dataset - -from utils import logger - - -class LmSeqsDataset(Dataset): - """Custom Dataset wrapping language modeling sequences. - - Each sample will be retrieved by indexing the list of token_ids and their corresponding lengths. - - Input: - ------ - params: `NameSpace` parameters - data: `List[np.array[int]] - """ - - def __init__(self, params, data): - self.params = params - - self.token_ids = np.array(data) - self.lengths = np.array([len(t) for t in data]) - - self.check() - self.remove_long_sequences() - self.remove_empty_sequences() - self.remove_unknown_sequences() - self.check() - self.print_statistics() - - def __getitem__(self, index): - return (self.token_ids[index], self.lengths[index]) - - def __len__(self): - return len(self.lengths) - - def check(self): - """ - Some sanity checks - """ - assert len(self.token_ids) == len(self.lengths) - assert all(self.lengths[i] == len(self.token_ids[i]) for i in range(len(self.lengths))) - - def remove_long_sequences(self): - """ - Sequences that are too long are split by chunk of max_model_input_size. - """ - max_len = self.params.max_model_input_size - indices = self.lengths > max_len - logger.info(f"Splitting {sum(indices)} too long sequences.") - - def divide_chunks(l, n): - return [l[i : i + n] for i in range(0, len(l), n)] - - new_tok_ids = [] - new_lengths = [] - if self.params.mlm: - cls_id, sep_id = self.params.special_tok_ids["cls_token"], self.params.special_tok_ids["sep_token"] - else: - cls_id, sep_id = self.params.special_tok_ids["bos_token"], self.params.special_tok_ids["eos_token"] - - for seq_, len_ in zip(self.token_ids, self.lengths): - assert (seq_[0] == cls_id) and (seq_[-1] == sep_id), seq_ - if len_ <= max_len: - new_tok_ids.append(seq_) - new_lengths.append(len_) - else: - sub_seqs = [] - for sub_s in divide_chunks(seq_, max_len - 2): - if sub_s[0] != cls_id: - sub_s = np.insert(sub_s, 0, cls_id) - if sub_s[-1] != sep_id: - sub_s = np.insert(sub_s, len(sub_s), sep_id) - assert len(sub_s) <= max_len - assert (sub_s[0] == cls_id) and (sub_s[-1] == sep_id), sub_s - sub_seqs.append(sub_s) - - new_tok_ids.extend(sub_seqs) - new_lengths.extend([len(l) for l in sub_seqs]) - - self.token_ids = np.array(new_tok_ids) - self.lengths = np.array(new_lengths) - - def remove_empty_sequences(self): - """ - Too short sequences are simply removed. This could be tuned. - """ - init_size = len(self) - indices = self.lengths > 11 - self.token_ids = self.token_ids[indices] - self.lengths = self.lengths[indices] - new_size = len(self) - logger.info(f"Remove {init_size - new_size} too short (<=11 tokens) sequences.") - - def remove_unknown_sequences(self): - """ - Remove sequences with a (too) high level of unknown tokens. - """ - if "unk_token" not in self.params.special_tok_ids: - return - else: - unk_token_id = self.params.special_tok_ids["unk_token"] - init_size = len(self) - unk_occs = np.array([np.count_nonzero(a == unk_token_id) for a in self.token_ids]) - indices = (unk_occs / self.lengths) < 0.5 - self.token_ids = self.token_ids[indices] - self.lengths = self.lengths[indices] - new_size = len(self) - logger.info(f"Remove {init_size - new_size} sequences with a high level of unknown tokens (50%).") - - def print_statistics(self): - """ - Print some statistics on the corpus. Only the master process. - """ - if not self.params.is_master: - return - logger.info(f"{len(self)} sequences") - # data_len = sum(self.lengths) - # nb_unique_tokens = len(Counter(list(chain(*self.token_ids)))) - # logger.info(f'{data_len} tokens ({nb_unique_tokens} unique)') - - # unk_idx = self.params.special_tok_ids['unk_token'] - # nb_unknown = sum([(t==unk_idx).sum() for t in self.token_ids]) - # logger.info(f'{nb_unknown} unknown tokens (covering {100*nb_unknown/data_len:.2f}% of the data)') - - def batch_sequences(self, batch): - """ - Do the padding and transform into torch.tensor. - """ - token_ids = [t[0] for t in batch] - lengths = [t[1] for t in batch] - assert len(token_ids) == len(lengths) - - # Max for paddings - max_seq_len_ = max(lengths) - - # Pad token ids - if self.params.mlm: - pad_idx = self.params.special_tok_ids["pad_token"] - else: - pad_idx = self.params.special_tok_ids["unk_token"] - tk_ = [list(t.astype(int)) + [pad_idx] * (max_seq_len_ - len(t)) for t in token_ids] - assert len(tk_) == len(token_ids) - assert all(len(t) == max_seq_len_ for t in tk_) - - tk_t = torch.tensor(tk_) # (bs, max_seq_len_) - lg_t = torch.tensor(lengths) # (bs) - return tk_t, lg_t diff --git a/spaces/chendl/compositional_test/transformers/examples/research_projects/wav2vec2/alignment.py b/spaces/chendl/compositional_test/transformers/examples/research_projects/wav2vec2/alignment.py deleted file mode 100644 index 55b477f5ee967a9409d4efc4dc052e893618f44c..0000000000000000000000000000000000000000 --- a/spaces/chendl/compositional_test/transformers/examples/research_projects/wav2vec2/alignment.py +++ /dev/null @@ -1,223 +0,0 @@ -# Parts of the code are adapted from the snippets provided in the TorchAudio Wav2Vec forced alignment tutorial. -# The full tutorial can be found here: https://pytorch.org/audio/stable/tutorials/forced_alignment_tutorial.html - -import argparse -import os -from dataclasses import dataclass - -import torch -import torchaudio -from tqdm import tqdm - -from transformers import AutoConfig, AutoModelForCTC, AutoProcessor - - -class Wav2Vec2Aligner: - def __init__(self, model_name, input_wavs_sr, cuda): - self.cuda = cuda - self.config = AutoConfig.from_pretrained(model_name) - self.model = AutoModelForCTC.from_pretrained(model_name) - self.model.eval() - if self.cuda: - self.model.to(device="cuda") - self.processor = AutoProcessor.from_pretrained(model_name) - self.resampler = torchaudio.transforms.Resample(input_wavs_sr, 16_000) - blank_id = 0 - vocab = list(self.processor.tokenizer.get_vocab().keys()) - for i in range(len(vocab)): - if vocab[i] == "[PAD]" or vocab[i] == "": - blank_id = i - print("Blank Token id [PAD]/", blank_id) - self.blank_id = blank_id - - def speech_file_to_array_fn(self, wav_path): - speech_array, sampling_rate = torchaudio.load(wav_path) - speech = self.resampler(speech_array).squeeze().numpy() - return speech - - def align_single_sample(self, item): - blank_id = self.blank_id - transcript = "|".join(item["sent"].split(" ")) - if not os.path.isfile(item["wav_path"]): - print(item["wav_path"], "not found in wavs directory") - - speech_array = self.speech_file_to_array_fn(item["wav_path"]) - inputs = self.processor(speech_array, sampling_rate=16_000, return_tensors="pt", padding=True) - if self.cuda: - inputs = inputs.to(device="cuda") - - with torch.no_grad(): - logits = self.model(inputs.input_values).logits - - # get the emission probability at frame level - emissions = torch.log_softmax(logits, dim=-1) - emission = emissions[0].cpu().detach() - - # get labels from vocab - labels = ([""] + list(self.processor.tokenizer.get_vocab().keys()))[ - :-1 - ] # logits don't align with the tokenizer's vocab - - dictionary = {c: i for i, c in enumerate(labels)} - tokens = [] - for c in transcript: - if c in dictionary: - tokens.append(dictionary[c]) - - def get_trellis(emission, tokens, blank_id=0): - """ - Build a trellis matrix of shape (num_frames + 1, num_tokens + 1) - that represents the probabilities of each source token being at a certain time step - """ - num_frames = emission.size(0) - num_tokens = len(tokens) - - # Trellis has extra diemsions for both time axis and tokens. - # The extra dim for tokens represents (start-of-sentence) - # The extra dim for time axis is for simplification of the code. - trellis = torch.full((num_frames + 1, num_tokens + 1), -float("inf")) - trellis[:, 0] = 0 - for t in range(num_frames): - trellis[t + 1, 1:] = torch.maximum( - # Score for staying at the same token - trellis[t, 1:] + emission[t, blank_id], - # Score for changing to the next token - trellis[t, :-1] + emission[t, tokens], - ) - return trellis - - trellis = get_trellis(emission, tokens, blank_id) - - @dataclass - class Point: - token_index: int - time_index: int - score: float - - def backtrack(trellis, emission, tokens, blank_id=0): - """ - Walk backwards from the last (sentence_token, time_step) pair to build the optimal sequence alignment path - """ - # Note: - # j and t are indices for trellis, which has extra dimensions - # for time and tokens at the beginning. - # When referring to time frame index `T` in trellis, - # the corresponding index in emission is `T-1`. - # Similarly, when referring to token index `J` in trellis, - # the corresponding index in transcript is `J-1`. - j = trellis.size(1) - 1 - t_start = torch.argmax(trellis[:, j]).item() - - path = [] - for t in range(t_start, 0, -1): - # 1. Figure out if the current position was stay or change - # Note (again): - # `emission[J-1]` is the emission at time frame `J` of trellis dimension. - # Score for token staying the same from time frame J-1 to T. - stayed = trellis[t - 1, j] + emission[t - 1, blank_id] - # Score for token changing from C-1 at T-1 to J at T. - changed = trellis[t - 1, j - 1] + emission[t - 1, tokens[j - 1]] - - # 2. Store the path with frame-wise probability. - prob = emission[t - 1, tokens[j - 1] if changed > stayed else 0].exp().item() - # Return token index and time index in non-trellis coordinate. - path.append(Point(j - 1, t - 1, prob)) - - # 3. Update the token - if changed > stayed: - j -= 1 - if j == 0: - break - else: - raise ValueError("Failed to align") - return path[::-1] - - path = backtrack(trellis, emission, tokens, blank_id) - - @dataclass - class Segment: - label: str - start: int - end: int - score: float - - def __repr__(self): - return f"{self.label}\t{self.score:4.2f}\t{self.start*20:5d}\t{self.end*20:5d}" - - @property - def length(self): - return self.end - self.start - - def merge_repeats(path): - """ - Merge repeated tokens into a single segment. Note: this shouldn't affect repeated characters from the - original sentences (e.g. `ll` in `hello`) - """ - i1, i2 = 0, 0 - segments = [] - while i1 < len(path): - while i2 < len(path) and path[i1].token_index == path[i2].token_index: - i2 += 1 - score = sum(path[k].score for k in range(i1, i2)) / (i2 - i1) - segments.append( - Segment( - transcript[path[i1].token_index], - path[i1].time_index, - path[i2 - 1].time_index + 1, - score, - ) - ) - i1 = i2 - return segments - - segments = merge_repeats(path) - with open(item["out_path"], "w") as out_align: - for seg in segments: - out_align.write(str(seg) + "\n") - - def align_data(self, wav_dir, text_file, output_dir): - if not os.path.exists(output_dir): - os.makedirs(output_dir) - - # load text file - lines = open(text_file, encoding="utf8").readlines() - - items = [] - for line in lines: - if len(line.strip().split("\t")) != 2: - print("Script must be in format: 00001 this is my sentence") - exit() - - wav_name, sentence = line.strip().split("\t") - wav_path = os.path.join(wav_dir, wav_name + ".wav") - out_path = os.path.join(output_dir, wav_name + ".txt") - - items.append({"sent": sentence, "wav_path": wav_path, "out_path": out_path}) - print("Number of samples found in script file", len(items)) - - for item in tqdm(items): - self.align_single_sample(item) - - -def main(): - parser = argparse.ArgumentParser() - - parser.add_argument( - "--model_name", type=str, default="arijitx/wav2vec2-xls-r-300m-bengali", help="wav2vec model name" - ) - parser.add_argument("--wav_dir", type=str, default="./wavs", help="directory containing wavs") - parser.add_argument("--text_file", type=str, default="script.txt", help="file containing text") - parser.add_argument("--input_wavs_sr", type=int, default=16000, help="sampling rate of input audios") - parser.add_argument( - "--output_dir", type=str, default="./out_alignment", help="output directory containing the alignment files" - ) - parser.add_argument("--cuda", action="store_true") - - args = parser.parse_args() - - aligner = Wav2Vec2Aligner(args.model_name, args.input_wavs_sr, args.cuda) - aligner.align_data(args.wav_dir, args.text_file, args.output_dir) - - -if __name__ == "__main__": - main() diff --git a/spaces/cihyFjudo/fairness-paper-search/FULL SDL Trados Studio 2009 SP3 The Ultimate Guide for Professional Translators.md b/spaces/cihyFjudo/fairness-paper-search/FULL SDL Trados Studio 2009 SP3 The Ultimate Guide for Professional Translators.md deleted file mode 100644 index 22c0a1a2c365e608649132e77d673cc2fe1a55a6..0000000000000000000000000000000000000000 --- a/spaces/cihyFjudo/fairness-paper-search/FULL SDL Trados Studio 2009 SP3 The Ultimate Guide for Professional Translators.md +++ /dev/null @@ -1,6 +0,0 @@ -
        -

        Trados Studio 2011 brings the Microsoft Spell Checker back. Hunspell is still available but users can now configure which checker to use for each language. This is to resolve issues present in the Studio 2009 Spell Checkers which were not fully accurate for certain languages, notably Scandinavian ones.

        -

        FULL SDL Trados Studio 2009 SP3


        Downloadhttps://tinurli.com/2uwj5b



        -

        The issue might with the operating system, as old versions of Trados studio like Trados 2009 have not been tested on the new Windows OS. Maybe this link would help =000001184 but just to test please install Trados 2009 on a older Windows version .

        aaccfb2cb3
        -
        -
        \ No newline at end of file diff --git a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/PIL/IcoImagePlugin.py b/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/PIL/IcoImagePlugin.py deleted file mode 100644 index a188f8fdcea46e5cb9423a3c4572d88d93890fc6..0000000000000000000000000000000000000000 --- a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/PIL/IcoImagePlugin.py +++ /dev/null @@ -1,358 +0,0 @@ -# -# The Python Imaging Library. -# $Id$ -# -# Windows Icon support for PIL -# -# History: -# 96-05-27 fl Created -# -# Copyright (c) Secret Labs AB 1997. -# Copyright (c) Fredrik Lundh 1996. -# -# See the README file for information on usage and redistribution. -# - -# This plugin is a refactored version of Win32IconImagePlugin by Bryan Davis -# . -# https://code.google.com/archive/p/casadebender/wikis/Win32IconImagePlugin.wiki -# -# Icon format references: -# * https://en.wikipedia.org/wiki/ICO_(file_format) -# * https://msdn.microsoft.com/en-us/library/ms997538.aspx - - -import warnings -from io import BytesIO -from math import ceil, log - -from . import BmpImagePlugin, Image, ImageFile, PngImagePlugin -from ._binary import i16le as i16 -from ._binary import i32le as i32 -from ._binary import o8 -from ._binary import o16le as o16 -from ._binary import o32le as o32 - -# -# -------------------------------------------------------------------- - -_MAGIC = b"\0\0\1\0" - - -def _save(im, fp, filename): - fp.write(_MAGIC) # (2+2) - bmp = im.encoderinfo.get("bitmap_format") == "bmp" - sizes = im.encoderinfo.get( - "sizes", - [(16, 16), (24, 24), (32, 32), (48, 48), (64, 64), (128, 128), (256, 256)], - ) - frames = [] - provided_ims = [im] + im.encoderinfo.get("append_images", []) - width, height = im.size - for size in sorted(set(sizes)): - if size[0] > width or size[1] > height or size[0] > 256 or size[1] > 256: - continue - - for provided_im in provided_ims: - if provided_im.size != size: - continue - frames.append(provided_im) - if bmp: - bits = BmpImagePlugin.SAVE[provided_im.mode][1] - bits_used = [bits] - for other_im in provided_ims: - if other_im.size != size: - continue - bits = BmpImagePlugin.SAVE[other_im.mode][1] - if bits not in bits_used: - # Another image has been supplied for this size - # with a different bit depth - frames.append(other_im) - bits_used.append(bits) - break - else: - # TODO: invent a more convenient method for proportional scalings - frame = provided_im.copy() - frame.thumbnail(size, Image.Resampling.LANCZOS, reducing_gap=None) - frames.append(frame) - fp.write(o16(len(frames))) # idCount(2) - offset = fp.tell() + len(frames) * 16 - for frame in frames: - width, height = frame.size - # 0 means 256 - fp.write(o8(width if width < 256 else 0)) # bWidth(1) - fp.write(o8(height if height < 256 else 0)) # bHeight(1) - - bits, colors = BmpImagePlugin.SAVE[frame.mode][1:] if bmp else (32, 0) - fp.write(o8(colors)) # bColorCount(1) - fp.write(b"\0") # bReserved(1) - fp.write(b"\0\0") # wPlanes(2) - fp.write(o16(bits)) # wBitCount(2) - - image_io = BytesIO() - if bmp: - frame.save(image_io, "dib") - - if bits != 32: - and_mask = Image.new("1", size) - ImageFile._save( - and_mask, image_io, [("raw", (0, 0) + size, 0, ("1", 0, -1))] - ) - else: - frame.save(image_io, "png") - image_io.seek(0) - image_bytes = image_io.read() - if bmp: - image_bytes = image_bytes[:8] + o32(height * 2) + image_bytes[12:] - bytes_len = len(image_bytes) - fp.write(o32(bytes_len)) # dwBytesInRes(4) - fp.write(o32(offset)) # dwImageOffset(4) - current = fp.tell() - fp.seek(offset) - fp.write(image_bytes) - offset = offset + bytes_len - fp.seek(current) - - -def _accept(prefix): - return prefix[:4] == _MAGIC - - -class IcoFile: - def __init__(self, buf): - """ - Parse image from file-like object containing ico file data - """ - - # check magic - s = buf.read(6) - if not _accept(s): - msg = "not an ICO file" - raise SyntaxError(msg) - - self.buf = buf - self.entry = [] - - # Number of items in file - self.nb_items = i16(s, 4) - - # Get headers for each item - for i in range(self.nb_items): - s = buf.read(16) - - icon_header = { - "width": s[0], - "height": s[1], - "nb_color": s[2], # No. of colors in image (0 if >=8bpp) - "reserved": s[3], - "planes": i16(s, 4), - "bpp": i16(s, 6), - "size": i32(s, 8), - "offset": i32(s, 12), - } - - # See Wikipedia - for j in ("width", "height"): - if not icon_header[j]: - icon_header[j] = 256 - - # See Wikipedia notes about color depth. - # We need this just to differ images with equal sizes - icon_header["color_depth"] = ( - icon_header["bpp"] - or ( - icon_header["nb_color"] != 0 - and ceil(log(icon_header["nb_color"], 2)) - ) - or 256 - ) - - icon_header["dim"] = (icon_header["width"], icon_header["height"]) - icon_header["square"] = icon_header["width"] * icon_header["height"] - - self.entry.append(icon_header) - - self.entry = sorted(self.entry, key=lambda x: x["color_depth"]) - # ICO images are usually squares - # self.entry = sorted(self.entry, key=lambda x: x['width']) - self.entry = sorted(self.entry, key=lambda x: x["square"]) - self.entry.reverse() - - def sizes(self): - """ - Get a list of all available icon sizes and color depths. - """ - return {(h["width"], h["height"]) for h in self.entry} - - def getentryindex(self, size, bpp=False): - for i, h in enumerate(self.entry): - if size == h["dim"] and (bpp is False or bpp == h["color_depth"]): - return i - return 0 - - def getimage(self, size, bpp=False): - """ - Get an image from the icon - """ - return self.frame(self.getentryindex(size, bpp)) - - def frame(self, idx): - """ - Get an image from frame idx - """ - - header = self.entry[idx] - - self.buf.seek(header["offset"]) - data = self.buf.read(8) - self.buf.seek(header["offset"]) - - if data[:8] == PngImagePlugin._MAGIC: - # png frame - im = PngImagePlugin.PngImageFile(self.buf) - Image._decompression_bomb_check(im.size) - else: - # XOR + AND mask bmp frame - im = BmpImagePlugin.DibImageFile(self.buf) - Image._decompression_bomb_check(im.size) - - # change tile dimension to only encompass XOR image - im._size = (im.size[0], int(im.size[1] / 2)) - d, e, o, a = im.tile[0] - im.tile[0] = d, (0, 0) + im.size, o, a - - # figure out where AND mask image starts - bpp = header["bpp"] - if 32 == bpp: - # 32-bit color depth icon image allows semitransparent areas - # PIL's DIB format ignores transparency bits, recover them. - # The DIB is packed in BGRX byte order where X is the alpha - # channel. - - # Back up to start of bmp data - self.buf.seek(o) - # extract every 4th byte (eg. 3,7,11,15,...) - alpha_bytes = self.buf.read(im.size[0] * im.size[1] * 4)[3::4] - - # convert to an 8bpp grayscale image - mask = Image.frombuffer( - "L", # 8bpp - im.size, # (w, h) - alpha_bytes, # source chars - "raw", # raw decoder - ("L", 0, -1), # 8bpp inverted, unpadded, reversed - ) - else: - # get AND image from end of bitmap - w = im.size[0] - if (w % 32) > 0: - # bitmap row data is aligned to word boundaries - w += 32 - (im.size[0] % 32) - - # the total mask data is - # padded row size * height / bits per char - - total_bytes = int((w * im.size[1]) / 8) - and_mask_offset = header["offset"] + header["size"] - total_bytes - - self.buf.seek(and_mask_offset) - mask_data = self.buf.read(total_bytes) - - # convert raw data to image - mask = Image.frombuffer( - "1", # 1 bpp - im.size, # (w, h) - mask_data, # source chars - "raw", # raw decoder - ("1;I", int(w / 8), -1), # 1bpp inverted, padded, reversed - ) - - # now we have two images, im is XOR image and mask is AND image - - # apply mask image as alpha channel - im = im.convert("RGBA") - im.putalpha(mask) - - return im - - -## -# Image plugin for Windows Icon files. - - -class IcoImageFile(ImageFile.ImageFile): - """ - PIL read-only image support for Microsoft Windows .ico files. - - By default the largest resolution image in the file will be loaded. This - can be changed by altering the 'size' attribute before calling 'load'. - - The info dictionary has a key 'sizes' that is a list of the sizes available - in the icon file. - - Handles classic, XP and Vista icon formats. - - When saving, PNG compression is used. Support for this was only added in - Windows Vista. If you are unable to view the icon in Windows, convert the - image to "RGBA" mode before saving. - - This plugin is a refactored version of Win32IconImagePlugin by Bryan Davis - . - https://code.google.com/archive/p/casadebender/wikis/Win32IconImagePlugin.wiki - """ - - format = "ICO" - format_description = "Windows Icon" - - def _open(self): - self.ico = IcoFile(self.fp) - self.info["sizes"] = self.ico.sizes() - self.size = self.ico.entry[0]["dim"] - self.load() - - @property - def size(self): - return self._size - - @size.setter - def size(self, value): - if value not in self.info["sizes"]: - msg = "This is not one of the allowed sizes of this image" - raise ValueError(msg) - self._size = value - - def load(self): - if self.im is not None and self.im.size == self.size: - # Already loaded - return Image.Image.load(self) - im = self.ico.getimage(self.size) - # if tile is PNG, it won't really be loaded yet - im.load() - self.im = im.im - self.pyaccess = None - self.mode = im.mode - if im.size != self.size: - warnings.warn("Image was not the expected size") - - index = self.ico.getentryindex(self.size) - sizes = list(self.info["sizes"]) - sizes[index] = im.size - self.info["sizes"] = set(sizes) - - self.size = im.size - - def load_seek(self): - # Flag the ImageFile.Parser so that it - # just does all the decode at the end. - pass - - -# -# -------------------------------------------------------------------- - - -Image.register_open(IcoImageFile.format, IcoImageFile, _accept) -Image.register_save(IcoImageFile.format, _save) -Image.register_extension(IcoImageFile.format, ".ico") - -Image.register_mime(IcoImageFile.format, "image/x-icon") diff --git a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/fontTools/misc/eexec.py b/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/fontTools/misc/eexec.py deleted file mode 100644 index cafa312cdaa4696b0624438e06418ade95438441..0000000000000000000000000000000000000000 --- a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/fontTools/misc/eexec.py +++ /dev/null @@ -1,119 +0,0 @@ -""" -PostScript Type 1 fonts make use of two types of encryption: charstring -encryption and ``eexec`` encryption. Charstring encryption is used for -the charstrings themselves, while ``eexec`` is used to encrypt larger -sections of the font program, such as the ``Private`` and ``CharStrings`` -dictionaries. Despite the different names, the algorithm is the same, -although ``eexec`` encryption uses a fixed initial key R=55665. - -The algorithm uses cipher feedback, meaning that the ciphertext is used -to modify the key. Because of this, the routines in this module return -the new key at the end of the operation. - -""" - -from fontTools.misc.textTools import bytechr, bytesjoin, byteord - - -def _decryptChar(cipher, R): - cipher = byteord(cipher) - plain = ((cipher ^ (R >> 8))) & 0xFF - R = ((cipher + R) * 52845 + 22719) & 0xFFFF - return bytechr(plain), R - - -def _encryptChar(plain, R): - plain = byteord(plain) - cipher = ((plain ^ (R >> 8))) & 0xFF - R = ((cipher + R) * 52845 + 22719) & 0xFFFF - return bytechr(cipher), R - - -def decrypt(cipherstring, R): - r""" - Decrypts a string using the Type 1 encryption algorithm. - - Args: - cipherstring: String of ciphertext. - R: Initial key. - - Returns: - decryptedStr: Plaintext string. - R: Output key for subsequent decryptions. - - Examples:: - - >>> testStr = b"\0\0asdadads asds\265" - >>> decryptedStr, R = decrypt(testStr, 12321) - >>> decryptedStr == b'0d\nh\x15\xe8\xc4\xb2\x15\x1d\x108\x1a<6\xa1' - True - >>> R == 36142 - True - """ - plainList = [] - for cipher in cipherstring: - plain, R = _decryptChar(cipher, R) - plainList.append(plain) - plainstring = bytesjoin(plainList) - return plainstring, int(R) - - -def encrypt(plainstring, R): - r""" - Encrypts a string using the Type 1 encryption algorithm. - - Note that the algorithm as described in the Type 1 specification requires the - plaintext to be prefixed with a number of random bytes. (For ``eexec`` the - number of random bytes is set to 4.) This routine does *not* add the random - prefix to its input. - - Args: - plainstring: String of plaintext. - R: Initial key. - - Returns: - cipherstring: Ciphertext string. - R: Output key for subsequent encryptions. - - Examples:: - - >>> testStr = b"\0\0asdadads asds\265" - >>> decryptedStr, R = decrypt(testStr, 12321) - >>> decryptedStr == b'0d\nh\x15\xe8\xc4\xb2\x15\x1d\x108\x1a<6\xa1' - True - >>> R == 36142 - True - - >>> testStr = b'0d\nh\x15\xe8\xc4\xb2\x15\x1d\x108\x1a<6\xa1' - >>> encryptedStr, R = encrypt(testStr, 12321) - >>> encryptedStr == b"\0\0asdadads asds\265" - True - >>> R == 36142 - True - """ - cipherList = [] - for plain in plainstring: - cipher, R = _encryptChar(plain, R) - cipherList.append(cipher) - cipherstring = bytesjoin(cipherList) - return cipherstring, int(R) - - -def hexString(s): - import binascii - - return binascii.hexlify(s) - - -def deHexString(h): - import binascii - - h = bytesjoin(h.split()) - return binascii.unhexlify(h) - - -if __name__ == "__main__": - import sys - import doctest - - sys.exit(doctest.testmod().failed) diff --git a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/fontTools/ttLib/tables/otTraverse.py b/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/fontTools/ttLib/tables/otTraverse.py deleted file mode 100644 index bf22dcfdb500cd50525fce749562384a82b1cb0f..0000000000000000000000000000000000000000 --- a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/fontTools/ttLib/tables/otTraverse.py +++ /dev/null @@ -1,161 +0,0 @@ -"""Methods for traversing trees of otData-driven OpenType tables.""" -from collections import deque -from typing import Callable, Deque, Iterable, List, Optional, Tuple -from .otBase import BaseTable - - -__all__ = [ - "bfs_base_table", - "dfs_base_table", - "SubTablePath", -] - - -class SubTablePath(Tuple[BaseTable.SubTableEntry, ...]): - def __str__(self) -> str: - path_parts = [] - for entry in self: - path_part = entry.name - if entry.index is not None: - path_part += f"[{entry.index}]" - path_parts.append(path_part) - return ".".join(path_parts) - - -# Given f(current frontier, new entries) add new entries to frontier -AddToFrontierFn = Callable[[Deque[SubTablePath], List[SubTablePath]], None] - - -def dfs_base_table( - root: BaseTable, - root_accessor: Optional[str] = None, - skip_root: bool = False, - predicate: Optional[Callable[[SubTablePath], bool]] = None, - iter_subtables_fn: Optional[ - Callable[[BaseTable], Iterable[BaseTable.SubTableEntry]] - ] = None, -) -> Iterable[SubTablePath]: - """Depth-first search tree of BaseTables. - - Args: - root (BaseTable): the root of the tree. - root_accessor (Optional[str]): attribute name for the root table, if any (mostly - useful for debugging). - skip_root (Optional[bool]): if True, the root itself is not visited, only its - children. - predicate (Optional[Callable[[SubTablePath], bool]]): function to filter out - paths. If True, the path is yielded and its subtables are added to the - queue. If False, the path is skipped and its subtables are not traversed. - iter_subtables_fn (Optional[Callable[[BaseTable], Iterable[BaseTable.SubTableEntry]]]): - function to iterate over subtables of a table. If None, the default - BaseTable.iterSubTables() is used. - - Yields: - SubTablePath: tuples of BaseTable.SubTableEntry(name, table, index) namedtuples - for each of the nodes in the tree. The last entry in a path is the current - subtable, whereas preceding ones refer to its parent tables all the way up to - the root. - """ - yield from _traverse_ot_data( - root, - root_accessor, - skip_root, - predicate, - lambda frontier, new: frontier.extendleft(reversed(new)), - iter_subtables_fn, - ) - - -def bfs_base_table( - root: BaseTable, - root_accessor: Optional[str] = None, - skip_root: bool = False, - predicate: Optional[Callable[[SubTablePath], bool]] = None, - iter_subtables_fn: Optional[ - Callable[[BaseTable], Iterable[BaseTable.SubTableEntry]] - ] = None, -) -> Iterable[SubTablePath]: - """Breadth-first search tree of BaseTables. - - Args: - the root of the tree. - root_accessor (Optional[str]): attribute name for the root table, if any (mostly - useful for debugging). - skip_root (Optional[bool]): if True, the root itself is not visited, only its - children. - predicate (Optional[Callable[[SubTablePath], bool]]): function to filter out - paths. If True, the path is yielded and its subtables are added to the - queue. If False, the path is skipped and its subtables are not traversed. - iter_subtables_fn (Optional[Callable[[BaseTable], Iterable[BaseTable.SubTableEntry]]]): - function to iterate over subtables of a table. If None, the default - BaseTable.iterSubTables() is used. - - Yields: - SubTablePath: tuples of BaseTable.SubTableEntry(name, table, index) namedtuples - for each of the nodes in the tree. The last entry in a path is the current - subtable, whereas preceding ones refer to its parent tables all the way up to - the root. - """ - yield from _traverse_ot_data( - root, - root_accessor, - skip_root, - predicate, - lambda frontier, new: frontier.extend(new), - iter_subtables_fn, - ) - - -def _traverse_ot_data( - root: BaseTable, - root_accessor: Optional[str], - skip_root: bool, - predicate: Optional[Callable[[SubTablePath], bool]], - add_to_frontier_fn: AddToFrontierFn, - iter_subtables_fn: Optional[ - Callable[[BaseTable], Iterable[BaseTable.SubTableEntry]] - ] = None, -) -> Iterable[SubTablePath]: - # no visited because general otData cannot cycle (forward-offset only) - if root_accessor is None: - root_accessor = type(root).__name__ - - if predicate is None: - - def predicate(path): - return True - - if iter_subtables_fn is None: - - def iter_subtables_fn(table): - return table.iterSubTables() - - frontier: Deque[SubTablePath] = deque() - - root_entry = BaseTable.SubTableEntry(root_accessor, root) - if not skip_root: - frontier.append((root_entry,)) - else: - add_to_frontier_fn( - frontier, - [ - (root_entry, subtable_entry) - for subtable_entry in iter_subtables_fn(root) - ], - ) - - while frontier: - # path is (value, attr_name) tuples. attr_name is attr of parent to get value - path = frontier.popleft() - current = path[-1].value - - if not predicate(path): - continue - - yield SubTablePath(path) - - new_entries = [ - path + (subtable_entry,) for subtable_entry in iter_subtables_fn(current) - ] - - add_to_frontier_fn(frontier, new_entries) diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/g722dsp.c b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/g722dsp.c deleted file mode 100644 index c770bfbdff05b9948aca3c9b34d7eb0117c84bbb..0000000000000000000000000000000000000000 --- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/g722dsp.c +++ /dev/null @@ -1,77 +0,0 @@ -/* - * Copyright (c) 2015 Peter Meerwald - * - * This file is part of FFmpeg. - * - * FFmpeg is free software; you can redistribute it and/or - * modify it under the terms of the GNU Lesser General Public - * License as published by the Free Software Foundation; either - * version 2.1 of the License, or (at your option) any later version. - * - * FFmpeg is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * Lesser General Public License for more details. - * - * You should have received a copy of the GNU Lesser General Public - * License along with FFmpeg; if not, write to the Free Software - * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA - */ - -#include "g722dsp.h" -#include "mathops.h" - -/* - * quadrature mirror filter (QMF) coefficients (ITU-T G.722 Table 11) inlined - * in code below: 3, -11, 12, 32, -210, 951, 3876, -805, 362, -156, 53, -11 - */ - -static void g722_apply_qmf(const int16_t *prev_samples, int xout[2]) -{ - xout[1] = MUL16(*prev_samples++, 3); - xout[0] = MUL16(*prev_samples++, -11); - - MAC16(xout[1], *prev_samples++, -11); - MAC16(xout[0], *prev_samples++, 53); - - MAC16(xout[1], *prev_samples++, 12); - MAC16(xout[0], *prev_samples++, -156); - - MAC16(xout[1], *prev_samples++, 32); - MAC16(xout[0], *prev_samples++, 362); - - MAC16(xout[1], *prev_samples++, -210); - MAC16(xout[0], *prev_samples++, -805); - - MAC16(xout[1], *prev_samples++, 951); - MAC16(xout[0], *prev_samples++, 3876); - - MAC16(xout[1], *prev_samples++, 3876); - MAC16(xout[0], *prev_samples++, 951); - - MAC16(xout[1], *prev_samples++, -805); - MAC16(xout[0], *prev_samples++, -210); - - MAC16(xout[1], *prev_samples++, 362); - MAC16(xout[0], *prev_samples++, 32); - - MAC16(xout[1], *prev_samples++, -156); - MAC16(xout[0], *prev_samples++, 12); - - MAC16(xout[1], *prev_samples++, 53); - MAC16(xout[0], *prev_samples++, -11); - - MAC16(xout[1], *prev_samples++, -11); - MAC16(xout[0], *prev_samples++, 3); -} - -av_cold void ff_g722dsp_init(G722DSPContext *c) -{ - c->apply_qmf = g722_apply_qmf; - -#if ARCH_ARM - ff_g722dsp_init_arm(c); -#elif ARCH_X86 - ff_g722dsp_init_x86(c); -#endif -} diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/hevcpred_template.c b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/hevcpred_template.c deleted file mode 100644 index 16d1c7f35ff5882ac45af6079806066f50d940e5..0000000000000000000000000000000000000000 --- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/hevcpred_template.c +++ /dev/null @@ -1,552 +0,0 @@ -/* - * HEVC video decoder - * - * Copyright (C) 2012 - 2013 Guillaume Martres - * - * This file is part of FFmpeg. - * - * FFmpeg is free software; you can redistribute it and/or - * modify it under the terms of the GNU Lesser General Public - * License as published by the Free Software Foundation; either - * version 2.1 of the License, or (at your option) any later version. - * - * FFmpeg is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * Lesser General Public License for more details. - * - * You should have received a copy of the GNU Lesser General Public - * License along with FFmpeg; if not, write to the Free Software - * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA - */ - -#include "libavutil/pixdesc.h" - -#include "bit_depth_template.c" -#include "hevcpred.h" - -#define POS(x, y) src[(x) + stride * (y)] - -static av_always_inline void FUNC(intra_pred)(HEVCLocalContext *lc, int x0, int y0, - int log2_size, int c_idx) -{ -#define PU(x) \ - ((x) >> s->ps.sps->log2_min_pu_size) -#define MVF(x, y) \ - (s->ref->tab_mvf[(x) + (y) * min_pu_width]) -#define MVF_PU(x, y) \ - MVF(PU(x0 + ((x) * (1 << hshift))), PU(y0 + ((y) * (1 << vshift)))) -#define IS_INTRA(x, y) \ - (MVF_PU(x, y).pred_flag == PF_INTRA) -#define MIN_TB_ADDR_ZS(x, y) \ - s->ps.pps->min_tb_addr_zs[(y) * (s->ps.sps->tb_mask+2) + (x)] -#define EXTEND(ptr, val, len) \ -do { \ - pixel4 pix = PIXEL_SPLAT_X4(val); \ - for (i = 0; i < (len); i += 4) \ - AV_WN4P(ptr + i, pix); \ -} while (0) - -#define EXTEND_RIGHT_CIP(ptr, start, length) \ - for (i = start; i < (start) + (length); i += 4) \ - if (!IS_INTRA(i, -1)) \ - AV_WN4P(&ptr[i], a); \ - else \ - a = PIXEL_SPLAT_X4(ptr[i+3]) -#define EXTEND_LEFT_CIP(ptr, start, length) \ - for (i = start; i > (start) - (length); i--) \ - if (!IS_INTRA(i - 1, -1)) \ - ptr[i - 1] = ptr[i] -#define EXTEND_UP_CIP(ptr, start, length) \ - for (i = (start); i > (start) - (length); i -= 4) \ - if (!IS_INTRA(-1, i - 3)) \ - AV_WN4P(&ptr[i - 3], a); \ - else \ - a = PIXEL_SPLAT_X4(ptr[i - 3]) -#define EXTEND_DOWN_CIP(ptr, start, length) \ - for (i = start; i < (start) + (length); i += 4) \ - if (!IS_INTRA(-1, i)) \ - AV_WN4P(&ptr[i], a); \ - else \ - a = PIXEL_SPLAT_X4(ptr[i + 3]) - - const HEVCContext *const s = lc->parent; - int i; - int hshift = s->ps.sps->hshift[c_idx]; - int vshift = s->ps.sps->vshift[c_idx]; - int size = (1 << log2_size); - int size_in_luma_h = size << hshift; - int size_in_tbs_h = size_in_luma_h >> s->ps.sps->log2_min_tb_size; - int size_in_luma_v = size << vshift; - int size_in_tbs_v = size_in_luma_v >> s->ps.sps->log2_min_tb_size; - int x = x0 >> hshift; - int y = y0 >> vshift; - int x_tb = (x0 >> s->ps.sps->log2_min_tb_size) & s->ps.sps->tb_mask; - int y_tb = (y0 >> s->ps.sps->log2_min_tb_size) & s->ps.sps->tb_mask; - int spin = c_idx && !size_in_tbs_v && ((2 * y0) & (1 << s->ps.sps->log2_min_tb_size)); - - int cur_tb_addr = MIN_TB_ADDR_ZS(x_tb, y_tb); - - ptrdiff_t stride = s->frame->linesize[c_idx] / sizeof(pixel); - pixel *src = (pixel*)s->frame->data[c_idx] + x + y * stride; - - int min_pu_width = s->ps.sps->min_pu_width; - - enum IntraPredMode mode = c_idx ? lc->tu.intra_pred_mode_c : - lc->tu.intra_pred_mode; - pixel4 a; - pixel left_array[2 * MAX_TB_SIZE + 1]; - pixel filtered_left_array[2 * MAX_TB_SIZE + 1]; - pixel top_array[2 * MAX_TB_SIZE + 1]; - pixel filtered_top_array[2 * MAX_TB_SIZE + 1]; - - pixel *left = left_array + 1; - pixel *top = top_array + 1; - pixel *filtered_left = filtered_left_array + 1; - pixel *filtered_top = filtered_top_array + 1; - int cand_bottom_left = lc->na.cand_bottom_left && cur_tb_addr > MIN_TB_ADDR_ZS( x_tb - 1, (y_tb + size_in_tbs_v + spin) & s->ps.sps->tb_mask); - int cand_left = lc->na.cand_left; - int cand_up_left = lc->na.cand_up_left; - int cand_up = lc->na.cand_up; - int cand_up_right = lc->na.cand_up_right && !spin && cur_tb_addr > MIN_TB_ADDR_ZS((x_tb + size_in_tbs_h) & s->ps.sps->tb_mask, y_tb - 1); - - int bottom_left_size = (FFMIN(y0 + 2 * size_in_luma_v, s->ps.sps->height) - - (y0 + size_in_luma_v)) >> vshift; - int top_right_size = (FFMIN(x0 + 2 * size_in_luma_h, s->ps.sps->width) - - (x0 + size_in_luma_h)) >> hshift; - - if (s->ps.pps->constrained_intra_pred_flag == 1) { - int size_in_luma_pu_v = PU(size_in_luma_v); - int size_in_luma_pu_h = PU(size_in_luma_h); - int on_pu_edge_x = !av_mod_uintp2(x0, s->ps.sps->log2_min_pu_size); - int on_pu_edge_y = !av_mod_uintp2(y0, s->ps.sps->log2_min_pu_size); - if (!size_in_luma_pu_h) - size_in_luma_pu_h++; - if (cand_bottom_left == 1 && on_pu_edge_x) { - int x_left_pu = PU(x0 - 1); - int y_bottom_pu = PU(y0 + size_in_luma_v); - int max = FFMIN(size_in_luma_pu_v, s->ps.sps->min_pu_height - y_bottom_pu); - cand_bottom_left = 0; - for (i = 0; i < max; i += 2) - cand_bottom_left |= (MVF(x_left_pu, y_bottom_pu + i).pred_flag == PF_INTRA); - } - if (cand_left == 1 && on_pu_edge_x) { - int x_left_pu = PU(x0 - 1); - int y_left_pu = PU(y0); - int max = FFMIN(size_in_luma_pu_v, s->ps.sps->min_pu_height - y_left_pu); - cand_left = 0; - for (i = 0; i < max; i += 2) - cand_left |= (MVF(x_left_pu, y_left_pu + i).pred_flag == PF_INTRA); - } - if (cand_up_left == 1) { - int x_left_pu = PU(x0 - 1); - int y_top_pu = PU(y0 - 1); - cand_up_left = MVF(x_left_pu, y_top_pu).pred_flag == PF_INTRA; - } - if (cand_up == 1 && on_pu_edge_y) { - int x_top_pu = PU(x0); - int y_top_pu = PU(y0 - 1); - int max = FFMIN(size_in_luma_pu_h, s->ps.sps->min_pu_width - x_top_pu); - cand_up = 0; - for (i = 0; i < max; i += 2) - cand_up |= (MVF(x_top_pu + i, y_top_pu).pred_flag == PF_INTRA); - } - if (cand_up_right == 1 && on_pu_edge_y) { - int y_top_pu = PU(y0 - 1); - int x_right_pu = PU(x0 + size_in_luma_h); - int max = FFMIN(size_in_luma_pu_h, s->ps.sps->min_pu_width - x_right_pu); - cand_up_right = 0; - for (i = 0; i < max; i += 2) - cand_up_right |= (MVF(x_right_pu + i, y_top_pu).pred_flag == PF_INTRA); - } - memset(left, 128, 2 * MAX_TB_SIZE*sizeof(pixel)); - memset(top , 128, 2 * MAX_TB_SIZE*sizeof(pixel)); - top[-1] = 128; - } - if (cand_up_left) { - left[-1] = POS(-1, -1); - top[-1] = left[-1]; - } - if (cand_up) - memcpy(top, src - stride, size * sizeof(pixel)); - if (cand_up_right) { - memcpy(top + size, src - stride + size, size * sizeof(pixel)); - EXTEND(top + size + top_right_size, POS(size + top_right_size - 1, -1), - size - top_right_size); - } - if (cand_left) - for (i = 0; i < size; i++) - left[i] = POS(-1, i); - if (cand_bottom_left) { - for (i = size; i < size + bottom_left_size; i++) - left[i] = POS(-1, i); - EXTEND(left + size + bottom_left_size, POS(-1, size + bottom_left_size - 1), - size - bottom_left_size); - } - - if (s->ps.pps->constrained_intra_pred_flag == 1) { - if (cand_bottom_left || cand_left || cand_up_left || cand_up || cand_up_right) { - int size_max_x = x0 + ((2 * size) << hshift) < s->ps.sps->width ? - 2 * size : (s->ps.sps->width - x0) >> hshift; - int size_max_y = y0 + ((2 * size) << vshift) < s->ps.sps->height ? - 2 * size : (s->ps.sps->height - y0) >> vshift; - int j = size + (cand_bottom_left? bottom_left_size: 0) -1; - if (!cand_up_right) { - size_max_x = x0 + ((size) << hshift) < s->ps.sps->width ? - size : (s->ps.sps->width - x0) >> hshift; - } - if (!cand_bottom_left) { - size_max_y = y0 + (( size) << vshift) < s->ps.sps->height ? - size : (s->ps.sps->height - y0) >> vshift; - } - if (cand_bottom_left || cand_left || cand_up_left) { - while (j > -1 && !IS_INTRA(-1, j)) - j--; - if (!IS_INTRA(-1, j)) { - j = 0; - while (j < size_max_x && !IS_INTRA(j, -1)) - j++; - EXTEND_LEFT_CIP(top, j, j + 1); - left[-1] = top[-1]; - } - } else { - j = 0; - while (j < size_max_x && !IS_INTRA(j, -1)) - j++; - if (j > 0) - if (cand_up_left) { - EXTEND_LEFT_CIP(top, j, j + 1); - } else { - EXTEND_LEFT_CIP(top, j, j); - top[-1] = top[0]; - } - left[-1] = top[-1]; - } - left[-1] = top[-1]; - if (cand_bottom_left || cand_left) { - a = PIXEL_SPLAT_X4(left[-1]); - EXTEND_DOWN_CIP(left, 0, size_max_y); - } - if (!cand_left) - EXTEND(left, left[-1], size); - if (!cand_bottom_left) - EXTEND(left + size, left[size - 1], size); - if (x0 != 0 && y0 != 0) { - a = PIXEL_SPLAT_X4(left[size_max_y - 1]); - EXTEND_UP_CIP(left, size_max_y - 1, size_max_y); - if (!IS_INTRA(-1, - 1)) - left[-1] = left[0]; - } else if (x0 == 0) { - EXTEND(left, 0, size_max_y); - } else { - a = PIXEL_SPLAT_X4(left[size_max_y - 1]); - EXTEND_UP_CIP(left, size_max_y - 1, size_max_y); - } - top[-1] = left[-1]; - if (y0 != 0) { - a = PIXEL_SPLAT_X4(left[-1]); - EXTEND_RIGHT_CIP(top, 0, size_max_x); - } - } - } - // Infer the unavailable samples - if (!cand_bottom_left) { - if (cand_left) { - EXTEND(left + size, left[size - 1], size); - } else if (cand_up_left) { - EXTEND(left, left[-1], 2 * size); - cand_left = 1; - } else if (cand_up) { - left[-1] = top[0]; - EXTEND(left, left[-1], 2 * size); - cand_up_left = 1; - cand_left = 1; - } else if (cand_up_right) { - EXTEND(top, top[size], size); - left[-1] = top[size]; - EXTEND(left, left[-1], 2 * size); - cand_up = 1; - cand_up_left = 1; - cand_left = 1; - } else { // No samples available - left[-1] = (1 << (BIT_DEPTH - 1)); - EXTEND(top, left[-1], 2 * size); - EXTEND(left, left[-1], 2 * size); - } - } - - if (!cand_left) - EXTEND(left, left[size], size); - if (!cand_up_left) { - left[-1] = left[0]; - } - if (!cand_up) - EXTEND(top, left[-1], size); - if (!cand_up_right) - EXTEND(top + size, top[size - 1], size); - - top[-1] = left[-1]; - - // Filtering process - if (!s->ps.sps->intra_smoothing_disabled_flag && (c_idx == 0 || s->ps.sps->chroma_format_idc == 3)) { - if (mode != INTRA_DC && size != 4){ - int intra_hor_ver_dist_thresh[] = { 7, 1, 0 }; - int min_dist_vert_hor = FFMIN(FFABS((int)(mode - 26U)), - FFABS((int)(mode - 10U))); - if (min_dist_vert_hor > intra_hor_ver_dist_thresh[log2_size - 3]) { - int threshold = 1 << (BIT_DEPTH - 5); - if (s->ps.sps->sps_strong_intra_smoothing_enable_flag && c_idx == 0 && - log2_size == 5 && - FFABS(top[-1] + top[63] - 2 * top[31]) < threshold && - FFABS(left[-1] + left[63] - 2 * left[31]) < threshold) { - // We can't just overwrite values in top because it could be - // a pointer into src - filtered_top[-1] = top[-1]; - filtered_top[63] = top[63]; - for (i = 0; i < 63; i++) - filtered_top[i] = ((64 - (i + 1)) * top[-1] + - (i + 1) * top[63] + 32) >> 6; - for (i = 0; i < 63; i++) - left[i] = ((64 - (i + 1)) * left[-1] + - (i + 1) * left[63] + 32) >> 6; - top = filtered_top; - } else { - filtered_left[2 * size - 1] = left[2 * size - 1]; - filtered_top[2 * size - 1] = top[2 * size - 1]; - for (i = 2 * size - 2; i >= 0; i--) - filtered_left[i] = (left[i + 1] + 2 * left[i] + - left[i - 1] + 2) >> 2; - filtered_top[-1] = - filtered_left[-1] = (left[0] + 2 * left[-1] + top[0] + 2) >> 2; - for (i = 2 * size - 2; i >= 0; i--) - filtered_top[i] = (top[i + 1] + 2 * top[i] + - top[i - 1] + 2) >> 2; - left = filtered_left; - top = filtered_top; - } - } - } - } - - switch (mode) { - case INTRA_PLANAR: - s->hpc.pred_planar[log2_size - 2]((uint8_t *)src, (uint8_t *)top, - (uint8_t *)left, stride); - break; - case INTRA_DC: - s->hpc.pred_dc((uint8_t *)src, (uint8_t *)top, - (uint8_t *)left, stride, log2_size, c_idx); - break; - default: - s->hpc.pred_angular[log2_size - 2]((uint8_t *)src, (uint8_t *)top, - (uint8_t *)left, stride, c_idx, - mode); - break; - } -} - -#define INTRA_PRED(size) \ -static void FUNC(intra_pred_ ## size)(HEVCLocalContext *lc, int x0, int y0, int c_idx) \ -{ \ - FUNC(intra_pred)(lc, x0, y0, size, c_idx); \ -} - -INTRA_PRED(2) -INTRA_PRED(3) -INTRA_PRED(4) -INTRA_PRED(5) - -#undef INTRA_PRED - -static av_always_inline void FUNC(pred_planar)(uint8_t *_src, const uint8_t *_top, - const uint8_t *_left, ptrdiff_t stride, - int trafo_size) -{ - int x, y; - pixel *src = (pixel *)_src; - const pixel *top = (const pixel *)_top; - const pixel *left = (const pixel *)_left; - int size = 1 << trafo_size; - for (y = 0; y < size; y++) - for (x = 0; x < size; x++) - POS(x, y) = ((size - 1 - x) * left[y] + (x + 1) * top[size] + - (size - 1 - y) * top[x] + (y + 1) * left[size] + size) >> (trafo_size + 1); -} - -#define PRED_PLANAR(size)\ -static void FUNC(pred_planar_ ## size)(uint8_t *src, const uint8_t *top, \ - const uint8_t *left, ptrdiff_t stride) \ -{ \ - FUNC(pred_planar)(src, top, left, stride, size + 2); \ -} - -PRED_PLANAR(0) -PRED_PLANAR(1) -PRED_PLANAR(2) -PRED_PLANAR(3) - -#undef PRED_PLANAR - -static void FUNC(pred_dc)(uint8_t *_src, const uint8_t *_top, - const uint8_t *_left, - ptrdiff_t stride, int log2_size, int c_idx) -{ - int i, j, x, y; - int size = (1 << log2_size); - pixel *src = (pixel *)_src; - const pixel *top = (const pixel *)_top; - const pixel *left = (const pixel *)_left; - int dc = size; - pixel4 a; - for (i = 0; i < size; i++) - dc += left[i] + top[i]; - - dc >>= log2_size + 1; - - a = PIXEL_SPLAT_X4(dc); - - for (i = 0; i < size; i++) - for (j = 0; j < size; j+=4) - AV_WN4P(&POS(j, i), a); - - if (c_idx == 0 && size < 32) { - POS(0, 0) = (left[0] + 2 * dc + top[0] + 2) >> 2; - for (x = 1; x < size; x++) - POS(x, 0) = (top[x] + 3 * dc + 2) >> 2; - for (y = 1; y < size; y++) - POS(0, y) = (left[y] + 3 * dc + 2) >> 2; - } -} - -static av_always_inline void FUNC(pred_angular)(uint8_t *_src, - const uint8_t *_top, - const uint8_t *_left, - ptrdiff_t stride, int c_idx, - int mode, int size) -{ - int x, y; - pixel *src = (pixel *)_src; - const pixel *top = (const pixel *)_top; - const pixel *left = (const pixel *)_left; - - static const int intra_pred_angle[] = { - 32, 26, 21, 17, 13, 9, 5, 2, 0, -2, -5, -9, -13, -17, -21, -26, -32, - -26, -21, -17, -13, -9, -5, -2, 0, 2, 5, 9, 13, 17, 21, 26, 32 - }; - static const int inv_angle[] = { - -4096, -1638, -910, -630, -482, -390, -315, -256, -315, -390, -482, - -630, -910, -1638, -4096 - }; - - int angle = intra_pred_angle[mode - 2]; - pixel ref_array[3 * MAX_TB_SIZE + 4]; - pixel *ref_tmp = ref_array + size; - const pixel *ref; - int last = (size * angle) >> 5; - - if (mode >= 18) { - ref = top - 1; - if (angle < 0 && last < -1) { - for (x = 0; x <= size; x += 4) - AV_WN4P(&ref_tmp[x], AV_RN4P(&top[x - 1])); - for (x = last; x <= -1; x++) - ref_tmp[x] = left[-1 + ((x * inv_angle[mode - 11] + 128) >> 8)]; - ref = ref_tmp; - } - - for (y = 0; y < size; y++) { - int idx = ((y + 1) * angle) >> 5; - int fact = ((y + 1) * angle) & 31; - if (fact) { - for (x = 0; x < size; x += 4) { - POS(x , y) = ((32 - fact) * ref[x + idx + 1] + - fact * ref[x + idx + 2] + 16) >> 5; - POS(x + 1, y) = ((32 - fact) * ref[x + 1 + idx + 1] + - fact * ref[x + 1 + idx + 2] + 16) >> 5; - POS(x + 2, y) = ((32 - fact) * ref[x + 2 + idx + 1] + - fact * ref[x + 2 + idx + 2] + 16) >> 5; - POS(x + 3, y) = ((32 - fact) * ref[x + 3 + idx + 1] + - fact * ref[x + 3 + idx + 2] + 16) >> 5; - } - } else { - for (x = 0; x < size; x += 4) - AV_WN4P(&POS(x, y), AV_RN4P(&ref[x + idx + 1])); - } - } - if (mode == 26 && c_idx == 0 && size < 32) { - for (y = 0; y < size; y++) - POS(0, y) = av_clip_pixel(top[0] + ((left[y] - left[-1]) >> 1)); - } - } else { - ref = left - 1; - if (angle < 0 && last < -1) { - for (x = 0; x <= size; x += 4) - AV_WN4P(&ref_tmp[x], AV_RN4P(&left[x - 1])); - for (x = last; x <= -1; x++) - ref_tmp[x] = top[-1 + ((x * inv_angle[mode - 11] + 128) >> 8)]; - ref = ref_tmp; - } - - for (x = 0; x < size; x++) { - int idx = ((x + 1) * angle) >> 5; - int fact = ((x + 1) * angle) & 31; - if (fact) { - for (y = 0; y < size; y++) { - POS(x, y) = ((32 - fact) * ref[y + idx + 1] + - fact * ref[y + idx + 2] + 16) >> 5; - } - } else { - for (y = 0; y < size; y++) - POS(x, y) = ref[y + idx + 1]; - } - } - if (mode == 10 && c_idx == 0 && size < 32) { - for (x = 0; x < size; x += 4) { - POS(x, 0) = av_clip_pixel(left[0] + ((top[x ] - top[-1]) >> 1)); - POS(x + 1, 0) = av_clip_pixel(left[0] + ((top[x + 1] - top[-1]) >> 1)); - POS(x + 2, 0) = av_clip_pixel(left[0] + ((top[x + 2] - top[-1]) >> 1)); - POS(x + 3, 0) = av_clip_pixel(left[0] + ((top[x + 3] - top[-1]) >> 1)); - } - } - } -} - -static void FUNC(pred_angular_0)(uint8_t *src, const uint8_t *top, - const uint8_t *left, - ptrdiff_t stride, int c_idx, int mode) -{ - FUNC(pred_angular)(src, top, left, stride, c_idx, mode, 1 << 2); -} - -static void FUNC(pred_angular_1)(uint8_t *src, const uint8_t *top, - const uint8_t *left, - ptrdiff_t stride, int c_idx, int mode) -{ - FUNC(pred_angular)(src, top, left, stride, c_idx, mode, 1 << 3); -} - -static void FUNC(pred_angular_2)(uint8_t *src, const uint8_t *top, - const uint8_t *left, - ptrdiff_t stride, int c_idx, int mode) -{ - FUNC(pred_angular)(src, top, left, stride, c_idx, mode, 1 << 4); -} - -static void FUNC(pred_angular_3)(uint8_t *src, const uint8_t *top, - const uint8_t *left, - ptrdiff_t stride, int c_idx, int mode) -{ - FUNC(pred_angular)(src, top, left, stride, c_idx, mode, 1 << 5); -} - -#undef EXTEND_LEFT_CIP -#undef EXTEND_RIGHT_CIP -#undef EXTEND_UP_CIP -#undef EXTEND_DOWN_CIP -#undef IS_INTRA -#undef MVF_PU -#undef MVF -#undef PU -#undef EXTEND -#undef MIN_TB_ADDR_ZS -#undef POS diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/kbdwin.h b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/kbdwin.h deleted file mode 100644 index 4185c4206f8314fedfd4b0bcc678269349ea4eee..0000000000000000000000000000000000000000 --- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/kbdwin.h +++ /dev/null @@ -1,38 +0,0 @@ -/* - * This file is part of FFmpeg. - * - * FFmpeg is free software; you can redistribute it and/or - * modify it under the terms of the GNU Lesser General Public - * License as published by the Free Software Foundation; either - * version 2.1 of the License, or (at your option) any later version. - * - * FFmpeg is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * Lesser General Public License for more details. - * - * You should have received a copy of the GNU Lesser General Public - * License along with FFmpeg; if not, write to the Free Software - * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA - */ - -#ifndef AVCODEC_KBDWIN_H -#define AVCODEC_KBDWIN_H - -#include - -/** - * Maximum window size for ff_kbd_window_init. - */ -#define FF_KBD_WINDOW_MAX 1024 - -/** - * Generate a Kaiser-Bessel Derived Window. - * @param window pointer to half window - * @param alpha determines window shape - * @param n size of half window, max FF_KBD_WINDOW_MAX - */ -void ff_kbd_window_init(float *window, float alpha, int n); -void ff_kbd_window_init_fixed(int32_t *window, float alpha, int n); - -#endif /* AVCODEC_KBDWIN_H */ diff --git a/spaces/congsaPfin/Manga-OCR/logs/Download Become a Legend APK Mod for Android - Unlimited Money and Gems.md b/spaces/congsaPfin/Manga-OCR/logs/Download Become a Legend APK Mod for Android - Unlimited Money and Gems.md deleted file mode 100644 index 6f75ba84f2533d396a9e64052b897de693f28fa2..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Download Become a Legend APK Mod for Android - Unlimited Money and Gems.md +++ /dev/null @@ -1,52 +0,0 @@ -
        -

        Become a Legend APK Mod: How to Download and Play This Amazing Soccer Game

        | Introduction: what is Become a Legend, why it is popular, what are the features and benefits of the mod version | |

        What is Become a Legend?

        | Description: a soccer game where you create your own player and lead him to glory, available for Android devices | |

        Why is Become a Legend Popular?

        | Reasons: realistic graphics, gameplay, and physics, customization options, online mode, challenges, achievements, etc. | |

        What are the Features and Benefits of the Mod Version?

        | Advantages: unlimited money, unlocked items, no ads, no root required, easy installation, etc. | |

        Unlimited Money

        | Explanation: how to get unlimited money in the game, what you can buy with it, how it enhances your experience | |

        Unlocked Items

        | Explanation: how to access all the items in the game, such as jerseys, boots, accessories, skills, etc., how they improve your performance | |

        No Ads

        | Explanation: how to remove ads from the game, why they are annoying and distracting, how they affect your gameplay | |

        No Root Required

        | Explanation: how to install the mod without rooting your device, why rooting is risky and complicated, how it voids your warranty | |

        Easy Installation

        | Explanation: how to download and install the mod in a few simple steps, what you need to do before and after installation | |

        How to Download and Install Become a Legend APK Mod?

        | Steps: provide a link to a reliable source for downloading the mod file, explain how to enable unknown sources on your device settings, explain how to locate and install the mod file, explain how to launch and enjoy the game | |

        Tips and Tricks for Playing Become a Legend APK Mod

        | Suggestions: provide some useful tips and tricks for playing the game, such as how to create your player, how to train your skills, how to choose your team, how to win matches, etc. | |

        Frequently Asked Questions about Become a Legend APK Mod

        | FAQs: provide some common questions and answers about the game and the mod, such as is it safe, is it legal, is it compatible with other devices or versions, etc. | |

        Conclusion

        | Summary: recap the main points of the article, restate the benefits of playing Become a Legend APK Mod, invite the reader to try it out and share their feedback | Third, I would write the article based on the outline I have created. I would use a conversational style as written by a human (use an informal tone, utilize personal pronouns, keep it simple, engage the reader, use the active voice, keep it brief, use rhetorical questions, and incorporate analogies and metaphors). I would also use visuals (such as images or tables) to enhance my article and make it more appealing. I would also edit my work and check for grammatical mistakes and unnecessary repetitions. Here is an example of how the article table could look like: | HTML Code | Preview | | --- | --- | | ``
        ``
        `Become a Legend APK Mod: How to Download and Play This Amazing Soccer Game`
        ``
        ``
        `

        Become a Legend APK Mod: How to Download and Play This Amazing Soccer Game

        `< `

        `If you are a soccer fan and you love playing games on your Android device, then you must have heard of Become a Legend. It is one of the most popular and realistic soccer games available on the Google Play Store. But what if I told you that you can make it even better with a simple mod? Yes, you heard me right. You can download and install Become a Legend APK Mod and enjoy unlimited money, unlocked items, no ads, and many more features that will make your gaming experience more fun and exciting. In this article, I will tell you everything you need to know about this amazing mod, how to download and install it, and some tips and tricks for playing it. So, are you ready to become a legend? Let's get started!

        -

        become a legend apk mod


        Download Zip ····· https://urlca.com/2uObiE



        `
        `

        What is Become a Legend?

        `
        `

        `Become a Legend is a soccer game where you create your own player and lead him to glory. You can customize your player's appearance, name, nationality, position, skills, and attributes. You can also choose your team from over 100 clubs from different countries and leagues. You can play in various modes, such as career mode, online mode, challenge mode, and training mode. You can also compete with other players from around the world and earn achievements and rewards.

        `
        `

        `The game has realistic graphics, gameplay, and physics that will make you feel like you are on the pitch. You can control your player with simple gestures and buttons, and perform amazing moves and shots. You can also see your player's stats and progress as he improves his skills and reputation. You can also unlock new items and features as you advance in the game.

        ` |

        Become a Legend APK Mod: How to Download and Play This Amazing Soccer Game

        If you are a soccer fan and you love playing games on your Android device, then you must have heard of Become a Legend. It is one of the most popular and realistic soccer games available on the Google Play Store. But what if I told you that you can make it even better with a simple mod? Yes, you heard me right. You can download and install Become a Legend APK Mod and enjoy unlimited money, unlocked items, no ads, and many more features that will make your gaming experience more fun and exciting. In this article, I will tell you everything you need to know about this amazing mod, how to download and install it, and some tips and tricks for playing it. So, are you ready to become a legend? Let's get started!

        What is Become a Legend?

        Become a Legend is a soccer game where you create your own player and lead him to glory. You can customize your player's appearance, name, nationality, position, skills, and attributes. You can also choose your team from over 100 clubs from different countries and leagues. You can play in various modes, such as career mode, online mode, challenge mode, and training mode. You can also compete with other players from around the world and earn achievements and rewards.

        -

        become a legend mod apk unlimited money
        -become a legend mod apk latest version
        -become a legend mod apk android 1
        -become a legend mod apk download
        -become a legend mod apk revdl
        -become a legend mod apk hack
        -become a legend mod apk offline
        -become a legend mod apk free shopping
        -become a legend mod apk no ads
        -become a legend mod apk 1.0.17
        -become a legend mod apk rexdl
        -become a legend mod apk happymod
        -become a legend mod apk 2023
        -become a legend mod apk unlimited gems
        -become a legend mod apk unlocked
        -become a legend mod apk obb
        -become a legend mod apk all cars
        -become a legend mod apk unlimited nitro
        -become a legend mod apk unlimited coins
        -become a legend mod apk vip
        -become a legend mod apk full version
        -become a legend mod apk premium
        -become a legend mod apk pro
        -become a legend mod apk mega
        -become a legend mod apk god mode
        -become a legend racing game mod apk
        -download game become a legend mod apk
        -how to install become a legend mod apk
        -how to play become a legend mod apk
        -how to update become a legend mod apk
        -is become a legend mod apk safe
        -where to download become a legend mod apk
        -why download become a legend mod apk
        -what is the best site to download become a legend mod apk
        -what is the difference between become a legend and become a legend mod apk
        -what are the features of become a legend mod apk
        -what are the requirements for playing become a legend mod apk
        -what are the benefits of using become a legend mod apk
        -what are the drawbacks of using become a legend mod apk
        -what are the alternatives to become a legend mod apk
        -what are the reviews of become a legend mod apk
        -what are the ratings of become a legend mod apk
        -what are the tips and tricks for playing become a legend mod apk
        -what are the cheats and hacks for playing become a legend mod apk
        -what are the new updates for become a legend mod apk
        -what are the bugs and glitches in become a legend mod apk
        -what are the solutions for fixing the problems in become a legend mod apk

        The game has realistic graphics, gameplay, and physics that will make you feel like you are on the pitch. You can control your player with simple gestures and buttons, and perform amazing moves and shots. You can also see your player's stats and progress as he improves his skills and reputation. You can also unlock new items and features as you advance in the game.

        | `

        Why is Become a Legend Popular?

        `
        `

        `Become a Legend is not just another soccer game. It is a game that lets you live your dream of becoming a soccer star. You can create your own player and make him look like you or your favorite player. You can also choose your team and play in different leagues and tournaments. You can also challenge other players online and show off your skills and achievements.

        `
        `

        `The game is popular because it has realistic graphics, gameplay, and physics that make you feel like you are on the pitch. The game also has a lot of customization options that let you personalize your player and your team. You can also upgrade your skills and attributes as you progress in the game. You can also unlock new items and features that make the game more fun and exciting.

        `
        `

        What are the Features and Benefits of the Mod Version?

        `
        `

        `If you think that Become a Legend is already awesome, wait until you try the mod version. The mod version is a modified version of the game that gives you access to unlimited money, unlocked items, no ads, and many more features that will make your gaming experience more enjoyable and satisfying. Here are some of the features and benefits of the mod version:

        `
        `

        Unlimited Money

        `
        `

        `With unlimited money, you can buy anything you want in the game without worrying about running out of cash. You can buy new jerseys, boots, accessories, skills, and more. You can also upgrade your player and your team to the max level. You can also spend money on training, scouting, transfers, and other aspects of the game. With unlimited money, you can have the best player and the best team in the game.

        ` |

        Why is Become a Legend Popular?

        Become a Legend is not just another soccer game. It is a game that lets you live your dream of becoming a soccer star. You can create your own player and make him look like you or your favorite player. You can also choose your team and play in different leagues and tournaments. You can also challenge other players online and show off your skills and achievements.

        The game is popular because it has realistic graphics, gameplay, and physics that make you feel like you are on the pitch. The game also has a lot of customization options that let you personalize your player and your team. You can also upgrade your skills and attributes as you progress in the game. You can also unlock new items and features that make the game more fun and exciting.

        What are the Features and Benefits of the Mod Version?

        If you think that Become a Legend is already awesome, wait until you try the mod version. The mod version is a modified version of the game that gives you access to unlimited money, unlocked items, no ads, and many more features that will make your gaming experience more enjoyable and satisfying. Here are some of the features and benefits of the mod version:

        Unlimited Money

        With unlimited money, you can buy anything you want in the game without worrying about running out of cash. You can buy new jerseys, boots, accessories, skills, and more. You can also upgrade your player and your team to the max level. You can also spend money on training, scouting, transfers, and other aspects of the game. With unlimited money, you can have the best player and the best team in the game.

        | `

        Unlocked Items

        `
        `

        `With unlocked items, you can access all the items in the game without having to unlock them by playing or paying. You can choose from hundreds of jerseys, boots, accessories, skills, and more. You can also customize your player and your team to your liking. You can also use different items for different matches and situations. With unlocked items, you can have the most diverse and unique player and team in the game.

        `
        `

        No Ads

        `
        `

        `With no ads, you can play the game without any interruptions or distractions. You can enjoy the game without having to watch annoying and irrelevant ads that pop up every few minutes. You can also save your data and battery by not loading ads. You can also avoid clicking on malicious or scammy ads that might harm your device or steal your information. With no ads, you can have the most smooth and pleasant gaming experience.

        `
        `

        No Root Required

        `
        `

        `With no root required, you can install the mod without having to root your device. Rooting is a process that gives you full access to your device's system, but it also comes with many risks and complications. Rooting can void your warranty, expose your device to viruses and malware, cause errors and crashes, and make your device incompatible with some apps and updates. With no root required, you can install the mod safely and easily without any worries.

        ` |

        Unlocked Items

        With unlocked items, you can access all the items in the game without having to unlock them by playing or paying. You can choose from hundreds of jerseys, boots, accessories, skills, and more. You can also customize your player and your team to your liking. You can also use different items for different matches and situations. With unlocked items, you can have the most diverse and unique player and team in the game.

        No Ads

        With no ads, you can play the game without any interruptions or distractions. You can enjoy the game without having to watch annoying and irrelevant ads that pop up every few minutes. You can also save your data and battery by not loading ads. You can also avoid clicking on malicious or scammy ads that might harm your device or steal your information. With no ads, you can have the most smooth and pleasant gaming experience.

        No Root Required

        With no root required, you can install the mod without having to root your device. Rooting is a process that gives you full access to your device's system, but it also comes with many risks and complications. Rooting can void your warranty, expose your device to viruses and malware, cause errors and crashes, and make your device incompatible with some apps and updates. With no root required, you can install the mod safely and easily without any worries.

        | `

        Easy Installation

        `
        `

        `With easy installation, you can download and install the mod in a few simple steps. You don't need any special skills or tools to do it. You just need to follow the instructions that I will provide you in the next section. You can also uninstall the mod anytime you want without any problems. With easy installation, you can enjoy the mod without any hassle.

        `
        `

        How to Download and Install Become a Legend APK Mod?

        `
        `

        `Now that you know what the mod can do for you, you must be eager to try it out. Well, you are in luck, because I will show you how to download and install the mod in a few simple steps. Here is what you need to do:

        `
        `
          `
          `
        1. First, you need to download the mod file from a reliable source. You can use this link to download the latest version of the mod. The file size is about 100 MB, so make sure you have enough space on your device.
        2. `
          `
        3. Second, you need to enable unknown sources on your device settings. This will allow you to install apps from sources other than the Google Play Store. To do this, go to Settings > Security > Unknown Sources and toggle it on.
        4. `
          `
        5. Third, you need to locate and install the mod file. You can use a file manager app to find the file in your downloads folder. Tap on the file and follow the instructions on the screen to install it.
        6. `
          `
        7. Fourth, you need to launch and enjoy the game. You can find the game icon on your home screen or app drawer. Tap on it and start playing. You will see that you have unlimited money, unlocked items, no ads, and other features enabled.
        8. `
          `
        ` |

        Easy Installation

        With easy installation, you can download and install the mod in a few simple steps. You don't need any special skills or tools to do it. You just need to follow the instructions that I will provide you in the next section. You can also uninstall the mod anytime you want without any problems. With easy installation, you can enjoy the mod without any hassle.

        How to Download and Install Become a Legend APK Mod?

        Now that you know what the mod can do for you, you must be eager to try it out. Well, you are in luck, because I will show you how to download and install the mod in a few simple steps. Here is what you need to do:

        1. First, you need to download the mod file from a reliable source. You can use this link to download the latest version of the mod. The file size is about 100 MB, so make sure you have enough space on your device.
        2. Second, you need to enable unknown sources on your device settings. This will allow you to install apps from sources other than the Google Play Store. To do this, go to Settings > Security > Unknown Sources and toggle it on.
        3. Third, you need to locate and install the mod file. You can use a file manager app to find the file in your downloads folder. Tap on the file and follow the instructions on the screen to install it.
        4. Fourth, you need to launch and enjoy the game. You can find the game icon on your home screen or app drawer. Tap on it and start playing. You will see that you have unlimited money, unlocked items, no ads, and other features enabled.
        | `

        Tips and Tricks for Playing Become a Legend APK Mod

        `
        `

        `Now that you have downloaded and installed the mod, you might be wondering how to play it and make the most of it. Well, don't worry, because I have some tips and tricks for you that will help you become a legend in no time. Here are some of them:

        `
        `
          `
          `
        • Create your player wisely. You can customize your player's appearance, name, nationality, position, skills, and attributes. You can also use the mod's unlimited money to buy and upgrade your skills and attributes. However, you should also consider your player's strengths and weaknesses, and choose a position and a team that suits your style and preferences.
        • `
          `
        • Train your skills regularly. You can use the training mode to improve your skills and attributes. You can also use the mod's unlocked items to equip your player with the best jerseys, boots, accessories, and skills. However, you should also practice your skills in different situations and scenarios, such as dribbling, passing, shooting, defending, etc.
        • `
          `
        • Choose your team carefully. You can choose your team from over 100 clubs from different countries and leagues. You can also use the mod's unlimited money to buy and sell players, scout new talents, and manage your team. However, you should also consider your team's chemistry, formation, strategy, and performance, and choose a team that matches your goals and ambitions.
        • `
          `
        ` |

        Tips and Tricks for Playing Become a Legend APK Mod

        Now that you have downloaded and installed the mod, you might be wondering how to play it and make the most of it. Well, don't worry, because I have some tips and tricks for you that will help you become a legend in no time. Here are some of them:

        • Create your player wisely. You can customize your player's appearance, name, nationality, position, skills, and attributes. You can also use the mod's unlimited money to buy and upgrade your skills and attributes. However, you should also consider your player's strengths and weaknesses, and choose a position and a team that suits your style and preferences.
        • Train your skills regularly. You can use the training mode to improve your skills and attributes. You can also use the mod's unlocked items to equip your player with the best jerseys, boots, accessories, and skills. However, you should also practice your skills in different situations and scenarios, such as dribbling, passing, shooting, defending, etc.
        • Choose your team carefully. You can choose your team from over 100 clubs from different countries and leagues. You can also use the mod's unlimited money to buy and sell players, scout new talents, and manage your team. However, you should also consider your team's chemistry, formation, strategy, and performance, and choose a team that matches your goals and ambitions.
        | `
          `
          `
        • Win matches and trophies. You can play in various modes, such as career mode, online mode, challenge mode, and training mode. You can also use the mod's unlimited money to boost your team and your player. However, you should also play with skill and strategy, and try to win as many matches and trophies as possible. You can also earn achievements and rewards for your performance.
        • `
          `
        • Have fun and enjoy the game. You can play the game however you want, and enjoy the realistic graphics, gameplay, and physics. You can also interact with other players online, and share your feedback and opinions. You can also explore the game's features and options, and discover new things and surprises. With the mod, you can have the most fun and enjoyable gaming experience.
        • `
          `
        `
        `

        Frequently Asked Questions about Become a Legend APK Mod

        `
        `

        `You might have some questions about the game and the mod that you want to know the answers to. Well, don't worry, because I have prepared some frequently asked questions and answers for you that will clear your doubts and queries. Here are some of them:

        `
        ``
        ``
        ``
        `` |
        • Win matches and trophies. You can play in various modes, such as career mode, online mode, challenge mode, and training mode. You can also use the mod's unlimited money to boost your team and your player. However, you should also play with skill and strategy, and try to win as many matches and trophies as possible. You can also earn achievements and rewards for your performance.
        • Have fun and enjoy the game. You can play the game however you want, and enjoy the realistic graphics, gameplay, and physics. You can also interact with other players online, and share your feedback and opinions. You can also explore the game's features and options, and discover new things and surprises. With the mod, you can have the most fun and enjoyable gaming experience.

        Frequently Asked Questions about Become a Legend APK Mod

        You might have some questions about the game and the mod that you want to know the answers to. Well, don't worry, because I have prepared some frequently asked questions and answers for you that will clear your doubts and queries. Here are some of them:

        QuestionAnswer
        Is Become a Legend APK Mod safe to download and install?Yes, it is safe to download and install the mod from a reliable source. The mod does not contain any viruses or malware that might harm your device or steal your information. However, you should always scan the file before installing it, and use a VPN or proxy to protect your privacy.
        Is Become a Legend APK Mod legal to use?It depends on your location and the laws of your country. The mod is not an official version of the game, and it modifies the original game's features and functions. This might violate the game's terms of service and the developer's rights. Therefore, you should use the mod at your own risk, and be aware of the possible consequences.
        | ``
        ``
        `
        QuestionAnswer
        Is Become a Legend APK Mod safe to download and install?Yes, it is safe to download and install the mod from a reliable source. The mod does not contain any viruses or malware that might harm your device or steal your information. However, you should always scan the file before installing it, and use a VPN or proxy to protect your privacy.
        Is Become a Legend APK Mod legal to use?It depends on your location and the laws of your country. The mod is not an official version of the game, and it modifies the original game's features and functions. This might violate the game's terms of service and the developer's rights. Therefore, you should use the mod at your own risk, and be aware of the possible consequences.
        Is Become a Legend APK Mod compatible with other devices or versions?Yes, it is compatible with most Android devices and versions. The mod requires Android 4.4 or higher to run smoothly. However, you should also check the compatibility of your device and the mod before downloading and installing it. You can also check the reviews and ratings of other users who have tried the mod on their devices.
        Is Become a Legend APK Mod free to use?Yes, it is free to use. You don't need to pay anything to download and install the mod. You also don't need to spend any money to buy or upgrade anything in the game. However, you should also respect the developer's work and support them if you like the game and the mod.
        `
        `

        Conclusion

        `
        `

        `Become a Legend APK Mod is a great way to enjoy one of the best soccer games on your Android device. You can create your own player and lead him to glory. You can also enjoy unlimited money, unlocked items, no ads, and many more features that will make your gaming experience more fun and exciting. You can also download and install the mod easily and safely without any hassle. You can also play the game with skill and strategy, and win matches and trophies. You can also have fun and enjoy the game with realistic graphics, gameplay, and physics.

        `
        `

        `So, what are you waiting for? Download Become a Legend APK Mod now and become a legend yourself. You won't regret it. And don't forget to share your feedback and opinions with me and other players. I would love to hear from you.

        `
        `` | Is Become a Legend APK Mod compatible with other devices or versions?Yes, it is compatible with most Android devices and versions. The mod requires Android 4.4 or higher to run smoothly. However, you should also check the compatibility of your device and the mod before downloading and installing it. You can also check the reviews and ratings of other users who have tried the mod on their devices.Is Become a Legend APK Mod free to use?Yes, it is free to use. You don't need to pay anything to download and install the mod. You also don't need to spend any money to buy or upgrade anything in the game. However, you should also respect the developer's work and support them if you like the game and the mod.

        Conclusion

        Become a Legend APK Mod is a great way to enjoy one of the best soccer games on your Android device. You can create your own player and lead him to glory. You can also enjoy unlimited money, unlocked items, no ads, and many more features that will make your gaming experience more fun and exciting. You can also download and install the mod easily and safely without any hassle. You can also play the game with skill and strategy, and win matches and trophies. You can also have fun and enjoy the game with realistic graphics, gameplay, and physics.

        So, what are you waiting for? Download Become a Legend APK Mod now and become a legend yourself. You won't regret it. And don't forget to share your feedback and opinions with me and other players. I would love to hear from you.

        |

        197e85843d
        -
        -
        \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/Download King Von 100 Bricks The Rap Song That Shook the Industry.md b/spaces/congsaPfin/Manga-OCR/logs/Download King Von 100 Bricks The Rap Song That Shook the Industry.md deleted file mode 100644 index 17078f076913359a3241191191b7bc5124b41330..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Download King Von 100 Bricks The Rap Song That Shook the Industry.md +++ /dev/null @@ -1,138 +0,0 @@ - -

        Download King Von 100 Bricks: A Guide for Hip-Hop Fans

        -

        If you are a fan of hip-hop music, you might have heard of King Von, a rising star in the rap scene. He is known for his storytelling skills, his street credibility, and his collaborations with other artists. One of his most popular songs is 100 Bricks, a track he made with Lil Berete, a Canadian rapper. In this article, we will tell you everything you need to know about King Von 100 Bricks, including what it is, who made it, what it means, and how to download it legally and safely.

        -

        download king von 100 bricks


        Download File === https://urlca.com/2uOdmc



        -

        Who is King Von and what is his music style?

        -

        King Von, whose real name is Dayvon Daquan Bennett, was born on August 9, 1994, in Chicago, Illinois. He grew up in a notorious neighborhood called O'Block, where he was involved in gang activities and violence. He was arrested several times for various charges, including murder, attempted murder, and robbery. He spent more than three years in jail before being acquitted in 2017.

        -

        While in jail, he started rapping as a hobby and wrote some songs. He was inspired by his childhood friend Lil Durk, who was already a successful rapper. After he got out of jail, he signed with Lil Durk's label OTF (Only The Family) and released his debut single Crazy Story in 2018. The song went viral on YouTube and Spotify, earning him millions of views and streams. He followed up with two more parts of Crazy Story, featuring Lil Durk and Polo G respectively.

        -

        King Von's music style is characterized by his vivid storytelling, his authentic street experiences, his catchy hooks, and his aggressive delivery. He often raps about his life in O'Block, his encounters with the law, his loyalty to his friends, and his beefs with his enemies. Some of his influences include Tupac Shakur, DMX, Chief Keef, and Young Thug. Some of his other popular songs include Took Her To The O, Grandson For President, Gleesh Place, and Wayne's Story.

        -

        What is 100 Bricks and how did it come about?

        -

        100 Bricks is a song by King Von featuring Lil Berete, a rapper from Toronto, Canada. The song was released on July 19, 2020, as part of Lil Berete's album Icebreaker 2. The song was produced by 210zn & Alottazeros.

        -

        The song is about the rappers' ambition to make money by selling drugs. They boast about their lifestyle, their weapons, their women, and their enemies. They also express their love for money and their disdain for snitches. The title of the song refers to the amount of cocaine they claim to have.

        -

        The song came about as a result of the friendship between King Von and Lil Berete, who met through social media and decided to work together. They recorded the song in Atlanta, where King Von was based at the time. The song was well-received by the fans of both artists, who praised their chemistry and their energy. The song also attracted some controversy, as some people accused Lil Berete of copying King Von's style and flow.

        -

        download king von 100 bricks mp3
        -download king von 100 bricks video
        -download king von 100 bricks lyrics
        -download king von 100 bricks soundcloud
        -download king von 100 bricks feat lil berete
        -download king von 100 bricks instrumental
        -download king von 100 bricks remix
        -download king von 100 bricks clean
        -download king von 100 bricks album
        -download king von 100 bricks song
        -download king von 100 bricks free
        -download king von 100 bricks online
        -download king von 100 bricks official
        -download king von 100 bricks audio
        -download king von 100 bricks youtube
        -download king von 100 bricks spotify
        -download king von 100 bricks genius
        -download king von 100 bricks rap genius
        -download king von 100 bricks apple music
        -download king von 100 bricks amazon music
        -download king von 100 bricks tidal
        -download king von 100 bricks deezer
        -download king von 100 bricks pandora
        -download king von 100 bricks napster
        -download king von 100 bricks iheartradio
        -download king von 100 bricks google play music
        -download king von 100 bricks soundclick
        -download king von 100 bricks datpiff
        -download king von 100 bricks audiomack
        -download king von 100 bricks mymixtapez
        -download king von 100 bricks spinrilla
        -download king von 100 bricks livemixtapes
        -download king von 100 bricks hotnewhiphop
        -download king von 100 bricks worldstarhiphop
        -download king von 100 bricks complex
        -download king von 100 bricks xxl magazine
        -download king von 100 bricks the source magazine
        -download king von 100 bricks vibe magazine
        -download king von 100 bricks pitchfork media
        -download king von 100 bricks rolling stone magazine.

        -

        What are the lyrics and meaning of 100 Bricks?

        -

        The lyrics of 100 Bricks are mostly composed of braggadocious and violent bars, typical of the drill genre. The rappers use a lot of slang, metaphors, and references to their personal lives. Here are some of the notable lines and their meanings:

        -
          -
        • "I got a hundred bricks, I'm tryna sell 'em all" - This is the chorus of the song, repeated by both rappers. It means that they have a lot of cocaine that they want to sell for profit.
        • -
        • "I'm from O'Block, we don't do no talkin'" - This is King Von's opening line, where he mentions his neighborhood in Chicago, known for its violence and gang activity. He implies that he and his crew are not afraid to act on their threats.
        • -
        • "I got a Draco with a drum, I call it Tommy Lee" - This is Lil Berete's first line, where he refers to his firearm, a Draco AK-47 with a large magazine. He compares it to Tommy Lee, the drummer of the rock band Motley Crue, known for his wild antics.
        • -
        • "I got a bad bitch from overseas, she don't speak no English" - This is another line by Lil Berete, where he boasts about his girlfriend, who is from a foreign country and does not speak his language. He suggests that he does not care about communication, as long as she is attractive.
        • -
        • "I'm on parole, I can't leave the state" - This is a line by King Von, where he reveals that he is under legal supervision and cannot travel outside his state. He implies that this does not stop him from making money and living his life.
        • -
        -

        How to download King Von 100 Bricks legally and safely?

        -

        If you want to listen to King Von 100 Bricks on your device, you might be tempted to search for it on Google and download it from any website that offers it. However, this is not a good idea, as you might end up with malware, viruses, or legal issues. Instead, you should use one of the following websites or platforms that offer free music downloads legally and safely. We will compare them based on their features, pros, and cons.

        -

        SoundCloud

        -

        SoundCloud is one of the most popular platforms for streaming and downloading music online. It has millions of songs from various genres and artists, including King Von 100 Bricks. You can access SoundCloud from your browser or download its app on your phone or tablet. To download King Von 100 Bricks from SoundCloud, you need to follow these steps:

        -
          -
        1. Go to https://soundcloud.com/ or open the SoundCloud app on your device.
        2. -
        3. Search for "King Von 100 Bricks" in the search bar.
        4. -
        5. Select the song from the results and click on the "More" button (three dots).
        6. -
        7. Click on "Download file" and choose a location to save it on your device.
        8. -
        -

        Pros and cons of SoundCloud

        - - - - - - -
        ProsCons
        - Easy to use and navigate- Not all songs are available for download
        - High-quality audio- Some downloads require a subscription or payment
        - Supports various formats (MP3, WAV, FLAC)- Some downloads have limited plays or downloads
        - Allows you to discover new music and artists- Some downloads have ads or watermarks
        -

        DatPiff

        -

        DatPiff is another platform for streaming and downloading music online. It specializes in hip-hop and rap music, especially mixtapes and singles. It has thousands of songs from various artists, including King Von 100 Bricks. You can access DatPiff from your browser or download its app on your phone or tablet. To download King Von 100 Bricks from DatPiff, you need to follow these steps:

        -
          -
        1. Go to < a href="">https://www.datpiff.com/ or open the DatPiff app on your device.
        2. -
        3. Search for "King Von 100 Bricks" in the search bar.
        4. -
        5. Select the song from the results and click on the "Download" button.
        6. -
        7. Choose a location to save it on your device.
        8. -
        -

        Pros and cons of DatPiff

        - - - - - - -
        ProsCons
        - Dedicated to hip-hop and rap music- Not all songs are available for download
        - High-quality audio- Some downloads require a registration or login
        - Supports various formats (MP3, ZIP, M4A)- Some downloads have limited plays or downloads
        - Allows you to discover new music and artists- Some downloads have ads or watermarks
        -

        Bandcamp

        -

        Bandcamp is another platform for streaming and downloading music online. It supports independent artists and labels, who can upload their music and set their own prices. It has a wide range of genres and styles, including King Von 100 Bricks. You can access Bandcamp from your browser or download its app on your phone or tablet. To download King Von 100 Bricks from Bandcamp, you need to follow these steps:

        -
          -
        1. Go to https://bandcamp.com/ or open the Bandcamp app on your device.
        2. -
        3. Search for "King Von 100 Bricks" in the search bar.
        4. -
        5. Select the song from the results and click on the "Buy Digital Track" button.
        6. -
        7. Enter the amount you want to pay (or enter zero for free) and click on "Check out now".
        8. -
        9. Enter your email address and click on "Send download link".
        10. -
        11. Check your email and click on the link to download the song.
        12. -
        13. Choose a location to save it on your device.
        14. -
        -

        Pros and cons of Bandcamp

        - - - - - - -
        ProsCons
        - Supports independent artists and labels- Not all songs are available for free or download
        - High-quality audio- Some downloads require a payment or donation
        - Supports various formats (MP3, FLAC, AAC, OGG, ALAC, WAV, AIFF)- Some downloads have limited plays or downloads
        - Allows you to discover new music and artists- Some downloads have ads or watermarks
        -

        Conclusion

        -

        In conclusion, King Von 100 Bricks is a great song for hip-hop fans who want to enjoy some hard-hitting and catchy rap music. The song showcases the talent and charisma of King Von and Lil Berete, who deliver some impressive bars and hooks. The song is also about their ambition and hustle, as they rap about making money by selling drugs. If you want to download King Von 100 Bricks legally and safely, you can use one of the platforms we mentioned above, such as SoundCloud, DatPiff, or Bandcamp. Each platform has its own features, pros, and cons, so you can choose the one that suits you best. We hope you found this article helpful and informative. Now go ahead and download King Von 100 Bricks and enjoy!

        -

        FAQs

        -
          -
        • Q: When did King Von die?
        • -
        • A: King Von was shot and killed on November 6, 2020, in Atlanta, Georgia. He was involved in a fight with another rapper's entourage outside a nightclub. He was 26 years old.
        • -
        • Q: Who is Lil Berete?
        • -
        • A: Lil Berete is a rapper from Toronto, Canada. He was born on July 29, 2000, in Guinea-Bissau. He moved to Canada with his family when he was young. He started rapping at the age of 15 and released his debut mixtape Icebreaker in 2018. He is known for his melodic flow and his fusion of rap, dancehall, and afrobeat.
        • -
        • Q: What is drill music?
        • -
        • A: Drill music is a subgenre of hip-hop music that originated in Chicago in the early 2010 s. It is characterized by its dark, violent, and nihilistic lyrics, its fast and aggressive beats, and its use of slang and street names. Some of the pioneers of drill music include Chief Keef, Lil Durk, King Louie, and Young Chop. Drill music has influenced other scenes, such as UK drill, Brooklyn drill, and Australian drill.
        • -
        • Q: How can I support King Von's family and legacy?
        • -
        • A: You can support King Von's family and legacy by streaming his music, buying his merchandise, donating to his foundation, and following his social media accounts. You can also pay tribute to him by sharing his music and stories with others.
        • -
        • Q: What are some other songs by King Von that I should check out?
        • -
        • A: Some other songs by King Von that you should check out are:
        • -
            -
          • Crazy Story (Parts 1, 2, and 3)
          • -
          • Took Her To The O
          • -
          • Grandson For President
          • -
          • Gleesh Place
          • -
          • Wayne's Story
          • -
          -

        197e85843d
        -
        -
        \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/Download My Talking Tom Friends APK and Customize Your Pets with Cool Outfits.md b/spaces/congsaPfin/Manga-OCR/logs/Download My Talking Tom Friends APK and Customize Your Pets with Cool Outfits.md deleted file mode 100644 index 602d9d4997a7d497428efcb7a35aab22c9156ed8..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Download My Talking Tom Friends APK and Customize Your Pets with Cool Outfits.md +++ /dev/null @@ -1,93 +0,0 @@ -
        -

        Download My Talking Tom and Friends APK: A Fun and Interactive Game for All Ages

        -

        If you are looking for a fun and interactive game that you can play with your friends and family, you should try My Talking Tom and Friends. This is a popular game that lets you take care of six adorable virtual pets, each with their own personality and style. You can interact, customize, and enjoy managing this team of pet friends in their cozy house. In this article, we will tell you what My Talking Tom and Friends is, why you should download the APK version, and how to do it.

        -

        download my talking tom and friends apk


        DOWNLOADhttps://urlca.com/2uOf40



        -

        What is My Talking Tom and Friends?

        -

        My Talking Tom and Friends is a game developed by Outfit7, the creators of the famous Talking Tom Cat series. The game was released in 2020 and has been downloaded over 100 million times on Google Play. The game is suitable for all ages, as it has no violence or inappropriate content.

        -

        The characters

        -

        The game features six characters that you can take care of: Tom, Angela, Hank, Ginger, Ben, and Becca. Each character has their own voice, appearance, and hobbies. You can dress them up in different outfits, accessories, and hairstyles. You can also feed them, bathe them, play with them, and put them to bed.

        -

        The gameplay

        -

        The game is based on a sandbox-style gameplay, which means you can explore the house and interact with different objects and items. You can also play mini-games with your pet friends, such as puzzles, racing, cooking, gardening, and more. You can earn coins and rewards by completing tasks and challenges. You can use the coins to buy more items for your house and your pets.

        -

        How to download my talking tom friends apk for free
        -My talking tom friends apk mod unlimited money and stars
        -Download my talking tom friends apk latest version 2023
        -My talking tom friends apk offline installer
        -Download my talking tom friends apk for android 10
        -My talking tom friends apk hack no root
        -Download my talking tom friends apk for pc windows 7
        -My talking tom friends apk obb file download
        -Download my talking tom friends apk from uptodown
        -My talking tom friends apk pure download link
        -Download my talking tom friends apk for ios devices
        -My talking tom friends apk full unlocked download
        -Download my talking tom friends apk for fire tablet
        -My talking tom friends apk revdl download site
        -Download my talking tom friends apk for chromebook
        -My talking tom friends apk rexdl download page
        -Download my talking tom friends apk for macbook air
        -My talking tom friends apk mirror download source
        -Download my talking tom friends apk for nokia phone
        -My talking tom friends apk mob.org download portal
        -Download my talking tom friends apk for samsung galaxy s10
        -My talking tom friends apk apkmody download option
        -Download my talking tom friends apk for huawei p30 pro
        -My talking tom friends apk happymod download choice
        -Download my talking tom friends apk for lg g8 thinq
        -My talking tom friends apk an1 download alternative
        -Download my talking tom friends apk for xiaomi mi 9t
        -My talking tom friends apk apkpure.com download link[^1^]
        -Download my talking tom friends apk for oppo reno 2z
        -My talking tom friends apk android 1 download site
        -Download my talking tom friends apk for vivo v17 pro
        -My talking tom friends apk apkmirror.com download source
        -Download my talking tom friends apk for oneplus 7t pro
        -My talking tom friends apk apknite.com download option
        -Download my talking tom friends apk for realme x2 pro
        -My talking tom friends apk apksfull.com download choice
        -Download my talking tom friends apk for motorola one zoom
        -My talking tom friends apk apktada.com download alternative
        -Download my talking tom friends apk for sony xperia 5 ii
        -My talking tom friends apk apksfree.com download link

        -

        The features

        -

        The game has many features that make it fun and engaging. Some of the features are:

        -
          -
        • You can talk to your pet friends and they will repeat what you say in a funny voice.
        • -
        • You can record videos of your pet friends and share them with your friends on social media.
        • -
        • You can visit your friends' houses and see how they decorate their rooms.
        • -
        • You can join events and competitions to win prizes and trophies.
        • -
        • You can unlock new items and levels as you progress in the game.
        • -
        -

        Why download My Talking Tom and Friends APK?

        -

        If you want to enjoy the full features of My Talking Tom and Friends, you should download the APK version of the game. APK stands for Android Package Kit, which is a file format that allows you to install apps that are not available on Google Play. There are many benefits of downloading the APK version of My Talking Tom and Friends, such as:

        -

        It's free and easy to install

        -

        You don't have to pay anything to download the APK version of My Talking Tom and Friends. You just need to find a reliable source that offers the latest version of the game. You also don't need to sign up or register to download the APK file. You just need to follow a few simple steps to install it on your device.

        -

        It's safe and secure

        -

        You don't have to worry about viruses or malware when you download the APK version of My Talking Tom and Friends. The game is developed by a reputable company that ensures the quality and safety of their products. You just need to make sure that you download the APK file from a trusted source that has positive reviews from other users.

        -

        It's compatible with most devices

        -

        You don't have to worry about compatibility issues when you download the APK version of My Talking Tom and Friends. The game is designed to work on most Android devices, regardless of the model or the version. You just need to make sure that your device has enough storage space and meets the minimum requirements of the game.

        -

        How to download My Talking Tom and Friends APK?

        -

        If you are ready to download the APK version of My Talking Tom and Friends, you just need to follow these simple steps:

        -

        Step 1: Enable unknown sources

        -

        Before you can install the APK file, you need to enable unknown sources on your device. This will allow you to install apps that are not from Google Play. To do this, go to your device settings and look for the security or privacy option. Then, find the unknown sources option and toggle it on. You may see a warning message, but you can ignore it and proceed.

        -

        Step 2: Download the APK file

        -

        Next, you need to download the APK file of My Talking Tom and Friends. You can use any browser or downloader app that you prefer. Just make sure that you download the file from a reliable source that offers the latest version of the game. You can use this link to download the APK file directly.

        -

        Step 3: Install the APK file

        -

        Finally, you need to install the APK file on your device. To do this, locate the file in your downloads folder or wherever you saved it. Then, tap on the file and follow the instructions on the screen. You may need to grant some permissions to the app before it can be installed. Once the installation is complete, you can launch the game and enjoy playing with your pet friends.

        -

        Conclusion

        -

        My Talking Tom and Friends is a fun and interactive game that lets you take care of six adorable virtual pets. You can customize, play, and record videos with them. You can also visit your friends' houses and join events and competitions. If you want to experience the full features of the game, you should download the APK version of My Talking Tom and Friends. It's free, easy, safe, and compatible with most devices. Just follow the steps above and start having fun with your pet friends.

        -

        FAQs

        -

        Here are some frequently asked questions about My Talking Tom and Friends APK:

        -
          -
        • Q: Is My Talking Tom and Friends APK legal?
        • -
        • A: Yes, My Talking Tom and Friends APK is legal as long as you download it from a trusted source that does not violate any copyright laws.
        • -
        • Q: Is My Talking Tom and Friends APK modded?
        • -
        • A: No, My Talking Tom and Friends APK is not modded or hacked. It is the original version of the game that is offered by Outfit7.
        • -
        • Q: How do I update My Talking Tom and Friends APK?
        • -
        • A: To update My Talking Tom and Friends APK, you need to download the latest version of the APK file from a reliable source and install it over the existing one.
        • -
        • Q: How do I uninstall My Talking Tom and Friends APK?
        • -
        • A: To uninstall My Talking Tom and Friends APK, you need to go to your device settings and look for the apps or applications option. Then, find My Talking Tom and Friends app and tap on it. Then, tap on the uninstall button and confirm your action.
        • -
        • Q: How do I contact Outfit7 for support or feedback?
        • -
        • A: To contact Outfit7 for support or feedback, you can visit their official website or their Facebook page. You can also email them at support@outfit7.com.
        • -

        401be4b1e0
        -
        -
        \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/Traffic Rider Hack APK v1.95 How to Get Unlimited Money Gold and Bikes in 2023.md b/spaces/congsaPfin/Manga-OCR/logs/Traffic Rider Hack APK v1.95 How to Get Unlimited Money Gold and Bikes in 2023.md deleted file mode 100644 index bf8f10e8ce8b6b699c2979d9288f0c07c2634826..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Traffic Rider Hack APK v1.95 How to Get Unlimited Money Gold and Bikes in 2023.md +++ /dev/null @@ -1,102 +0,0 @@ -
        -

        Traffic Rider Mod APK: The Next-Gen of Endless Motorbike Racing

        -

        If you love motorcycle racing, then you must have played Traffic Rider, the popular mobile game that lets you experience the thrill of speeding through city streets on two wheels. But what if you could take that experience to the next level? That's where Traffic Rider Mod APK comes in.

        -

        traffic rider mod apk new version 2023


        Downloadhttps://urlca.com/2uOaVR



        -

        Traffic Rider Mod APK is a modified version of the original Traffic Rider game that allows you to access premium features and unlockables without having to pay for them. The mod has been created by third-party developers and is not endorsed by the game's official creators.

        -

        One of the most significant advantages of using Traffic Rider Mod APK is that it removes all the restrictions and limitations that are present in the original game. This means you can access all the bikes and levels right from the start and unlock all the features without spending any money.

        -

        In this article, we will tell you everything you need to know about Traffic Rider Mod APK, including its features, how to download and install it, and some tips and tricks for playing it. So, let's get started!

        -

        Features of Traffic Rider Mod APK

        -

        Traffic Rider Mod APK offers a lot of amazing features that make the game more fun and exciting. Here are some of them:

        -

        traffic rider hack apk download latest version 2023
        -traffic rider unlimited money and gold mod apk 2023
        -traffic rider mod apk all bikes unlocked 2023
        -traffic rider mod apk no ads 2023
        -traffic rider mod apk android 1 2023
        -traffic rider mod apk revdl 2023
        -traffic rider mod apk rexdl 2023
        -traffic rider mod apk happymod 2023
        -traffic rider mod apk free download 2023
        -traffic rider mod apk offline 2023
        -traffic rider mod apk unlimited everything 2023
        -traffic rider mod apk latest update 2023
        -traffic rider mod apk new bikes 2023
        -traffic rider mod apk unlimited gems and coins 2023
        -traffic rider mod apk for pc 2023
        -traffic rider mod apk for ios 2023
        -traffic rider mod apk online play 2023
        -traffic rider mod apk unlimited keys and cash 2023
        -traffic rider mod apk premium unlocked 2023
        -traffic rider mod apk high graphics 2023
        -traffic rider mod apk mega mod 2023
        -traffic rider mod apk unlimited nitro 2023
        -traffic rider mod apk unlimited lives and health 2023
        -traffic rider mod apk full version 2023
        -traffic rider mod apk pro version 2023
        -traffic rider mod apk cheat codes 2023
        -traffic rider mod apk hack tool 2023
        -traffic rider mod apk generator online 2023
        -traffic rider mod apk no root required 2023
        -traffic rider mod apk no verification needed 2023
        -traffic rider mod apk no survey or human verification 2023
        -traffic rider mod apk direct download link 2023
        -traffic rider mod apk mirror link 2023
        -traffic rider mod apk original file download 2023
        -traffic rider mod apk obb file download 2023
        -traffic rider mod apk data file download 2023
        -traffic rider mod apk zip file download 2023
        -traffic rider mod apk mediafire download link 2023
        -traffic rider mod apk google drive download link 2023
        -traffic rider mod apk dropbox download link 2023
        -traffic rider mod apk zippyshare download link 2023
        -traffic rider mod apk uptodown download link 2023
        -traffic rider mod apk apkpure download link 2023
        -traffic rider mod apk apkmirror download link 2023
        -traffic rider mod apk apknite download link 2023
        -traffic rider mod apk apksfree download link 2023
        -traffic rider mod apk apktada download link 2023

        -
          -
        • Ad-Free Experience: No annoying ads to interrupt your gameplay. You can enjoy the game without any distractions or interruptions.
        • -
        • Unlimited Money: Buy and upgrade any bike you want. You don't have to worry about running out of money or saving up for a better bike. You can choose from over 30 different bikes, each with its own speed, handling, and sound.
        • -
        • New Bikes: Ride different bikes with unique characteristics and abilities. You can unlock new bikes as you progress through the game, or use the unlimited money feature to buy them instantly. Some of the new bikes include a police bike, a chopper, a dirt bike, and more.
        • -
        • Unlimited Resources: Customize your bike with skins and colors. You can change the appearance of your bike to suit your style and preferences. You can also upgrade your bike's performance by improving its engine, brakes, tires, and suspension.
        • -
        • Enhanced Graphics: Enjoy realistic and immersive visuals and sounds. The game has improved graphics that make the riding experience more realistic and immersive. You will feel like you are riding a real motorcycle

          with the sound of the engine and the wind in your ears. You will also see realistic environments and weather effects, such as day and night cycles, rain, snow, fog, and more.

        • -
        -

        How to Download and Install Traffic Rider Mod APK

        -

        Downloading and installing Traffic Rider Mod APK is very easy and simple. Just follow these steps:

        -
          -
        1. Step 1: Download the modded APK file from a trusted source. You can find many websites that offer the download link for Traffic Rider Mod APK, but make sure you choose a reliable and safe one. You can also use this link to download the latest version of Traffic Rider Mod APK: Traffic Rider Mod APK Download
        2. -
        3. Step 2: Enable unknown sources on your device settings. Before you can install the APK file, you need to allow your device to install apps from unknown sources. To do this, go to your device settings, then security, then enable unknown sources.
        4. -
        5. Step 3: Install the APK file and launch the game. Once you have downloaded the APK file, locate it in your device storage and tap on it to install it. After the installation is complete, you can launch the game and enjoy the modded features.
        6. -
        -

        Tips and Tricks for Playing Traffic Rider Mod APK

        -

        Traffic Rider Mod APK is a fun and addictive game that will keep you hooked for hours. However, if you want to master the game and get the best scores and achievements, you need to know some tips and tricks that will help you improve your skills and performance. Here are some of them:

        -
          -
        • Tip 1: Drive faster to get more scores and cash. The faster you drive, the more scores and cash you will earn. However, driving faster also means more risk of crashing or getting caught by the police. So, be careful and balance your speed with your safety.
        • -
        • Tip 2: Overtake traffic cars closely to get bonus scores and cash. When you overtake a traffic car, you will get a bonus score and cash depending on how close you are to the car. The closer you are, the higher the bonus. However, overtaking too closely also increases the chance of colliding with the car or losing control of your bike. So, be cautious and avoid unnecessary risks.
        • -
        • Tip 3: Drive in opposite direction in two-way mode to get extra score and cash. In two-way mode, you can drive in either direction of the road. If you choose to drive in the opposite direction of the traffic flow, you will get extra score and cash for every car you pass. However, driving in the opposite direction also means more danger of head-on collisions or being chased by the police. So, be brave but smart.
        • -
        • Tip 4: Do wheelies to get extra score and cash. Wheelies are when you lift your front wheel off the ground while driving. You can do wheelies by tapping and holding the brake button while accelerating. Doing wheelies will give you extra score and cash for every second you keep your front wheel up. However, doing wheelies also makes your bike less stable and more prone to falling over or crashing into obstacles. So, be skillful but careful.
        • -
        -

        Conclusion

        -

        Traffic Rider Mod APK is a fun and exciting motorcycle racing game that offers unlimited features and resources that make the game more enjoyable and satisfying. You can download and install Traffic Rider Mod APK easily and safely from a trusted source and enjoy the ad-free experience, unlimited money, new bikes, unlimited resources, enhanced graphics, and more.

        -

        If you love motorcycle racing games, then Traffic Rider Mod APK is a must-have for you. It will give you hours of entertainment and challenge as you ride through different scenarios and modes. You can also improve your skills and performance by following some tips and tricks that we have shared with you in this article.

        -

        So, what are you waiting for? Download Traffic Rider Mod APK now and start your endless motorbike racing adventure!

        -

        FAQs

        -

        Here are some frequently asked questions about Traffic Rider Mod APK:

        -
          -
        • Q1: Is Traffic Rider Mod APK safe to use?
        • -

          A1: Yes, Traffic Rider Mod APK is safe to use as long as you download it from a trusted source that does not contain any viruses or malware. However, since it is a modded version of the original game, it is not endorsed by the game's official creators and may not be compatible with some devices or updates.

          -
        • Q2: Do I need to root my device to use Traffic Rider Mod APK?
        • -

          A2: No, you do not need to root your device to use Traffic Rider Mod APK. You can install and run the modded game without any root access or permissions.

          -
        • Q3: Can I play Traffic Rider Mod APK online with other players?
        • -

          A3: No, you cannot play Traffic Rider Mod APK online with other players. The modded game is only for offline mode and does not support multiplayer or online features.

          -
        • Q4: How can I update Traffic Rider Mod APK to the latest version?
        • -

          A4: To update Traffic Rider Mod APK to the latest version, you need to download and install the new version of the modded APK file from the same source that you used before. You may also need to uninstall the previous version of the modded game before installing the new one.

          -
        • Q5: What are the minimum requirements to run Traffic Rider Mod APK?
        • -

          A5: The minimum requirements to run Traffic Rider Mod APK are:

          -
            -
          • Android 5.0 or higher
          • -
          • At least 1 GB of RAM
          • -
          • At least 100 MB of free storage space
          • -

          401be4b1e0
          -
          -
          \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/Unlock All Shark Features with Hungry Shark World MOD APK Download.md b/spaces/congsaPfin/Manga-OCR/logs/Unlock All Shark Features with Hungry Shark World MOD APK Download.md deleted file mode 100644 index 9aacb6c20060eed6fadd5d4167d5d7400fb7480d..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Unlock All Shark Features with Hungry Shark World MOD APK Download.md +++ /dev/null @@ -1,86 +0,0 @@ -
          -

          Download Hungry Shark World Mod APK Unlock All Shark

          -

          Do you love playing as a shark and eating everything in your path? If so, you should try Hungry Shark World, a popular action game where you can unleash your inner predator and dominate the ocean. But what if you want to unlock all the sharks and get unlimited money and gems without spending real cash? Well, you can do that by downloading Hungry Shark World Mod APK, a modified version of the game that gives you access to all the features and content for free. In this article, we will tell you what Hungry Shark World is, why you should download the mod apk, and how to do it. Let's dive in!

          -

          What is Hungry Shark World?

          -

          A fun and addictive action game where you control a hungry shark

          -

          Hungry Shark World is an aquatic adventure where you control a shark and eat everything in your path. The objective is to survive as long as you can and devour as much prey as possible to score tons of points before you eventually die. True to its name, your hungry shark is constantly losing HP due to its insatiable hunger and must continue to eat in order to stay alive. However, the sea is riddled with plenty of hazards and hostile fish that don't just let themselves get eaten, so you have to play smart in order to score high.

          -

          download hungry shark world mod apk unlock all shark


          Download Ziphttps://urlca.com/2uO66M



          -

          Features of the game

          -

          Different sharks to unlock and upgrade

          -

          One of the best features of Hungry Shark World is that it offers a wide range of different sharks to unlock and upgrade. You can start with a small shark like a Blacktip Reef Shark or a Whitetip Reef Shark, and work your way up to bigger and more powerful sharks like a Great White Shark or a Megalodon. Each shark has its own stats, abilities, evolutions, and costumes that you can customize. You can also equip your shark with pets that will help you in your journey by providing varied boosts.

          -

          Various locations to explore and eat

          -

          Another great feature of Hungry Shark World is that it allows you to travel the ocean and explore various locations across the world. You can visit tropical islands, sunken temples, vast cities, frozen icebergs, and more. Each location has its own unique scenery, creatures, secrets, and challenges. You can also find letters that spell out HUNGRY, which will trigger a special mode where you can eat anything, regardless of your size.

          -

          Daily chests, boosters, and bonuses to enhance your gameplay

          -

          Last but not least, Hungry Shark World also offers daily chests, boosters, and bonuses that will enhance your gameplay. You can find up to five daily chests in each map that contain various goodies such as gold, gems, bonuses, or boosters. Bonuses are items that you can activate to enjoy certain benefits such as increasing gold, food, or other stats by a certain percentage. Boosters are items that you can use during the game to gain an edge over your enemies or obstacles. For example, you can use Gold Magnet to attract gold and other valuables into your mouth, or Unstoppable to break through barriers and mines.

          -

          Why download Hungry Shark World Mod APK?

          -

          Benefits of the modded version

          -

          Unlimited money and gems

          -

          One of the main benefits of downloading Hungry Shark World Mod APK is that

          One of the main benefits of downloading Hungry Shark World Mod APK is that you will get unlimited money and gems in the game. Money and gems are the main currencies in Hungry Shark World, and you need them to unlock and upgrade your sharks, pets, costumes, and boosters. However, earning money and gems can be quite slow and tedious in the normal version of the game, and you may be tempted to spend real cash to get them faster. But with the modded version, you don't have to worry about that. You will have unlimited money and gems at your disposal, and you can use them to buy anything you want in the game.

          -

          All sharks unlocked and evolved

          -

          Another benefit of downloading Hungry Shark World Mod APK is that you will have all the sharks unlocked and evolved in the game. Normally, you have to play for a long time and complete various missions and achievements to unlock and evolve your sharks. Some sharks are also exclusive to certain events or seasons, and you may miss them if you don't play regularly. But with the modded version, you don't have to wait or miss anything. You will have access to all the sharks in the game, from the smallest to the biggest, and you can evolve them to their maximum potential.

          -

          No ads and no root required

          -

          The last benefit of downloading Hungry Shark World Mod APK is that you will not see any ads in the game, and you will not need to root your device to install it. Ads can be annoying and distracting, especially when they pop up in the middle of your gameplay or when you are trying to enjoy the graphics and sound effects of the game. But with the modded version, you will not see any ads at all, and you can enjoy the game without any interruptions. Moreover, you will not need to root your device or do any complicated steps to install the mod apk. You just need to follow some simple instructions that we will provide later in this article.

          -

          How to get hungry shark world mod apk with all sharks unlocked
          -Hungry shark world unlimited money and gems mod apk download
          -Download latest version of hungry shark world mod apk for android
          -Hungry shark world hack mod apk free download no root
          -Hungry shark world mega mod apk download all sharks unlocked
          -Best site to download hungry shark world mod apk safely
          -Hungry shark world mod apk offline download for pc
          -Hungry shark world mod apk new update download 2023
          -Download hungry shark world mod apk unlock all sharks and maps
          -Hungry shark world mod apk unlimited everything download 2023
          -Hungry shark world mod apk download for ios devices
          -Hungry shark world mod menu apk download with cheats
          -Download hungry shark world mod apk unlock all sharks and skins
          -Hungry shark world premium mod apk download with vip features
          -Hungry shark world mod apk online download with multiplayer mode
          -Download hungry shark world mod apk unlock all sharks and missions
          -Hungry shark world god mode mod apk download with invincibility
          -Hungry shark world mod apk direct download link no survey
          -Download hungry shark world mod apk unlock all sharks and accessories
          -Hungry shark world pro mod apk download with advanced settings
          -Hungry shark world full mod apk download with all levels unlocked
          -Download hungry shark world mod apk unlock all sharks and evolutions
          -Hungry shark world cracked mod apk download with anti-ban protection
          -Hungry shark world super mod apk download with unlimited coins and diamonds
          -Download hungry shark world mod apk unlock all sharks and abilities

          -

          How to download and install the mod apk

          -

          Steps to follow

          -

          If you are interested in downloading Hungry Shark World Mod APK, here are the steps that you need to follow:

          -
            -
          1. First, you need to uninstall the original version of Hungry Shark World from your device if you have it installed.
          2. -
          3. Second, you need to enable unknown sources on your device settings. This will allow you to install apps from sources other than Google Play Store.
          4. -
          5. Third, you need to download the mod apk file from a reliable source. You can use this link to download it safely and quickly.
          6. -
          7. Fourth, you need to locate the downloaded file on your device storage and tap on it to start the installation process.
          8. -
          9. Fifth, you need to follow the on-screen instructions and grant the necessary permissions to complete the installation.
          10. -
          11. Sixth, you need to launch the game and enjoy playing with unlimited money, gems, and sharks.
          12. -
          -

          Tips and tricks to play the game

          -

          Now that you have downloaded Hungry Shark World Mod APK, here are some tips and tricks that will help you play the game better:

          -
            -
          • Try different sharks and find out which one suits your playstyle best. Each shark has its own strengths and weaknesses, so experiment with them and see which one works for you.
          • -
          • Explore different locations and find hidden secrets and treasures. Each location has its own unique features and challenges, so don't be afraid to venture into new areas and discover new things.
          • -
          • Use boosters wisely and strategically. Boosters can give you an edge over your enemies or obstacles, but they also have a limited duration and cooldown. So use them when you really need them or when they can make a big difference.
          • -
          • Complete missions and achievements to earn extra rewards. Missions and achievements are tasks that challenge your skills and abilities in the game. Completing them will reward you with money, gems, bonuses, or even new sharks.
          • -
          • Have fun and enjoy being a hungry shark. The most important tip is to have fun and enjoy playing as a hungry shark. Don't take it too seriously or get frustrated if you die or fail. Just try again and have a blast!
          • -
          -

          Conclusion

          -

          Summary of the main points

          -

          In conclusion, Hungry Shark World is a fun and addictive action game where you can control a hungry shark

          In conclusion, Hungry Shark World is a fun and addictive action game where you can control a hungry shark and eat everything in your path. You can unlock and upgrade different sharks, explore various locations, and use boosters and bonuses to enhance your gameplay. However, if you want to enjoy the game to the fullest, you should download Hungry Shark World Mod APK, a modified version of the game that gives you unlimited money, gems, and sharks. You can download the mod apk easily and safely by following the steps that we have provided in this article. So what are you waiting for? Download Hungry Shark World Mod APK now and unleash your inner predator!

          -

          Call to action and disclaimer

          -

          If you liked this article, please share it with your friends and family who are also fans of Hungry Shark World. You can also leave a comment below and let us know what you think about the game and the mod apk. We would love to hear from you!

          -

          Disclaimer: This article is for educational and entertainment purposes only. We do not endorse or promote any illegal or unethical activities related to downloading or using modded apps or games. Please use them at your own risk and responsibility.

          -

          FAQs

          -

          What is Hungry Shark World?

          -

          Hungry Shark World is an action game where you control a hungry shark and eat everything in your path.

          -

          What is Hungry Shark World Mod APK?

          -

          Hungry Shark World Mod APK is a modified version of the game that gives you unlimited money, gems, and sharks.

          -

          How to download Hungry Shark World Mod APK?

          -

          You can download Hungry Shark World Mod APK by following the steps that we have provided in this article.

          -

          Is Hungry Shark World Mod APK safe to use?

          -

          Hungry Shark World Mod APK is safe to use as long as you download it from a reliable source and scan it with an antivirus before installing it.

          -

          What are the benefits of Hungry Shark World Mod APK?

          -

          The benefits of Hungry Shark World Mod APK are that you can unlock and upgrade all the sharks, get unlimited money and gems, and enjoy the game without any ads or root.

          197e85843d
          -
          -
          \ No newline at end of file diff --git a/spaces/contluForse/HuggingGPT/assets/Barbie in a Mermaid Tale 1 Full Movie 15 The Complete Guide to the Animated Film.md b/spaces/contluForse/HuggingGPT/assets/Barbie in a Mermaid Tale 1 Full Movie 15 The Complete Guide to the Animated Film.md deleted file mode 100644 index 2031ffacfc83882317b8139694405a43798e924f..0000000000000000000000000000000000000000 --- a/spaces/contluForse/HuggingGPT/assets/Barbie in a Mermaid Tale 1 Full Movie 15 The Complete Guide to the Animated Film.md +++ /dev/null @@ -1,6 +0,0 @@ -

          barbie in a mermaid tale 1 full movie 15


          Download ———>>> https://ssurll.com/2uzvOd



          -
          - aaccfb2cb3
          -
          -
          -

          diff --git a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/lama/saicinpainting/training/losses/__init__.py b/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/lama/saicinpainting/training/losses/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/mmpkg/mmcv/cnn/alexnet.py b/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/mmpkg/mmcv/cnn/alexnet.py deleted file mode 100644 index 89e36b8c7851f895d9ae7f07149f0e707456aab0..0000000000000000000000000000000000000000 --- a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/mmpkg/mmcv/cnn/alexnet.py +++ /dev/null @@ -1,61 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import logging - -import torch.nn as nn - - -class AlexNet(nn.Module): - """AlexNet backbone. - - Args: - num_classes (int): number of classes for classification. - """ - - def __init__(self, num_classes=-1): - super(AlexNet, self).__init__() - self.num_classes = num_classes - self.features = nn.Sequential( - nn.Conv2d(3, 64, kernel_size=11, stride=4, padding=2), - nn.ReLU(inplace=True), - nn.MaxPool2d(kernel_size=3, stride=2), - nn.Conv2d(64, 192, kernel_size=5, padding=2), - nn.ReLU(inplace=True), - nn.MaxPool2d(kernel_size=3, stride=2), - nn.Conv2d(192, 384, kernel_size=3, padding=1), - nn.ReLU(inplace=True), - nn.Conv2d(384, 256, kernel_size=3, padding=1), - nn.ReLU(inplace=True), - nn.Conv2d(256, 256, kernel_size=3, padding=1), - nn.ReLU(inplace=True), - nn.MaxPool2d(kernel_size=3, stride=2), - ) - if self.num_classes > 0: - self.classifier = nn.Sequential( - nn.Dropout(), - nn.Linear(256 * 6 * 6, 4096), - nn.ReLU(inplace=True), - nn.Dropout(), - nn.Linear(4096, 4096), - nn.ReLU(inplace=True), - nn.Linear(4096, num_classes), - ) - - def init_weights(self, pretrained=None): - if isinstance(pretrained, str): - logger = logging.getLogger() - from ..runner import load_checkpoint - load_checkpoint(self, pretrained, strict=False, logger=logger) - elif pretrained is None: - # use default initializer - pass - else: - raise TypeError('pretrained must be a str or None') - - def forward(self, x): - - x = self.features(x) - if self.num_classes > 0: - x = x.view(x.size(0), 256 * 6 * 6) - x = self.classifier(x) - - return x diff --git a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/mmpkg/mmseg/models/backbones/__init__.py b/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/mmpkg/mmseg/models/backbones/__init__.py deleted file mode 100644 index a1116c00a17c8bd9ed7f18743baee22b3b7d3f8d..0000000000000000000000000000000000000000 --- a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/mmpkg/mmseg/models/backbones/__init__.py +++ /dev/null @@ -1,16 +0,0 @@ -from .cgnet import CGNet -# from .fast_scnn import FastSCNN -from .hrnet import HRNet -from .mobilenet_v2 import MobileNetV2 -from .mobilenet_v3 import MobileNetV3 -from .resnest import ResNeSt -from .resnet import ResNet, ResNetV1c, ResNetV1d -from .resnext import ResNeXt -from .unet import UNet -from .vit import VisionTransformer - -__all__ = [ - 'ResNet', 'ResNetV1c', 'ResNetV1d', 'ResNeXt', 'HRNet', - 'ResNeSt', 'MobileNetV2', 'UNet', 'CGNet', 'MobileNetV3', - 'VisionTransformer' -] diff --git a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/uniformer/mmseg/core/seg/sampler/__init__.py b/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/uniformer/mmseg/core/seg/sampler/__init__.py deleted file mode 100644 index 332b242c03d1c5e80d4577df442a9a037b1816e1..0000000000000000000000000000000000000000 --- a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/uniformer/mmseg/core/seg/sampler/__init__.py +++ /dev/null @@ -1,4 +0,0 @@ -from .base_pixel_sampler import BasePixelSampler -from .ohem_pixel_sampler import OHEMPixelSampler - -__all__ = ['BasePixelSampler', 'OHEMPixelSampler'] diff --git a/spaces/crashedice/signify/SOURCE/yolo_files/utils/general.py b/spaces/crashedice/signify/SOURCE/yolo_files/utils/general.py deleted file mode 100644 index 524afd04cd010c2ab692e1bc2e03df1020e427e0..0000000000000000000000000000000000000000 --- a/spaces/crashedice/signify/SOURCE/yolo_files/utils/general.py +++ /dev/null @@ -1,678 +0,0 @@ -# YOLOv5 general utils - -import glob -import logging -import math -import os -import platform -import random -import re -import subprocess -import time -from itertools import repeat -from multiprocessing.pool import ThreadPool -from pathlib import Path - -import cv2 -import numpy as np -import pandas as pd -import torch -import torchvision -import yaml - -from SOURCE.yolo_files.utils.google_utils import gsutil_getsize -from SOURCE.yolo_files.utils.metrics import fitness -from SOURCE.yolo_files.utils.torch_utils import init_torch_seeds - -# Settings -torch.set_printoptions(linewidth=320, precision=5, profile='long') -np.set_printoptions(linewidth=320, formatter={'float_kind': '{:11.5g}'.format}) # format short g, %precision=5 -pd.options.display.max_columns = 10 -cv2.setNumThreads(0) # prevent OpenCV from multithreading (incompatible with PyTorch DataLoader) -os.environ['NUMEXPR_MAX_THREADS'] = str(min(os.cpu_count(), 8)) # NumExpr max threads - - -def set_logging(rank=-1, verbose=True): - logging.basicConfig( - format="%(message)s", - level=logging.INFO if (verbose and rank in [-1, 0]) else logging.WARN) - - -def init_seeds(seed=0): - # Initialize random number generator (RNG) seeds - random.seed(seed) - np.random.seed(seed) - init_torch_seeds(seed) - - -def get_latest_run(search_dir='.'): - # Return path to most recent 'last.pt' in /runs (i.e. to --resume from) - last_list = glob.glob(f'{search_dir}/**/last*.pt', recursive=True) - return max(last_list, key=os.path.getctime) if last_list else '' - - -def is_docker(): - # Is environment a Docker container - return Path('/workspace').exists() # or Path('/.dockerenv').exists() - - -def is_colab(): - # Is environment a Google Colab instance - try: - import google.colab - return True - except Exception as e: - return False - - -def emojis(str=''): - # Return platform-dependent emoji-safe version of string - return str.encode().decode('ascii', 'ignore') if platform.system() == 'Windows' else str - - -def file_size(file): - # Return file size in MB - return Path(file).stat().st_size / 1e6 - - -def check_online(): - # Check internet connectivity - import socket - try: - socket.create_connection(("1.1.1.1", 443), 5) # check host accesability - return True - except OSError: - return False - - -def check_git_status(): - # Recommend 'git pull' if code is out of date - print(colorstr('github: '), end='') - try: - assert Path('.git').exists(), 'skipping check (not a git repository)' - assert not is_docker(), 'skipping check (Docker image)' - assert check_online(), 'skipping check (offline)' - - cmd = 'git fetch && git config --get remote.origin.url' - url = subprocess.check_output(cmd, shell=True).decode().strip().rstrip('.git') # github repo url - branch = subprocess.check_output('git rev-parse --abbrev-ref HEAD', shell=True).decode().strip() # checked out - n = int(subprocess.check_output(f'git rev-list {branch}..origin/master --count', shell=True)) # commits behind - if n > 0: - s = f"⚠️ WARNING: code is out of date by {n} commit{'s' * (n > 1)}. " \ - f"Use 'git pull' to update or 'git clone {url}' to download latest." - else: - s = f'up to date with {url} ✅' - print(emojis(s)) # emoji-safe - except Exception as e: - print(e) - - -def check_requirements(requirements='requirements.txt', exclude=()): - # Check installed dependencies meet requirements (pass *.txt file or list of packages) - import pkg_resources as pkg - prefix = colorstr('red', 'bold', 'requirements:') - if isinstance(requirements, (str, Path)): # requirements.txt file - file = Path(requirements) - if not file.exists(): - print(f"{prefix} {file.resolve()} not found, check failed.") - return - requirements = [f'{x.name}{x.specifier}' for x in pkg.parse_requirements(file.open()) if x.name not in exclude] - else: # list or tuple of packages - requirements = [x for x in requirements if x not in exclude] - - n = 0 # number of packages updates - for r in requirements: - try: - pkg.require(r) - except Exception as e: # DistributionNotFound or VersionConflict if requirements not met - n += 1 - print(f"{prefix} {r} not found and is required by YOLOv5, attempting auto-update...") - print(subprocess.check_output(f"pip install '{r}'", shell=True).decode()) - - if n: # if packages updated - source = file.resolve() if 'file' in locals() else requirements - s = f"{prefix} {n} package{'s' * (n > 1)} updated per {source}\n" \ - f"{prefix} ⚠️ {colorstr('bold', 'Restart runtime or rerun command for updates to take effect')}\n" - print(emojis(s)) # emoji-safe - - -def check_img_size(img_size, s=32): - # Verify img_size is a multiple of stride s - new_size = make_divisible(img_size, int(s)) # ceil gs-multiple - if new_size != img_size: - print('WARNING: --img-size %g must be multiple of max stride %g, updating to %g' % (img_size, s, new_size)) - return new_size - - -def check_imshow(): - # Check if environment supports image displays - try: - assert not is_docker(), 'cv2.imshow() is disabled in Docker environments' - assert not is_colab(), 'cv2.imshow() is disabled in Google Colab environments' - cv2.imshow('test', np.zeros((1, 1, 3))) - cv2.waitKey(1) - cv2.destroyAllWindows() - cv2.waitKey(1) - return True - except Exception as e: - print(f'WARNING: Environment does not support cv2.imshow() or PIL Image.show() image displays\n{e}') - return False - - -def check_file(file): - # Search for file if not found - if Path(file).is_file() or file == '': - return file - else: - files = glob.glob('./**/' + file, recursive=True) # find file - assert len(files), f'File Not Found: {file}' # assert file was found - assert len(files) == 1, f"Multiple files match '{file}', specify exact path: {files}" # assert unique - return files[0] # return file - - -def check_dataset(dict): - # Download dataset if not found locally - val, s = dict.get('val'), dict.get('download') - if val and len(val): - val = [Path(x).resolve() for x in (val if isinstance(val, list) else [val])] # val path - if not all(x.exists() for x in val): - print('\nWARNING: Dataset not found, nonexistent paths: %s' % [str(x) for x in val if not x.exists()]) - if s and len(s): # download script - if s.startswith('http') and s.endswith('.zip'): # URL - f = Path(s).name # filename - print(f'Downloading {s} ...') - torch.hub.download_url_to_file(s, f) - r = os.system(f'unzip -q {f} -d ../ && rm {f}') # unzip - elif s.startswith('bash '): # bash script - print(f'Running {s} ...') - r = os.system(s) - else: # python script - r = exec(s) # return None - print('Dataset autodownload %s\n' % ('success' if r in (0, None) else 'failure')) # print result - else: - raise Exception('Dataset not found.') - - -def download(url, dir='.', unzip=True, delete=True, curl=False, threads=1): - # Multi-threaded file download and unzip function - def download_one(url, dir): - # Download 1 file - f = dir / Path(url).name # filename - if not f.exists(): - print(f'Downloading {url} to {f}...') - if curl: - os.system(f"curl -L '{url}' -o '{f}' --retry 9 -C -") # curl download, retry and resume on fail - else: - torch.hub.download_url_to_file(url, f, progress=True) # torch download - if unzip and f.suffix in ('.zip', '.gz'): - print(f'Unzipping {f}...') - if f.suffix == '.zip': - s = f'unzip -qo {f} -d {dir} && rm {f}' # unzip -quiet -overwrite - elif f.suffix == '.gz': - s = f'tar xfz {f} --directory {f.parent}' # unzip - if delete: # delete zip file after unzip - s += f' && rm {f}' - os.system(s) - - dir = Path(dir) - dir.mkdir(parents=True, exist_ok=True) # make directory - if threads > 1: - pool = ThreadPool(threads) - pool.imap(lambda x: download_one(*x), zip(url, repeat(dir))) # multi-threaded - pool.close() - pool.join() - else: - for u in tuple(url) if isinstance(url, str) else url: - download_one(u, dir) - - -def make_divisible(x, divisor): - # Returns x evenly divisible by divisor - return math.ceil(x / divisor) * divisor - - -def clean_str(s): - # Cleans a string by replacing special characters with underscore _ - return re.sub(pattern="[|@#!¡·$€%&()=?¿^*;:,¨´><+]", repl="_", string=s) - - -def one_cycle(y1=0.0, y2=1.0, steps=100): - # lambda function for sinusoidal ramp from y1 to y2 - return lambda x: ((1 - math.cos(x * math.pi / steps)) / 2) * (y2 - y1) + y1 - - -def colorstr(*input): - # Colors a string https://en.wikipedia.org/wiki/ANSI_escape_code, i.e. colorstr('blue', 'hello world') - *args, string = input if len(input) > 1 else ('blue', 'bold', input[0]) # color arguments, string - colors = {'black': '\033[30m', # basic colors - 'red': '\033[31m', - 'green': '\033[32m', - 'yellow': '\033[33m', - 'blue': '\033[34m', - 'magenta': '\033[35m', - 'cyan': '\033[36m', - 'white': '\033[37m', - 'bright_black': '\033[90m', # bright colors - 'bright_red': '\033[91m', - 'bright_green': '\033[92m', - 'bright_yellow': '\033[93m', - 'bright_blue': '\033[94m', - 'bright_magenta': '\033[95m', - 'bright_cyan': '\033[96m', - 'bright_white': '\033[97m', - 'end': '\033[0m', # misc - 'bold': '\033[1m', - 'underline': '\033[4m'} - return ''.join(colors[x] for x in args) + f'{string}' + colors['end'] - - -def labels_to_class_weights(labels, nc=80): - # Get class weights (inverse frequency) from training labels - if labels[0] is None: # no labels loaded - return torch.Tensor() - - labels = np.concatenate(labels, 0) # labels.shape = (866643, 5) for COCO - classes = labels[:, 0].astype(np.int) # labels = [class xywh] - weights = np.bincount(classes, minlength=nc) # occurrences per class - - # Prepend gridpoint count (for uCE training) - # gpi = ((320 / 32 * np.array([1, 2, 4])) ** 2 * 3).sum() # gridpoints per image - # weights = np.hstack([gpi * len(labels) - weights.sum() * 9, weights * 9]) ** 0.5 # prepend gridpoints to start - - weights[weights == 0] = 1 # replace empty bins with 1 - weights = 1 / weights # number of targets per class - weights /= weights.sum() # normalize - return torch.from_numpy(weights) - - -def labels_to_image_weights(labels, nc=80, class_weights=np.ones(80)): - # Produces image weights based on class_weights and image contents - class_counts = np.array([np.bincount(x[:, 0].astype(np.int), minlength=nc) for x in labels]) - image_weights = (class_weights.reshape(1, nc) * class_counts).sum(1) - # index = random.choices(range(n), weights=image_weights, k=1) # weight image sample - return image_weights - - -def coco80_to_coco91_class(): # converts 80-index (val2014) to 91-index (paper) - # https://tech.amikelive.com/node-718/what-object-categories-labels-are-in-coco-dataset/ - # a = np.loadtxt('data/coco.names', dtype='str', delimiter='\n') - # b = np.loadtxt('data/coco_paper.names', dtype='str', delimiter='\n') - # x1 = [list(a[i] == b).index(True) + 1 for i in range(80)] # darknet to coco - # x2 = [list(b[i] == a).index(True) if any(b[i] == a) else None for i in range(91)] # coco to darknet - x = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 27, 28, 31, 32, 33, 34, - 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, - 64, 65, 67, 70, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 84, 85, 86, 87, 88, 89, 90] - return x - - -def xyxy2xywh(x): - # Convert nx4 boxes from [x1, y1, x2, y2] to [x, y, w, h] where xy1=top-left, xy2=bottom-right - y = x.clone() if isinstance(x, torch.Tensor) else np.copy(x) - y[:, 0] = (x[:, 0] + x[:, 2]) / 2 # x center - y[:, 1] = (x[:, 1] + x[:, 3]) / 2 # y center - y[:, 2] = x[:, 2] - x[:, 0] # width - y[:, 3] = x[:, 3] - x[:, 1] # height - return y - - -def xywh2xyxy(x): - # Convert nx4 boxes from [x, y, w, h] to [x1, y1, x2, y2] where xy1=top-left, xy2=bottom-right - y = x.clone() if isinstance(x, torch.Tensor) else np.copy(x) - y[:, 0] = x[:, 0] - x[:, 2] / 2 # top left x - y[:, 1] = x[:, 1] - x[:, 3] / 2 # top left y - y[:, 2] = x[:, 0] + x[:, 2] / 2 # bottom right x - y[:, 3] = x[:, 1] + x[:, 3] / 2 # bottom right y - return y - - -def xywhn2xyxy(x, w=640, h=640, padw=0, padh=0): - # Convert nx4 boxes from [x, y, w, h] normalized to [x1, y1, x2, y2] where xy1=top-left, xy2=bottom-right - y = x.clone() if isinstance(x, torch.Tensor) else np.copy(x) - y[:, 0] = w * (x[:, 0] - x[:, 2] / 2) + padw # top left x - y[:, 1] = h * (x[:, 1] - x[:, 3] / 2) + padh # top left y - y[:, 2] = w * (x[:, 0] + x[:, 2] / 2) + padw # bottom right x - y[:, 3] = h * (x[:, 1] + x[:, 3] / 2) + padh # bottom right y - return y - - -def xyn2xy(x, w=640, h=640, padw=0, padh=0): - # Convert normalized segments into pixel segments, shape (n,2) - y = x.clone() if isinstance(x, torch.Tensor) else np.copy(x) - y[:, 0] = w * x[:, 0] + padw # top left x - y[:, 1] = h * x[:, 1] + padh # top left y - return y - - -def segment2box(segment, width=640, height=640): - # Convert 1 segment label to 1 box label, applying inside-image constraint, i.e. (xy1, xy2, ...) to (xyxy) - x, y = segment.T # segment xy - inside = (x >= 0) & (y >= 0) & (x <= width) & (y <= height) - x, y, = x[inside], y[inside] - return np.array([x.min(), y.min(), x.max(), y.max()]) if any(x) else np.zeros((1, 4)) # xyxy - - -def segments2boxes(segments): - # Convert segment labels to box labels, i.e. (cls, xy1, xy2, ...) to (cls, xywh) - boxes = [] - for s in segments: - x, y = s.T # segment xy - boxes.append([x.min(), y.min(), x.max(), y.max()]) # cls, xyxy - return xyxy2xywh(np.array(boxes)) # cls, xywh - - -def resample_segments(segments, n=1000): - # Up-sample an (n,2) segment - for i, s in enumerate(segments): - x = np.linspace(0, len(s) - 1, n) - xp = np.arange(len(s)) - segments[i] = np.concatenate([np.interp(x, xp, s[:, i]) for i in range(2)]).reshape(2, -1).T # segment xy - return segments - - -def scale_coords(img1_shape, coords, img0_shape, ratio_pad=None): - # Rescale coords (xyxy) from img1_shape to img0_shape - if ratio_pad is None: # calculate from img0_shape - gain = min(img1_shape[0] / img0_shape[0], img1_shape[1] / img0_shape[1]) # gain = old / new - pad = (img1_shape[1] - img0_shape[1] * gain) / 2, (img1_shape[0] - img0_shape[0] * gain) / 2 # wh padding - else: - gain = ratio_pad[0][0] - pad = ratio_pad[1] - - coords[:, [0, 2]] -= pad[0] # x padding - coords[:, [1, 3]] -= pad[1] # y padding - coords[:, :4] /= gain - clip_coords(coords, img0_shape) - return coords - - -def clip_coords(boxes, img_shape): - # Clip bounding xyxy bounding boxes to image shape (height, width) - boxes[:, 0].clamp_(0, img_shape[1]) # x1 - boxes[:, 1].clamp_(0, img_shape[0]) # y1 - boxes[:, 2].clamp_(0, img_shape[1]) # x2 - boxes[:, 3].clamp_(0, img_shape[0]) # y2 - - -def bbox_iou(box1, box2, x1y1x2y2=True, GIoU=False, DIoU=False, CIoU=False, eps=1e-7): - # Returns the IoU of box1 to box2. box1 is 4, box2 is nx4 - box2 = box2.T - - # Get the coordinates of bounding boxes - if x1y1x2y2: # x1, y1, x2, y2 = box1 - b1_x1, b1_y1, b1_x2, b1_y2 = box1[0], box1[1], box1[2], box1[3] - b2_x1, b2_y1, b2_x2, b2_y2 = box2[0], box2[1], box2[2], box2[3] - else: # transform from xywh to xyxy - b1_x1, b1_x2 = box1[0] - box1[2] / 2, box1[0] + box1[2] / 2 - b1_y1, b1_y2 = box1[1] - box1[3] / 2, box1[1] + box1[3] / 2 - b2_x1, b2_x2 = box2[0] - box2[2] / 2, box2[0] + box2[2] / 2 - b2_y1, b2_y2 = box2[1] - box2[3] / 2, box2[1] + box2[3] / 2 - - # Intersection area - inter = (torch.min(b1_x2, b2_x2) - torch.max(b1_x1, b2_x1)).clamp(0) * \ - (torch.min(b1_y2, b2_y2) - torch.max(b1_y1, b2_y1)).clamp(0) - - # Union Area - w1, h1 = b1_x2 - b1_x1, b1_y2 - b1_y1 + eps - w2, h2 = b2_x2 - b2_x1, b2_y2 - b2_y1 + eps - union = w1 * h1 + w2 * h2 - inter + eps - - iou = inter / union - if GIoU or DIoU or CIoU: - cw = torch.max(b1_x2, b2_x2) - torch.min(b1_x1, b2_x1) # convex (smallest enclosing box) width - ch = torch.max(b1_y2, b2_y2) - torch.min(b1_y1, b2_y1) # convex height - if CIoU or DIoU: # Distance or Complete IoU https://arxiv.org/abs/1911.08287v1 - c2 = cw ** 2 + ch ** 2 + eps # convex diagonal squared - rho2 = ((b2_x1 + b2_x2 - b1_x1 - b1_x2) ** 2 + - (b2_y1 + b2_y2 - b1_y1 - b1_y2) ** 2) / 4 # center distance squared - if DIoU: - return iou - rho2 / c2 # DIoU - elif CIoU: # https://github.com/Zzh-tju/DIoU-SSD-pytorch/blob/master/utils/box/box_utils.py#L47 - v = (4 / math.pi ** 2) * torch.pow(torch.atan(w2 / h2) - torch.atan(w1 / h1), 2) - with torch.no_grad(): - alpha = v / (v - iou + (1 + eps)) - return iou - (rho2 / c2 + v * alpha) # CIoU - else: # GIoU https://arxiv.org/pdf/1902.09630.pdf - c_area = cw * ch + eps # convex area - return iou - (c_area - union) / c_area # GIoU - else: - return iou # IoU - - -def box_iou(box1, box2): - # https://github.com/pytorch/vision/blob/master/torchvision/ops/boxes.py - """ - Return intersection-over-union (Jaccard index) of boxes. - Both sets of boxes are expected to be in (x1, y1, x2, y2) format. - Arguments: - box1 (Tensor[N, 4]) - box2 (Tensor[M, 4]) - Returns: - iou (Tensor[N, M]): the NxM matrix containing the pairwise - IoU values for every element in boxes1 and boxes2 - """ - - def box_area(box): - # box = 4xn - return (box[2] - box[0]) * (box[3] - box[1]) - - area1 = box_area(box1.T) - area2 = box_area(box2.T) - - # inter(N,M) = (rb(N,M,2) - lt(N,M,2)).clamp(0).prod(2) - inter = (torch.min(box1[:, None, 2:], box2[:, 2:]) - torch.max(box1[:, None, :2], box2[:, :2])).clamp(0).prod(2) - return inter / (area1[:, None] + area2 - inter) # iou = inter / (area1 + area2 - inter) - - -def wh_iou(wh1, wh2): - # Returns the nxm IoU matrix. wh1 is nx2, wh2 is mx2 - wh1 = wh1[:, None] # [N,1,2] - wh2 = wh2[None] # [1,M,2] - inter = torch.min(wh1, wh2).prod(2) # [N,M] - return inter / (wh1.prod(2) + wh2.prod(2) - inter) # iou = inter / (area1 + area2 - inter) - - -def non_max_suppression(prediction, conf_thres=0.25, iou_thres=0.45, classes=None, agnostic=False, multi_label=False, - labels=()): - """Runs Non-Maximum Suppression (NMS) on inference results - - Returns: - list of detections, on (n,6) tensor per image [xyxy, conf, cls] - """ - - nc = prediction.shape[2] - 5 # number of classes - xc = prediction[..., 4] > conf_thres # candidates - - # Checks - assert 0 <= conf_thres <= 1, f'Invalid Confidence threshold {conf_thres}, valid values are between 0.0 and 1.0' - assert 0 <= iou_thres <= 1, f'Invalid IoU {iou_thres}, valid values are between 0.0 and 1.0' - - # Settings - min_wh, max_wh = 2, 4096 # (pixels) minimum and maximum box width and height - max_det = 300 # maximum number of detections per image - max_nms = 30000 # maximum number of boxes into torchvision.ops.nms() - time_limit = 10.0 # seconds to quit after - redundant = True # require redundant detections - multi_label &= nc > 1 # multiple labels per box (adds 0.5ms/img) - merge = False # use merge-NMS - - t = time.time() - output = [torch.zeros((0, 6), device=prediction.device)] * prediction.shape[0] - for xi, x in enumerate(prediction): # image index, image inference - # Apply constraints - # x[((x[..., 2:4] < min_wh) | (x[..., 2:4] > max_wh)).any(1), 4] = 0 # width-height - x = x[xc[xi]] # confidence - - # Cat apriori labels if autolabelling - if labels and len(labels[xi]): - l = labels[xi] - v = torch.zeros((len(l), nc + 5), device=x.device) - v[:, :4] = l[:, 1:5] # box - v[:, 4] = 1.0 # conf - v[range(len(l)), l[:, 0].long() + 5] = 1.0 # cls - x = torch.cat((x, v), 0) - - # If none remain process next image - if not x.shape[0]: - continue - - # Compute conf - x[:, 5:] *= x[:, 4:5] # conf = obj_conf * cls_conf - - # Box (center x, center y, width, height) to (x1, y1, x2, y2) - box = xywh2xyxy(x[:, :4]) - - # Detections matrix nx6 (xyxy, conf, cls) - if multi_label: - i, j = (x[:, 5:] > conf_thres).nonzero(as_tuple=False).T - x = torch.cat((box[i], x[i, j + 5, None], j[:, None].float()), 1) - else: # best class only - conf, j = x[:, 5:].max(1, keepdim=True) - x = torch.cat((box, conf, j.float()), 1)[conf.view(-1) > conf_thres] - - # Filter by class - if classes is not None: - x = x[(x[:, 5:6] == torch.tensor(classes, device=x.device)).any(1)] - - # Apply finite constraint - # if not torch.isfinite(x).all(): - # x = x[torch.isfinite(x).all(1)] - - # Check shape - n = x.shape[0] # number of boxes - if not n: # no boxes - continue - elif n > max_nms: # excess boxes - x = x[x[:, 4].argsort(descending=True)[:max_nms]] # sort by confidence - - # Batched NMS - c = x[:, 5:6] * (0 if agnostic else max_wh) # classes - boxes, scores = x[:, :4] + c, x[:, 4] # boxes (offset by class), scores - i = torchvision.ops.nms(boxes, scores, iou_thres) # NMS - if i.shape[0] > max_det: # limit detections - i = i[:max_det] - if merge and (1 < n < 3E3): # Merge NMS (boxes merged using weighted mean) - # update boxes as boxes(i,4) = weights(i,n) * boxes(n,4) - iou = box_iou(boxes[i], boxes) > iou_thres # iou matrix - weights = iou * scores[None] # box weights - x[i, :4] = torch.mm(weights, x[:, :4]).float() / weights.sum(1, keepdim=True) # merged boxes - if redundant: - i = i[iou.sum(1) > 1] # require redundancy - - output[xi] = x[i] - if (time.time() - t) > time_limit: - print(f'WARNING: NMS time limit {time_limit}s exceeded') - break # time limit exceeded - - return output - - -def strip_optimizer(f='best.pt', s=''): # from utils.general import *; strip_optimizer() - # Strip optimizer from 'f' to finalize training, optionally save as 's' - x = torch.load(f, map_location=torch.device('cpu')) - if x.get('ema'): - x['model'] = x['ema'] # replace model with ema - for k in 'optimizer', 'training_results', 'wandb_id', 'ema', 'updates': # keys - x[k] = None - x['epoch'] = -1 - x['model'].half() # to FP16 - for p in x['model'].parameters(): - p.requires_grad = False - torch.save(x, s or f) - mb = os.path.getsize(s or f) / 1E6 # filesize - print(f"Optimizer stripped from {f},{(' saved as %s,' % s) if s else ''} {mb:.1f}MB") - - -def print_mutation(hyp, results, yaml_file='hyp_evolved.yaml', bucket=''): - # Print mutation results to evolve.txt (for use with train.py --evolve) - a = '%10s' * len(hyp) % tuple(hyp.keys()) # hyperparam keys - b = '%10.3g' * len(hyp) % tuple(hyp.values()) # hyperparam values - c = '%10.4g' * len(results) % results # results (P, R, mAP@0.5, mAP@0.5:0.95, val_losses x 3) - print('\n%s\n%s\nEvolved fitness: %s\n' % (a, b, c)) - - if bucket: - url = 'gs://%s/evolve.txt' % bucket - if gsutil_getsize(url) > (os.path.getsize('evolve.txt') if os.path.exists('evolve.txt') else 0): - os.system('gsutil cp %s .' % url) # download evolve.txt if larger than local - - with open('evolve.txt', 'a') as f: # append result - f.write(c + b + '\n') - x = np.unique(np.loadtxt('evolve.txt', ndmin=2), axis=0) # load unique rows - x = x[np.argsort(-fitness(x))] # sort - np.savetxt('evolve.txt', x, '%10.3g') # save sort by fitness - - # Save yaml - for i, k in enumerate(hyp.keys()): - hyp[k] = float(x[0, i + 7]) - with open(yaml_file, 'w') as f: - results = tuple(x[0, :7]) - c = '%10.4g' * len(results) % results # results (P, R, mAP@0.5, mAP@0.5:0.95, val_losses x 3) - f.write('# Hyperparameter Evolution Results\n# Generations: %g\n# Metrics: ' % len(x) + c + '\n\n') - yaml.safe_dump(hyp, f, sort_keys=False) - - if bucket: - os.system('gsutil cp evolve.txt %s gs://%s' % (yaml_file, bucket)) # upload - - -def apply_classifier(x, model, img, im0): - # Apply a second stage classifier to yolo outputs - im0 = [im0] if isinstance(im0, np.ndarray) else im0 - for i, d in enumerate(x): # per image - if d is not None and len(d): - d = d.clone() - - # Reshape and pad cutouts - b = xyxy2xywh(d[:, :4]) # boxes - b[:, 2:] = b[:, 2:].max(1)[0].unsqueeze(1) # rectangle to square - b[:, 2:] = b[:, 2:] * 1.3 + 30 # pad - d[:, :4] = xywh2xyxy(b).long() - - # Rescale boxes from img_size to im0 size - scale_coords(img.shape[2:], d[:, :4], im0[i].shape) - - # Classes - pred_cls1 = d[:, 5].long() - ims = [] - for j, a in enumerate(d): # per item - cutout = im0[i][int(a[1]):int(a[3]), int(a[0]):int(a[2])] - im = cv2.resize(cutout, (224, 224)) # BGR - # cv2.imwrite('test%i.jpg' % j, cutout) - - im = im[:, :, ::-1].transpose(2, 0, 1) # BGR to RGB, to 3x416x416 - im = np.ascontiguousarray(im, dtype=np.float32) # uint8 to float32 - im /= 255.0 # 0 - 255 to 0.0 - 1.0 - ims.append(im) - - pred_cls2 = model(torch.Tensor(ims).to(d.device)).argmax(1) # classifier prediction - x[i] = x[i][pred_cls1 == pred_cls2] # retain matching class detections - - return x - - -def save_one_box(xyxy, im, file='image.jpg', gain=1.02, pad=10, square=False, BGR=False): - # Save an image crop as {file} with crop size multiplied by {gain} and padded by {pad} pixels - xyxy = torch.tensor(xyxy).view(-1, 4) - b = xyxy2xywh(xyxy) # boxes - if square: - b[:, 2:] = b[:, 2:].max(1)[0].unsqueeze(1) # attempt rectangle to square - b[:, 2:] = b[:, 2:] * gain + pad # box wh * gain + pad - xyxy = xywh2xyxy(b).long() - clip_coords(xyxy, im.shape) - crop = im[int(xyxy[0, 1]):int(xyxy[0, 3]), int(xyxy[0, 0]):int(xyxy[0, 2])] - cv2.imwrite(str(increment_path(file, mkdir=True).with_suffix('.jpg')), crop if BGR else crop[..., ::-1]) - - -def increment_path(path, exist_ok=False, sep='', mkdir=False): - # Increment file or directory path, i.e. runs/exp --> runs/exp{sep}2, runs/exp{sep}3, ... etc. - path = Path(path) # os-agnostic - if path.exists() and not exist_ok: - suffix = path.suffix - path = path.with_suffix('') - dirs = glob.glob(f"{path}{sep}*") # similar paths - matches = [re.search(rf"%s{sep}(\d+)" % path.stem, d) for d in dirs] - i = [int(m.groups()[0]) for m in matches if m] # indices - n = max(i) + 1 if i else 2 # increment number - path = Path(f"{path}{sep}{n}{suffix}") # update path - dir = path if path.suffix == '' else path.parent # directory - if not dir.exists() and mkdir: - dir.mkdir(parents=True, exist_ok=True) # make directory - return path diff --git a/spaces/cymic/Waifu_Diffusion_Webui/modules/ldsr_model.py b/spaces/cymic/Waifu_Diffusion_Webui/modules/ldsr_model.py deleted file mode 100644 index 1c1070fc6bc0ee361d386a9cca9cfa7c1774141e..0000000000000000000000000000000000000000 --- a/spaces/cymic/Waifu_Diffusion_Webui/modules/ldsr_model.py +++ /dev/null @@ -1,56 +0,0 @@ -import os -import sys -import traceback - -from basicsr.utils.download_util import load_file_from_url - -from modules.upscaler import Upscaler, UpscalerData -from modules.ldsr_model_arch import LDSR -from modules import shared -from modules.paths import models_path - - -class UpscalerLDSR(Upscaler): - def __init__(self, user_path): - self.name = "LDSR" - self.model_path = os.path.join(models_path, self.name) - self.user_path = user_path - self.model_url = "https://heibox.uni-heidelberg.de/f/578df07c8fc04ffbadf3/?dl=1" - self.yaml_url = "https://heibox.uni-heidelberg.de/f/31a76b13ea27482981b4/?dl=1" - super().__init__() - scaler_data = UpscalerData("LDSR", None, self) - self.scalers = [scaler_data] - - def load_model(self, path: str): - # Remove incorrect project.yaml file if too big - yaml_path = os.path.join(self.model_path, "project.yaml") - old_model_path = os.path.join(self.model_path, "model.pth") - new_model_path = os.path.join(self.model_path, "model.ckpt") - if os.path.exists(yaml_path): - statinfo = os.stat(yaml_path) - if statinfo.st_size >= 10485760: - print("Removing invalid LDSR YAML file.") - os.remove(yaml_path) - if os.path.exists(old_model_path): - print("Renaming model from model.pth to model.ckpt") - os.rename(old_model_path, new_model_path) - model = load_file_from_url(url=self.model_url, model_dir=self.model_path, - file_name="model.ckpt", progress=True) - yaml = load_file_from_url(url=self.yaml_url, model_dir=self.model_path, - file_name="project.yaml", progress=True) - - try: - return LDSR(model, yaml) - - except Exception: - print("Error importing LDSR:", file=sys.stderr) - print(traceback.format_exc(), file=sys.stderr) - return None - - def do_upscale(self, img, path): - ldsr = self.load_model(path) - if ldsr is None: - print("NO LDSR!") - return img - ddim_steps = shared.opts.ldsr_steps - return ldsr.super_resolution(img, ddim_steps, self.scale) diff --git a/spaces/daddyjin/TalkingFaceGeneration/FONT/modules/generator.py b/spaces/daddyjin/TalkingFaceGeneration/FONT/modules/generator.py deleted file mode 100644 index a31155d685013ac24ef5fa0e12569b46c9c74ae0..0000000000000000000000000000000000000000 --- a/spaces/daddyjin/TalkingFaceGeneration/FONT/modules/generator.py +++ /dev/null @@ -1,97 +0,0 @@ -import torch -from torch import nn -import torch.nn.functional as F -from .util import ResBlock2d, SameBlock2d, UpBlock2d, DownBlock2d -from .dense_motion import DenseMotionNetwork - - -class OcclusionAwareGenerator(nn.Module): - """ - Generator that given source image and and keypoints try to transform image according to movement trajectories - induced by keypoints. Generator follows Johnson architecture. - """ - - def __init__(self, num_channels, num_kp, block_expansion, max_features, num_down_blocks, - num_bottleneck_blocks, estimate_occlusion_map=False, dense_motion_params=None, estimate_jacobian=False): - super(OcclusionAwareGenerator, self).__init__() - - if dense_motion_params is not None: - self.dense_motion_network = DenseMotionNetwork(num_kp=num_kp, num_channels=num_channels, - estimate_occlusion_map=estimate_occlusion_map, - **dense_motion_params) - else: - self.dense_motion_network = None - - self.first = SameBlock2d(num_channels, block_expansion, kernel_size=(7, 7), padding=(3, 3)) - - down_blocks = [] - for i in range(num_down_blocks): - in_features = min(max_features, block_expansion * (2 ** i)) - out_features = min(max_features, block_expansion * (2 ** (i + 1))) - down_blocks.append(DownBlock2d(in_features, out_features, kernel_size=(3, 3), padding=(1, 1))) - self.down_blocks = nn.ModuleList(down_blocks) - - up_blocks = [] - for i in range(num_down_blocks): - in_features = min(max_features, block_expansion * (2 ** (num_down_blocks - i))) - out_features = min(max_features, block_expansion * (2 ** (num_down_blocks - i - 1))) - up_blocks.append(UpBlock2d(in_features, out_features, kernel_size=(3, 3), padding=(1, 1))) - self.up_blocks = nn.ModuleList(up_blocks) - - self.bottleneck = torch.nn.Sequential() - in_features = min(max_features, block_expansion * (2 ** num_down_blocks)) - for i in range(num_bottleneck_blocks): - self.bottleneck.add_module('r' + str(i), ResBlock2d(in_features, kernel_size=(3, 3), padding=(1, 1))) - - self.final = nn.Conv2d(block_expansion, num_channels, kernel_size=(7, 7), padding=(3, 3)) - self.estimate_occlusion_map = estimate_occlusion_map - self.num_channels = num_channels - - def deform_input(self, inp, deformation): - _, h_old, w_old, _ = deformation.shape - _, _, h, w = inp.shape - if h_old != h or w_old != w: - deformation = deformation.permute(0, 3, 1, 2) - deformation = F.interpolate(deformation, size=(h, w), mode='bilinear') - deformation = deformation.permute(0, 2, 3, 1) - return F.grid_sample(inp, deformation) - - def forward(self, source_image, kp_driving, kp_source): - # Encoding (downsampling) part - out = self.first(source_image) #[4,64,H,W] - for i in range(len(self.down_blocks)): - out = self.down_blocks[i](out) #[4,256,H/4,W/4] - - # Transforming feature representation according to deformation and occlusion - output_dict = {} - if self.dense_motion_network is not None: - dense_motion = self.dense_motion_network(source_image=source_image, kp_driving=kp_driving, - kp_source=kp_source) - output_dict['mask'] = dense_motion['mask'] - output_dict['sparse_deformed'] = dense_motion['sparse_deformed'] - - if 'occlusion_map' in dense_motion: - occlusion_map = dense_motion['occlusion_map'] - output_dict['occlusion_map'] = occlusion_map - else: - occlusion_map = None - deformation = dense_motion['deformation'] - out = self.deform_input(out, deformation) - - if occlusion_map is not None: - if out.shape[2] != occlusion_map.shape[2] or out.shape[3] != occlusion_map.shape[3]: - occlusion_map = F.interpolate(occlusion_map, size=out.shape[2:], mode='bilinear') - out = out * occlusion_map - - output_dict["deformed"] = self.deform_input(source_image, deformation) - - # Decoding part - out = self.bottleneck(out) #[4,256,64,64] - for i in range(len(self.up_blocks)): - out = self.up_blocks[i](out) - out = self.final(out) - out = torch.sigmoid(out) #[4,3,256,256] - - output_dict["prediction"] = out - - return output_dict diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/jsonschema/cli.py b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/jsonschema/cli.py deleted file mode 100644 index e8f671ca2ea2ee85b63e7ecc919224f9f6181983..0000000000000000000000000000000000000000 --- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/jsonschema/cli.py +++ /dev/null @@ -1,296 +0,0 @@ -""" -The ``jsonschema`` command line. -""" - -from importlib import metadata -from json import JSONDecodeError -from textwrap import dedent -import argparse -import json -import sys -import traceback -import warnings - -try: - from pkgutil import resolve_name -except ImportError: - from pkgutil_resolve_name import resolve_name # type: ignore - -from attrs import define, field - -from jsonschema.exceptions import SchemaError -from jsonschema.validators import _RefResolver, validator_for - -warnings.warn( - ( - "The jsonschema CLI is deprecated and will be removed in a future " - "version. Please use check-jsonschema instead, which can be installed " - "from https://pypi.org/project/check-jsonschema/" - ), - DeprecationWarning, - stacklevel=2, -) - - -class _CannotLoadFile(Exception): - pass - - -@define -class _Outputter: - - _formatter = field() - _stdout = field() - _stderr = field() - - @classmethod - def from_arguments(cls, arguments, stdout, stderr): - if arguments["output"] == "plain": - formatter = _PlainFormatter(arguments["error_format"]) - elif arguments["output"] == "pretty": - formatter = _PrettyFormatter() - return cls(formatter=formatter, stdout=stdout, stderr=stderr) - - def load(self, path): - try: - file = open(path) - except FileNotFoundError: - self.filenotfound_error(path=path, exc_info=sys.exc_info()) - raise _CannotLoadFile() - - with file: - try: - return json.load(file) - except JSONDecodeError: - self.parsing_error(path=path, exc_info=sys.exc_info()) - raise _CannotLoadFile() - - def filenotfound_error(self, **kwargs): - self._stderr.write(self._formatter.filenotfound_error(**kwargs)) - - def parsing_error(self, **kwargs): - self._stderr.write(self._formatter.parsing_error(**kwargs)) - - def validation_error(self, **kwargs): - self._stderr.write(self._formatter.validation_error(**kwargs)) - - def validation_success(self, **kwargs): - self._stdout.write(self._formatter.validation_success(**kwargs)) - - -@define -class _PrettyFormatter: - - _ERROR_MSG = dedent( - """\ - ===[{type}]===({path})=== - - {body} - ----------------------------- - """, - ) - _SUCCESS_MSG = "===[SUCCESS]===({path})===\n" - - def filenotfound_error(self, path, exc_info): - return self._ERROR_MSG.format( - path=path, - type="FileNotFoundError", - body="{!r} does not exist.".format(path), - ) - - def parsing_error(self, path, exc_info): - exc_type, exc_value, exc_traceback = exc_info - exc_lines = "".join( - traceback.format_exception(exc_type, exc_value, exc_traceback), - ) - return self._ERROR_MSG.format( - path=path, - type=exc_type.__name__, - body=exc_lines, - ) - - def validation_error(self, instance_path, error): - return self._ERROR_MSG.format( - path=instance_path, - type=error.__class__.__name__, - body=error, - ) - - def validation_success(self, instance_path): - return self._SUCCESS_MSG.format(path=instance_path) - - -@define -class _PlainFormatter: - - _error_format = field() - - def filenotfound_error(self, path, exc_info): - return "{!r} does not exist.\n".format(path) - - def parsing_error(self, path, exc_info): - return "Failed to parse {}: {}\n".format( - "" if path == "" else repr(path), - exc_info[1], - ) - - def validation_error(self, instance_path, error): - return self._error_format.format(file_name=instance_path, error=error) - - def validation_success(self, instance_path): - return "" - - -def _resolve_name_with_default(name): - if "." not in name: - name = "jsonschema." + name - return resolve_name(name) - - -parser = argparse.ArgumentParser( - description="JSON Schema Validation CLI", -) -parser.add_argument( - "-i", "--instance", - action="append", - dest="instances", - help=""" - a path to a JSON instance (i.e. filename.json) to validate (may - be specified multiple times). If no instances are provided via this - option, one will be expected on standard input. - """, -) -parser.add_argument( - "-F", "--error-format", - help=""" - the format to use for each validation error message, specified - in a form suitable for str.format. This string will be passed - one formatted object named 'error' for each ValidationError. - Only provide this option when using --output=plain, which is the - default. If this argument is unprovided and --output=plain is - used, a simple default representation will be used. - """, -) -parser.add_argument( - "-o", "--output", - choices=["plain", "pretty"], - default="plain", - help=""" - an output format to use. 'plain' (default) will produce minimal - text with one line for each error, while 'pretty' will produce - more detailed human-readable output on multiple lines. - """, -) -parser.add_argument( - "-V", "--validator", - type=_resolve_name_with_default, - help=""" - the fully qualified object name of a validator to use, or, for - validators that are registered with jsonschema, simply the name - of the class. - """, -) -parser.add_argument( - "--base-uri", - help=""" - a base URI to assign to the provided schema, even if it does not - declare one (via e.g. $id). This option can be used if you wish to - resolve relative references to a particular URI (or local path) - """, -) -parser.add_argument( - "--version", - action="version", - version=metadata.version("jsonschema"), -) -parser.add_argument( - "schema", - help="the path to a JSON Schema to validate with (i.e. schema.json)", -) - - -def parse_args(args): - arguments = vars(parser.parse_args(args=args or ["--help"])) - if arguments["output"] != "plain" and arguments["error_format"]: - raise parser.error( - "--error-format can only be used with --output plain", - ) - if arguments["output"] == "plain" and arguments["error_format"] is None: - arguments["error_format"] = "{error.instance}: {error.message}\n" - return arguments - - -def _validate_instance(instance_path, instance, validator, outputter): - invalid = False - for error in validator.iter_errors(instance): - invalid = True - outputter.validation_error(instance_path=instance_path, error=error) - - if not invalid: - outputter.validation_success(instance_path=instance_path) - return invalid - - -def main(args=sys.argv[1:]): - sys.exit(run(arguments=parse_args(args=args))) - - -def run(arguments, stdout=sys.stdout, stderr=sys.stderr, stdin=sys.stdin): - outputter = _Outputter.from_arguments( - arguments=arguments, - stdout=stdout, - stderr=stderr, - ) - - try: - schema = outputter.load(arguments["schema"]) - except _CannotLoadFile: - return 1 - - Validator = arguments["validator"] - if Validator is None: - Validator = validator_for(schema) - - try: - Validator.check_schema(schema) - except SchemaError as error: - outputter.validation_error( - instance_path=arguments["schema"], - error=error, - ) - return 1 - - if arguments["instances"]: - load, instances = outputter.load, arguments["instances"] - else: - def load(_): - try: - return json.load(stdin) - except JSONDecodeError: - outputter.parsing_error( - path="", exc_info=sys.exc_info(), - ) - raise _CannotLoadFile() - instances = [""] - - resolver = _RefResolver( - base_uri=arguments["base_uri"], - referrer=schema, - ) if arguments["base_uri"] is not None else None - - validator = Validator(schema, resolver=resolver) - exit_code = 0 - for each in instances: - try: - instance = load(each) - except _CannotLoadFile: - exit_code = 1 - else: - exit_code |= _validate_instance( - instance_path=each, - instance=instance, - validator=validator, - outputter=outputter, - ) - - return exit_code diff --git a/spaces/declare-lab/tango/diffusers/src/diffusers/models/unet_2d_blocks.py b/spaces/declare-lab/tango/diffusers/src/diffusers/models/unet_2d_blocks.py deleted file mode 100644 index 70cc75b51200b53a89f48bec92fa5dd66209f43e..0000000000000000000000000000000000000000 --- a/spaces/declare-lab/tango/diffusers/src/diffusers/models/unet_2d_blocks.py +++ /dev/null @@ -1,2775 +0,0 @@ -# Copyright 2023 The HuggingFace Team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -from typing import Any, Dict, Optional, Tuple - -import numpy as np -import torch -from torch import nn - -from .attention import AdaGroupNorm, AttentionBlock -from .attention_processor import Attention, AttnAddedKVProcessor -from .dual_transformer_2d import DualTransformer2DModel -from .resnet import Downsample2D, FirDownsample2D, FirUpsample2D, KDownsample2D, KUpsample2D, ResnetBlock2D, Upsample2D -from .transformer_2d import Transformer2DModel, Transformer2DModelOutput - - -def get_down_block( - down_block_type, - num_layers, - in_channels, - out_channels, - temb_channels, - add_downsample, - resnet_eps, - resnet_act_fn, - attn_num_head_channels, - resnet_groups=None, - cross_attention_dim=None, - downsample_padding=None, - dual_cross_attention=False, - use_linear_projection=False, - only_cross_attention=False, - upcast_attention=False, - resnet_time_scale_shift="default", -): - down_block_type = down_block_type[7:] if down_block_type.startswith("UNetRes") else down_block_type - if down_block_type == "DownBlock2D": - return DownBlock2D( - num_layers=num_layers, - in_channels=in_channels, - out_channels=out_channels, - temb_channels=temb_channels, - add_downsample=add_downsample, - resnet_eps=resnet_eps, - resnet_act_fn=resnet_act_fn, - resnet_groups=resnet_groups, - downsample_padding=downsample_padding, - resnet_time_scale_shift=resnet_time_scale_shift, - ) - elif down_block_type == "ResnetDownsampleBlock2D": - return ResnetDownsampleBlock2D( - num_layers=num_layers, - in_channels=in_channels, - out_channels=out_channels, - temb_channels=temb_channels, - add_downsample=add_downsample, - resnet_eps=resnet_eps, - resnet_act_fn=resnet_act_fn, - resnet_groups=resnet_groups, - resnet_time_scale_shift=resnet_time_scale_shift, - ) - elif down_block_type == "AttnDownBlock2D": - return AttnDownBlock2D( - num_layers=num_layers, - in_channels=in_channels, - out_channels=out_channels, - temb_channels=temb_channels, - add_downsample=add_downsample, - resnet_eps=resnet_eps, - resnet_act_fn=resnet_act_fn, - resnet_groups=resnet_groups, - downsample_padding=downsample_padding, - attn_num_head_channels=attn_num_head_channels, - resnet_time_scale_shift=resnet_time_scale_shift, - ) - elif down_block_type == "CrossAttnDownBlock2D": - if cross_attention_dim is None: - raise ValueError("cross_attention_dim must be specified for CrossAttnDownBlock2D") - return CrossAttnDownBlock2D( - num_layers=num_layers, - in_channels=in_channels, - out_channels=out_channels, - temb_channels=temb_channels, - add_downsample=add_downsample, - resnet_eps=resnet_eps, - resnet_act_fn=resnet_act_fn, - resnet_groups=resnet_groups, - downsample_padding=downsample_padding, - cross_attention_dim=cross_attention_dim, - attn_num_head_channels=attn_num_head_channels, - dual_cross_attention=dual_cross_attention, - use_linear_projection=use_linear_projection, - only_cross_attention=only_cross_attention, - upcast_attention=upcast_attention, - resnet_time_scale_shift=resnet_time_scale_shift, - ) - elif down_block_type == "SimpleCrossAttnDownBlock2D": - if cross_attention_dim is None: - raise ValueError("cross_attention_dim must be specified for SimpleCrossAttnDownBlock2D") - return SimpleCrossAttnDownBlock2D( - num_layers=num_layers, - in_channels=in_channels, - out_channels=out_channels, - temb_channels=temb_channels, - add_downsample=add_downsample, - resnet_eps=resnet_eps, - resnet_act_fn=resnet_act_fn, - resnet_groups=resnet_groups, - cross_attention_dim=cross_attention_dim, - attn_num_head_channels=attn_num_head_channels, - resnet_time_scale_shift=resnet_time_scale_shift, - ) - elif down_block_type == "SkipDownBlock2D": - return SkipDownBlock2D( - num_layers=num_layers, - in_channels=in_channels, - out_channels=out_channels, - temb_channels=temb_channels, - add_downsample=add_downsample, - resnet_eps=resnet_eps, - resnet_act_fn=resnet_act_fn, - downsample_padding=downsample_padding, - resnet_time_scale_shift=resnet_time_scale_shift, - ) - elif down_block_type == "AttnSkipDownBlock2D": - return AttnSkipDownBlock2D( - num_layers=num_layers, - in_channels=in_channels, - out_channels=out_channels, - temb_channels=temb_channels, - add_downsample=add_downsample, - resnet_eps=resnet_eps, - resnet_act_fn=resnet_act_fn, - downsample_padding=downsample_padding, - attn_num_head_channels=attn_num_head_channels, - resnet_time_scale_shift=resnet_time_scale_shift, - ) - elif down_block_type == "DownEncoderBlock2D": - return DownEncoderBlock2D( - num_layers=num_layers, - in_channels=in_channels, - out_channels=out_channels, - add_downsample=add_downsample, - resnet_eps=resnet_eps, - resnet_act_fn=resnet_act_fn, - resnet_groups=resnet_groups, - downsample_padding=downsample_padding, - resnet_time_scale_shift=resnet_time_scale_shift, - ) - elif down_block_type == "AttnDownEncoderBlock2D": - return AttnDownEncoderBlock2D( - num_layers=num_layers, - in_channels=in_channels, - out_channels=out_channels, - add_downsample=add_downsample, - resnet_eps=resnet_eps, - resnet_act_fn=resnet_act_fn, - resnet_groups=resnet_groups, - downsample_padding=downsample_padding, - attn_num_head_channels=attn_num_head_channels, - resnet_time_scale_shift=resnet_time_scale_shift, - ) - elif down_block_type == "KDownBlock2D": - return KDownBlock2D( - num_layers=num_layers, - in_channels=in_channels, - out_channels=out_channels, - temb_channels=temb_channels, - add_downsample=add_downsample, - resnet_eps=resnet_eps, - resnet_act_fn=resnet_act_fn, - ) - elif down_block_type == "KCrossAttnDownBlock2D": - return KCrossAttnDownBlock2D( - num_layers=num_layers, - in_channels=in_channels, - out_channels=out_channels, - temb_channels=temb_channels, - add_downsample=add_downsample, - resnet_eps=resnet_eps, - resnet_act_fn=resnet_act_fn, - cross_attention_dim=cross_attention_dim, - attn_num_head_channels=attn_num_head_channels, - add_self_attention=True if not add_downsample else False, - ) - raise ValueError(f"{down_block_type} does not exist.") - - -def get_up_block( - up_block_type, - num_layers, - in_channels, - out_channels, - prev_output_channel, - temb_channels, - add_upsample, - resnet_eps, - resnet_act_fn, - attn_num_head_channels, - resnet_groups=None, - cross_attention_dim=None, - dual_cross_attention=False, - use_linear_projection=False, - only_cross_attention=False, - upcast_attention=False, - resnet_time_scale_shift="default", -): - up_block_type = up_block_type[7:] if up_block_type.startswith("UNetRes") else up_block_type - if up_block_type == "UpBlock2D": - return UpBlock2D( - num_layers=num_layers, - in_channels=in_channels, - out_channels=out_channels, - prev_output_channel=prev_output_channel, - temb_channels=temb_channels, - add_upsample=add_upsample, - resnet_eps=resnet_eps, - resnet_act_fn=resnet_act_fn, - resnet_groups=resnet_groups, - resnet_time_scale_shift=resnet_time_scale_shift, - ) - elif up_block_type == "ResnetUpsampleBlock2D": - return ResnetUpsampleBlock2D( - num_layers=num_layers, - in_channels=in_channels, - out_channels=out_channels, - prev_output_channel=prev_output_channel, - temb_channels=temb_channels, - add_upsample=add_upsample, - resnet_eps=resnet_eps, - resnet_act_fn=resnet_act_fn, - resnet_groups=resnet_groups, - resnet_time_scale_shift=resnet_time_scale_shift, - ) - elif up_block_type == "CrossAttnUpBlock2D": - if cross_attention_dim is None: - raise ValueError("cross_attention_dim must be specified for CrossAttnUpBlock2D") - return CrossAttnUpBlock2D( - num_layers=num_layers, - in_channels=in_channels, - out_channels=out_channels, - prev_output_channel=prev_output_channel, - temb_channels=temb_channels, - add_upsample=add_upsample, - resnet_eps=resnet_eps, - resnet_act_fn=resnet_act_fn, - resnet_groups=resnet_groups, - cross_attention_dim=cross_attention_dim, - attn_num_head_channels=attn_num_head_channels, - dual_cross_attention=dual_cross_attention, - use_linear_projection=use_linear_projection, - only_cross_attention=only_cross_attention, - upcast_attention=upcast_attention, - resnet_time_scale_shift=resnet_time_scale_shift, - ) - elif up_block_type == "SimpleCrossAttnUpBlock2D": - if cross_attention_dim is None: - raise ValueError("cross_attention_dim must be specified for SimpleCrossAttnUpBlock2D") - return SimpleCrossAttnUpBlock2D( - num_layers=num_layers, - in_channels=in_channels, - out_channels=out_channels, - prev_output_channel=prev_output_channel, - temb_channels=temb_channels, - add_upsample=add_upsample, - resnet_eps=resnet_eps, - resnet_act_fn=resnet_act_fn, - resnet_groups=resnet_groups, - cross_attention_dim=cross_attention_dim, - attn_num_head_channels=attn_num_head_channels, - resnet_time_scale_shift=resnet_time_scale_shift, - ) - elif up_block_type == "AttnUpBlock2D": - return AttnUpBlock2D( - num_layers=num_layers, - in_channels=in_channels, - out_channels=out_channels, - prev_output_channel=prev_output_channel, - temb_channels=temb_channels, - add_upsample=add_upsample, - resnet_eps=resnet_eps, - resnet_act_fn=resnet_act_fn, - resnet_groups=resnet_groups, - attn_num_head_channels=attn_num_head_channels, - resnet_time_scale_shift=resnet_time_scale_shift, - ) - elif up_block_type == "SkipUpBlock2D": - return SkipUpBlock2D( - num_layers=num_layers, - in_channels=in_channels, - out_channels=out_channels, - prev_output_channel=prev_output_channel, - temb_channels=temb_channels, - add_upsample=add_upsample, - resnet_eps=resnet_eps, - resnet_act_fn=resnet_act_fn, - resnet_time_scale_shift=resnet_time_scale_shift, - ) - elif up_block_type == "AttnSkipUpBlock2D": - return AttnSkipUpBlock2D( - num_layers=num_layers, - in_channels=in_channels, - out_channels=out_channels, - prev_output_channel=prev_output_channel, - temb_channels=temb_channels, - add_upsample=add_upsample, - resnet_eps=resnet_eps, - resnet_act_fn=resnet_act_fn, - attn_num_head_channels=attn_num_head_channels, - resnet_time_scale_shift=resnet_time_scale_shift, - ) - elif up_block_type == "UpDecoderBlock2D": - return UpDecoderBlock2D( - num_layers=num_layers, - in_channels=in_channels, - out_channels=out_channels, - add_upsample=add_upsample, - resnet_eps=resnet_eps, - resnet_act_fn=resnet_act_fn, - resnet_groups=resnet_groups, - resnet_time_scale_shift=resnet_time_scale_shift, - ) - elif up_block_type == "AttnUpDecoderBlock2D": - return AttnUpDecoderBlock2D( - num_layers=num_layers, - in_channels=in_channels, - out_channels=out_channels, - add_upsample=add_upsample, - resnet_eps=resnet_eps, - resnet_act_fn=resnet_act_fn, - resnet_groups=resnet_groups, - attn_num_head_channels=attn_num_head_channels, - resnet_time_scale_shift=resnet_time_scale_shift, - ) - elif up_block_type == "KUpBlock2D": - return KUpBlock2D( - num_layers=num_layers, - in_channels=in_channels, - out_channels=out_channels, - temb_channels=temb_channels, - add_upsample=add_upsample, - resnet_eps=resnet_eps, - resnet_act_fn=resnet_act_fn, - ) - elif up_block_type == "KCrossAttnUpBlock2D": - return KCrossAttnUpBlock2D( - num_layers=num_layers, - in_channels=in_channels, - out_channels=out_channels, - temb_channels=temb_channels, - add_upsample=add_upsample, - resnet_eps=resnet_eps, - resnet_act_fn=resnet_act_fn, - cross_attention_dim=cross_attention_dim, - attn_num_head_channels=attn_num_head_channels, - ) - - raise ValueError(f"{up_block_type} does not exist.") - - -class UNetMidBlock2D(nn.Module): - def __init__( - self, - in_channels: int, - temb_channels: int, - dropout: float = 0.0, - num_layers: int = 1, - resnet_eps: float = 1e-6, - resnet_time_scale_shift: str = "default", - resnet_act_fn: str = "swish", - resnet_groups: int = 32, - resnet_pre_norm: bool = True, - add_attention: bool = True, - attn_num_head_channels=1, - output_scale_factor=1.0, - ): - super().__init__() - resnet_groups = resnet_groups if resnet_groups is not None else min(in_channels // 4, 32) - self.add_attention = add_attention - - # there is always at least one resnet - resnets = [ - ResnetBlock2D( - in_channels=in_channels, - out_channels=in_channels, - temb_channels=temb_channels, - eps=resnet_eps, - groups=resnet_groups, - dropout=dropout, - time_embedding_norm=resnet_time_scale_shift, - non_linearity=resnet_act_fn, - output_scale_factor=output_scale_factor, - pre_norm=resnet_pre_norm, - ) - ] - attentions = [] - - for _ in range(num_layers): - if self.add_attention: - attentions.append( - AttentionBlock( - in_channels, - num_head_channels=attn_num_head_channels, - rescale_output_factor=output_scale_factor, - eps=resnet_eps, - norm_num_groups=resnet_groups, - ) - ) - else: - attentions.append(None) - - resnets.append( - ResnetBlock2D( - in_channels=in_channels, - out_channels=in_channels, - temb_channels=temb_channels, - eps=resnet_eps, - groups=resnet_groups, - dropout=dropout, - time_embedding_norm=resnet_time_scale_shift, - non_linearity=resnet_act_fn, - output_scale_factor=output_scale_factor, - pre_norm=resnet_pre_norm, - ) - ) - - self.attentions = nn.ModuleList(attentions) - self.resnets = nn.ModuleList(resnets) - - def forward(self, hidden_states, temb=None): - hidden_states = self.resnets[0](hidden_states, temb) - for attn, resnet in zip(self.attentions, self.resnets[1:]): - if attn is not None: - hidden_states = attn(hidden_states) - hidden_states = resnet(hidden_states, temb) - - return hidden_states - - -class UNetMidBlock2DCrossAttn(nn.Module): - def __init__( - self, - in_channels: int, - temb_channels: int, - dropout: float = 0.0, - num_layers: int = 1, - resnet_eps: float = 1e-6, - resnet_time_scale_shift: str = "default", - resnet_act_fn: str = "swish", - resnet_groups: int = 32, - resnet_pre_norm: bool = True, - attn_num_head_channels=1, - output_scale_factor=1.0, - cross_attention_dim=1280, - dual_cross_attention=False, - use_linear_projection=False, - upcast_attention=False, - ): - super().__init__() - - self.has_cross_attention = True - self.attn_num_head_channels = attn_num_head_channels - resnet_groups = resnet_groups if resnet_groups is not None else min(in_channels // 4, 32) - - # there is always at least one resnet - resnets = [ - ResnetBlock2D( - in_channels=in_channels, - out_channels=in_channels, - temb_channels=temb_channels, - eps=resnet_eps, - groups=resnet_groups, - dropout=dropout, - time_embedding_norm=resnet_time_scale_shift, - non_linearity=resnet_act_fn, - output_scale_factor=output_scale_factor, - pre_norm=resnet_pre_norm, - ) - ] - attentions = [] - - for _ in range(num_layers): - if not dual_cross_attention: - attentions.append( - Transformer2DModel( - attn_num_head_channels, - in_channels // attn_num_head_channels, - in_channels=in_channels, - num_layers=1, - cross_attention_dim=cross_attention_dim, - norm_num_groups=resnet_groups, - use_linear_projection=use_linear_projection, - upcast_attention=upcast_attention, - ) - ) - else: - attentions.append( - DualTransformer2DModel( - attn_num_head_channels, - in_channels // attn_num_head_channels, - in_channels=in_channels, - num_layers=1, - cross_attention_dim=cross_attention_dim, - norm_num_groups=resnet_groups, - ) - ) - resnets.append( - ResnetBlock2D( - in_channels=in_channels, - out_channels=in_channels, - temb_channels=temb_channels, - eps=resnet_eps, - groups=resnet_groups, - dropout=dropout, - time_embedding_norm=resnet_time_scale_shift, - non_linearity=resnet_act_fn, - output_scale_factor=output_scale_factor, - pre_norm=resnet_pre_norm, - ) - ) - - self.attentions = nn.ModuleList(attentions) - self.resnets = nn.ModuleList(resnets) - - def forward( - self, - hidden_states: torch.FloatTensor, - temb: Optional[torch.FloatTensor] = None, - encoder_hidden_states: Optional[torch.FloatTensor] = None, - attention_mask: Optional[torch.FloatTensor] = None, - cross_attention_kwargs: Optional[Dict[str, Any]] = None, - encoder_attention_mask: Optional[torch.FloatTensor] = None, - ) -> torch.FloatTensor: - hidden_states = self.resnets[0](hidden_states, temb) - for attn, resnet in zip(self.attentions, self.resnets[1:]): - output: Transformer2DModelOutput = attn( - hidden_states, - encoder_hidden_states=encoder_hidden_states, - cross_attention_kwargs=cross_attention_kwargs, - attention_mask=attention_mask, - encoder_attention_mask=encoder_attention_mask, - ) - hidden_states = output.sample - hidden_states = resnet(hidden_states, temb) - - return hidden_states - - -class UNetMidBlock2DSimpleCrossAttn(nn.Module): - def __init__( - self, - in_channels: int, - temb_channels: int, - dropout: float = 0.0, - num_layers: int = 1, - resnet_eps: float = 1e-6, - resnet_time_scale_shift: str = "default", - resnet_act_fn: str = "swish", - resnet_groups: int = 32, - resnet_pre_norm: bool = True, - attn_num_head_channels=1, - output_scale_factor=1.0, - cross_attention_dim=1280, - ): - super().__init__() - - self.has_cross_attention = True - - self.attn_num_head_channels = attn_num_head_channels - resnet_groups = resnet_groups if resnet_groups is not None else min(in_channels // 4, 32) - - self.num_heads = in_channels // self.attn_num_head_channels - - # there is always at least one resnet - resnets = [ - ResnetBlock2D( - in_channels=in_channels, - out_channels=in_channels, - temb_channels=temb_channels, - eps=resnet_eps, - groups=resnet_groups, - dropout=dropout, - time_embedding_norm=resnet_time_scale_shift, - non_linearity=resnet_act_fn, - output_scale_factor=output_scale_factor, - pre_norm=resnet_pre_norm, - ) - ] - attentions = [] - - for _ in range(num_layers): - attentions.append( - Attention( - query_dim=in_channels, - cross_attention_dim=in_channels, - heads=self.num_heads, - dim_head=attn_num_head_channels, - added_kv_proj_dim=cross_attention_dim, - norm_num_groups=resnet_groups, - bias=True, - upcast_softmax=True, - processor=AttnAddedKVProcessor(), - ) - ) - resnets.append( - ResnetBlock2D( - in_channels=in_channels, - out_channels=in_channels, - temb_channels=temb_channels, - eps=resnet_eps, - groups=resnet_groups, - dropout=dropout, - time_embedding_norm=resnet_time_scale_shift, - non_linearity=resnet_act_fn, - output_scale_factor=output_scale_factor, - pre_norm=resnet_pre_norm, - ) - ) - - self.attentions = nn.ModuleList(attentions) - self.resnets = nn.ModuleList(resnets) - - def forward( - self, hidden_states, temb=None, encoder_hidden_states=None, attention_mask=None, cross_attention_kwargs=None - ): - cross_attention_kwargs = cross_attention_kwargs if cross_attention_kwargs is not None else {} - hidden_states = self.resnets[0](hidden_states, temb) - for attn, resnet in zip(self.attentions, self.resnets[1:]): - # attn - hidden_states = attn( - hidden_states, - encoder_hidden_states=encoder_hidden_states, - attention_mask=attention_mask, - **cross_attention_kwargs, - ) - - # resnet - hidden_states = resnet(hidden_states, temb) - - return hidden_states - - -class AttnDownBlock2D(nn.Module): - def __init__( - self, - in_channels: int, - out_channels: int, - temb_channels: int, - dropout: float = 0.0, - num_layers: int = 1, - resnet_eps: float = 1e-6, - resnet_time_scale_shift: str = "default", - resnet_act_fn: str = "swish", - resnet_groups: int = 32, - resnet_pre_norm: bool = True, - attn_num_head_channels=1, - output_scale_factor=1.0, - downsample_padding=1, - add_downsample=True, - ): - super().__init__() - resnets = [] - attentions = [] - - for i in range(num_layers): - in_channels = in_channels if i == 0 else out_channels - resnets.append( - ResnetBlock2D( - in_channels=in_channels, - out_channels=out_channels, - temb_channels=temb_channels, - eps=resnet_eps, - groups=resnet_groups, - dropout=dropout, - time_embedding_norm=resnet_time_scale_shift, - non_linearity=resnet_act_fn, - output_scale_factor=output_scale_factor, - pre_norm=resnet_pre_norm, - ) - ) - attentions.append( - AttentionBlock( - out_channels, - num_head_channels=attn_num_head_channels, - rescale_output_factor=output_scale_factor, - eps=resnet_eps, - norm_num_groups=resnet_groups, - ) - ) - - self.attentions = nn.ModuleList(attentions) - self.resnets = nn.ModuleList(resnets) - - if add_downsample: - self.downsamplers = nn.ModuleList( - [ - Downsample2D( - out_channels, use_conv=True, out_channels=out_channels, padding=downsample_padding, name="op" - ) - ] - ) - else: - self.downsamplers = None - - def forward(self, hidden_states, temb=None): - output_states = () - - for resnet, attn in zip(self.resnets, self.attentions): - hidden_states = resnet(hidden_states, temb) - hidden_states = attn(hidden_states) - output_states += (hidden_states,) - - if self.downsamplers is not None: - for downsampler in self.downsamplers: - hidden_states = downsampler(hidden_states) - - output_states += (hidden_states,) - - return hidden_states, output_states - - -class CrossAttnDownBlock2D(nn.Module): - def __init__( - self, - in_channels: int, - out_channels: int, - temb_channels: int, - dropout: float = 0.0, - num_layers: int = 1, - resnet_eps: float = 1e-6, - resnet_time_scale_shift: str = "default", - resnet_act_fn: str = "swish", - resnet_groups: int = 32, - resnet_pre_norm: bool = True, - attn_num_head_channels=1, - cross_attention_dim=1280, - output_scale_factor=1.0, - downsample_padding=1, - add_downsample=True, - dual_cross_attention=False, - use_linear_projection=False, - only_cross_attention=False, - upcast_attention=False, - ): - super().__init__() - resnets = [] - attentions = [] - - self.has_cross_attention = True - self.attn_num_head_channels = attn_num_head_channels - - for i in range(num_layers): - in_channels = in_channels if i == 0 else out_channels - resnets.append( - ResnetBlock2D( - in_channels=in_channels, - out_channels=out_channels, - temb_channels=temb_channels, - eps=resnet_eps, - groups=resnet_groups, - dropout=dropout, - time_embedding_norm=resnet_time_scale_shift, - non_linearity=resnet_act_fn, - output_scale_factor=output_scale_factor, - pre_norm=resnet_pre_norm, - ) - ) - if not dual_cross_attention: - attentions.append( - Transformer2DModel( - attn_num_head_channels, - out_channels // attn_num_head_channels, - in_channels=out_channels, - num_layers=1, - cross_attention_dim=cross_attention_dim, - norm_num_groups=resnet_groups, - use_linear_projection=use_linear_projection, - only_cross_attention=only_cross_attention, - upcast_attention=upcast_attention, - ) - ) - else: - attentions.append( - DualTransformer2DModel( - attn_num_head_channels, - out_channels // attn_num_head_channels, - in_channels=out_channels, - num_layers=1, - cross_attention_dim=cross_attention_dim, - norm_num_groups=resnet_groups, - ) - ) - self.attentions = nn.ModuleList(attentions) - self.resnets = nn.ModuleList(resnets) - - if add_downsample: - self.downsamplers = nn.ModuleList( - [ - Downsample2D( - out_channels, use_conv=True, out_channels=out_channels, padding=downsample_padding, name="op" - ) - ] - ) - else: - self.downsamplers = None - - self.gradient_checkpointing = False - - def forward( - self, - hidden_states: torch.FloatTensor, - temb: Optional[torch.FloatTensor] = None, - encoder_hidden_states: Optional[torch.FloatTensor] = None, - attention_mask: Optional[torch.FloatTensor] = None, - cross_attention_kwargs: Optional[Dict[str, Any]] = None, - encoder_attention_mask: Optional[torch.FloatTensor] = None, - ): - output_states = () - - for resnet, attn in zip(self.resnets, self.attentions): - if self.training and self.gradient_checkpointing: - - def create_custom_forward(module, return_dict=None): - def custom_forward(*inputs): - if return_dict is not None: - return module(*inputs, return_dict=return_dict) - else: - return module(*inputs) - - return custom_forward - - hidden_states = torch.utils.checkpoint.checkpoint(create_custom_forward(resnet), hidden_states, temb) - hidden_states = torch.utils.checkpoint.checkpoint( - create_custom_forward(attn, return_dict=False), - hidden_states, - encoder_hidden_states, - None, # timestep - None, # class_labels - cross_attention_kwargs, - attention_mask, - encoder_attention_mask, - )[0] - else: - hidden_states = resnet(hidden_states, temb) - hidden_states = attn( - hidden_states, - encoder_hidden_states=encoder_hidden_states, - cross_attention_kwargs=cross_attention_kwargs, - attention_mask=attention_mask, - encoder_attention_mask=encoder_attention_mask, - ).sample - - output_states += (hidden_states,) - - if self.downsamplers is not None: - for downsampler in self.downsamplers: - hidden_states = downsampler(hidden_states) - - output_states += (hidden_states,) - - return hidden_states, output_states - - -class DownBlock2D(nn.Module): - def __init__( - self, - in_channels: int, - out_channels: int, - temb_channels: int, - dropout: float = 0.0, - num_layers: int = 1, - resnet_eps: float = 1e-6, - resnet_time_scale_shift: str = "default", - resnet_act_fn: str = "swish", - resnet_groups: int = 32, - resnet_pre_norm: bool = True, - output_scale_factor=1.0, - add_downsample=True, - downsample_padding=1, - ): - super().__init__() - resnets = [] - - for i in range(num_layers): - in_channels = in_channels if i == 0 else out_channels - resnets.append( - ResnetBlock2D( - in_channels=in_channels, - out_channels=out_channels, - temb_channels=temb_channels, - eps=resnet_eps, - groups=resnet_groups, - dropout=dropout, - time_embedding_norm=resnet_time_scale_shift, - non_linearity=resnet_act_fn, - output_scale_factor=output_scale_factor, - pre_norm=resnet_pre_norm, - ) - ) - - self.resnets = nn.ModuleList(resnets) - - if add_downsample: - self.downsamplers = nn.ModuleList( - [ - Downsample2D( - out_channels, use_conv=True, out_channels=out_channels, padding=downsample_padding, name="op" - ) - ] - ) - else: - self.downsamplers = None - - self.gradient_checkpointing = False - - def forward(self, hidden_states, temb=None): - output_states = () - - for resnet in self.resnets: - if self.training and self.gradient_checkpointing: - - def create_custom_forward(module): - def custom_forward(*inputs): - return module(*inputs) - - return custom_forward - - hidden_states = torch.utils.checkpoint.checkpoint(create_custom_forward(resnet), hidden_states, temb) - else: - hidden_states = resnet(hidden_states, temb) - - output_states += (hidden_states,) - - if self.downsamplers is not None: - for downsampler in self.downsamplers: - hidden_states = downsampler(hidden_states) - - output_states += (hidden_states,) - - return hidden_states, output_states - - -class DownEncoderBlock2D(nn.Module): - def __init__( - self, - in_channels: int, - out_channels: int, - dropout: float = 0.0, - num_layers: int = 1, - resnet_eps: float = 1e-6, - resnet_time_scale_shift: str = "default", - resnet_act_fn: str = "swish", - resnet_groups: int = 32, - resnet_pre_norm: bool = True, - output_scale_factor=1.0, - add_downsample=True, - downsample_padding=1, - ): - super().__init__() - resnets = [] - - for i in range(num_layers): - in_channels = in_channels if i == 0 else out_channels - resnets.append( - ResnetBlock2D( - in_channels=in_channels, - out_channels=out_channels, - temb_channels=None, - eps=resnet_eps, - groups=resnet_groups, - dropout=dropout, - time_embedding_norm=resnet_time_scale_shift, - non_linearity=resnet_act_fn, - output_scale_factor=output_scale_factor, - pre_norm=resnet_pre_norm, - ) - ) - - self.resnets = nn.ModuleList(resnets) - - if add_downsample: - self.downsamplers = nn.ModuleList( - [ - Downsample2D( - out_channels, use_conv=True, out_channels=out_channels, padding=downsample_padding, name="op" - ) - ] - ) - else: - self.downsamplers = None - - def forward(self, hidden_states): - for resnet in self.resnets: - hidden_states = resnet(hidden_states, temb=None) - - if self.downsamplers is not None: - for downsampler in self.downsamplers: - hidden_states = downsampler(hidden_states) - - return hidden_states - - -class AttnDownEncoderBlock2D(nn.Module): - def __init__( - self, - in_channels: int, - out_channels: int, - dropout: float = 0.0, - num_layers: int = 1, - resnet_eps: float = 1e-6, - resnet_time_scale_shift: str = "default", - resnet_act_fn: str = "swish", - resnet_groups: int = 32, - resnet_pre_norm: bool = True, - attn_num_head_channels=1, - output_scale_factor=1.0, - add_downsample=True, - downsample_padding=1, - ): - super().__init__() - resnets = [] - attentions = [] - - for i in range(num_layers): - in_channels = in_channels if i == 0 else out_channels - resnets.append( - ResnetBlock2D( - in_channels=in_channels, - out_channels=out_channels, - temb_channels=None, - eps=resnet_eps, - groups=resnet_groups, - dropout=dropout, - time_embedding_norm=resnet_time_scale_shift, - non_linearity=resnet_act_fn, - output_scale_factor=output_scale_factor, - pre_norm=resnet_pre_norm, - ) - ) - attentions.append( - AttentionBlock( - out_channels, - num_head_channels=attn_num_head_channels, - rescale_output_factor=output_scale_factor, - eps=resnet_eps, - norm_num_groups=resnet_groups, - ) - ) - - self.attentions = nn.ModuleList(attentions) - self.resnets = nn.ModuleList(resnets) - - if add_downsample: - self.downsamplers = nn.ModuleList( - [ - Downsample2D( - out_channels, use_conv=True, out_channels=out_channels, padding=downsample_padding, name="op" - ) - ] - ) - else: - self.downsamplers = None - - def forward(self, hidden_states): - for resnet, attn in zip(self.resnets, self.attentions): - hidden_states = resnet(hidden_states, temb=None) - hidden_states = attn(hidden_states) - - if self.downsamplers is not None: - for downsampler in self.downsamplers: - hidden_states = downsampler(hidden_states) - - return hidden_states - - -class AttnSkipDownBlock2D(nn.Module): - def __init__( - self, - in_channels: int, - out_channels: int, - temb_channels: int, - dropout: float = 0.0, - num_layers: int = 1, - resnet_eps: float = 1e-6, - resnet_time_scale_shift: str = "default", - resnet_act_fn: str = "swish", - resnet_pre_norm: bool = True, - attn_num_head_channels=1, - output_scale_factor=np.sqrt(2.0), - downsample_padding=1, - add_downsample=True, - ): - super().__init__() - self.attentions = nn.ModuleList([]) - self.resnets = nn.ModuleList([]) - - for i in range(num_layers): - in_channels = in_channels if i == 0 else out_channels - self.resnets.append( - ResnetBlock2D( - in_channels=in_channels, - out_channels=out_channels, - temb_channels=temb_channels, - eps=resnet_eps, - groups=min(in_channels // 4, 32), - groups_out=min(out_channels // 4, 32), - dropout=dropout, - time_embedding_norm=resnet_time_scale_shift, - non_linearity=resnet_act_fn, - output_scale_factor=output_scale_factor, - pre_norm=resnet_pre_norm, - ) - ) - self.attentions.append( - AttentionBlock( - out_channels, - num_head_channels=attn_num_head_channels, - rescale_output_factor=output_scale_factor, - eps=resnet_eps, - ) - ) - - if add_downsample: - self.resnet_down = ResnetBlock2D( - in_channels=out_channels, - out_channels=out_channels, - temb_channels=temb_channels, - eps=resnet_eps, - groups=min(out_channels // 4, 32), - dropout=dropout, - time_embedding_norm=resnet_time_scale_shift, - non_linearity=resnet_act_fn, - output_scale_factor=output_scale_factor, - pre_norm=resnet_pre_norm, - use_in_shortcut=True, - down=True, - kernel="fir", - ) - self.downsamplers = nn.ModuleList([FirDownsample2D(out_channels, out_channels=out_channels)]) - self.skip_conv = nn.Conv2d(3, out_channels, kernel_size=(1, 1), stride=(1, 1)) - else: - self.resnet_down = None - self.downsamplers = None - self.skip_conv = None - - def forward(self, hidden_states, temb=None, skip_sample=None): - output_states = () - - for resnet, attn in zip(self.resnets, self.attentions): - hidden_states = resnet(hidden_states, temb) - hidden_states = attn(hidden_states) - output_states += (hidden_states,) - - if self.downsamplers is not None: - hidden_states = self.resnet_down(hidden_states, temb) - for downsampler in self.downsamplers: - skip_sample = downsampler(skip_sample) - - hidden_states = self.skip_conv(skip_sample) + hidden_states - - output_states += (hidden_states,) - - return hidden_states, output_states, skip_sample - - -class SkipDownBlock2D(nn.Module): - def __init__( - self, - in_channels: int, - out_channels: int, - temb_channels: int, - dropout: float = 0.0, - num_layers: int = 1, - resnet_eps: float = 1e-6, - resnet_time_scale_shift: str = "default", - resnet_act_fn: str = "swish", - resnet_pre_norm: bool = True, - output_scale_factor=np.sqrt(2.0), - add_downsample=True, - downsample_padding=1, - ): - super().__init__() - self.resnets = nn.ModuleList([]) - - for i in range(num_layers): - in_channels = in_channels if i == 0 else out_channels - self.resnets.append( - ResnetBlock2D( - in_channels=in_channels, - out_channels=out_channels, - temb_channels=temb_channels, - eps=resnet_eps, - groups=min(in_channels // 4, 32), - groups_out=min(out_channels // 4, 32), - dropout=dropout, - time_embedding_norm=resnet_time_scale_shift, - non_linearity=resnet_act_fn, - output_scale_factor=output_scale_factor, - pre_norm=resnet_pre_norm, - ) - ) - - if add_downsample: - self.resnet_down = ResnetBlock2D( - in_channels=out_channels, - out_channels=out_channels, - temb_channels=temb_channels, - eps=resnet_eps, - groups=min(out_channels // 4, 32), - dropout=dropout, - time_embedding_norm=resnet_time_scale_shift, - non_linearity=resnet_act_fn, - output_scale_factor=output_scale_factor, - pre_norm=resnet_pre_norm, - use_in_shortcut=True, - down=True, - kernel="fir", - ) - self.downsamplers = nn.ModuleList([FirDownsample2D(out_channels, out_channels=out_channels)]) - self.skip_conv = nn.Conv2d(3, out_channels, kernel_size=(1, 1), stride=(1, 1)) - else: - self.resnet_down = None - self.downsamplers = None - self.skip_conv = None - - def forward(self, hidden_states, temb=None, skip_sample=None): - output_states = () - - for resnet in self.resnets: - hidden_states = resnet(hidden_states, temb) - output_states += (hidden_states,) - - if self.downsamplers is not None: - hidden_states = self.resnet_down(hidden_states, temb) - for downsampler in self.downsamplers: - skip_sample = downsampler(skip_sample) - - hidden_states = self.skip_conv(skip_sample) + hidden_states - - output_states += (hidden_states,) - - return hidden_states, output_states, skip_sample - - -class ResnetDownsampleBlock2D(nn.Module): - def __init__( - self, - in_channels: int, - out_channels: int, - temb_channels: int, - dropout: float = 0.0, - num_layers: int = 1, - resnet_eps: float = 1e-6, - resnet_time_scale_shift: str = "default", - resnet_act_fn: str = "swish", - resnet_groups: int = 32, - resnet_pre_norm: bool = True, - output_scale_factor=1.0, - add_downsample=True, - ): - super().__init__() - resnets = [] - - for i in range(num_layers): - in_channels = in_channels if i == 0 else out_channels - resnets.append( - ResnetBlock2D( - in_channels=in_channels, - out_channels=out_channels, - temb_channels=temb_channels, - eps=resnet_eps, - groups=resnet_groups, - dropout=dropout, - time_embedding_norm=resnet_time_scale_shift, - non_linearity=resnet_act_fn, - output_scale_factor=output_scale_factor, - pre_norm=resnet_pre_norm, - ) - ) - - self.resnets = nn.ModuleList(resnets) - - if add_downsample: - self.downsamplers = nn.ModuleList( - [ - ResnetBlock2D( - in_channels=out_channels, - out_channels=out_channels, - temb_channels=temb_channels, - eps=resnet_eps, - groups=resnet_groups, - dropout=dropout, - time_embedding_norm=resnet_time_scale_shift, - non_linearity=resnet_act_fn, - output_scale_factor=output_scale_factor, - pre_norm=resnet_pre_norm, - down=True, - ) - ] - ) - else: - self.downsamplers = None - - self.gradient_checkpointing = False - - def forward(self, hidden_states, temb=None): - output_states = () - - for resnet in self.resnets: - if self.training and self.gradient_checkpointing: - - def create_custom_forward(module): - def custom_forward(*inputs): - return module(*inputs) - - return custom_forward - - hidden_states = torch.utils.checkpoint.checkpoint(create_custom_forward(resnet), hidden_states, temb) - else: - hidden_states = resnet(hidden_states, temb) - - output_states += (hidden_states,) - - if self.downsamplers is not None: - for downsampler in self.downsamplers: - hidden_states = downsampler(hidden_states, temb) - - output_states += (hidden_states,) - - return hidden_states, output_states - - -class SimpleCrossAttnDownBlock2D(nn.Module): - def __init__( - self, - in_channels: int, - out_channels: int, - temb_channels: int, - dropout: float = 0.0, - num_layers: int = 1, - resnet_eps: float = 1e-6, - resnet_time_scale_shift: str = "default", - resnet_act_fn: str = "swish", - resnet_groups: int = 32, - resnet_pre_norm: bool = True, - attn_num_head_channels=1, - cross_attention_dim=1280, - output_scale_factor=1.0, - add_downsample=True, - ): - super().__init__() - - self.has_cross_attention = True - - resnets = [] - attentions = [] - - self.attn_num_head_channels = attn_num_head_channels - self.num_heads = out_channels // self.attn_num_head_channels - - for i in range(num_layers): - in_channels = in_channels if i == 0 else out_channels - resnets.append( - ResnetBlock2D( - in_channels=in_channels, - out_channels=out_channels, - temb_channels=temb_channels, - eps=resnet_eps, - groups=resnet_groups, - dropout=dropout, - time_embedding_norm=resnet_time_scale_shift, - non_linearity=resnet_act_fn, - output_scale_factor=output_scale_factor, - pre_norm=resnet_pre_norm, - ) - ) - attentions.append( - Attention( - query_dim=out_channels, - cross_attention_dim=out_channels, - heads=self.num_heads, - dim_head=attn_num_head_channels, - added_kv_proj_dim=cross_attention_dim, - norm_num_groups=resnet_groups, - bias=True, - upcast_softmax=True, - processor=AttnAddedKVProcessor(), - ) - ) - self.attentions = nn.ModuleList(attentions) - self.resnets = nn.ModuleList(resnets) - - if add_downsample: - self.downsamplers = nn.ModuleList( - [ - ResnetBlock2D( - in_channels=out_channels, - out_channels=out_channels, - temb_channels=temb_channels, - eps=resnet_eps, - groups=resnet_groups, - dropout=dropout, - time_embedding_norm=resnet_time_scale_shift, - non_linearity=resnet_act_fn, - output_scale_factor=output_scale_factor, - pre_norm=resnet_pre_norm, - down=True, - ) - ] - ) - else: - self.downsamplers = None - - self.gradient_checkpointing = False - - def forward( - self, hidden_states, temb=None, encoder_hidden_states=None, attention_mask=None, cross_attention_kwargs=None - ): - output_states = () - cross_attention_kwargs = cross_attention_kwargs if cross_attention_kwargs is not None else {} - - for resnet, attn in zip(self.resnets, self.attentions): - # resnet - hidden_states = resnet(hidden_states, temb) - - # attn - hidden_states = attn( - hidden_states, - encoder_hidden_states=encoder_hidden_states, - attention_mask=attention_mask, - **cross_attention_kwargs, - ) - - output_states += (hidden_states,) - - if self.downsamplers is not None: - for downsampler in self.downsamplers: - hidden_states = downsampler(hidden_states, temb) - - output_states += (hidden_states,) - - return hidden_states, output_states - - -class KDownBlock2D(nn.Module): - def __init__( - self, - in_channels: int, - out_channels: int, - temb_channels: int, - dropout: float = 0.0, - num_layers: int = 4, - resnet_eps: float = 1e-5, - resnet_act_fn: str = "gelu", - resnet_group_size: int = 32, - add_downsample=False, - ): - super().__init__() - resnets = [] - - for i in range(num_layers): - in_channels = in_channels if i == 0 else out_channels - groups = in_channels // resnet_group_size - groups_out = out_channels // resnet_group_size - - resnets.append( - ResnetBlock2D( - in_channels=in_channels, - out_channels=out_channels, - dropout=dropout, - temb_channels=temb_channels, - groups=groups, - groups_out=groups_out, - eps=resnet_eps, - non_linearity=resnet_act_fn, - time_embedding_norm="ada_group", - conv_shortcut_bias=False, - ) - ) - - self.resnets = nn.ModuleList(resnets) - - if add_downsample: - # YiYi's comments- might be able to use FirDownsample2D, look into details later - self.downsamplers = nn.ModuleList([KDownsample2D()]) - else: - self.downsamplers = None - - self.gradient_checkpointing = False - - def forward(self, hidden_states, temb=None): - output_states = () - - for resnet in self.resnets: - if self.training and self.gradient_checkpointing: - - def create_custom_forward(module): - def custom_forward(*inputs): - return module(*inputs) - - return custom_forward - - hidden_states = torch.utils.checkpoint.checkpoint(create_custom_forward(resnet), hidden_states, temb) - else: - hidden_states = resnet(hidden_states, temb) - - output_states += (hidden_states,) - - if self.downsamplers is not None: - for downsampler in self.downsamplers: - hidden_states = downsampler(hidden_states) - - return hidden_states, output_states - - -class KCrossAttnDownBlock2D(nn.Module): - def __init__( - self, - in_channels: int, - out_channels: int, - temb_channels: int, - cross_attention_dim: int, - dropout: float = 0.0, - num_layers: int = 4, - resnet_group_size: int = 32, - add_downsample=True, - attn_num_head_channels: int = 64, - add_self_attention: bool = False, - resnet_eps: float = 1e-5, - resnet_act_fn: str = "gelu", - ): - super().__init__() - resnets = [] - attentions = [] - - self.has_cross_attention = True - - for i in range(num_layers): - in_channels = in_channels if i == 0 else out_channels - groups = in_channels // resnet_group_size - groups_out = out_channels // resnet_group_size - - resnets.append( - ResnetBlock2D( - in_channels=in_channels, - out_channels=out_channels, - dropout=dropout, - temb_channels=temb_channels, - groups=groups, - groups_out=groups_out, - eps=resnet_eps, - non_linearity=resnet_act_fn, - time_embedding_norm="ada_group", - conv_shortcut_bias=False, - ) - ) - attentions.append( - KAttentionBlock( - out_channels, - out_channels // attn_num_head_channels, - attn_num_head_channels, - cross_attention_dim=cross_attention_dim, - temb_channels=temb_channels, - attention_bias=True, - add_self_attention=add_self_attention, - cross_attention_norm=True, - group_size=resnet_group_size, - ) - ) - - self.resnets = nn.ModuleList(resnets) - self.attentions = nn.ModuleList(attentions) - - if add_downsample: - self.downsamplers = nn.ModuleList([KDownsample2D()]) - else: - self.downsamplers = None - - self.gradient_checkpointing = False - - def forward( - self, hidden_states, temb=None, encoder_hidden_states=None, attention_mask=None, cross_attention_kwargs=None - ): - output_states = () - - for resnet, attn in zip(self.resnets, self.attentions): - if self.training and self.gradient_checkpointing: - - def create_custom_forward(module, return_dict=None): - def custom_forward(*inputs): - if return_dict is not None: - return module(*inputs, return_dict=return_dict) - else: - return module(*inputs) - - return custom_forward - - hidden_states = torch.utils.checkpoint.checkpoint(create_custom_forward(resnet), hidden_states, temb) - hidden_states = torch.utils.checkpoint.checkpoint( - create_custom_forward(attn, return_dict=False), - hidden_states, - encoder_hidden_states, - attention_mask, - cross_attention_kwargs, - ) - else: - hidden_states = resnet(hidden_states, temb) - hidden_states = attn( - hidden_states, - encoder_hidden_states=encoder_hidden_states, - emb=temb, - attention_mask=attention_mask, - cross_attention_kwargs=cross_attention_kwargs, - ) - - if self.downsamplers is None: - output_states += (None,) - else: - output_states += (hidden_states,) - - if self.downsamplers is not None: - for downsampler in self.downsamplers: - hidden_states = downsampler(hidden_states) - - return hidden_states, output_states - - -class AttnUpBlock2D(nn.Module): - def __init__( - self, - in_channels: int, - prev_output_channel: int, - out_channels: int, - temb_channels: int, - dropout: float = 0.0, - num_layers: int = 1, - resnet_eps: float = 1e-6, - resnet_time_scale_shift: str = "default", - resnet_act_fn: str = "swish", - resnet_groups: int = 32, - resnet_pre_norm: bool = True, - attn_num_head_channels=1, - output_scale_factor=1.0, - add_upsample=True, - ): - super().__init__() - resnets = [] - attentions = [] - - for i in range(num_layers): - res_skip_channels = in_channels if (i == num_layers - 1) else out_channels - resnet_in_channels = prev_output_channel if i == 0 else out_channels - - resnets.append( - ResnetBlock2D( - in_channels=resnet_in_channels + res_skip_channels, - out_channels=out_channels, - temb_channels=temb_channels, - eps=resnet_eps, - groups=resnet_groups, - dropout=dropout, - time_embedding_norm=resnet_time_scale_shift, - non_linearity=resnet_act_fn, - output_scale_factor=output_scale_factor, - pre_norm=resnet_pre_norm, - ) - ) - attentions.append( - AttentionBlock( - out_channels, - num_head_channels=attn_num_head_channels, - rescale_output_factor=output_scale_factor, - eps=resnet_eps, - norm_num_groups=resnet_groups, - ) - ) - - self.attentions = nn.ModuleList(attentions) - self.resnets = nn.ModuleList(resnets) - - if add_upsample: - self.upsamplers = nn.ModuleList([Upsample2D(out_channels, use_conv=True, out_channels=out_channels)]) - else: - self.upsamplers = None - - def forward(self, hidden_states, res_hidden_states_tuple, temb=None): - for resnet, attn in zip(self.resnets, self.attentions): - # pop res hidden states - res_hidden_states = res_hidden_states_tuple[-1] - res_hidden_states_tuple = res_hidden_states_tuple[:-1] - hidden_states = torch.cat([hidden_states, res_hidden_states], dim=1) - - hidden_states = resnet(hidden_states, temb) - hidden_states = attn(hidden_states) - - if self.upsamplers is not None: - for upsampler in self.upsamplers: - hidden_states = upsampler(hidden_states) - - return hidden_states - - -class CrossAttnUpBlock2D(nn.Module): - def __init__( - self, - in_channels: int, - out_channels: int, - prev_output_channel: int, - temb_channels: int, - dropout: float = 0.0, - num_layers: int = 1, - resnet_eps: float = 1e-6, - resnet_time_scale_shift: str = "default", - resnet_act_fn: str = "swish", - resnet_groups: int = 32, - resnet_pre_norm: bool = True, - attn_num_head_channels=1, - cross_attention_dim=1280, - output_scale_factor=1.0, - add_upsample=True, - dual_cross_attention=False, - use_linear_projection=False, - only_cross_attention=False, - upcast_attention=False, - ): - super().__init__() - resnets = [] - attentions = [] - - self.has_cross_attention = True - self.attn_num_head_channels = attn_num_head_channels - - for i in range(num_layers): - res_skip_channels = in_channels if (i == num_layers - 1) else out_channels - resnet_in_channels = prev_output_channel if i == 0 else out_channels - - resnets.append( - ResnetBlock2D( - in_channels=resnet_in_channels + res_skip_channels, - out_channels=out_channels, - temb_channels=temb_channels, - eps=resnet_eps, - groups=resnet_groups, - dropout=dropout, - time_embedding_norm=resnet_time_scale_shift, - non_linearity=resnet_act_fn, - output_scale_factor=output_scale_factor, - pre_norm=resnet_pre_norm, - ) - ) - if not dual_cross_attention: - attentions.append( - Transformer2DModel( - attn_num_head_channels, - out_channels // attn_num_head_channels, - in_channels=out_channels, - num_layers=1, - cross_attention_dim=cross_attention_dim, - norm_num_groups=resnet_groups, - use_linear_projection=use_linear_projection, - only_cross_attention=only_cross_attention, - upcast_attention=upcast_attention, - ) - ) - else: - attentions.append( - DualTransformer2DModel( - attn_num_head_channels, - out_channels // attn_num_head_channels, - in_channels=out_channels, - num_layers=1, - cross_attention_dim=cross_attention_dim, - norm_num_groups=resnet_groups, - ) - ) - self.attentions = nn.ModuleList(attentions) - self.resnets = nn.ModuleList(resnets) - - if add_upsample: - self.upsamplers = nn.ModuleList([Upsample2D(out_channels, use_conv=True, out_channels=out_channels)]) - else: - self.upsamplers = None - - self.gradient_checkpointing = False - - def forward( - self, - hidden_states: torch.FloatTensor, - res_hidden_states_tuple: Tuple[torch.FloatTensor, ...], - temb: Optional[torch.FloatTensor] = None, - encoder_hidden_states: Optional[torch.FloatTensor] = None, - cross_attention_kwargs: Optional[Dict[str, Any]] = None, - upsample_size: Optional[int] = None, - attention_mask: Optional[torch.FloatTensor] = None, - encoder_attention_mask: Optional[torch.FloatTensor] = None, - ): - for resnet, attn in zip(self.resnets, self.attentions): - # pop res hidden states - res_hidden_states = res_hidden_states_tuple[-1] - res_hidden_states_tuple = res_hidden_states_tuple[:-1] - hidden_states = torch.cat([hidden_states, res_hidden_states], dim=1) - - if self.training and self.gradient_checkpointing: - - def create_custom_forward(module, return_dict=None): - def custom_forward(*inputs): - if return_dict is not None: - return module(*inputs, return_dict=return_dict) - else: - return module(*inputs) - - return custom_forward - - hidden_states = torch.utils.checkpoint.checkpoint(create_custom_forward(resnet), hidden_states, temb) - hidden_states = torch.utils.checkpoint.checkpoint( - create_custom_forward(attn, return_dict=False), - hidden_states, - encoder_hidden_states, - None, # timestep - None, # class_labels - cross_attention_kwargs, - attention_mask, - encoder_attention_mask, - )[0] - else: - hidden_states = resnet(hidden_states, temb) - hidden_states = attn( - hidden_states, - encoder_hidden_states=encoder_hidden_states, - cross_attention_kwargs=cross_attention_kwargs, - attention_mask=attention_mask, - encoder_attention_mask=encoder_attention_mask, - ).sample - - if self.upsamplers is not None: - for upsampler in self.upsamplers: - hidden_states = upsampler(hidden_states, upsample_size) - - return hidden_states - - -class UpBlock2D(nn.Module): - def __init__( - self, - in_channels: int, - prev_output_channel: int, - out_channels: int, - temb_channels: int, - dropout: float = 0.0, - num_layers: int = 1, - resnet_eps: float = 1e-6, - resnet_time_scale_shift: str = "default", - resnet_act_fn: str = "swish", - resnet_groups: int = 32, - resnet_pre_norm: bool = True, - output_scale_factor=1.0, - add_upsample=True, - ): - super().__init__() - resnets = [] - - for i in range(num_layers): - res_skip_channels = in_channels if (i == num_layers - 1) else out_channels - resnet_in_channels = prev_output_channel if i == 0 else out_channels - - resnets.append( - ResnetBlock2D( - in_channels=resnet_in_channels + res_skip_channels, - out_channels=out_channels, - temb_channels=temb_channels, - eps=resnet_eps, - groups=resnet_groups, - dropout=dropout, - time_embedding_norm=resnet_time_scale_shift, - non_linearity=resnet_act_fn, - output_scale_factor=output_scale_factor, - pre_norm=resnet_pre_norm, - ) - ) - - self.resnets = nn.ModuleList(resnets) - - if add_upsample: - self.upsamplers = nn.ModuleList([Upsample2D(out_channels, use_conv=True, out_channels=out_channels)]) - else: - self.upsamplers = None - - self.gradient_checkpointing = False - - def forward(self, hidden_states, res_hidden_states_tuple, temb=None, upsample_size=None): - for resnet in self.resnets: - # pop res hidden states - res_hidden_states = res_hidden_states_tuple[-1] - res_hidden_states_tuple = res_hidden_states_tuple[:-1] - hidden_states = torch.cat([hidden_states, res_hidden_states], dim=1) - - if self.training and self.gradient_checkpointing: - - def create_custom_forward(module): - def custom_forward(*inputs): - return module(*inputs) - - return custom_forward - - hidden_states = torch.utils.checkpoint.checkpoint(create_custom_forward(resnet), hidden_states, temb) - else: - hidden_states = resnet(hidden_states, temb) - - if self.upsamplers is not None: - for upsampler in self.upsamplers: - hidden_states = upsampler(hidden_states, upsample_size) - - return hidden_states - - -class UpDecoderBlock2D(nn.Module): - def __init__( - self, - in_channels: int, - out_channels: int, - dropout: float = 0.0, - num_layers: int = 1, - resnet_eps: float = 1e-6, - resnet_time_scale_shift: str = "default", - resnet_act_fn: str = "swish", - resnet_groups: int = 32, - resnet_pre_norm: bool = True, - output_scale_factor=1.0, - add_upsample=True, - ): - super().__init__() - resnets = [] - - for i in range(num_layers): - input_channels = in_channels if i == 0 else out_channels - - resnets.append( - ResnetBlock2D( - in_channels=input_channels, - out_channels=out_channels, - temb_channels=None, - eps=resnet_eps, - groups=resnet_groups, - dropout=dropout, - time_embedding_norm=resnet_time_scale_shift, - non_linearity=resnet_act_fn, - output_scale_factor=output_scale_factor, - pre_norm=resnet_pre_norm, - ) - ) - - self.resnets = nn.ModuleList(resnets) - - if add_upsample: - self.upsamplers = nn.ModuleList([Upsample2D(out_channels, use_conv=True, out_channels=out_channels)]) - else: - self.upsamplers = None - - def forward(self, hidden_states): - for resnet in self.resnets: - hidden_states = resnet(hidden_states, temb=None) - - if self.upsamplers is not None: - for upsampler in self.upsamplers: - hidden_states = upsampler(hidden_states) - - return hidden_states - - -class AttnUpDecoderBlock2D(nn.Module): - def __init__( - self, - in_channels: int, - out_channels: int, - dropout: float = 0.0, - num_layers: int = 1, - resnet_eps: float = 1e-6, - resnet_time_scale_shift: str = "default", - resnet_act_fn: str = "swish", - resnet_groups: int = 32, - resnet_pre_norm: bool = True, - attn_num_head_channels=1, - output_scale_factor=1.0, - add_upsample=True, - ): - super().__init__() - resnets = [] - attentions = [] - - for i in range(num_layers): - input_channels = in_channels if i == 0 else out_channels - - resnets.append( - ResnetBlock2D( - in_channels=input_channels, - out_channels=out_channels, - temb_channels=None, - eps=resnet_eps, - groups=resnet_groups, - dropout=dropout, - time_embedding_norm=resnet_time_scale_shift, - non_linearity=resnet_act_fn, - output_scale_factor=output_scale_factor, - pre_norm=resnet_pre_norm, - ) - ) - attentions.append( - AttentionBlock( - out_channels, - num_head_channels=attn_num_head_channels, - rescale_output_factor=output_scale_factor, - eps=resnet_eps, - norm_num_groups=resnet_groups, - ) - ) - - self.attentions = nn.ModuleList(attentions) - self.resnets = nn.ModuleList(resnets) - - if add_upsample: - self.upsamplers = nn.ModuleList([Upsample2D(out_channels, use_conv=True, out_channels=out_channels)]) - else: - self.upsamplers = None - - def forward(self, hidden_states): - for resnet, attn in zip(self.resnets, self.attentions): - hidden_states = resnet(hidden_states, temb=None) - hidden_states = attn(hidden_states) - - if self.upsamplers is not None: - for upsampler in self.upsamplers: - hidden_states = upsampler(hidden_states) - - return hidden_states - - -class AttnSkipUpBlock2D(nn.Module): - def __init__( - self, - in_channels: int, - prev_output_channel: int, - out_channels: int, - temb_channels: int, - dropout: float = 0.0, - num_layers: int = 1, - resnet_eps: float = 1e-6, - resnet_time_scale_shift: str = "default", - resnet_act_fn: str = "swish", - resnet_pre_norm: bool = True, - attn_num_head_channels=1, - output_scale_factor=np.sqrt(2.0), - upsample_padding=1, - add_upsample=True, - ): - super().__init__() - self.attentions = nn.ModuleList([]) - self.resnets = nn.ModuleList([]) - - for i in range(num_layers): - res_skip_channels = in_channels if (i == num_layers - 1) else out_channels - resnet_in_channels = prev_output_channel if i == 0 else out_channels - - self.resnets.append( - ResnetBlock2D( - in_channels=resnet_in_channels + res_skip_channels, - out_channels=out_channels, - temb_channels=temb_channels, - eps=resnet_eps, - groups=min(resnet_in_channels + res_skip_channels // 4, 32), - groups_out=min(out_channels // 4, 32), - dropout=dropout, - time_embedding_norm=resnet_time_scale_shift, - non_linearity=resnet_act_fn, - output_scale_factor=output_scale_factor, - pre_norm=resnet_pre_norm, - ) - ) - - self.attentions.append( - AttentionBlock( - out_channels, - num_head_channels=attn_num_head_channels, - rescale_output_factor=output_scale_factor, - eps=resnet_eps, - ) - ) - - self.upsampler = FirUpsample2D(in_channels, out_channels=out_channels) - if add_upsample: - self.resnet_up = ResnetBlock2D( - in_channels=out_channels, - out_channels=out_channels, - temb_channels=temb_channels, - eps=resnet_eps, - groups=min(out_channels // 4, 32), - groups_out=min(out_channels // 4, 32), - dropout=dropout, - time_embedding_norm=resnet_time_scale_shift, - non_linearity=resnet_act_fn, - output_scale_factor=output_scale_factor, - pre_norm=resnet_pre_norm, - use_in_shortcut=True, - up=True, - kernel="fir", - ) - self.skip_conv = nn.Conv2d(out_channels, 3, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) - self.skip_norm = torch.nn.GroupNorm( - num_groups=min(out_channels // 4, 32), num_channels=out_channels, eps=resnet_eps, affine=True - ) - self.act = nn.SiLU() - else: - self.resnet_up = None - self.skip_conv = None - self.skip_norm = None - self.act = None - - def forward(self, hidden_states, res_hidden_states_tuple, temb=None, skip_sample=None): - for resnet in self.resnets: - # pop res hidden states - res_hidden_states = res_hidden_states_tuple[-1] - res_hidden_states_tuple = res_hidden_states_tuple[:-1] - hidden_states = torch.cat([hidden_states, res_hidden_states], dim=1) - - hidden_states = resnet(hidden_states, temb) - - hidden_states = self.attentions[0](hidden_states) - - if skip_sample is not None: - skip_sample = self.upsampler(skip_sample) - else: - skip_sample = 0 - - if self.resnet_up is not None: - skip_sample_states = self.skip_norm(hidden_states) - skip_sample_states = self.act(skip_sample_states) - skip_sample_states = self.skip_conv(skip_sample_states) - - skip_sample = skip_sample + skip_sample_states - - hidden_states = self.resnet_up(hidden_states, temb) - - return hidden_states, skip_sample - - -class SkipUpBlock2D(nn.Module): - def __init__( - self, - in_channels: int, - prev_output_channel: int, - out_channels: int, - temb_channels: int, - dropout: float = 0.0, - num_layers: int = 1, - resnet_eps: float = 1e-6, - resnet_time_scale_shift: str = "default", - resnet_act_fn: str = "swish", - resnet_pre_norm: bool = True, - output_scale_factor=np.sqrt(2.0), - add_upsample=True, - upsample_padding=1, - ): - super().__init__() - self.resnets = nn.ModuleList([]) - - for i in range(num_layers): - res_skip_channels = in_channels if (i == num_layers - 1) else out_channels - resnet_in_channels = prev_output_channel if i == 0 else out_channels - - self.resnets.append( - ResnetBlock2D( - in_channels=resnet_in_channels + res_skip_channels, - out_channels=out_channels, - temb_channels=temb_channels, - eps=resnet_eps, - groups=min((resnet_in_channels + res_skip_channels) // 4, 32), - groups_out=min(out_channels // 4, 32), - dropout=dropout, - time_embedding_norm=resnet_time_scale_shift, - non_linearity=resnet_act_fn, - output_scale_factor=output_scale_factor, - pre_norm=resnet_pre_norm, - ) - ) - - self.upsampler = FirUpsample2D(in_channels, out_channels=out_channels) - if add_upsample: - self.resnet_up = ResnetBlock2D( - in_channels=out_channels, - out_channels=out_channels, - temb_channels=temb_channels, - eps=resnet_eps, - groups=min(out_channels // 4, 32), - groups_out=min(out_channels // 4, 32), - dropout=dropout, - time_embedding_norm=resnet_time_scale_shift, - non_linearity=resnet_act_fn, - output_scale_factor=output_scale_factor, - pre_norm=resnet_pre_norm, - use_in_shortcut=True, - up=True, - kernel="fir", - ) - self.skip_conv = nn.Conv2d(out_channels, 3, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) - self.skip_norm = torch.nn.GroupNorm( - num_groups=min(out_channels // 4, 32), num_channels=out_channels, eps=resnet_eps, affine=True - ) - self.act = nn.SiLU() - else: - self.resnet_up = None - self.skip_conv = None - self.skip_norm = None - self.act = None - - def forward(self, hidden_states, res_hidden_states_tuple, temb=None, skip_sample=None): - for resnet in self.resnets: - # pop res hidden states - res_hidden_states = res_hidden_states_tuple[-1] - res_hidden_states_tuple = res_hidden_states_tuple[:-1] - hidden_states = torch.cat([hidden_states, res_hidden_states], dim=1) - - hidden_states = resnet(hidden_states, temb) - - if skip_sample is not None: - skip_sample = self.upsampler(skip_sample) - else: - skip_sample = 0 - - if self.resnet_up is not None: - skip_sample_states = self.skip_norm(hidden_states) - skip_sample_states = self.act(skip_sample_states) - skip_sample_states = self.skip_conv(skip_sample_states) - - skip_sample = skip_sample + skip_sample_states - - hidden_states = self.resnet_up(hidden_states, temb) - - return hidden_states, skip_sample - - -class ResnetUpsampleBlock2D(nn.Module): - def __init__( - self, - in_channels: int, - prev_output_channel: int, - out_channels: int, - temb_channels: int, - dropout: float = 0.0, - num_layers: int = 1, - resnet_eps: float = 1e-6, - resnet_time_scale_shift: str = "default", - resnet_act_fn: str = "swish", - resnet_groups: int = 32, - resnet_pre_norm: bool = True, - output_scale_factor=1.0, - add_upsample=True, - ): - super().__init__() - resnets = [] - - for i in range(num_layers): - res_skip_channels = in_channels if (i == num_layers - 1) else out_channels - resnet_in_channels = prev_output_channel if i == 0 else out_channels - - resnets.append( - ResnetBlock2D( - in_channels=resnet_in_channels + res_skip_channels, - out_channels=out_channels, - temb_channels=temb_channels, - eps=resnet_eps, - groups=resnet_groups, - dropout=dropout, - time_embedding_norm=resnet_time_scale_shift, - non_linearity=resnet_act_fn, - output_scale_factor=output_scale_factor, - pre_norm=resnet_pre_norm, - ) - ) - - self.resnets = nn.ModuleList(resnets) - - if add_upsample: - self.upsamplers = nn.ModuleList( - [ - ResnetBlock2D( - in_channels=out_channels, - out_channels=out_channels, - temb_channels=temb_channels, - eps=resnet_eps, - groups=resnet_groups, - dropout=dropout, - time_embedding_norm=resnet_time_scale_shift, - non_linearity=resnet_act_fn, - output_scale_factor=output_scale_factor, - pre_norm=resnet_pre_norm, - up=True, - ) - ] - ) - else: - self.upsamplers = None - - self.gradient_checkpointing = False - - def forward(self, hidden_states, res_hidden_states_tuple, temb=None, upsample_size=None): - for resnet in self.resnets: - # pop res hidden states - res_hidden_states = res_hidden_states_tuple[-1] - res_hidden_states_tuple = res_hidden_states_tuple[:-1] - hidden_states = torch.cat([hidden_states, res_hidden_states], dim=1) - - if self.training and self.gradient_checkpointing: - - def create_custom_forward(module): - def custom_forward(*inputs): - return module(*inputs) - - return custom_forward - - hidden_states = torch.utils.checkpoint.checkpoint(create_custom_forward(resnet), hidden_states, temb) - else: - hidden_states = resnet(hidden_states, temb) - - if self.upsamplers is not None: - for upsampler in self.upsamplers: - hidden_states = upsampler(hidden_states, temb) - - return hidden_states - - -class SimpleCrossAttnUpBlock2D(nn.Module): - def __init__( - self, - in_channels: int, - out_channels: int, - prev_output_channel: int, - temb_channels: int, - dropout: float = 0.0, - num_layers: int = 1, - resnet_eps: float = 1e-6, - resnet_time_scale_shift: str = "default", - resnet_act_fn: str = "swish", - resnet_groups: int = 32, - resnet_pre_norm: bool = True, - attn_num_head_channels=1, - cross_attention_dim=1280, - output_scale_factor=1.0, - add_upsample=True, - ): - super().__init__() - resnets = [] - attentions = [] - - self.has_cross_attention = True - self.attn_num_head_channels = attn_num_head_channels - - self.num_heads = out_channels // self.attn_num_head_channels - - for i in range(num_layers): - res_skip_channels = in_channels if (i == num_layers - 1) else out_channels - resnet_in_channels = prev_output_channel if i == 0 else out_channels - - resnets.append( - ResnetBlock2D( - in_channels=resnet_in_channels + res_skip_channels, - out_channels=out_channels, - temb_channels=temb_channels, - eps=resnet_eps, - groups=resnet_groups, - dropout=dropout, - time_embedding_norm=resnet_time_scale_shift, - non_linearity=resnet_act_fn, - output_scale_factor=output_scale_factor, - pre_norm=resnet_pre_norm, - ) - ) - attentions.append( - Attention( - query_dim=out_channels, - cross_attention_dim=out_channels, - heads=self.num_heads, - dim_head=attn_num_head_channels, - added_kv_proj_dim=cross_attention_dim, - norm_num_groups=resnet_groups, - bias=True, - upcast_softmax=True, - processor=AttnAddedKVProcessor(), - ) - ) - self.attentions = nn.ModuleList(attentions) - self.resnets = nn.ModuleList(resnets) - - if add_upsample: - self.upsamplers = nn.ModuleList( - [ - ResnetBlock2D( - in_channels=out_channels, - out_channels=out_channels, - temb_channels=temb_channels, - eps=resnet_eps, - groups=resnet_groups, - dropout=dropout, - time_embedding_norm=resnet_time_scale_shift, - non_linearity=resnet_act_fn, - output_scale_factor=output_scale_factor, - pre_norm=resnet_pre_norm, - up=True, - ) - ] - ) - else: - self.upsamplers = None - - self.gradient_checkpointing = False - - def forward( - self, - hidden_states, - res_hidden_states_tuple, - temb=None, - encoder_hidden_states=None, - upsample_size=None, - attention_mask=None, - cross_attention_kwargs=None, - ): - cross_attention_kwargs = cross_attention_kwargs if cross_attention_kwargs is not None else {} - for resnet, attn in zip(self.resnets, self.attentions): - # resnet - # pop res hidden states - res_hidden_states = res_hidden_states_tuple[-1] - res_hidden_states_tuple = res_hidden_states_tuple[:-1] - hidden_states = torch.cat([hidden_states, res_hidden_states], dim=1) - - hidden_states = resnet(hidden_states, temb) - - # attn - hidden_states = attn( - hidden_states, - encoder_hidden_states=encoder_hidden_states, - attention_mask=attention_mask, - **cross_attention_kwargs, - ) - - if self.upsamplers is not None: - for upsampler in self.upsamplers: - hidden_states = upsampler(hidden_states, temb) - - return hidden_states - - -class KUpBlock2D(nn.Module): - def __init__( - self, - in_channels: int, - out_channels: int, - temb_channels: int, - dropout: float = 0.0, - num_layers: int = 5, - resnet_eps: float = 1e-5, - resnet_act_fn: str = "gelu", - resnet_group_size: Optional[int] = 32, - add_upsample=True, - ): - super().__init__() - resnets = [] - k_in_channels = 2 * out_channels - k_out_channels = in_channels - num_layers = num_layers - 1 - - for i in range(num_layers): - in_channels = k_in_channels if i == 0 else out_channels - groups = in_channels // resnet_group_size - groups_out = out_channels // resnet_group_size - - resnets.append( - ResnetBlock2D( - in_channels=in_channels, - out_channels=k_out_channels if (i == num_layers - 1) else out_channels, - temb_channels=temb_channels, - eps=resnet_eps, - groups=groups, - groups_out=groups_out, - dropout=dropout, - non_linearity=resnet_act_fn, - time_embedding_norm="ada_group", - conv_shortcut_bias=False, - ) - ) - - self.resnets = nn.ModuleList(resnets) - - if add_upsample: - self.upsamplers = nn.ModuleList([KUpsample2D()]) - else: - self.upsamplers = None - - self.gradient_checkpointing = False - - def forward(self, hidden_states, res_hidden_states_tuple, temb=None, upsample_size=None): - res_hidden_states_tuple = res_hidden_states_tuple[-1] - if res_hidden_states_tuple is not None: - hidden_states = torch.cat([hidden_states, res_hidden_states_tuple], dim=1) - - for resnet in self.resnets: - if self.training and self.gradient_checkpointing: - - def create_custom_forward(module): - def custom_forward(*inputs): - return module(*inputs) - - return custom_forward - - hidden_states = torch.utils.checkpoint.checkpoint(create_custom_forward(resnet), hidden_states, temb) - else: - hidden_states = resnet(hidden_states, temb) - - if self.upsamplers is not None: - for upsampler in self.upsamplers: - hidden_states = upsampler(hidden_states) - - return hidden_states - - -class KCrossAttnUpBlock2D(nn.Module): - def __init__( - self, - in_channels: int, - out_channels: int, - temb_channels: int, - dropout: float = 0.0, - num_layers: int = 4, - resnet_eps: float = 1e-5, - resnet_act_fn: str = "gelu", - resnet_group_size: int = 32, - attn_num_head_channels=1, # attention dim_head - cross_attention_dim: int = 768, - add_upsample: bool = True, - upcast_attention: bool = False, - ): - super().__init__() - resnets = [] - attentions = [] - - is_first_block = in_channels == out_channels == temb_channels - is_middle_block = in_channels != out_channels - add_self_attention = True if is_first_block else False - - self.has_cross_attention = True - self.attn_num_head_channels = attn_num_head_channels - - # in_channels, and out_channels for the block (k-unet) - k_in_channels = out_channels if is_first_block else 2 * out_channels - k_out_channels = in_channels - - num_layers = num_layers - 1 - - for i in range(num_layers): - in_channels = k_in_channels if i == 0 else out_channels - groups = in_channels // resnet_group_size - groups_out = out_channels // resnet_group_size - - if is_middle_block and (i == num_layers - 1): - conv_2d_out_channels = k_out_channels - else: - conv_2d_out_channels = None - - resnets.append( - ResnetBlock2D( - in_channels=in_channels, - out_channels=out_channels, - conv_2d_out_channels=conv_2d_out_channels, - temb_channels=temb_channels, - eps=resnet_eps, - groups=groups, - groups_out=groups_out, - dropout=dropout, - non_linearity=resnet_act_fn, - time_embedding_norm="ada_group", - conv_shortcut_bias=False, - ) - ) - attentions.append( - KAttentionBlock( - k_out_channels if (i == num_layers - 1) else out_channels, - k_out_channels // attn_num_head_channels - if (i == num_layers - 1) - else out_channels // attn_num_head_channels, - attn_num_head_channels, - cross_attention_dim=cross_attention_dim, - temb_channels=temb_channels, - attention_bias=True, - add_self_attention=add_self_attention, - cross_attention_norm=True, - upcast_attention=upcast_attention, - ) - ) - - self.resnets = nn.ModuleList(resnets) - self.attentions = nn.ModuleList(attentions) - - if add_upsample: - self.upsamplers = nn.ModuleList([KUpsample2D()]) - else: - self.upsamplers = None - - self.gradient_checkpointing = False - - def forward( - self, - hidden_states, - res_hidden_states_tuple, - temb=None, - encoder_hidden_states=None, - cross_attention_kwargs=None, - upsample_size=None, - attention_mask=None, - ): - res_hidden_states_tuple = res_hidden_states_tuple[-1] - if res_hidden_states_tuple is not None: - hidden_states = torch.cat([hidden_states, res_hidden_states_tuple], dim=1) - - for resnet, attn in zip(self.resnets, self.attentions): - if self.training and self.gradient_checkpointing: - - def create_custom_forward(module, return_dict=None): - def custom_forward(*inputs): - if return_dict is not None: - return module(*inputs, return_dict=return_dict) - else: - return module(*inputs) - - return custom_forward - - hidden_states = torch.utils.checkpoint.checkpoint(create_custom_forward(resnet), hidden_states, temb) - hidden_states = torch.utils.checkpoint.checkpoint( - create_custom_forward(attn, return_dict=False), - hidden_states, - encoder_hidden_states, - attention_mask, - cross_attention_kwargs, - )[0] - else: - hidden_states = resnet(hidden_states, temb) - hidden_states = attn( - hidden_states, - encoder_hidden_states=encoder_hidden_states, - emb=temb, - attention_mask=attention_mask, - cross_attention_kwargs=cross_attention_kwargs, - ) - - if self.upsamplers is not None: - for upsampler in self.upsamplers: - hidden_states = upsampler(hidden_states) - - return hidden_states - - -# can potentially later be renamed to `No-feed-forward` attention -class KAttentionBlock(nn.Module): - r""" - A basic Transformer block. - - Parameters: - dim (`int`): The number of channels in the input and output. - num_attention_heads (`int`): The number of heads to use for multi-head attention. - attention_head_dim (`int`): The number of channels in each head. - dropout (`float`, *optional*, defaults to 0.0): The dropout probability to use. - cross_attention_dim (`int`, *optional*): The size of the encoder_hidden_states vector for cross attention. - activation_fn (`str`, *optional*, defaults to `"geglu"`): Activation function to be used in feed-forward. - num_embeds_ada_norm (: - obj: `int`, *optional*): The number of diffusion steps used during training. See `Transformer2DModel`. - attention_bias (: - obj: `bool`, *optional*, defaults to `False`): Configure if the attentions should contain a bias parameter. - """ - - def __init__( - self, - dim: int, - num_attention_heads: int, - attention_head_dim: int, - dropout: float = 0.0, - cross_attention_dim: Optional[int] = None, - attention_bias: bool = False, - upcast_attention: bool = False, - temb_channels: int = 768, # for ada_group_norm - add_self_attention: bool = False, - cross_attention_norm: bool = False, - group_size: int = 32, - ): - super().__init__() - self.add_self_attention = add_self_attention - - # 1. Self-Attn - if add_self_attention: - self.norm1 = AdaGroupNorm(temb_channels, dim, max(1, dim // group_size)) - self.attn1 = Attention( - query_dim=dim, - heads=num_attention_heads, - dim_head=attention_head_dim, - dropout=dropout, - bias=attention_bias, - cross_attention_dim=None, - cross_attention_norm=False, - ) - - # 2. Cross-Attn - self.norm2 = AdaGroupNorm(temb_channels, dim, max(1, dim // group_size)) - self.attn2 = Attention( - query_dim=dim, - cross_attention_dim=cross_attention_dim, - heads=num_attention_heads, - dim_head=attention_head_dim, - dropout=dropout, - bias=attention_bias, - upcast_attention=upcast_attention, - cross_attention_norm=cross_attention_norm, - ) - - def _to_3d(self, hidden_states, height, weight): - return hidden_states.permute(0, 2, 3, 1).reshape(hidden_states.shape[0], height * weight, -1) - - def _to_4d(self, hidden_states, height, weight): - return hidden_states.permute(0, 2, 1).reshape(hidden_states.shape[0], -1, height, weight) - - def forward( - self, - hidden_states, - encoder_hidden_states=None, - emb=None, - attention_mask=None, - cross_attention_kwargs=None, - ): - cross_attention_kwargs = cross_attention_kwargs if cross_attention_kwargs is not None else {} - - # 1. Self-Attention - if self.add_self_attention: - norm_hidden_states = self.norm1(hidden_states, emb) - - height, weight = norm_hidden_states.shape[2:] - norm_hidden_states = self._to_3d(norm_hidden_states, height, weight) - - attn_output = self.attn1( - norm_hidden_states, - encoder_hidden_states=None, - **cross_attention_kwargs, - ) - attn_output = self._to_4d(attn_output, height, weight) - - hidden_states = attn_output + hidden_states - - # 2. Cross-Attention/None - norm_hidden_states = self.norm2(hidden_states, emb) - - height, weight = norm_hidden_states.shape[2:] - norm_hidden_states = self._to_3d(norm_hidden_states, height, weight) - attn_output = self.attn2( - norm_hidden_states, - encoder_hidden_states=encoder_hidden_states, - **cross_attention_kwargs, - ) - attn_output = self._to_4d(attn_output, height, weight) - - hidden_states = attn_output + hidden_states - - return hidden_states diff --git a/spaces/deepwisdom/MetaGPT/metagpt/tools/hello.py b/spaces/deepwisdom/MetaGPT/metagpt/tools/hello.py deleted file mode 100644 index 2eb4c31f0a3c5158853ae3798764c7f09bd34074..0000000000000000000000000000000000000000 --- a/spaces/deepwisdom/MetaGPT/metagpt/tools/hello.py +++ /dev/null @@ -1,27 +0,0 @@ -#!/usr/bin/env python -# -*- coding: utf-8 -*- -""" -@Time : 2023/5/2 16:03 -@Author : mashenquan -@File : hello.py -@Desc : Implement the OpenAPI Specification 3.0 demo and use the following command to test the HTTP service: - - curl -X 'POST' \ - 'http://localhost:8080/openapi/greeting/dave' \ - -H 'accept: text/plain' \ - -H 'Content-Type: application/json' \ - -d '{}' -""" - -import connexion - - -# openapi implement -async def post_greeting(name: str) -> str: - return f"Hello {name}\n" - - -if __name__ == "__main__": - app = connexion.AioHttpApp(__name__, specification_dir='../../.well-known/') - app.add_api("openapi.yaml", arguments={"title": "Hello World Example"}) - app.run(port=8080) diff --git a/spaces/deniandriancode/zephyr-7b-alpha-chatbot/app.py b/spaces/deniandriancode/zephyr-7b-alpha-chatbot/app.py deleted file mode 100644 index 31fab80ffe5a090b5fbf97798bcfcea8cb21e36c..0000000000000000000000000000000000000000 --- a/spaces/deniandriancode/zephyr-7b-alpha-chatbot/app.py +++ /dev/null @@ -1,102 +0,0 @@ -from huggingface_hub import InferenceClient -import gradio as gr - -client = InferenceClient( - "HuggingFaceH4/zephyr-7b-alpha" -) - - -def format_prompt(message, history): - system = "<|system|>\nYou are a helpful virtual assistant that answer user's question with easy to understand words.\n" - prompt = "" - for user_prompt, bot_response in history: - prompt += f"<|user|>\n{user_prompt}\n" - prompt += f"<|assistant|>\n{bot_response}\n" - prompt += f"<|user|>\n{message}\n" - return prompt - -def generate( - prompt, history, temperature=0.9, max_new_tokens=500, top_p=0.95, repetition_penalty=1.0, -): - temperature = float(temperature) - if temperature < 1e-2: - temperature = 1e-2 - top_p = float(top_p) - - generate_kwargs = dict( - temperature=temperature, - max_new_tokens=max_new_tokens, - top_p=top_p, - repetition_penalty=repetition_penalty, - do_sample=True, - seed=42, - ) - - formatted_prompt = format_prompt(prompt, history) - - stream = client.text_generation(formatted_prompt, **generate_kwargs, stream=True, details=True, return_full_text=False) - output = "" - - for response in stream: - output += response.token.text - yield output - return output - - -additional_inputs=[ - gr.Slider( - label="Temperature", - value=0.9, - minimum=0.0, - maximum=1.0, - step=0.05, - interactive=True, - info="Higher values produce more diverse outputs", - ), - gr.Slider( - label="Max new tokens", - value=256, - minimum=0, - maximum=1048, - step=64, - interactive=True, - info="The maximum numbers of new tokens", - ), - gr.Slider( - label="Top-p (nucleus sampling)", - value=0.90, - minimum=0.0, - maximum=1, - step=0.05, - interactive=True, - info="Higher values sample more low-probability tokens", - ), - gr.Slider( - label="Repetition penalty", - value=1.2, - minimum=1.0, - maximum=2.0, - step=0.05, - interactive=True, - info="Penalize repeated tokens", - ) -] - -css = """ - #mkd { - height: 500px; - overflow: auto; - border: 1px solid #ccc; - } -""" - -with gr.Blocks(css=css) as inf: - gr.HTML("

          zephyr-7b-alpha

          ") - gr.HTML("

          In this demo, you can chat with zephyr-7b-alpha model. 💬

          ") - gr.ChatInterface( - generate, - additional_inputs=additional_inputs, - examples=[["Can squirrel swims?"], ["Write a poem about squirrel."]] - ) - -inf.queue().launch() \ No newline at end of file diff --git a/spaces/derful/Chatgpt-academic/crazy_functions/test_project/python/dqn/dqn.py b/spaces/derful/Chatgpt-academic/crazy_functions/test_project/python/dqn/dqn.py deleted file mode 100644 index 6cea64d39baa7ff4c1e549869aaa4b0ae17779a9..0000000000000000000000000000000000000000 --- a/spaces/derful/Chatgpt-academic/crazy_functions/test_project/python/dqn/dqn.py +++ /dev/null @@ -1,245 +0,0 @@ -from typing import Any, Dict, List, Optional, Tuple, Type, Union - -import gym -import numpy as np -import torch as th -from torch.nn import functional as F - -from stable_baselines3.common import logger -from stable_baselines3.common.off_policy_algorithm import OffPolicyAlgorithm -from stable_baselines3.common.preprocessing import maybe_transpose -from stable_baselines3.common.type_aliases import GymEnv, MaybeCallback, Schedule -from stable_baselines3.common.utils import get_linear_fn, is_vectorized_observation, polyak_update -from stable_baselines3.dqn.policies import DQNPolicy - - -class DQN(OffPolicyAlgorithm): - """ - Deep Q-Network (DQN) - - Paper: https://arxiv.org/abs/1312.5602, https://www.nature.com/articles/nature14236 - Default hyperparameters are taken from the nature paper, - except for the optimizer and learning rate that were taken from Stable Baselines defaults. - - :param policy: The policy model to use (MlpPolicy, CnnPolicy, ...) - :param env: The environment to learn from (if registered in Gym, can be str) - :param learning_rate: The learning rate, it can be a function - of the current progress remaining (from 1 to 0) - :param buffer_size: size of the replay buffer - :param learning_starts: how many steps of the model to collect transitions for before learning starts - :param batch_size: Minibatch size for each gradient update - :param tau: the soft update coefficient ("Polyak update", between 0 and 1) default 1 for hard update - :param gamma: the discount factor - :param train_freq: Update the model every ``train_freq`` steps. Alternatively pass a tuple of frequency and unit - like ``(5, "step")`` or ``(2, "episode")``. - :param gradient_steps: How many gradient steps to do after each rollout (see ``train_freq``) - Set to ``-1`` means to do as many gradient steps as steps done in the environment - during the rollout. - :param optimize_memory_usage: Enable a memory efficient variant of the replay buffer - at a cost of more complexity. - See https://github.com/DLR-RM/stable-baselines3/issues/37#issuecomment-637501195 - :param target_update_interval: update the target network every ``target_update_interval`` - environment steps. - :param exploration_fraction: fraction of entire training period over which the exploration rate is reduced - :param exploration_initial_eps: initial value of random action probability - :param exploration_final_eps: final value of random action probability - :param max_grad_norm: The maximum value for the gradient clipping - :param tensorboard_log: the log location for tensorboard (if None, no logging) - :param create_eval_env: Whether to create a second environment that will be - used for evaluating the agent periodically. (Only available when passing string for the environment) - :param policy_kwargs: additional arguments to be passed to the policy on creation - :param verbose: the verbosity level: 0 no output, 1 info, 2 debug - :param seed: Seed for the pseudo random generators - :param device: Device (cpu, cuda, ...) on which the code should be run. - Setting it to auto, the code will be run on the GPU if possible. - :param _init_setup_model: Whether or not to build the network at the creation of the instance - """ - - def __init__( - self, - policy: Union[str, Type[DQNPolicy]], - env: Union[GymEnv, str], - learning_rate: Union[float, Schedule] = 1e-4, - buffer_size: int = 1000000, - learning_starts: int = 50000, - batch_size: Optional[int] = 32, - tau: float = 1.0, - gamma: float = 0.99, - train_freq: Union[int, Tuple[int, str]] = 4, - gradient_steps: int = 1, - optimize_memory_usage: bool = False, - target_update_interval: int = 10000, - exploration_fraction: float = 0.1, - exploration_initial_eps: float = 1.0, - exploration_final_eps: float = 0.05, - max_grad_norm: float = 10, - tensorboard_log: Optional[str] = None, - create_eval_env: bool = False, - policy_kwargs: Optional[Dict[str, Any]] = None, - verbose: int = 0, - seed: Optional[int] = None, - device: Union[th.device, str] = "auto", - _init_setup_model: bool = True, - ): - - super(DQN, self).__init__( - policy, - env, - DQNPolicy, - learning_rate, - buffer_size, - learning_starts, - batch_size, - tau, - gamma, - train_freq, - gradient_steps, - action_noise=None, # No action noise - policy_kwargs=policy_kwargs, - tensorboard_log=tensorboard_log, - verbose=verbose, - device=device, - create_eval_env=create_eval_env, - seed=seed, - sde_support=False, - optimize_memory_usage=optimize_memory_usage, - supported_action_spaces=(gym.spaces.Discrete,), - ) - - self.exploration_initial_eps = exploration_initial_eps - self.exploration_final_eps = exploration_final_eps - self.exploration_fraction = exploration_fraction - self.target_update_interval = target_update_interval - self.max_grad_norm = max_grad_norm - # "epsilon" for the epsilon-greedy exploration - self.exploration_rate = 0.0 - # Linear schedule will be defined in `_setup_model()` - self.exploration_schedule = None - self.q_net, self.q_net_target = None, None - - if _init_setup_model: - self._setup_model() - - def _setup_model(self) -> None: - super(DQN, self)._setup_model() - self._create_aliases() - self.exploration_schedule = get_linear_fn( - self.exploration_initial_eps, self.exploration_final_eps, self.exploration_fraction - ) - - def _create_aliases(self) -> None: - self.q_net = self.policy.q_net - self.q_net_target = self.policy.q_net_target - - def _on_step(self) -> None: - """ - Update the exploration rate and target network if needed. - This method is called in ``collect_rollouts()`` after each step in the environment. - """ - if self.num_timesteps % self.target_update_interval == 0: - polyak_update(self.q_net.parameters(), self.q_net_target.parameters(), self.tau) - - self.exploration_rate = self.exploration_schedule(self._current_progress_remaining) - logger.record("rollout/exploration rate", self.exploration_rate) - - def train(self, gradient_steps: int, batch_size: int = 100) -> None: - # Update learning rate according to schedule - self._update_learning_rate(self.policy.optimizer) - - losses = [] - for _ in range(gradient_steps): - # Sample replay buffer - replay_data = self.replay_buffer.sample(batch_size, env=self._vec_normalize_env) - - with th.no_grad(): - # Compute the next Q-values using the target network - next_q_values = self.q_net_target(replay_data.next_observations) - # Follow greedy policy: use the one with the highest value - next_q_values, _ = next_q_values.max(dim=1) - # Avoid potential broadcast issue - next_q_values = next_q_values.reshape(-1, 1) - # 1-step TD target - target_q_values = replay_data.rewards + (1 - replay_data.dones) * self.gamma * next_q_values - - # Get current Q-values estimates - current_q_values = self.q_net(replay_data.observations) - - # Retrieve the q-values for the actions from the replay buffer - current_q_values = th.gather(current_q_values, dim=1, index=replay_data.actions.long()) - - # Compute Huber loss (less sensitive to outliers) - loss = F.smooth_l1_loss(current_q_values, target_q_values) - losses.append(loss.item()) - - # Optimize the policy - self.policy.optimizer.zero_grad() - loss.backward() - # Clip gradient norm - th.nn.utils.clip_grad_norm_(self.policy.parameters(), self.max_grad_norm) - self.policy.optimizer.step() - - # Increase update counter - self._n_updates += gradient_steps - - logger.record("train/n_updates", self._n_updates, exclude="tensorboard") - logger.record("train/loss", np.mean(losses)) - - def predict( - self, - observation: np.ndarray, - state: Optional[np.ndarray] = None, - mask: Optional[np.ndarray] = None, - deterministic: bool = False, - ) -> Tuple[np.ndarray, Optional[np.ndarray]]: - """ - Overrides the base_class predict function to include epsilon-greedy exploration. - - :param observation: the input observation - :param state: The last states (can be None, used in recurrent policies) - :param mask: The last masks (can be None, used in recurrent policies) - :param deterministic: Whether or not to return deterministic actions. - :return: the model's action and the next state - (used in recurrent policies) - """ - if not deterministic and np.random.rand() < self.exploration_rate: - if is_vectorized_observation(maybe_transpose(observation, self.observation_space), self.observation_space): - n_batch = observation.shape[0] - action = np.array([self.action_space.sample() for _ in range(n_batch)]) - else: - action = np.array(self.action_space.sample()) - else: - action, state = self.policy.predict(observation, state, mask, deterministic) - return action, state - - def learn( - self, - total_timesteps: int, - callback: MaybeCallback = None, - log_interval: int = 4, - eval_env: Optional[GymEnv] = None, - eval_freq: int = -1, - n_eval_episodes: int = 5, - tb_log_name: str = "DQN", - eval_log_path: Optional[str] = None, - reset_num_timesteps: bool = True, - ) -> OffPolicyAlgorithm: - - return super(DQN, self).learn( - total_timesteps=total_timesteps, - callback=callback, - log_interval=log_interval, - eval_env=eval_env, - eval_freq=eval_freq, - n_eval_episodes=n_eval_episodes, - tb_log_name=tb_log_name, - eval_log_path=eval_log_path, - reset_num_timesteps=reset_num_timesteps, - ) - - def _excluded_save_params(self) -> List[str]: - return super(DQN, self)._excluded_save_params() + ["q_net", "q_net_target"] - - def _get_torch_save_params(self) -> Tuple[List[str], List[str]]: - state_dicts = ["policy", "policy.optimizer"] - - return state_dicts, [] diff --git a/spaces/diacanFperku/AutoGPT/Adobe Acrobat Pro Dc 2019 Crack With Activation Code Free Download.md b/spaces/diacanFperku/AutoGPT/Adobe Acrobat Pro Dc 2019 Crack With Activation Code Free Download.md deleted file mode 100644 index 5d05a2c89877b65152ae4228aff1acc1b20786ea..0000000000000000000000000000000000000000 --- a/spaces/diacanFperku/AutoGPT/Adobe Acrobat Pro Dc 2019 Crack With Activation Code Free Download.md +++ /dev/null @@ -1,22 +0,0 @@ -
          -

          How to Get Adobe Acrobat Pro DC 2019 for Free with a Crack and Activation Code

          -

          Adobe Acrobat Pro DC 2019 is one of the most popular and powerful PDF editing and creation software in the market. It offers a range of features and tools to help you create, edit, sign, share, and protect your PDF documents. However, it also comes with a hefty price tag that may not be affordable for everyone.

          -

          Adobe Acrobat Pro Dc 2019 Crack With Activation Code Free Download


          Download Ziphttps://gohhs.com/2uFVC8



          -

          If you are looking for a way to get Adobe Acrobat Pro DC 2019 for free, you may be tempted to download a crack and an activation code from the internet. A crack is a program that modifies the original software to bypass its security and licensing mechanisms. An activation code is a series of numbers and letters that you enter to activate the software after installing it.

          -

          However, before you do that, you should be aware of the risks and consequences of using cracked software. Here are some of the reasons why you should avoid using Adobe Acrobat Pro DC 2019 crack with activation code free download:

          -
            -
          • It is illegal. Downloading and using cracked software is a violation of the software's terms of use and copyright laws. You could face legal action from the software developer or the authorities if you are caught.
          • -
          • It is unsafe. Cracked software often contains malware, viruses, spyware, or other malicious code that can harm your computer and compromise your data and privacy. You could lose your files, expose your personal information, or even become a victim of identity theft or ransomware.
          • -
          • It is unreliable. Cracked software may not work properly or at all. You could experience errors, crashes, glitches, or compatibility issues with your system or other software. You may also miss out on important updates, patches, bug fixes, and new features that the original software provides.
          • -
          • It is unethical. Cracking software is a form of piracy that deprives the software developer of their rightful income and recognition. By using cracked software, you are not supporting the hard work and innovation that goes into creating quality software.
          • -
          -

          As you can see, using Adobe Acrobat Pro DC 2019 crack with activation code free download is not worth the risk or the hassle. Instead, you should consider using legitimate alternatives that are free or more affordable. Here are some of the options you can try:

          -

          -
            -
          • Use Adobe Acrobat Reader DC. This is the free version of Adobe Acrobat that allows you to view, print, and comment on PDF documents. It also has some basic editing and signing features. You can download it from here.
          • -
          • Use online PDF tools. There are many websites that offer free or low-cost PDF services such as converting, merging, splitting, compressing, rotating, watermarking, and more. Some examples are Smallpdf, iLovePDF, PDFescape, and Soda PDF.
          • -
          • Use alternative PDF software. There are other PDF programs that have similar or better features than Adobe Acrobat Pro DC 2019 but at a lower price or even for free. Some examples are Wondershare PDFelement, Nitro Pro, PDF-XChange Editor, and LibreOffice.
          • -
          -

          In conclusion, Adobe Acrobat Pro DC 2019 crack with activation code free download is not a good idea if you want to use PDF software safely and legally. You should look for other ways to get Adobe Acrobat Pro DC 2019 for free or use alternative PDF software that suits your needs and budget.

          d5da3c52bf
          -
          -
          \ No newline at end of file diff --git a/spaces/diacanFperku/AutoGPT/Adobe InDesign CC 2018 V17.1.0.91 Crack Free Download.md b/spaces/diacanFperku/AutoGPT/Adobe InDesign CC 2018 V17.1.0.91 Crack Free Download.md deleted file mode 100644 index 4edf9e9351e305ea6253349a89461959eb810c2a..0000000000000000000000000000000000000000 --- a/spaces/diacanFperku/AutoGPT/Adobe InDesign CC 2018 V17.1.0.91 Crack Free Download.md +++ /dev/null @@ -1,6 +0,0 @@ -

          Adobe InDesign CC 2018 v17.1.0.91 Crack free download


          Download ✯✯✯ https://gohhs.com/2uFT8j



          -
          -April 28, 2564 BC — . and Applications 2nd Edition, RF Circuit Design Theory and Applications 2nd pdf Adobe InDesign CC 2018 v17.1.0.91 Crack free download. Homer about how to download the score. Adobe InDesign CC 2018 v17.1.0.91 Crack .rar 8a78ff9644
          -
          -
          -

          diff --git a/spaces/diagaiwei/ir_chinese_medqa/colbert/infra/utilities/create_triples.py b/spaces/diagaiwei/ir_chinese_medqa/colbert/infra/utilities/create_triples.py deleted file mode 100644 index 3242ae09c2c6f1c84e661847dc5fac6f1a342177..0000000000000000000000000000000000000000 --- a/spaces/diagaiwei/ir_chinese_medqa/colbert/infra/utilities/create_triples.py +++ /dev/null @@ -1,52 +0,0 @@ -import random - -from colbert.utils.utils import print_message -from utility.utils.save_metadata import save_metadata -from utility.supervision.triples import sample_for_query - -from colbert.data.ranking import Ranking -from colbert.data.examples import Examples - -MAX_NUM_TRIPLES = 40_000_000 - - -class Triples: - def __init__(self, ranking, seed=12345): - random.seed(seed) # TODO: Use internal RNG instead.. - self.qid2rankings = Ranking.cast(ranking).todict() - - def create(self, positives, depth): - assert all(len(x) == 2 for x in positives) - assert all(maxBest <= maxDepth for maxBest, maxDepth in positives), positives - - Triples = [] - NonEmptyQIDs = 0 - - for processing_idx, qid in enumerate(self.qid2rankings): - l = sample_for_query(qid, self.qid2rankings[qid], positives, depth, False, None) - NonEmptyQIDs += (len(l) > 0) - Triples.extend(l) - - if processing_idx % (10_000) == 0: - print_message(f"#> Done with {processing_idx+1} questions!\t\t " - f"{str(len(Triples) / 1000)}k triples for {NonEmptyQIDs} unqiue QIDs.") - - print_message(f"#> Sub-sample the triples (if > {MAX_NUM_TRIPLES})..") - print_message(f"#> len(Triples) = {len(Triples)}") - - if len(Triples) > MAX_NUM_TRIPLES: - Triples = random.sample(Triples, MAX_NUM_TRIPLES) - - ### Prepare the triples ### - print_message("#> Shuffling the triples...") - random.shuffle(Triples) - - self.Triples = Examples(data=Triples) - - return Triples - - def save(self, new_path): - Examples(data=self.Triples).save(new_path) - - # save_metadata(f'{output}.meta', args) # TODO: What args to save?? {seed, positives, depth, rankings if path or else whatever provenance the rankings object shares} - diff --git a/spaces/diffusers/latent-upscaler-tool/README.md b/spaces/diffusers/latent-upscaler-tool/README.md deleted file mode 100644 index a137ca5f9a53272fa2f8badff138135b6f5effac..0000000000000000000000000000000000000000 --- a/spaces/diffusers/latent-upscaler-tool/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Latent Upscaler Tool -emoji: 🏢 -colorFrom: green -colorTo: gray -sdk: gradio -sdk_version: 3.28.3 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/digitalxingtong/Jiaohuaji-Bert-Vits2/train_ms.py b/spaces/digitalxingtong/Jiaohuaji-Bert-Vits2/train_ms.py deleted file mode 100644 index 5d109003d40497ea4493e7c73f47c1eb7370a81e..0000000000000000000000000000000000000000 --- a/spaces/digitalxingtong/Jiaohuaji-Bert-Vits2/train_ms.py +++ /dev/null @@ -1,402 +0,0 @@ -import os -import json -import argparse -import itertools -import math -import torch -import shutil -from torch import nn, optim -from torch.nn import functional as F -from torch.utils.data import DataLoader -from torch.utils.tensorboard import SummaryWriter -import torch.multiprocessing as mp -import torch.distributed as dist -from torch.nn.parallel import DistributedDataParallel as DDP -from torch.cuda.amp import autocast, GradScaler -from tqdm import tqdm -import logging -logging.getLogger('numba').setLevel(logging.WARNING) -import commons -import utils -from data_utils import ( - TextAudioSpeakerLoader, - TextAudioSpeakerCollate, - DistributedBucketSampler -) -from models import ( - SynthesizerTrn, - MultiPeriodDiscriminator, - DurationDiscriminator, -) -from losses import ( - generator_loss, - discriminator_loss, - feature_loss, - kl_loss -) -from mel_processing import mel_spectrogram_torch, spec_to_mel_torch -from text.symbols import symbols - -torch.backends.cudnn.benchmark = True -torch.backends.cuda.matmul.allow_tf32 = True -torch.backends.cudnn.allow_tf32 = True -torch.set_float32_matmul_precision('medium') -global_step = 0 - - -def main(): - """Assume Single Node Multi GPUs Training Only""" - assert torch.cuda.is_available(), "CPU training is not allowed." - - n_gpus = torch.cuda.device_count() - os.environ['MASTER_ADDR'] = 'localhost' - os.environ['MASTER_PORT'] = '65280' - - hps = utils.get_hparams() - if not hps.cont: - shutil.copy('./pretrained_models/D_0.pth','./logs/OUTPUT_MODEL/D_0.pth') - shutil.copy('./pretrained_models/G_0.pth','./logs/OUTPUT_MODEL/G_0.pth') - shutil.copy('./pretrained_models/DUR_0.pth','./logs/OUTPUT_MODEL/DUR_0.pth') - mp.spawn(run, nprocs=n_gpus, args=(n_gpus, hps,)) - - -def run(rank, n_gpus, hps): - global global_step - if rank == 0: - logger = utils.get_logger(hps.model_dir) - logger.info(hps) - utils.check_git_hash(hps.model_dir) - writer = SummaryWriter(log_dir=hps.model_dir) - writer_eval = SummaryWriter(log_dir=os.path.join(hps.model_dir, "eval")) - - dist.init_process_group(backend= 'gloo' if os.name == 'nt' else 'nccl', init_method='env://', world_size=n_gpus, rank=rank) - torch.manual_seed(hps.train.seed) - torch.cuda.set_device(rank) - - train_dataset = TextAudioSpeakerLoader(hps.data.training_files, hps.data) - train_sampler = DistributedBucketSampler( - train_dataset, - hps.train.batch_size, - [32, 300, 400, 500, 600, 700, 800, 900, 1000], - num_replicas=n_gpus, - rank=rank, - shuffle=True) - collate_fn = TextAudioSpeakerCollate() - train_loader = DataLoader(train_dataset, num_workers=2, shuffle=False, pin_memory=True, - collate_fn=collate_fn, batch_sampler=train_sampler) - if rank == 0: - eval_dataset = TextAudioSpeakerLoader(hps.data.validation_files, hps.data) - eval_loader = DataLoader(eval_dataset, num_workers=0, shuffle=False, - batch_size=1, pin_memory=True, - drop_last=False, collate_fn=collate_fn) - if "use_noise_scaled_mas" in hps.model.keys() and hps.model.use_noise_scaled_mas == True: - print("Using noise scaled MAS for VITS2") - use_noise_scaled_mas = True - mas_noise_scale_initial = 0.01 - noise_scale_delta = 2e-6 - else: - print("Using normal MAS for VITS1") - use_noise_scaled_mas = False - mas_noise_scale_initial = 0.0 - noise_scale_delta = 0.0 - if "use_duration_discriminator" in hps.model.keys() and hps.model.use_duration_discriminator == True: - print("Using duration discriminator for VITS2") - use_duration_discriminator = True - net_dur_disc = DurationDiscriminator( - hps.model.hidden_channels, - hps.model.hidden_channels, - 3, - 0.1, - gin_channels=hps.model.gin_channels if hps.data.n_speakers != 0 else 0, - ).cuda(rank) - if "use_spk_conditioned_encoder" in hps.model.keys() and hps.model.use_spk_conditioned_encoder == True: - if hps.data.n_speakers == 0: - raise ValueError("n_speakers must be > 0 when using spk conditioned encoder to train multi-speaker model") - use_spk_conditioned_encoder = True - else: - print("Using normal encoder for VITS1") - use_spk_conditioned_encoder = False - - net_g = SynthesizerTrn( - len(symbols), - hps.data.filter_length // 2 + 1, - hps.train.segment_size // hps.data.hop_length, - n_speakers=hps.data.n_speakers, - mas_noise_scale_initial = mas_noise_scale_initial, - noise_scale_delta = noise_scale_delta, - **hps.model).cuda(rank) - - freeze_enc = getattr(hps.model, "freeze_enc", False) - if freeze_enc: - print("freeze encoder !!!") - for param in net_g.enc_p.parameters(): - param.requires_grad = False - - net_d = MultiPeriodDiscriminator(hps.model.use_spectral_norm).cuda(rank) - optim_g = torch.optim.AdamW( - filter(lambda p: p.requires_grad, net_g.parameters()), - hps.train.learning_rate, - betas=hps.train.betas, - eps=hps.train.eps) - optim_d = torch.optim.AdamW( - net_d.parameters(), - hps.train.learning_rate, - betas=hps.train.betas, - eps=hps.train.eps) - if net_dur_disc is not None: - optim_dur_disc = torch.optim.AdamW( - net_dur_disc.parameters(), - hps.train.learning_rate, - betas=hps.train.betas, - eps=hps.train.eps) - else: - optim_dur_disc = None - net_g = DDP(net_g, device_ids=[rank], find_unused_parameters=True) - net_d = DDP(net_d, device_ids=[rank], find_unused_parameters=True) - if net_dur_disc is not None: - net_dur_disc = DDP(net_dur_disc, device_ids=[rank], find_unused_parameters=True) - - pretrain_dir = None - if pretrain_dir is None: - try: - if net_dur_disc is not None: - _, optim_dur_disc, _, epoch_str = utils.load_checkpoint(utils.latest_checkpoint_path(hps.model_dir, "DUR_*.pth"), net_dur_disc, optim_dur_disc, skip_optimizer=not hps.cont) - _, optim_g, _, epoch_str = utils.load_checkpoint(utils.latest_checkpoint_path(hps.model_dir, "G_*.pth"), net_g, - optim_g, skip_optimizer=not hps.cont) - _, optim_d, _, epoch_str = utils.load_checkpoint(utils.latest_checkpoint_path(hps.model_dir, "D_*.pth"), net_d, - optim_d, skip_optimizer=not hps.cont) - - epoch_str = max(epoch_str, 1) - global_step = (epoch_str - 1) * len(train_loader) - except Exception as e: - print(e) - epoch_str = 1 - global_step = 0 - else: - _, _, _, epoch_str = utils.load_checkpoint(utils.latest_checkpoint_path(pretrain_dir, "G_*.pth"), net_g, - optim_g, True) - _, _, _, epoch_str = utils.load_checkpoint(utils.latest_checkpoint_path(pretrain_dir, "D_*.pth"), net_d, - optim_d, True) - - - - scheduler_g = torch.optim.lr_scheduler.ExponentialLR(optim_g, gamma=hps.train.lr_decay, last_epoch=epoch_str - 2) - scheduler_d = torch.optim.lr_scheduler.ExponentialLR(optim_d, gamma=hps.train.lr_decay, last_epoch=epoch_str - 2) - if net_dur_disc is not None: - scheduler_dur_disc = torch.optim.lr_scheduler.ExponentialLR(optim_dur_disc, gamma=hps.train.lr_decay, last_epoch=epoch_str-2) - else: - scheduler_dur_disc = None - scaler = GradScaler(enabled=hps.train.fp16_run) - - for epoch in range(epoch_str, hps.train.epochs + 1): - if rank == 0: - train_and_evaluate(rank, epoch, hps, [net_g, net_d, net_dur_disc], [optim_g, optim_d, optim_dur_disc], [scheduler_g, scheduler_d, scheduler_dur_disc], scaler, [train_loader, eval_loader], logger, [writer, writer_eval]) - else: - train_and_evaluate(rank, epoch, hps, [net_g, net_d, net_dur_disc], [optim_g, optim_d, optim_dur_disc], [scheduler_g, scheduler_d, scheduler_dur_disc], scaler, [train_loader, None], None, None) - scheduler_g.step() - scheduler_d.step() - if net_dur_disc is not None: - scheduler_dur_disc.step() - - -def train_and_evaluate(rank, epoch, hps, nets, optims, schedulers, scaler, loaders, logger, writers): - net_g, net_d, net_dur_disc = nets - optim_g, optim_d, optim_dur_disc = optims - scheduler_g, scheduler_d, scheduler_dur_disc = schedulers - train_loader, eval_loader = loaders - if writers is not None: - writer, writer_eval = writers - - train_loader.batch_sampler.set_epoch(epoch) - global global_step - - net_g.train() - net_d.train() - if net_dur_disc is not None: - net_dur_disc.train() - for batch_idx, (x, x_lengths, spec, spec_lengths, y, y_lengths, speakers, tone, language, bert) in tqdm(enumerate(train_loader)): - if net_g.module.use_noise_scaled_mas: - current_mas_noise_scale = net_g.module.mas_noise_scale_initial - net_g.module.noise_scale_delta * global_step - net_g.module.current_mas_noise_scale = max(current_mas_noise_scale, 0.0) - x, x_lengths = x.cuda(rank, non_blocking=True), x_lengths.cuda(rank, non_blocking=True) - spec, spec_lengths = spec.cuda(rank, non_blocking=True), spec_lengths.cuda(rank, non_blocking=True) - y, y_lengths = y.cuda(rank, non_blocking=True), y_lengths.cuda(rank, non_blocking=True) - speakers = speakers.cuda(rank, non_blocking=True) - tone = tone.cuda(rank, non_blocking=True) - language = language.cuda(rank, non_blocking=True) - bert = bert.cuda(rank, non_blocking=True) - - with autocast(enabled=hps.train.fp16_run): - y_hat, l_length, attn, ids_slice, x_mask, z_mask, \ - (z, z_p, m_p, logs_p, m_q, logs_q), (hidden_x, logw, logw_) = net_g(x, x_lengths, spec, spec_lengths, speakers, tone, language, bert) - mel = spec_to_mel_torch( - spec, - hps.data.filter_length, - hps.data.n_mel_channels, - hps.data.sampling_rate, - hps.data.mel_fmin, - hps.data.mel_fmax) - y_mel = commons.slice_segments(mel, ids_slice, hps.train.segment_size // hps.data.hop_length) - y_hat_mel = mel_spectrogram_torch( - y_hat.squeeze(1), - hps.data.filter_length, - hps.data.n_mel_channels, - hps.data.sampling_rate, - hps.data.hop_length, - hps.data.win_length, - hps.data.mel_fmin, - hps.data.mel_fmax - ) - - y = commons.slice_segments(y, ids_slice * hps.data.hop_length, hps.train.segment_size) # slice - - # Discriminator - y_d_hat_r, y_d_hat_g, _, _ = net_d(y, y_hat.detach()) - with autocast(enabled=False): - loss_disc, losses_disc_r, losses_disc_g = discriminator_loss(y_d_hat_r, y_d_hat_g) - loss_disc_all = loss_disc - if net_dur_disc is not None: - y_dur_hat_r, y_dur_hat_g = net_dur_disc(hidden_x.detach(), x_mask.detach(), logw.detach(), logw_.detach()) - with autocast(enabled=False): - # TODO: I think need to mean using the mask, but for now, just mean all - loss_dur_disc, losses_dur_disc_r, losses_dur_disc_g = discriminator_loss(y_dur_hat_r, y_dur_hat_g) - loss_dur_disc_all = loss_dur_disc - optim_dur_disc.zero_grad() - scaler.scale(loss_dur_disc_all).backward() - scaler.unscale_(optim_dur_disc) - grad_norm_dur_disc = commons.clip_grad_value_(net_dur_disc.parameters(), None) - scaler.step(optim_dur_disc) - - optim_d.zero_grad() - scaler.scale(loss_disc_all).backward() - scaler.unscale_(optim_d) - grad_norm_d = commons.clip_grad_value_(net_d.parameters(), None) - scaler.step(optim_d) - - with autocast(enabled=hps.train.fp16_run): - # Generator - y_d_hat_r, y_d_hat_g, fmap_r, fmap_g = net_d(y, y_hat) - if net_dur_disc is not None: - y_dur_hat_r, y_dur_hat_g = net_dur_disc(hidden_x, x_mask, logw, logw_) - with autocast(enabled=False): - loss_dur = torch.sum(l_length.float()) - loss_mel = F.l1_loss(y_mel, y_hat_mel) * hps.train.c_mel - loss_kl = kl_loss(z_p, logs_q, m_p, logs_p, z_mask) * hps.train.c_kl - - loss_fm = feature_loss(fmap_r, fmap_g) - loss_gen, losses_gen = generator_loss(y_d_hat_g) - loss_gen_all = loss_gen + loss_fm + loss_mel + loss_dur + loss_kl - if net_dur_disc is not None: - loss_dur_gen, losses_dur_gen = generator_loss(y_dur_hat_g) - loss_gen_all += loss_dur_gen - optim_g.zero_grad() - scaler.scale(loss_gen_all).backward() - scaler.unscale_(optim_g) - grad_norm_g = commons.clip_grad_value_(net_g.parameters(), None) - scaler.step(optim_g) - scaler.update() - - if rank == 0: - if global_step % hps.train.log_interval == 0: - lr = optim_g.param_groups[0]['lr'] - losses = [loss_disc, loss_gen, loss_fm, loss_mel, loss_dur, loss_kl] - logger.info('Train Epoch: {} [{:.0f}%]'.format( - epoch, - 100. * batch_idx / len(train_loader))) - logger.info([x.item() for x in losses] + [global_step, lr]) - - scalar_dict = {"loss/g/total": loss_gen_all, "loss/d/total": loss_disc_all, "learning_rate": lr, - "grad_norm_d": grad_norm_d, "grad_norm_g": grad_norm_g} - scalar_dict.update( - {"loss/g/fm": loss_fm, "loss/g/mel": loss_mel, "loss/g/dur": loss_dur, "loss/g/kl": loss_kl}) - scalar_dict.update({"loss/g/{}".format(i): v for i, v in enumerate(losses_gen)}) - scalar_dict.update({"loss/d_r/{}".format(i): v for i, v in enumerate(losses_disc_r)}) - scalar_dict.update({"loss/d_g/{}".format(i): v for i, v in enumerate(losses_disc_g)}) - - image_dict = { - "slice/mel_org": utils.plot_spectrogram_to_numpy(y_mel[0].data.cpu().numpy()), - "slice/mel_gen": utils.plot_spectrogram_to_numpy(y_hat_mel[0].data.cpu().numpy()), - "all/mel": utils.plot_spectrogram_to_numpy(mel[0].data.cpu().numpy()), - "all/attn": utils.plot_alignment_to_numpy(attn[0, 0].data.cpu().numpy()) - } - utils.summarize( - writer=writer, - global_step=global_step, - images=image_dict, - scalars=scalar_dict) - - if global_step % hps.train.eval_interval == 0: - evaluate(hps, net_g, eval_loader, writer_eval) - utils.save_checkpoint(net_g, optim_g, hps.train.learning_rate, epoch, - os.path.join(hps.model_dir, "G_{}.pth".format(global_step))) - utils.save_checkpoint(net_d, optim_d, hps.train.learning_rate, epoch, - os.path.join(hps.model_dir, "D_{}.pth".format(global_step))) - if net_dur_disc is not None: - utils.save_checkpoint(net_dur_disc, optim_dur_disc, hps.train.learning_rate, epoch, os.path.join(hps.model_dir, "DUR_{}.pth".format(global_step))) - keep_ckpts = getattr(hps.train, 'keep_ckpts', 5) - if keep_ckpts > 0: - utils.clean_checkpoints(path_to_models=hps.model_dir, n_ckpts_to_keep=keep_ckpts, sort_by_time=True) - - - global_step += 1 - - if rank == 0: - logger.info('====> Epoch: {}'.format(epoch)) - - - -def evaluate(hps, generator, eval_loader, writer_eval): - generator.eval() - image_dict = {} - audio_dict = {} - print("Evaluating ...") - with torch.no_grad(): - for batch_idx, (x, x_lengths, spec, spec_lengths, y, y_lengths, speakers, tone, language, bert) in enumerate(eval_loader): - x, x_lengths = x.cuda(), x_lengths.cuda() - spec, spec_lengths = spec.cuda(), spec_lengths.cuda() - y, y_lengths = y.cuda(), y_lengths.cuda() - speakers = speakers.cuda() - bert = bert.cuda() - tone = tone.cuda() - language = language.cuda() - for use_sdp in [True, False]: - y_hat, attn, mask, *_ = generator.module.infer(x, x_lengths, speakers, tone, language, bert, y=spec, max_len=1000, sdp_ratio=0.0 if not use_sdp else 1.0) - y_hat_lengths = mask.sum([1, 2]).long() * hps.data.hop_length - - mel = spec_to_mel_torch( - spec, - hps.data.filter_length, - hps.data.n_mel_channels, - hps.data.sampling_rate, - hps.data.mel_fmin, - hps.data.mel_fmax) - y_hat_mel = mel_spectrogram_torch( - y_hat.squeeze(1).float(), - hps.data.filter_length, - hps.data.n_mel_channels, - hps.data.sampling_rate, - hps.data.hop_length, - hps.data.win_length, - hps.data.mel_fmin, - hps.data.mel_fmax - ) - image_dict.update({ - f"gen/mel_{batch_idx}": utils.plot_spectrogram_to_numpy(y_hat_mel[0].cpu().numpy()) - }) - audio_dict.update({ - f"gen/audio_{batch_idx}_{use_sdp}": y_hat[0, :, :y_hat_lengths[0]] - }) - image_dict.update({f"gt/mel_{batch_idx}": utils.plot_spectrogram_to_numpy(mel[0].cpu().numpy())}) - audio_dict.update({f"gt/audio_{batch_idx}": y[0, :, :y_lengths[0]]}) - - utils.summarize( - writer=writer_eval, - global_step=global_step, - images=image_dict, - audios=audio_dict, - audio_sampling_rate=hps.data.sampling_rate - ) - generator.train() - -if __name__ == "__main__": - main() diff --git a/spaces/dinhminh20521597/OCR_DEMO/app.py b/spaces/dinhminh20521597/OCR_DEMO/app.py deleted file mode 100644 index 7e71d758f71366ccebd1ad33ad9e0cebbe4c09b3..0000000000000000000000000000000000000000 --- a/spaces/dinhminh20521597/OCR_DEMO/app.py +++ /dev/null @@ -1,17 +0,0 @@ -import streamlit as st -from multipage import MultiPage -from app_pages import home, about, ocr_comparator - -app = MultiPage() -st.set_page_config( - page_title='OCR Comparator', layout ="wide", - initial_sidebar_state="expanded", -) - -# Add all your application here -app.add_page("App", "cast", ocr_comparator.app) -# app.add_page("Home", "house", home.app) -app.add_page("About", "info-circle", about.app) - -# The main app -app.run() \ No newline at end of file diff --git a/spaces/dma123/gpt-js/js/3rdparty/highlight.min.js b/spaces/dma123/gpt-js/js/3rdparty/highlight.min.js deleted file mode 100644 index 4cbf34939dde481ba42bcb7dbf19852dcca7e422..0000000000000000000000000000000000000000 --- a/spaces/dma123/gpt-js/js/3rdparty/highlight.min.js +++ /dev/null @@ -1,1202 +0,0 @@ -/*! - Highlight.js v11.7.0 (git: 82688fad18) - (c) 2006-2022 undefined and other contributors - License: BSD-3-Clause - */ -var hljs=function(){"use strict";var e={exports:{}};function n(e){ -return e instanceof Map?e.clear=e.delete=e.set=()=>{ -throw Error("map is read-only")}:e instanceof Set&&(e.add=e.clear=e.delete=()=>{ -throw Error("set is read-only") -}),Object.freeze(e),Object.getOwnPropertyNames(e).forEach((t=>{var a=e[t] -;"object"!=typeof a||Object.isFrozen(a)||n(a)})),e} -e.exports=n,e.exports.default=n;class t{constructor(e){ -void 0===e.data&&(e.data={}),this.data=e.data,this.isMatchIgnored=!1} -ignoreMatch(){this.isMatchIgnored=!0}}function a(e){ -return e.replace(/&/g,"&").replace(//g,">").replace(/"/g,""").replace(/'/g,"'") -}function i(e,...n){const t=Object.create(null);for(const n in e)t[n]=e[n] -;return n.forEach((e=>{for(const n in e)t[n]=e[n]})),t} -const r=e=>!!e.scope||e.sublanguage&&e.language;class s{constructor(e,n){ -this.buffer="",this.classPrefix=n.classPrefix,e.walk(this)}addText(e){ -this.buffer+=a(e)}openNode(e){if(!r(e))return;let n="" -;n=e.sublanguage?"language-"+e.language:((e,{prefix:n})=>{if(e.includes(".")){ -const t=e.split(".") -;return[`${n}${t.shift()}`,...t.map(((e,n)=>`${e}${"_".repeat(n+1)}`))].join(" ") -}return`${n}${e}`})(e.scope,{prefix:this.classPrefix}),this.span(n)} -closeNode(e){r(e)&&(this.buffer+="")}value(){return this.buffer}span(e){ -this.buffer+=``}}const o=(e={})=>{const n={children:[]} -;return Object.assign(n,e),n};class l{constructor(){ -this.rootNode=o(),this.stack=[this.rootNode]}get top(){ -return this.stack[this.stack.length-1]}get root(){return this.rootNode}add(e){ -this.top.children.push(e)}openNode(e){const n=o({scope:e}) -;this.add(n),this.stack.push(n)}closeNode(){ -if(this.stack.length>1)return this.stack.pop()}closeAllNodes(){ -for(;this.closeNode(););}toJSON(){return JSON.stringify(this.rootNode,null,4)} -walk(e){return this.constructor._walk(e,this.rootNode)}static _walk(e,n){ -return"string"==typeof n?e.addText(n):n.children&&(e.openNode(n), -n.children.forEach((n=>this._walk(e,n))),e.closeNode(n)),e}static _collapse(e){ -"string"!=typeof e&&e.children&&(e.children.every((e=>"string"==typeof e))?e.children=[e.children.join("")]:e.children.forEach((e=>{ -l._collapse(e)})))}}class c extends l{constructor(e){super(),this.options=e} -addKeyword(e,n){""!==e&&(this.openNode(n),this.addText(e),this.closeNode())} -addText(e){""!==e&&this.add(e)}addSublanguage(e,n){const t=e.root -;t.sublanguage=!0,t.language=n,this.add(t)}toHTML(){ -return new s(this,this.options).value()}finalize(){return!0}}function d(e){ -return e?"string"==typeof e?e:e.source:null}function g(e){return m("(?=",e,")")} -function u(e){return m("(?:",e,")*")}function b(e){return m("(?:",e,")?")} -function m(...e){return e.map((e=>d(e))).join("")}function p(...e){const n=(e=>{ -const n=e[e.length-1] -;return"object"==typeof n&&n.constructor===Object?(e.splice(e.length-1,1),n):{} -})(e);return"("+(n.capture?"":"?:")+e.map((e=>d(e))).join("|")+")"} -function _(e){return RegExp(e.toString()+"|").exec("").length-1} -const h=/\[(?:[^\\\]]|\\.)*\]|\(\??|\\([1-9][0-9]*)|\\./ -;function f(e,{joinWith:n}){let t=0;return e.map((e=>{t+=1;const n=t -;let a=d(e),i="";for(;a.length>0;){const e=h.exec(a);if(!e){i+=a;break} -i+=a.substring(0,e.index), -a=a.substring(e.index+e[0].length),"\\"===e[0][0]&&e[1]?i+="\\"+(Number(e[1])+n):(i+=e[0], -"("===e[0]&&t++)}return i})).map((e=>`(${e})`)).join(n)} -const E="[a-zA-Z]\\w*",y="[a-zA-Z_]\\w*",w="\\b\\d+(\\.\\d+)?",N="(-?)(\\b0[xX][a-fA-F0-9]+|(\\b\\d+(\\.\\d*)?|\\.\\d+)([eE][-+]?\\d+)?)",v="\\b(0b[01]+)",O={ -begin:"\\\\[\\s\\S]",relevance:0},k={scope:"string",begin:"'",end:"'", -illegal:"\\n",contains:[O]},x={scope:"string",begin:'"',end:'"',illegal:"\\n", -contains:[O]},M=(e,n,t={})=>{const a=i({scope:"comment",begin:e,end:n, -contains:[]},t);a.contains.push({scope:"doctag", -begin:"[ ]*(?=(TODO|FIXME|NOTE|BUG|OPTIMIZE|HACK|XXX):)", -end:/(TODO|FIXME|NOTE|BUG|OPTIMIZE|HACK|XXX):/,excludeBegin:!0,relevance:0}) -;const r=p("I","a","is","so","us","to","at","if","in","it","on",/[A-Za-z]+['](d|ve|re|ll|t|s|n)/,/[A-Za-z]+[-][a-z]+/,/[A-Za-z][a-z]{2,}/) -;return a.contains.push({begin:m(/[ ]+/,"(",r,/[.]?[:]?([.][ ]|[ ])/,"){3}")}),a -},S=M("//","$"),A=M("/\\*","\\*/"),C=M("#","$");var T=Object.freeze({ -__proto__:null,MATCH_NOTHING_RE:/\b\B/,IDENT_RE:E,UNDERSCORE_IDENT_RE:y, -NUMBER_RE:w,C_NUMBER_RE:N,BINARY_NUMBER_RE:v, -RE_STARTERS_RE:"!|!=|!==|%|%=|&|&&|&=|\\*|\\*=|\\+|\\+=|,|-|-=|/=|/|:|;|<<|<<=|<=|<|===|==|=|>>>=|>>=|>=|>>>|>>|>|\\?|\\[|\\{|\\(|\\^|\\^=|\\||\\|=|\\|\\||~", -SHEBANG:(e={})=>{const n=/^#![ ]*\// -;return e.binary&&(e.begin=m(n,/.*\b/,e.binary,/\b.*/)),i({scope:"meta",begin:n, -end:/$/,relevance:0,"on:begin":(e,n)=>{0!==e.index&&n.ignoreMatch()}},e)}, -BACKSLASH_ESCAPE:O,APOS_STRING_MODE:k,QUOTE_STRING_MODE:x,PHRASAL_WORDS_MODE:{ -begin:/\b(a|an|the|are|I'm|isn't|don't|doesn't|won't|but|just|should|pretty|simply|enough|gonna|going|wtf|so|such|will|you|your|they|like|more)\b/ -},COMMENT:M,C_LINE_COMMENT_MODE:S,C_BLOCK_COMMENT_MODE:A,HASH_COMMENT_MODE:C, -NUMBER_MODE:{scope:"number",begin:w,relevance:0},C_NUMBER_MODE:{scope:"number", -begin:N,relevance:0},BINARY_NUMBER_MODE:{scope:"number",begin:v,relevance:0}, -REGEXP_MODE:{begin:/(?=\/[^/\n]*\/)/,contains:[{scope:"regexp",begin:/\//, -end:/\/[gimuy]*/,illegal:/\n/,contains:[O,{begin:/\[/,end:/\]/,relevance:0, -contains:[O]}]}]},TITLE_MODE:{scope:"title",begin:E,relevance:0}, -UNDERSCORE_TITLE_MODE:{scope:"title",begin:y,relevance:0},METHOD_GUARD:{ -begin:"\\.\\s*[a-zA-Z_]\\w*",relevance:0},END_SAME_AS_BEGIN:e=>Object.assign(e,{ -"on:begin":(e,n)=>{n.data._beginMatch=e[1]},"on:end":(e,n)=>{ -n.data._beginMatch!==e[1]&&n.ignoreMatch()}})});function R(e,n){ -"."===e.input[e.index-1]&&n.ignoreMatch()}function D(e,n){ -void 0!==e.className&&(e.scope=e.className,delete e.className)}function I(e,n){ -n&&e.beginKeywords&&(e.begin="\\b("+e.beginKeywords.split(" ").join("|")+")(?!\\.)(?=\\b|\\s)", -e.__beforeBegin=R,e.keywords=e.keywords||e.beginKeywords,delete e.beginKeywords, -void 0===e.relevance&&(e.relevance=0))}function L(e,n){ -Array.isArray(e.illegal)&&(e.illegal=p(...e.illegal))}function B(e,n){ -if(e.match){ -if(e.begin||e.end)throw Error("begin & end are not supported with match") -;e.begin=e.match,delete e.match}}function $(e,n){ -void 0===e.relevance&&(e.relevance=1)}const z=(e,n)=>{if(!e.beforeMatch)return -;if(e.starts)throw Error("beforeMatch cannot be used with starts") -;const t=Object.assign({},e);Object.keys(e).forEach((n=>{delete e[n] -})),e.keywords=t.keywords,e.begin=m(t.beforeMatch,g(t.begin)),e.starts={ -relevance:0,contains:[Object.assign(t,{endsParent:!0})] -},e.relevance=0,delete t.beforeMatch -},F=["of","and","for","in","not","or","if","then","parent","list","value"] -;function U(e,n,t="keyword"){const a=Object.create(null) -;return"string"==typeof e?i(t,e.split(" ")):Array.isArray(e)?i(t,e):Object.keys(e).forEach((t=>{ -Object.assign(a,U(e[t],n,t))})),a;function i(e,t){ -n&&(t=t.map((e=>e.toLowerCase()))),t.forEach((n=>{const t=n.split("|") -;a[t[0]]=[e,j(t[0],t[1])]}))}}function j(e,n){ -return n?Number(n):(e=>F.includes(e.toLowerCase()))(e)?0:1}const P={},K=e=>{ -console.error(e)},H=(e,...n)=>{console.log("WARN: "+e,...n)},q=(e,n)=>{ -P[`${e}/${n}`]||(console.log(`Deprecated as of ${e}. ${n}`),P[`${e}/${n}`]=!0) -},Z=Error();function G(e,n,{key:t}){let a=0;const i=e[t],r={},s={} -;for(let e=1;e<=n.length;e++)s[e+a]=i[e],r[e+a]=!0,a+=_(n[e-1]) -;e[t]=s,e[t]._emit=r,e[t]._multi=!0}function W(e){(e=>{ -e.scope&&"object"==typeof e.scope&&null!==e.scope&&(e.beginScope=e.scope, -delete e.scope)})(e),"string"==typeof e.beginScope&&(e.beginScope={ -_wrap:e.beginScope}),"string"==typeof e.endScope&&(e.endScope={_wrap:e.endScope -}),(e=>{if(Array.isArray(e.begin)){ -if(e.skip||e.excludeBegin||e.returnBegin)throw K("skip, excludeBegin, returnBegin not compatible with beginScope: {}"), -Z -;if("object"!=typeof e.beginScope||null===e.beginScope)throw K("beginScope must be object"), -Z;G(e,e.begin,{key:"beginScope"}),e.begin=f(e.begin,{joinWith:""})}})(e),(e=>{ -if(Array.isArray(e.end)){ -if(e.skip||e.excludeEnd||e.returnEnd)throw K("skip, excludeEnd, returnEnd not compatible with endScope: {}"), -Z -;if("object"!=typeof e.endScope||null===e.endScope)throw K("endScope must be object"), -Z;G(e,e.end,{key:"endScope"}),e.end=f(e.end,{joinWith:""})}})(e)}function Q(e){ -function n(n,t){ -return RegExp(d(n),"m"+(e.case_insensitive?"i":"")+(e.unicodeRegex?"u":"")+(t?"g":"")) -}class t{constructor(){ -this.matchIndexes={},this.regexes=[],this.matchAt=1,this.position=0} -addRule(e,n){ -n.position=this.position++,this.matchIndexes[this.matchAt]=n,this.regexes.push([n,e]), -this.matchAt+=_(e)+1}compile(){0===this.regexes.length&&(this.exec=()=>null) -;const e=this.regexes.map((e=>e[1]));this.matcherRe=n(f(e,{joinWith:"|" -}),!0),this.lastIndex=0}exec(e){this.matcherRe.lastIndex=this.lastIndex -;const n=this.matcherRe.exec(e);if(!n)return null -;const t=n.findIndex(((e,n)=>n>0&&void 0!==e)),a=this.matchIndexes[t] -;return n.splice(0,t),Object.assign(n,a)}}class a{constructor(){ -this.rules=[],this.multiRegexes=[], -this.count=0,this.lastIndex=0,this.regexIndex=0}getMatcher(e){ -if(this.multiRegexes[e])return this.multiRegexes[e];const n=new t -;return this.rules.slice(e).forEach((([e,t])=>n.addRule(e,t))), -n.compile(),this.multiRegexes[e]=n,n}resumingScanAtSamePosition(){ -return 0!==this.regexIndex}considerAll(){this.regexIndex=0}addRule(e,n){ -this.rules.push([e,n]),"begin"===n.type&&this.count++}exec(e){ -const n=this.getMatcher(this.regexIndex);n.lastIndex=this.lastIndex -;let t=n.exec(e) -;if(this.resumingScanAtSamePosition())if(t&&t.index===this.lastIndex);else{ -const n=this.getMatcher(0);n.lastIndex=this.lastIndex+1,t=n.exec(e)} -return t&&(this.regexIndex+=t.position+1, -this.regexIndex===this.count&&this.considerAll()),t}} -if(e.compilerExtensions||(e.compilerExtensions=[]), -e.contains&&e.contains.includes("self"))throw Error("ERR: contains `self` is not supported at the top-level of a language. See documentation.") -;return e.classNameAliases=i(e.classNameAliases||{}),function t(r,s){const o=r -;if(r.isCompiled)return o -;[D,B,W,z].forEach((e=>e(r,s))),e.compilerExtensions.forEach((e=>e(r,s))), -r.__beforeBegin=null,[I,L,$].forEach((e=>e(r,s))),r.isCompiled=!0;let l=null -;return"object"==typeof r.keywords&&r.keywords.$pattern&&(r.keywords=Object.assign({},r.keywords), -l=r.keywords.$pattern, -delete r.keywords.$pattern),l=l||/\w+/,r.keywords&&(r.keywords=U(r.keywords,e.case_insensitive)), -o.keywordPatternRe=n(l,!0), -s&&(r.begin||(r.begin=/\B|\b/),o.beginRe=n(o.begin),r.end||r.endsWithParent||(r.end=/\B|\b/), -r.end&&(o.endRe=n(o.end)), -o.terminatorEnd=d(o.end)||"",r.endsWithParent&&s.terminatorEnd&&(o.terminatorEnd+=(r.end?"|":"")+s.terminatorEnd)), -r.illegal&&(o.illegalRe=n(r.illegal)), -r.contains||(r.contains=[]),r.contains=[].concat(...r.contains.map((e=>(e=>(e.variants&&!e.cachedVariants&&(e.cachedVariants=e.variants.map((n=>i(e,{ -variants:null},n)))),e.cachedVariants?e.cachedVariants:X(e)?i(e,{ -starts:e.starts?i(e.starts):null -}):Object.isFrozen(e)?i(e):e))("self"===e?r:e)))),r.contains.forEach((e=>{t(e,o) -})),r.starts&&t(r.starts,s),o.matcher=(e=>{const n=new a -;return e.contains.forEach((e=>n.addRule(e.begin,{rule:e,type:"begin" -}))),e.terminatorEnd&&n.addRule(e.terminatorEnd,{type:"end" -}),e.illegal&&n.addRule(e.illegal,{type:"illegal"}),n})(o),o}(e)}function X(e){ -return!!e&&(e.endsWithParent||X(e.starts))}class V extends Error{ -constructor(e,n){super(e),this.name="HTMLInjectionError",this.html=n}} -const J=a,Y=i,ee=Symbol("nomatch");var ne=(n=>{ -const a=Object.create(null),i=Object.create(null),r=[];let s=!0 -;const o="Could not find the language '{}', did you forget to load/include a language module?",l={ -disableAutodetect:!0,name:"Plain text",contains:[]};let d={ -ignoreUnescapedHTML:!1,throwUnescapedHTML:!1,noHighlightRe:/^(no-?highlight)$/i, -languageDetectRe:/\blang(?:uage)?-([\w-]+)\b/i,classPrefix:"hljs-", -cssSelector:"pre code",languages:null,__emitter:c};function _(e){ -return d.noHighlightRe.test(e)}function h(e,n,t){let a="",i="" -;"object"==typeof n?(a=e, -t=n.ignoreIllegals,i=n.language):(q("10.7.0","highlight(lang, code, ...args) has been deprecated."), -q("10.7.0","Please use highlight(code, options) instead.\nhttps://github.com/highlightjs/highlight.js/issues/2277"), -i=e,a=n),void 0===t&&(t=!0);const r={code:a,language:i};x("before:highlight",r) -;const s=r.result?r.result:f(r.language,r.code,t) -;return s.code=r.code,x("after:highlight",s),s}function f(e,n,i,r){ -const l=Object.create(null);function c(){if(!k.keywords)return void M.addText(S) -;let e=0;k.keywordPatternRe.lastIndex=0;let n=k.keywordPatternRe.exec(S),t="" -;for(;n;){t+=S.substring(e,n.index) -;const i=w.case_insensitive?n[0].toLowerCase():n[0],r=(a=i,k.keywords[a]);if(r){ -const[e,a]=r -;if(M.addText(t),t="",l[i]=(l[i]||0)+1,l[i]<=7&&(A+=a),e.startsWith("_"))t+=n[0];else{ -const t=w.classNameAliases[e]||e;M.addKeyword(n[0],t)}}else t+=n[0] -;e=k.keywordPatternRe.lastIndex,n=k.keywordPatternRe.exec(S)}var a -;t+=S.substring(e),M.addText(t)}function g(){null!=k.subLanguage?(()=>{ -if(""===S)return;let e=null;if("string"==typeof k.subLanguage){ -if(!a[k.subLanguage])return void M.addText(S) -;e=f(k.subLanguage,S,!0,x[k.subLanguage]),x[k.subLanguage]=e._top -}else e=E(S,k.subLanguage.length?k.subLanguage:null) -;k.relevance>0&&(A+=e.relevance),M.addSublanguage(e._emitter,e.language) -})():c(),S=""}function u(e,n){let t=1;const a=n.length-1;for(;t<=a;){ -if(!e._emit[t]){t++;continue}const a=w.classNameAliases[e[t]]||e[t],i=n[t] -;a?M.addKeyword(i,a):(S=i,c(),S=""),t++}}function b(e,n){ -return e.scope&&"string"==typeof e.scope&&M.openNode(w.classNameAliases[e.scope]||e.scope), -e.beginScope&&(e.beginScope._wrap?(M.addKeyword(S,w.classNameAliases[e.beginScope._wrap]||e.beginScope._wrap), -S=""):e.beginScope._multi&&(u(e.beginScope,n),S="")),k=Object.create(e,{parent:{ -value:k}}),k}function m(e,n,a){let i=((e,n)=>{const t=e&&e.exec(n) -;return t&&0===t.index})(e.endRe,a);if(i){if(e["on:end"]){const a=new t(e) -;e["on:end"](n,a),a.isMatchIgnored&&(i=!1)}if(i){ -for(;e.endsParent&&e.parent;)e=e.parent;return e}} -if(e.endsWithParent)return m(e.parent,n,a)}function p(e){ -return 0===k.matcher.regexIndex?(S+=e[0],1):(R=!0,0)}function _(e){ -const t=e[0],a=n.substring(e.index),i=m(k,e,a);if(!i)return ee;const r=k -;k.endScope&&k.endScope._wrap?(g(), -M.addKeyword(t,k.endScope._wrap)):k.endScope&&k.endScope._multi?(g(), -u(k.endScope,e)):r.skip?S+=t:(r.returnEnd||r.excludeEnd||(S+=t), -g(),r.excludeEnd&&(S=t));do{ -k.scope&&M.closeNode(),k.skip||k.subLanguage||(A+=k.relevance),k=k.parent -}while(k!==i.parent);return i.starts&&b(i.starts,e),r.returnEnd?0:t.length} -let h={};function y(a,r){const o=r&&r[0];if(S+=a,null==o)return g(),0 -;if("begin"===h.type&&"end"===r.type&&h.index===r.index&&""===o){ -if(S+=n.slice(r.index,r.index+1),!s){const n=Error(`0 width match regex (${e})`) -;throw n.languageName=e,n.badRule=h.rule,n}return 1} -if(h=r,"begin"===r.type)return(e=>{ -const n=e[0],a=e.rule,i=new t(a),r=[a.__beforeBegin,a["on:begin"]] -;for(const t of r)if(t&&(t(e,i),i.isMatchIgnored))return p(n) -;return a.skip?S+=n:(a.excludeBegin&&(S+=n), -g(),a.returnBegin||a.excludeBegin||(S=n)),b(a,e),a.returnBegin?0:n.length})(r) -;if("illegal"===r.type&&!i){ -const e=Error('Illegal lexeme "'+o+'" for mode "'+(k.scope||"")+'"') -;throw e.mode=k,e}if("end"===r.type){const e=_(r);if(e!==ee)return e} -if("illegal"===r.type&&""===o)return 1 -;if(T>1e5&&T>3*r.index)throw Error("potential infinite loop, way more iterations than matches") -;return S+=o,o.length}const w=v(e) -;if(!w)throw K(o.replace("{}",e)),Error('Unknown language: "'+e+'"') -;const N=Q(w);let O="",k=r||N;const x={},M=new d.__emitter(d);(()=>{const e=[] -;for(let n=k;n!==w;n=n.parent)n.scope&&e.unshift(n.scope) -;e.forEach((e=>M.openNode(e)))})();let S="",A=0,C=0,T=0,R=!1;try{ -for(k.matcher.considerAll();;){ -T++,R?R=!1:k.matcher.considerAll(),k.matcher.lastIndex=C -;const e=k.matcher.exec(n);if(!e)break;const t=y(n.substring(C,e.index),e) -;C=e.index+t} -return y(n.substring(C)),M.closeAllNodes(),M.finalize(),O=M.toHTML(),{ -language:e,value:O,relevance:A,illegal:!1,_emitter:M,_top:k}}catch(t){ -if(t.message&&t.message.includes("Illegal"))return{language:e,value:J(n), -illegal:!0,relevance:0,_illegalBy:{message:t.message,index:C, -context:n.slice(C-100,C+100),mode:t.mode,resultSoFar:O},_emitter:M};if(s)return{ -language:e,value:J(n),illegal:!1,relevance:0,errorRaised:t,_emitter:M,_top:k} -;throw t}}function E(e,n){n=n||d.languages||Object.keys(a);const t=(e=>{ -const n={value:J(e),illegal:!1,relevance:0,_top:l,_emitter:new d.__emitter(d)} -;return n._emitter.addText(e),n})(e),i=n.filter(v).filter(k).map((n=>f(n,e,!1))) -;i.unshift(t);const r=i.sort(((e,n)=>{ -if(e.relevance!==n.relevance)return n.relevance-e.relevance -;if(e.language&&n.language){if(v(e.language).supersetOf===n.language)return 1 -;if(v(n.language).supersetOf===e.language)return-1}return 0})),[s,o]=r,c=s -;return c.secondBest=o,c}function y(e){let n=null;const t=(e=>{ -let n=e.className+" ";n+=e.parentNode?e.parentNode.className:"" -;const t=d.languageDetectRe.exec(n);if(t){const n=v(t[1]) -;return n||(H(o.replace("{}",t[1])), -H("Falling back to no-highlight mode for this block.",e)),n?t[1]:"no-highlight"} -return n.split(/\s+/).find((e=>_(e)||v(e)))})(e);if(_(t))return -;if(x("before:highlightElement",{el:e,language:t -}),e.children.length>0&&(d.ignoreUnescapedHTML||(console.warn("One of your code blocks includes unescaped HTML. This is a potentially serious security risk."), -console.warn("https://github.com/highlightjs/highlight.js/wiki/security"), -console.warn("The element with unescaped HTML:"), -console.warn(e)),d.throwUnescapedHTML))throw new V("One of your code blocks includes unescaped HTML.",e.innerHTML) -;n=e;const a=n.textContent,r=t?h(a,{language:t,ignoreIllegals:!0}):E(a) -;e.innerHTML=r.value,((e,n,t)=>{const a=n&&i[n]||t -;e.classList.add("hljs"),e.classList.add("language-"+a) -})(e,t,r.language),e.result={language:r.language,re:r.relevance, -relevance:r.relevance},r.secondBest&&(e.secondBest={ -language:r.secondBest.language,relevance:r.secondBest.relevance -}),x("after:highlightElement",{el:e,result:r,text:a})}let w=!1;function N(){ -"loading"!==document.readyState?document.querySelectorAll(d.cssSelector).forEach(y):w=!0 -}function v(e){return e=(e||"").toLowerCase(),a[e]||a[i[e]]} -function O(e,{languageName:n}){"string"==typeof e&&(e=[e]),e.forEach((e=>{ -i[e.toLowerCase()]=n}))}function k(e){const n=v(e) -;return n&&!n.disableAutodetect}function x(e,n){const t=e;r.forEach((e=>{ -e[t]&&e[t](n)}))} -"undefined"!=typeof window&&window.addEventListener&&window.addEventListener("DOMContentLoaded",(()=>{ -w&&N()}),!1),Object.assign(n,{highlight:h,highlightAuto:E,highlightAll:N, -highlightElement:y, -highlightBlock:e=>(q("10.7.0","highlightBlock will be removed entirely in v12.0"), -q("10.7.0","Please use highlightElement now."),y(e)),configure:e=>{d=Y(d,e)}, -initHighlighting:()=>{ -N(),q("10.6.0","initHighlighting() deprecated. Use highlightAll() now.")}, -initHighlightingOnLoad:()=>{ -N(),q("10.6.0","initHighlightingOnLoad() deprecated. Use highlightAll() now.") -},registerLanguage:(e,t)=>{let i=null;try{i=t(n)}catch(n){ -if(K("Language definition for '{}' could not be registered.".replace("{}",e)), -!s)throw n;K(n),i=l} -i.name||(i.name=e),a[e]=i,i.rawDefinition=t.bind(null,n),i.aliases&&O(i.aliases,{ -languageName:e})},unregisterLanguage:e=>{delete a[e] -;for(const n of Object.keys(i))i[n]===e&&delete i[n]}, -listLanguages:()=>Object.keys(a),getLanguage:v,registerAliases:O, -autoDetection:k,inherit:Y,addPlugin:e=>{(e=>{ -e["before:highlightBlock"]&&!e["before:highlightElement"]&&(e["before:highlightElement"]=n=>{ -e["before:highlightBlock"](Object.assign({block:n.el},n)) -}),e["after:highlightBlock"]&&!e["after:highlightElement"]&&(e["after:highlightElement"]=n=>{ -e["after:highlightBlock"](Object.assign({block:n.el},n))})})(e),r.push(e)} -}),n.debugMode=()=>{s=!1},n.safeMode=()=>{s=!0 -},n.versionString="11.7.0",n.regex={concat:m,lookahead:g,either:p,optional:b, -anyNumberOfTimes:u};for(const n in T)"object"==typeof T[n]&&e.exports(T[n]) -;return Object.assign(n,T),n})({});const te=e=>({IMPORTANT:{scope:"meta", -begin:"!important"},BLOCK_COMMENT:e.C_BLOCK_COMMENT_MODE,HEXCOLOR:{ -scope:"number",begin:/#(([0-9a-fA-F]{3,4})|(([0-9a-fA-F]{2}){3,4}))\b/}, -FUNCTION_DISPATCH:{className:"built_in",begin:/[\w-]+(?=\()/}, -ATTRIBUTE_SELECTOR_MODE:{scope:"selector-attr",begin:/\[/,end:/\]/,illegal:"$", -contains:[e.APOS_STRING_MODE,e.QUOTE_STRING_MODE]},CSS_NUMBER_MODE:{ -scope:"number", -begin:e.NUMBER_RE+"(%|em|ex|ch|rem|vw|vh|vmin|vmax|cm|mm|in|pt|pc|px|deg|grad|rad|turn|s|ms|Hz|kHz|dpi|dpcm|dppx)?", -relevance:0},CSS_VARIABLE:{className:"attr",begin:/--[A-Za-z][A-Za-z0-9_-]*/} -}),ae=["a","abbr","address","article","aside","audio","b","blockquote","body","button","canvas","caption","cite","code","dd","del","details","dfn","div","dl","dt","em","fieldset","figcaption","figure","footer","form","h1","h2","h3","h4","h5","h6","header","hgroup","html","i","iframe","img","input","ins","kbd","label","legend","li","main","mark","menu","nav","object","ol","p","q","quote","samp","section","span","strong","summary","sup","table","tbody","td","textarea","tfoot","th","thead","time","tr","ul","var","video"],ie=["any-hover","any-pointer","aspect-ratio","color","color-gamut","color-index","device-aspect-ratio","device-height","device-width","display-mode","forced-colors","grid","height","hover","inverted-colors","monochrome","orientation","overflow-block","overflow-inline","pointer","prefers-color-scheme","prefers-contrast","prefers-reduced-motion","prefers-reduced-transparency","resolution","scan","scripting","update","width","min-width","max-width","min-height","max-height"],re=["active","any-link","blank","checked","current","default","defined","dir","disabled","drop","empty","enabled","first","first-child","first-of-type","fullscreen","future","focus","focus-visible","focus-within","has","host","host-context","hover","indeterminate","in-range","invalid","is","lang","last-child","last-of-type","left","link","local-link","not","nth-child","nth-col","nth-last-child","nth-last-col","nth-last-of-type","nth-of-type","only-child","only-of-type","optional","out-of-range","past","placeholder-shown","read-only","read-write","required","right","root","scope","target","target-within","user-invalid","valid","visited","where"],se=["after","backdrop","before","cue","cue-region","first-letter","first-line","grammar-error","marker","part","placeholder","selection","slotted","spelling-error"],oe=["align-content","align-items","align-self","all","animation","animation-delay","animation-direction","animation-duration","animation-fill-mode","animation-iteration-count","animation-name","animation-play-state","animation-timing-function","backface-visibility","background","background-attachment","background-blend-mode","background-clip","background-color","background-image","background-origin","background-position","background-repeat","background-size","block-size","border","border-block","border-block-color","border-block-end","border-block-end-color","border-block-end-style","border-block-end-width","border-block-start","border-block-start-color","border-block-start-style","border-block-start-width","border-block-style","border-block-width","border-bottom","border-bottom-color","border-bottom-left-radius","border-bottom-right-radius","border-bottom-style","border-bottom-width","border-collapse","border-color","border-image","border-image-outset","border-image-repeat","border-image-slice","border-image-source","border-image-width","border-inline","border-inline-color","border-inline-end","border-inline-end-color","border-inline-end-style","border-inline-end-width","border-inline-start","border-inline-start-color","border-inline-start-style","border-inline-start-width","border-inline-style","border-inline-width","border-left","border-left-color","border-left-style","border-left-width","border-radius","border-right","border-right-color","border-right-style","border-right-width","border-spacing","border-style","border-top","border-top-color","border-top-left-radius","border-top-right-radius","border-top-style","border-top-width","border-width","bottom","box-decoration-break","box-shadow","box-sizing","break-after","break-before","break-inside","caption-side","caret-color","clear","clip","clip-path","clip-rule","color","column-count","column-fill","column-gap","column-rule","column-rule-color","column-rule-style","column-rule-width","column-span","column-width","columns","contain","content","content-visibility","counter-increment","counter-reset","cue","cue-after","cue-before","cursor","direction","display","empty-cells","filter","flex","flex-basis","flex-direction","flex-flow","flex-grow","flex-shrink","flex-wrap","float","flow","font","font-display","font-family","font-feature-settings","font-kerning","font-language-override","font-size","font-size-adjust","font-smoothing","font-stretch","font-style","font-synthesis","font-variant","font-variant-caps","font-variant-east-asian","font-variant-ligatures","font-variant-numeric","font-variant-position","font-variation-settings","font-weight","gap","glyph-orientation-vertical","grid","grid-area","grid-auto-columns","grid-auto-flow","grid-auto-rows","grid-column","grid-column-end","grid-column-start","grid-gap","grid-row","grid-row-end","grid-row-start","grid-template","grid-template-areas","grid-template-columns","grid-template-rows","hanging-punctuation","height","hyphens","icon","image-orientation","image-rendering","image-resolution","ime-mode","inline-size","isolation","justify-content","left","letter-spacing","line-break","line-height","list-style","list-style-image","list-style-position","list-style-type","margin","margin-block","margin-block-end","margin-block-start","margin-bottom","margin-inline","margin-inline-end","margin-inline-start","margin-left","margin-right","margin-top","marks","mask","mask-border","mask-border-mode","mask-border-outset","mask-border-repeat","mask-border-slice","mask-border-source","mask-border-width","mask-clip","mask-composite","mask-image","mask-mode","mask-origin","mask-position","mask-repeat","mask-size","mask-type","max-block-size","max-height","max-inline-size","max-width","min-block-size","min-height","min-inline-size","min-width","mix-blend-mode","nav-down","nav-index","nav-left","nav-right","nav-up","none","normal","object-fit","object-position","opacity","order","orphans","outline","outline-color","outline-offset","outline-style","outline-width","overflow","overflow-wrap","overflow-x","overflow-y","padding","padding-block","padding-block-end","padding-block-start","padding-bottom","padding-inline","padding-inline-end","padding-inline-start","padding-left","padding-right","padding-top","page-break-after","page-break-before","page-break-inside","pause","pause-after","pause-before","perspective","perspective-origin","pointer-events","position","quotes","resize","rest","rest-after","rest-before","right","row-gap","scroll-margin","scroll-margin-block","scroll-margin-block-end","scroll-margin-block-start","scroll-margin-bottom","scroll-margin-inline","scroll-margin-inline-end","scroll-margin-inline-start","scroll-margin-left","scroll-margin-right","scroll-margin-top","scroll-padding","scroll-padding-block","scroll-padding-block-end","scroll-padding-block-start","scroll-padding-bottom","scroll-padding-inline","scroll-padding-inline-end","scroll-padding-inline-start","scroll-padding-left","scroll-padding-right","scroll-padding-top","scroll-snap-align","scroll-snap-stop","scroll-snap-type","scrollbar-color","scrollbar-gutter","scrollbar-width","shape-image-threshold","shape-margin","shape-outside","speak","speak-as","src","tab-size","table-layout","text-align","text-align-all","text-align-last","text-combine-upright","text-decoration","text-decoration-color","text-decoration-line","text-decoration-style","text-emphasis","text-emphasis-color","text-emphasis-position","text-emphasis-style","text-indent","text-justify","text-orientation","text-overflow","text-rendering","text-shadow","text-transform","text-underline-position","top","transform","transform-box","transform-origin","transform-style","transition","transition-delay","transition-duration","transition-property","transition-timing-function","unicode-bidi","vertical-align","visibility","voice-balance","voice-duration","voice-family","voice-pitch","voice-range","voice-rate","voice-stress","voice-volume","white-space","widows","width","will-change","word-break","word-spacing","word-wrap","writing-mode","z-index"].reverse(),le=re.concat(se) -;var ce="\\.([0-9](_*[0-9])*)",de="[0-9a-fA-F](_*[0-9a-fA-F])*",ge={ -className:"number",variants:[{ -begin:`(\\b([0-9](_*[0-9])*)((${ce})|\\.)?|(${ce}))[eE][+-]?([0-9](_*[0-9])*)[fFdD]?\\b` -},{begin:`\\b([0-9](_*[0-9])*)((${ce})[fFdD]?\\b|\\.([fFdD]\\b)?)`},{ -begin:`(${ce})[fFdD]?\\b`},{begin:"\\b([0-9](_*[0-9])*)[fFdD]\\b"},{ -begin:`\\b0[xX]((${de})\\.?|(${de})?\\.(${de}))[pP][+-]?([0-9](_*[0-9])*)[fFdD]?\\b` -},{begin:"\\b(0|[1-9](_*[0-9])*)[lL]?\\b"},{begin:`\\b0[xX](${de})[lL]?\\b`},{ -begin:"\\b0(_*[0-7])*[lL]?\\b"},{begin:"\\b0[bB][01](_*[01])*[lL]?\\b"}], -relevance:0};function ue(e,n,t){return-1===t?"":e.replace(n,(a=>ue(e,n,t-1)))} -const be="[A-Za-z$_][0-9A-Za-z$_]*",me=["as","in","of","if","for","while","finally","var","new","function","do","return","void","else","break","catch","instanceof","with","throw","case","default","try","switch","continue","typeof","delete","let","yield","const","class","debugger","async","await","static","import","from","export","extends"],pe=["true","false","null","undefined","NaN","Infinity"],_e=["Object","Function","Boolean","Symbol","Math","Date","Number","BigInt","String","RegExp","Array","Float32Array","Float64Array","Int8Array","Uint8Array","Uint8ClampedArray","Int16Array","Int32Array","Uint16Array","Uint32Array","BigInt64Array","BigUint64Array","Set","Map","WeakSet","WeakMap","ArrayBuffer","SharedArrayBuffer","Atomics","DataView","JSON","Promise","Generator","GeneratorFunction","AsyncFunction","Reflect","Proxy","Intl","WebAssembly"],he=["Error","EvalError","InternalError","RangeError","ReferenceError","SyntaxError","TypeError","URIError"],fe=["setInterval","setTimeout","clearInterval","clearTimeout","require","exports","eval","isFinite","isNaN","parseFloat","parseInt","decodeURI","decodeURIComponent","encodeURI","encodeURIComponent","escape","unescape"],Ee=["arguments","this","super","console","window","document","localStorage","module","global"],ye=[].concat(fe,_e,he) -;function we(e){const n=e.regex,t=be,a={begin:/<[A-Za-z0-9\\._:-]+/, -end:/\/[A-Za-z0-9\\._:-]+>|\/>/,isTrulyOpeningTag:(e,n)=>{ -const t=e[0].length+e.index,a=e.input[t] -;if("<"===a||","===a)return void n.ignoreMatch();let i -;">"===a&&(((e,{after:n})=>{const t="",k={ -match:[/const|var|let/,/\s+/,t,/\s*/,/=\s*/,/(async\s*)?/,n.lookahead(O)], -keywords:"async",className:{1:"keyword",3:"title.function"},contains:[_]} -;return{name:"Javascript",aliases:["js","jsx","mjs","cjs"],keywords:i,exports:{ -PARAMS_CONTAINS:p,CLASS_REFERENCE:f},illegal:/#(?![$_A-z])/, -contains:[e.SHEBANG({label:"shebang",binary:"node",relevance:5}),{ -label:"use_strict",className:"meta",relevance:10, -begin:/^\s*['"]use (strict|asm)['"]/ -},e.APOS_STRING_MODE,e.QUOTE_STRING_MODE,c,d,g,u,{match:/\$\d+/},o,f,{ -className:"attr",begin:t+n.lookahead(":"),relevance:0},k,{ -begin:"("+e.RE_STARTERS_RE+"|\\b(case|return|throw)\\b)\\s*", -keywords:"return throw case",relevance:0,contains:[u,e.REGEXP_MODE,{ -className:"function",begin:O,returnBegin:!0,end:"\\s*=>",contains:[{ -className:"params",variants:[{begin:e.UNDERSCORE_IDENT_RE,relevance:0},{ -className:null,begin:/\(\s*\)/,skip:!0},{begin:/\(/,end:/\)/,excludeBegin:!0, -excludeEnd:!0,keywords:i,contains:p}]}]},{begin:/,/,relevance:0},{match:/\s+/, -relevance:0},{variants:[{begin:"<>",end:""},{ -match:/<[A-Za-z0-9\\._:-]+\s*\/>/},{begin:a.begin, -"on:begin":a.isTrulyOpeningTag,end:a.end}],subLanguage:"xml",contains:[{ -begin:a.begin,end:a.end,skip:!0,contains:["self"]}]}]},E,{ -beginKeywords:"while if switch catch for"},{ -begin:"\\b(?!function)"+e.UNDERSCORE_IDENT_RE+"\\([^()]*(\\([^()]*(\\([^()]*\\)[^()]*)*\\)[^()]*)*\\)\\s*\\{", -returnBegin:!0,label:"func.def",contains:[_,e.inherit(e.TITLE_MODE,{begin:t, -className:"title.function"})]},{match:/\.\.\./,relevance:0},N,{match:"\\$"+t, -relevance:0},{match:[/\bconstructor(?=\s*\()/],className:{1:"title.function"}, -contains:[_]},y,{relevance:0,match:/\b[A-Z][A-Z_0-9]+\b/, -className:"variable.constant"},h,v,{match:/\$[(.]/}]}} -const Ne=e=>m(/\b/,e,/\w$/.test(e)?/\b/:/\B/),ve=["Protocol","Type"].map(Ne),Oe=["init","self"].map(Ne),ke=["Any","Self"],xe=["actor","any","associatedtype","async","await",/as\?/,/as!/,"as","break","case","catch","class","continue","convenience","default","defer","deinit","didSet","distributed","do","dynamic","else","enum","extension","fallthrough",/fileprivate\(set\)/,"fileprivate","final","for","func","get","guard","if","import","indirect","infix",/init\?/,/init!/,"inout",/internal\(set\)/,"internal","in","is","isolated","nonisolated","lazy","let","mutating","nonmutating",/open\(set\)/,"open","operator","optional","override","postfix","precedencegroup","prefix",/private\(set\)/,"private","protocol",/public\(set\)/,"public","repeat","required","rethrows","return","set","some","static","struct","subscript","super","switch","throws","throw",/try\?/,/try!/,"try","typealias",/unowned\(safe\)/,/unowned\(unsafe\)/,"unowned","var","weak","where","while","willSet"],Me=["false","nil","true"],Se=["assignment","associativity","higherThan","left","lowerThan","none","right"],Ae=["#colorLiteral","#column","#dsohandle","#else","#elseif","#endif","#error","#file","#fileID","#fileLiteral","#filePath","#function","#if","#imageLiteral","#keyPath","#line","#selector","#sourceLocation","#warn_unqualified_access","#warning"],Ce=["abs","all","any","assert","assertionFailure","debugPrint","dump","fatalError","getVaList","isKnownUniquelyReferenced","max","min","numericCast","pointwiseMax","pointwiseMin","precondition","preconditionFailure","print","readLine","repeatElement","sequence","stride","swap","swift_unboxFromSwiftValueWithType","transcode","type","unsafeBitCast","unsafeDowncast","withExtendedLifetime","withUnsafeMutablePointer","withUnsafePointer","withVaList","withoutActuallyEscaping","zip"],Te=p(/[/=\-+!*%<>&|^~?]/,/[\u00A1-\u00A7]/,/[\u00A9\u00AB]/,/[\u00AC\u00AE]/,/[\u00B0\u00B1]/,/[\u00B6\u00BB\u00BF\u00D7\u00F7]/,/[\u2016-\u2017]/,/[\u2020-\u2027]/,/[\u2030-\u203E]/,/[\u2041-\u2053]/,/[\u2055-\u205E]/,/[\u2190-\u23FF]/,/[\u2500-\u2775]/,/[\u2794-\u2BFF]/,/[\u2E00-\u2E7F]/,/[\u3001-\u3003]/,/[\u3008-\u3020]/,/[\u3030]/),Re=p(Te,/[\u0300-\u036F]/,/[\u1DC0-\u1DFF]/,/[\u20D0-\u20FF]/,/[\uFE00-\uFE0F]/,/[\uFE20-\uFE2F]/),De=m(Te,Re,"*"),Ie=p(/[a-zA-Z_]/,/[\u00A8\u00AA\u00AD\u00AF\u00B2-\u00B5\u00B7-\u00BA]/,/[\u00BC-\u00BE\u00C0-\u00D6\u00D8-\u00F6\u00F8-\u00FF]/,/[\u0100-\u02FF\u0370-\u167F\u1681-\u180D\u180F-\u1DBF]/,/[\u1E00-\u1FFF]/,/[\u200B-\u200D\u202A-\u202E\u203F-\u2040\u2054\u2060-\u206F]/,/[\u2070-\u20CF\u2100-\u218F\u2460-\u24FF\u2776-\u2793]/,/[\u2C00-\u2DFF\u2E80-\u2FFF]/,/[\u3004-\u3007\u3021-\u302F\u3031-\u303F\u3040-\uD7FF]/,/[\uF900-\uFD3D\uFD40-\uFDCF\uFDF0-\uFE1F\uFE30-\uFE44]/,/[\uFE47-\uFEFE\uFF00-\uFFFD]/),Le=p(Ie,/\d/,/[\u0300-\u036F\u1DC0-\u1DFF\u20D0-\u20FF\uFE20-\uFE2F]/),Be=m(Ie,Le,"*"),$e=m(/[A-Z]/,Le,"*"),ze=["autoclosure",m(/convention\(/,p("swift","block","c"),/\)/),"discardableResult","dynamicCallable","dynamicMemberLookup","escaping","frozen","GKInspectable","IBAction","IBDesignable","IBInspectable","IBOutlet","IBSegueAction","inlinable","main","nonobjc","NSApplicationMain","NSCopying","NSManaged",m(/objc\(/,Be,/\)/),"objc","objcMembers","propertyWrapper","requires_stored_property_inits","resultBuilder","testable","UIApplicationMain","unknown","usableFromInline"],Fe=["iOS","iOSApplicationExtension","macOS","macOSApplicationExtension","macCatalyst","macCatalystApplicationExtension","watchOS","watchOSApplicationExtension","tvOS","tvOSApplicationExtension","swift"] -;var Ue=Object.freeze({__proto__:null,grmr_bash:e=>{const n=e.regex,t={},a={ -begin:/\$\{/,end:/\}/,contains:["self",{begin:/:-/,contains:[t]}]} -;Object.assign(t,{className:"variable",variants:[{ -begin:n.concat(/\$[\w\d#@][\w\d_]*/,"(?![\\w\\d])(?![$])")},a]});const i={ -className:"subst",begin:/\$\(/,end:/\)/,contains:[e.BACKSLASH_ESCAPE]},r={ -begin:/<<-?\s*(?=\w+)/,starts:{contains:[e.END_SAME_AS_BEGIN({begin:/(\w+)/, -end:/(\w+)/,className:"string"})]}},s={className:"string",begin:/"/,end:/"/, -contains:[e.BACKSLASH_ESCAPE,t,i]};i.contains.push(s);const o={begin:/\$?\(\(/, -end:/\)\)/,contains:[{begin:/\d+#[0-9a-f]+/,className:"number"},e.NUMBER_MODE,t] -},l=e.SHEBANG({binary:"(fish|bash|zsh|sh|csh|ksh|tcsh|dash|scsh)",relevance:10 -}),c={className:"function",begin:/\w[\w\d_]*\s*\(\s*\)\s*\{/,returnBegin:!0, -contains:[e.inherit(e.TITLE_MODE,{begin:/\w[\w\d_]*/})],relevance:0};return{ -name:"Bash",aliases:["sh"],keywords:{$pattern:/\b[a-z][a-z0-9._-]+\b/, -keyword:["if","then","else","elif","fi","for","while","in","do","done","case","esac","function"], -literal:["true","false"], -built_in:["break","cd","continue","eval","exec","exit","export","getopts","hash","pwd","readonly","return","shift","test","times","trap","umask","unset","alias","bind","builtin","caller","command","declare","echo","enable","help","let","local","logout","mapfile","printf","read","readarray","source","type","typeset","ulimit","unalias","set","shopt","autoload","bg","bindkey","bye","cap","chdir","clone","comparguments","compcall","compctl","compdescribe","compfiles","compgroups","compquote","comptags","comptry","compvalues","dirs","disable","disown","echotc","echoti","emulate","fc","fg","float","functions","getcap","getln","history","integer","jobs","kill","limit","log","noglob","popd","print","pushd","pushln","rehash","sched","setcap","setopt","stat","suspend","ttyctl","unfunction","unhash","unlimit","unsetopt","vared","wait","whence","where","which","zcompile","zformat","zftp","zle","zmodload","zparseopts","zprof","zpty","zregexparse","zsocket","zstyle","ztcp","chcon","chgrp","chown","chmod","cp","dd","df","dir","dircolors","ln","ls","mkdir","mkfifo","mknod","mktemp","mv","realpath","rm","rmdir","shred","sync","touch","truncate","vdir","b2sum","base32","base64","cat","cksum","comm","csplit","cut","expand","fmt","fold","head","join","md5sum","nl","numfmt","od","paste","ptx","pr","sha1sum","sha224sum","sha256sum","sha384sum","sha512sum","shuf","sort","split","sum","tac","tail","tr","tsort","unexpand","uniq","wc","arch","basename","chroot","date","dirname","du","echo","env","expr","factor","groups","hostid","id","link","logname","nice","nohup","nproc","pathchk","pinky","printenv","printf","pwd","readlink","runcon","seq","sleep","stat","stdbuf","stty","tee","test","timeout","tty","uname","unlink","uptime","users","who","whoami","yes"] -},contains:[l,e.SHEBANG(),c,o,e.HASH_COMMENT_MODE,r,{match:/(\/[a-z._-]+)+/},s,{ -className:"",begin:/\\"/},{className:"string",begin:/'/,end:/'/},t]}}, -grmr_c:e=>{const n=e.regex,t=e.COMMENT("//","$",{contains:[{begin:/\\\n/}] -}),a="[a-zA-Z_]\\w*::",i="(decltype\\(auto\\)|"+n.optional(a)+"[a-zA-Z_]\\w*"+n.optional("<[^<>]+>")+")",r={ -className:"type",variants:[{begin:"\\b[a-z\\d_]*_t\\b"},{ -match:/\batomic_[a-z]{3,6}\b/}]},s={className:"string",variants:[{ -begin:'(u8?|U|L)?"',end:'"',illegal:"\\n",contains:[e.BACKSLASH_ESCAPE]},{ -begin:"(u8?|U|L)?'(\\\\(x[0-9A-Fa-f]{2}|u[0-9A-Fa-f]{4,8}|[0-7]{3}|\\S)|.)", -end:"'",illegal:"."},e.END_SAME_AS_BEGIN({ -begin:/(?:u8?|U|L)?R"([^()\\ ]{0,16})\(/,end:/\)([^()\\ ]{0,16})"/})]},o={ -className:"number",variants:[{begin:"\\b(0b[01']+)"},{ -begin:"(-?)\\b([\\d']+(\\.[\\d']*)?|\\.[\\d']+)((ll|LL|l|L)(u|U)?|(u|U)(ll|LL|l|L)?|f|F|b|B)" -},{ -begin:"(-?)(\\b0[xX][a-fA-F0-9']+|(\\b[\\d']+(\\.[\\d']*)?|\\.[\\d']+)([eE][-+]?[\\d']+)?)" -}],relevance:0},l={className:"meta",begin:/#\s*[a-z]+\b/,end:/$/,keywords:{ -keyword:"if else elif endif define undef warning error line pragma _Pragma ifdef ifndef include" -},contains:[{begin:/\\\n/,relevance:0},e.inherit(s,{className:"string"}),{ -className:"string",begin:/<.*?>/},t,e.C_BLOCK_COMMENT_MODE]},c={ -className:"title",begin:n.optional(a)+e.IDENT_RE,relevance:0 -},d=n.optional(a)+e.IDENT_RE+"\\s*\\(",g={ -keyword:["asm","auto","break","case","continue","default","do","else","enum","extern","for","fortran","goto","if","inline","register","restrict","return","sizeof","struct","switch","typedef","union","volatile","while","_Alignas","_Alignof","_Atomic","_Generic","_Noreturn","_Static_assert","_Thread_local","alignas","alignof","noreturn","static_assert","thread_local","_Pragma"], -type:["float","double","signed","unsigned","int","short","long","char","void","_Bool","_Complex","_Imaginary","_Decimal32","_Decimal64","_Decimal128","const","static","complex","bool","imaginary"], -literal:"true false NULL", -built_in:"std string wstring cin cout cerr clog stdin stdout stderr stringstream istringstream ostringstream auto_ptr deque list queue stack vector map set pair bitset multiset multimap unordered_set unordered_map unordered_multiset unordered_multimap priority_queue make_pair array shared_ptr abort terminate abs acos asin atan2 atan calloc ceil cosh cos exit exp fabs floor fmod fprintf fputs free frexp fscanf future isalnum isalpha iscntrl isdigit isgraph islower isprint ispunct isspace isupper isxdigit tolower toupper labs ldexp log10 log malloc realloc memchr memcmp memcpy memset modf pow printf putchar puts scanf sinh sin snprintf sprintf sqrt sscanf strcat strchr strcmp strcpy strcspn strlen strncat strncmp strncpy strpbrk strrchr strspn strstr tanh tan vfprintf vprintf vsprintf endl initializer_list unique_ptr" -},u=[l,r,t,e.C_BLOCK_COMMENT_MODE,o,s],b={variants:[{begin:/=/,end:/;/},{ -begin:/\(/,end:/\)/},{beginKeywords:"new throw return else",end:/;/}], -keywords:g,contains:u.concat([{begin:/\(/,end:/\)/,keywords:g, -contains:u.concat(["self"]),relevance:0}]),relevance:0},m={ -begin:"("+i+"[\\*&\\s]+)+"+d,returnBegin:!0,end:/[{;=]/,excludeEnd:!0, -keywords:g,illegal:/[^\w\s\*&:<>.]/,contains:[{begin:"decltype\\(auto\\)", -keywords:g,relevance:0},{begin:d,returnBegin:!0,contains:[e.inherit(c,{ -className:"title.function"})],relevance:0},{relevance:0,match:/,/},{ -className:"params",begin:/\(/,end:/\)/,keywords:g,relevance:0, -contains:[t,e.C_BLOCK_COMMENT_MODE,s,o,r,{begin:/\(/,end:/\)/,keywords:g, -relevance:0,contains:["self",t,e.C_BLOCK_COMMENT_MODE,s,o,r]}] -},r,t,e.C_BLOCK_COMMENT_MODE,l]};return{name:"C",aliases:["h"],keywords:g, -disableAutodetect:!0,illegal:"=]/,contains:[{ -beginKeywords:"final class struct"},e.TITLE_MODE]}]),exports:{preprocessor:l, -strings:s,keywords:g}}},grmr_cpp:e=>{const n=e.regex,t=e.COMMENT("//","$",{ -contains:[{begin:/\\\n/}] -}),a="[a-zA-Z_]\\w*::",i="(?!struct)(decltype\\(auto\\)|"+n.optional(a)+"[a-zA-Z_]\\w*"+n.optional("<[^<>]+>")+")",r={ -className:"type",begin:"\\b[a-z\\d_]*_t\\b"},s={className:"string",variants:[{ -begin:'(u8?|U|L)?"',end:'"',illegal:"\\n",contains:[e.BACKSLASH_ESCAPE]},{ -begin:"(u8?|U|L)?'(\\\\(x[0-9A-Fa-f]{2}|u[0-9A-Fa-f]{4,8}|[0-7]{3}|\\S)|.)", -end:"'",illegal:"."},e.END_SAME_AS_BEGIN({ -begin:/(?:u8?|U|L)?R"([^()\\ ]{0,16})\(/,end:/\)([^()\\ ]{0,16})"/})]},o={ -className:"number",variants:[{begin:"\\b(0b[01']+)"},{ -begin:"(-?)\\b([\\d']+(\\.[\\d']*)?|\\.[\\d']+)((ll|LL|l|L)(u|U)?|(u|U)(ll|LL|l|L)?|f|F|b|B)" -},{ -begin:"(-?)(\\b0[xX][a-fA-F0-9']+|(\\b[\\d']+(\\.[\\d']*)?|\\.[\\d']+)([eE][-+]?[\\d']+)?)" -}],relevance:0},l={className:"meta",begin:/#\s*[a-z]+\b/,end:/$/,keywords:{ -keyword:"if else elif endif define undef warning error line pragma _Pragma ifdef ifndef include" -},contains:[{begin:/\\\n/,relevance:0},e.inherit(s,{className:"string"}),{ -className:"string",begin:/<.*?>/},t,e.C_BLOCK_COMMENT_MODE]},c={ -className:"title",begin:n.optional(a)+e.IDENT_RE,relevance:0 -},d=n.optional(a)+e.IDENT_RE+"\\s*\\(",g={ -type:["bool","char","char16_t","char32_t","char8_t","double","float","int","long","short","void","wchar_t","unsigned","signed","const","static"], -keyword:["alignas","alignof","and","and_eq","asm","atomic_cancel","atomic_commit","atomic_noexcept","auto","bitand","bitor","break","case","catch","class","co_await","co_return","co_yield","compl","concept","const_cast|10","consteval","constexpr","constinit","continue","decltype","default","delete","do","dynamic_cast|10","else","enum","explicit","export","extern","false","final","for","friend","goto","if","import","inline","module","mutable","namespace","new","noexcept","not","not_eq","nullptr","operator","or","or_eq","override","private","protected","public","reflexpr","register","reinterpret_cast|10","requires","return","sizeof","static_assert","static_cast|10","struct","switch","synchronized","template","this","thread_local","throw","transaction_safe","transaction_safe_dynamic","true","try","typedef","typeid","typename","union","using","virtual","volatile","while","xor","xor_eq"], -literal:["NULL","false","nullopt","nullptr","true"],built_in:["_Pragma"], -_type_hints:["any","auto_ptr","barrier","binary_semaphore","bitset","complex","condition_variable","condition_variable_any","counting_semaphore","deque","false_type","future","imaginary","initializer_list","istringstream","jthread","latch","lock_guard","multimap","multiset","mutex","optional","ostringstream","packaged_task","pair","promise","priority_queue","queue","recursive_mutex","recursive_timed_mutex","scoped_lock","set","shared_future","shared_lock","shared_mutex","shared_timed_mutex","shared_ptr","stack","string_view","stringstream","timed_mutex","thread","true_type","tuple","unique_lock","unique_ptr","unordered_map","unordered_multimap","unordered_multiset","unordered_set","variant","vector","weak_ptr","wstring","wstring_view"] -},u={className:"function.dispatch",relevance:0,keywords:{ -_hint:["abort","abs","acos","apply","as_const","asin","atan","atan2","calloc","ceil","cerr","cin","clog","cos","cosh","cout","declval","endl","exchange","exit","exp","fabs","floor","fmod","forward","fprintf","fputs","free","frexp","fscanf","future","invoke","isalnum","isalpha","iscntrl","isdigit","isgraph","islower","isprint","ispunct","isspace","isupper","isxdigit","labs","launder","ldexp","log","log10","make_pair","make_shared","make_shared_for_overwrite","make_tuple","make_unique","malloc","memchr","memcmp","memcpy","memset","modf","move","pow","printf","putchar","puts","realloc","scanf","sin","sinh","snprintf","sprintf","sqrt","sscanf","std","stderr","stdin","stdout","strcat","strchr","strcmp","strcpy","strcspn","strlen","strncat","strncmp","strncpy","strpbrk","strrchr","strspn","strstr","swap","tan","tanh","terminate","to_underlying","tolower","toupper","vfprintf","visit","vprintf","vsprintf"] -}, -begin:n.concat(/\b/,/(?!decltype)/,/(?!if)/,/(?!for)/,/(?!switch)/,/(?!while)/,e.IDENT_RE,n.lookahead(/(<[^<>]+>|)\s*\(/)) -},b=[u,l,r,t,e.C_BLOCK_COMMENT_MODE,o,s],m={variants:[{begin:/=/,end:/;/},{ -begin:/\(/,end:/\)/},{beginKeywords:"new throw return else",end:/;/}], -keywords:g,contains:b.concat([{begin:/\(/,end:/\)/,keywords:g, -contains:b.concat(["self"]),relevance:0}]),relevance:0},p={className:"function", -begin:"("+i+"[\\*&\\s]+)+"+d,returnBegin:!0,end:/[{;=]/,excludeEnd:!0, -keywords:g,illegal:/[^\w\s\*&:<>.]/,contains:[{begin:"decltype\\(auto\\)", -keywords:g,relevance:0},{begin:d,returnBegin:!0,contains:[c],relevance:0},{ -begin:/::/,relevance:0},{begin:/:/,endsWithParent:!0,contains:[s,o]},{ -relevance:0,match:/,/},{className:"params",begin:/\(/,end:/\)/,keywords:g, -relevance:0,contains:[t,e.C_BLOCK_COMMENT_MODE,s,o,r,{begin:/\(/,end:/\)/, -keywords:g,relevance:0,contains:["self",t,e.C_BLOCK_COMMENT_MODE,s,o,r]}] -},r,t,e.C_BLOCK_COMMENT_MODE,l]};return{name:"C++", -aliases:["cc","c++","h++","hpp","hh","hxx","cxx"],keywords:g,illegal:"",keywords:g,contains:["self",r]},{begin:e.IDENT_RE+"::",keywords:g},{ -match:[/\b(?:enum(?:\s+(?:class|struct))?|class|struct|union)/,/\s+/,/\w+/], -className:{1:"keyword",3:"title.class"}}])}},grmr_csharp:e=>{const n={ -keyword:["abstract","as","base","break","case","catch","class","const","continue","do","else","event","explicit","extern","finally","fixed","for","foreach","goto","if","implicit","in","interface","internal","is","lock","namespace","new","operator","out","override","params","private","protected","public","readonly","record","ref","return","scoped","sealed","sizeof","stackalloc","static","struct","switch","this","throw","try","typeof","unchecked","unsafe","using","virtual","void","volatile","while"].concat(["add","alias","and","ascending","async","await","by","descending","equals","from","get","global","group","init","into","join","let","nameof","not","notnull","on","or","orderby","partial","remove","select","set","unmanaged","value|0","var","when","where","with","yield"]), -built_in:["bool","byte","char","decimal","delegate","double","dynamic","enum","float","int","long","nint","nuint","object","sbyte","short","string","ulong","uint","ushort"], -literal:["default","false","null","true"]},t=e.inherit(e.TITLE_MODE,{ -begin:"[a-zA-Z](\\.?\\w)*"}),a={className:"number",variants:[{ -begin:"\\b(0b[01']+)"},{ -begin:"(-?)\\b([\\d']+(\\.[\\d']*)?|\\.[\\d']+)(u|U|l|L|ul|UL|f|F|b|B)"},{ -begin:"(-?)(\\b0[xX][a-fA-F0-9']+|(\\b[\\d']+(\\.[\\d']*)?|\\.[\\d']+)([eE][-+]?[\\d']+)?)" -}],relevance:0},i={className:"string",begin:'@"',end:'"',contains:[{begin:'""'}] -},r=e.inherit(i,{illegal:/\n/}),s={className:"subst",begin:/\{/,end:/\}/, -keywords:n},o=e.inherit(s,{illegal:/\n/}),l={className:"string",begin:/\$"/, -end:'"',illegal:/\n/,contains:[{begin:/\{\{/},{begin:/\}\}/ -},e.BACKSLASH_ESCAPE,o]},c={className:"string",begin:/\$@"/,end:'"',contains:[{ -begin:/\{\{/},{begin:/\}\}/},{begin:'""'},s]},d=e.inherit(c,{illegal:/\n/, -contains:[{begin:/\{\{/},{begin:/\}\}/},{begin:'""'},o]}) -;s.contains=[c,l,i,e.APOS_STRING_MODE,e.QUOTE_STRING_MODE,a,e.C_BLOCK_COMMENT_MODE], -o.contains=[d,l,r,e.APOS_STRING_MODE,e.QUOTE_STRING_MODE,a,e.inherit(e.C_BLOCK_COMMENT_MODE,{ -illegal:/\n/})];const g={variants:[c,l,i,e.APOS_STRING_MODE,e.QUOTE_STRING_MODE] -},u={begin:"<",end:">",contains:[{beginKeywords:"in out"},t] -},b=e.IDENT_RE+"(<"+e.IDENT_RE+"(\\s*,\\s*"+e.IDENT_RE+")*>)?(\\[\\])?",m={ -begin:"@"+e.IDENT_RE,relevance:0};return{name:"C#",aliases:["cs","c#"], -keywords:n,illegal:/::/,contains:[e.COMMENT("///","$",{returnBegin:!0, -contains:[{className:"doctag",variants:[{begin:"///",relevance:0},{ -begin:"\x3c!--|--\x3e"},{begin:""}]}] -}),e.C_LINE_COMMENT_MODE,e.C_BLOCK_COMMENT_MODE,{className:"meta",begin:"#", -end:"$",keywords:{ -keyword:"if else elif endif define undef warning error line region endregion pragma checksum" -}},g,a,{beginKeywords:"class interface",relevance:0,end:/[{;=]/, -illegal:/[^\s:,]/,contains:[{beginKeywords:"where class" -},t,u,e.C_LINE_COMMENT_MODE,e.C_BLOCK_COMMENT_MODE]},{beginKeywords:"namespace", -relevance:0,end:/[{;=]/,illegal:/[^\s:]/, -contains:[t,e.C_LINE_COMMENT_MODE,e.C_BLOCK_COMMENT_MODE]},{ -beginKeywords:"record",relevance:0,end:/[{;=]/,illegal:/[^\s:]/, -contains:[t,u,e.C_LINE_COMMENT_MODE,e.C_BLOCK_COMMENT_MODE]},{className:"meta", -begin:"^\\s*\\[(?=[\\w])",excludeBegin:!0,end:"\\]",excludeEnd:!0,contains:[{ -className:"string",begin:/"/,end:/"/}]},{ -beginKeywords:"new return throw await else",relevance:0},{className:"function", -begin:"("+b+"\\s+)+"+e.IDENT_RE+"\\s*(<[^=]+>\\s*)?\\(",returnBegin:!0, -end:/\s*[{;=]/,excludeEnd:!0,keywords:n,contains:[{ -beginKeywords:"public private protected static internal protected abstract async extern override unsafe virtual new sealed partial", -relevance:0},{begin:e.IDENT_RE+"\\s*(<[^=]+>\\s*)?\\(",returnBegin:!0, -contains:[e.TITLE_MODE,u],relevance:0},{match:/\(\)/},{className:"params", -begin:/\(/,end:/\)/,excludeBegin:!0,excludeEnd:!0,keywords:n,relevance:0, -contains:[g,a,e.C_BLOCK_COMMENT_MODE] -},e.C_LINE_COMMENT_MODE,e.C_BLOCK_COMMENT_MODE]},m]}},grmr_css:e=>{ -const n=e.regex,t=te(e),a=[e.APOS_STRING_MODE,e.QUOTE_STRING_MODE];return{ -name:"CSS",case_insensitive:!0,illegal:/[=|'\$]/,keywords:{ -keyframePosition:"from to"},classNameAliases:{keyframePosition:"selector-tag"}, -contains:[t.BLOCK_COMMENT,{begin:/-(webkit|moz|ms|o)-(?=[a-z])/ -},t.CSS_NUMBER_MODE,{className:"selector-id",begin:/#[A-Za-z0-9_-]+/,relevance:0 -},{className:"selector-class",begin:"\\.[a-zA-Z-][a-zA-Z0-9_-]*",relevance:0 -},t.ATTRIBUTE_SELECTOR_MODE,{className:"selector-pseudo",variants:[{ -begin:":("+re.join("|")+")"},{begin:":(:)?("+se.join("|")+")"}] -},t.CSS_VARIABLE,{className:"attribute",begin:"\\b("+oe.join("|")+")\\b"},{ -begin:/:/,end:/[;}{]/, -contains:[t.BLOCK_COMMENT,t.HEXCOLOR,t.IMPORTANT,t.CSS_NUMBER_MODE,...a,{ -begin:/(url|data-uri)\(/,end:/\)/,relevance:0,keywords:{built_in:"url data-uri" -},contains:[...a,{className:"string",begin:/[^)]/,endsWithParent:!0, -excludeEnd:!0}]},t.FUNCTION_DISPATCH]},{begin:n.lookahead(/@/),end:"[{;]", -relevance:0,illegal:/:/,contains:[{className:"keyword",begin:/@-?\w[\w]*(-\w+)*/ -},{begin:/\s/,endsWithParent:!0,excludeEnd:!0,relevance:0,keywords:{ -$pattern:/[a-z-]+/,keyword:"and or not only",attribute:ie.join(" ")},contains:[{ -begin:/[a-z-]+(?=:)/,className:"attribute"},...a,t.CSS_NUMBER_MODE]}]},{ -className:"selector-tag",begin:"\\b("+ae.join("|")+")\\b"}]}},grmr_diff:e=>{ -const n=e.regex;return{name:"Diff",aliases:["patch"],contains:[{ -className:"meta",relevance:10, -match:n.either(/^@@ +-\d+,\d+ +\+\d+,\d+ +@@/,/^\*\*\* +\d+,\d+ +\*\*\*\*$/,/^--- +\d+,\d+ +----$/) -},{className:"comment",variants:[{ -begin:n.either(/Index: /,/^index/,/={3,}/,/^-{3}/,/^\*{3} /,/^\+{3}/,/^diff --git/), -end:/$/},{match:/^\*{15}$/}]},{className:"addition",begin:/^\+/,end:/$/},{ -className:"deletion",begin:/^-/,end:/$/},{className:"addition",begin:/^!/, -end:/$/}]}},grmr_go:e=>{const n={ -keyword:["break","case","chan","const","continue","default","defer","else","fallthrough","for","func","go","goto","if","import","interface","map","package","range","return","select","struct","switch","type","var"], -type:["bool","byte","complex64","complex128","error","float32","float64","int8","int16","int32","int64","string","uint8","uint16","uint32","uint64","int","uint","uintptr","rune"], -literal:["true","false","iota","nil"], -built_in:["append","cap","close","complex","copy","imag","len","make","new","panic","print","println","real","recover","delete"] -};return{name:"Go",aliases:["golang"],keywords:n,illegal:"{const n=e.regex;return{name:"GraphQL",aliases:["gql"], -case_insensitive:!0,disableAutodetect:!1,keywords:{ -keyword:["query","mutation","subscription","type","input","schema","directive","interface","union","scalar","fragment","enum","on"], -literal:["true","false","null"]}, -contains:[e.HASH_COMMENT_MODE,e.QUOTE_STRING_MODE,e.NUMBER_MODE,{ -scope:"punctuation",match:/[.]{3}/,relevance:0},{scope:"punctuation", -begin:/[\!\(\)\:\=\[\]\{\|\}]{1}/,relevance:0},{scope:"variable",begin:/\$/, -end:/\W/,excludeEnd:!0,relevance:0},{scope:"meta",match:/@\w+/,excludeEnd:!0},{ -scope:"symbol",begin:n.concat(/[_A-Za-z][_0-9A-Za-z]*/,n.lookahead(/\s*:/)), -relevance:0}],illegal:[/[;<']/,/BEGIN/]}},grmr_ini:e=>{const n=e.regex,t={ -className:"number",relevance:0,variants:[{begin:/([+-]+)?[\d]+_[\d_]+/},{ -begin:e.NUMBER_RE}]},a=e.COMMENT();a.variants=[{begin:/;/,end:/$/},{begin:/#/, -end:/$/}];const i={className:"variable",variants:[{begin:/\$[\w\d"][\w\d_]*/},{ -begin:/\$\{(.*?)\}/}]},r={className:"literal", -begin:/\bon|off|true|false|yes|no\b/},s={className:"string", -contains:[e.BACKSLASH_ESCAPE],variants:[{begin:"'''",end:"'''",relevance:10},{ -begin:'"""',end:'"""',relevance:10},{begin:'"',end:'"'},{begin:"'",end:"'"}] -},o={begin:/\[/,end:/\]/,contains:[a,r,i,s,t,"self"],relevance:0 -},l=n.either(/[A-Za-z0-9_-]+/,/"(\\"|[^"])*"/,/'[^']*'/);return{ -name:"TOML, also INI",aliases:["toml"],case_insensitive:!0,illegal:/\S/, -contains:[a,{className:"section",begin:/\[+/,end:/\]+/},{ -begin:n.concat(l,"(\\s*\\.\\s*",l,")*",n.lookahead(/\s*=\s*[^#\s]/)), -className:"attr",starts:{end:/$/,contains:[a,o,r,i,s,t]}}]}},grmr_java:e=>{ -const n=e.regex,t="[\xc0-\u02b8a-zA-Z_$][\xc0-\u02b8a-zA-Z_$0-9]*",a=t+ue("(?:<"+t+"~~~(?:\\s*,\\s*"+t+"~~~)*>)?",/~~~/g,2),i={ -keyword:["synchronized","abstract","private","var","static","if","const ","for","while","strictfp","finally","protected","import","native","final","void","enum","else","break","transient","catch","instanceof","volatile","case","assert","package","default","public","try","switch","continue","throws","protected","public","private","module","requires","exports","do","sealed","yield","permits"], -literal:["false","true","null"], -type:["char","boolean","long","float","int","byte","short","double"], -built_in:["super","this"]},r={className:"meta",begin:"@"+t,contains:[{ -begin:/\(/,end:/\)/,contains:["self"]}]},s={className:"params",begin:/\(/, -end:/\)/,keywords:i,relevance:0,contains:[e.C_BLOCK_COMMENT_MODE],endsParent:!0} -;return{name:"Java",aliases:["jsp"],keywords:i,illegal:/<\/|#/, -contains:[e.COMMENT("/\\*\\*","\\*/",{relevance:0,contains:[{begin:/\w+@/, -relevance:0},{className:"doctag",begin:"@[A-Za-z]+"}]}),{ -begin:/import java\.[a-z]+\./,keywords:"import",relevance:2 -},e.C_LINE_COMMENT_MODE,e.C_BLOCK_COMMENT_MODE,{begin:/"""/,end:/"""/, -className:"string",contains:[e.BACKSLASH_ESCAPE] -},e.APOS_STRING_MODE,e.QUOTE_STRING_MODE,{ -match:[/\b(?:class|interface|enum|extends|implements|new)/,/\s+/,t],className:{ -1:"keyword",3:"title.class"}},{match:/non-sealed/,scope:"keyword"},{ -begin:[n.concat(/(?!else)/,t),/\s+/,t,/\s+/,/=(?!=)/],className:{1:"type", -3:"variable",5:"operator"}},{begin:[/record/,/\s+/,t],className:{1:"keyword", -3:"title.class"},contains:[s,e.C_LINE_COMMENT_MODE,e.C_BLOCK_COMMENT_MODE]},{ -beginKeywords:"new throw return else",relevance:0},{ -begin:["(?:"+a+"\\s+)",e.UNDERSCORE_IDENT_RE,/\s*(?=\()/],className:{ -2:"title.function"},keywords:i,contains:[{className:"params",begin:/\(/, -end:/\)/,keywords:i,relevance:0, -contains:[r,e.APOS_STRING_MODE,e.QUOTE_STRING_MODE,ge,e.C_BLOCK_COMMENT_MODE] -},e.C_LINE_COMMENT_MODE,e.C_BLOCK_COMMENT_MODE]},ge,r]}},grmr_javascript:we, -grmr_json:e=>{const n=["true","false","null"],t={scope:"literal", -beginKeywords:n.join(" ")};return{name:"JSON",keywords:{literal:n},contains:[{ -className:"attr",begin:/"(\\.|[^\\"\r\n])*"(?=\s*:)/,relevance:1.01},{ -match:/[{}[\],:]/,className:"punctuation",relevance:0 -},e.QUOTE_STRING_MODE,t,e.C_NUMBER_MODE,e.C_LINE_COMMENT_MODE,e.C_BLOCK_COMMENT_MODE], -illegal:"\\S"}},grmr_kotlin:e=>{const n={ -keyword:"abstract as val var vararg get set class object open private protected public noinline crossinline dynamic final enum if else do while for when throw try catch finally import package is in fun override companion reified inline lateinit init interface annotation data sealed internal infix operator out by constructor super tailrec where const inner suspend typealias external expect actual", -built_in:"Byte Short Char Int Long Boolean Float Double Void Unit Nothing", -literal:"true false null"},t={className:"symbol",begin:e.UNDERSCORE_IDENT_RE+"@" -},a={className:"subst",begin:/\$\{/,end:/\}/,contains:[e.C_NUMBER_MODE]},i={ -className:"variable",begin:"\\$"+e.UNDERSCORE_IDENT_RE},r={className:"string", -variants:[{begin:'"""',end:'"""(?=[^"])',contains:[i,a]},{begin:"'",end:"'", -illegal:/\n/,contains:[e.BACKSLASH_ESCAPE]},{begin:'"',end:'"',illegal:/\n/, -contains:[e.BACKSLASH_ESCAPE,i,a]}]};a.contains.push(r);const s={ -className:"meta", -begin:"@(?:file|property|field|get|set|receiver|param|setparam|delegate)\\s*:(?:\\s*"+e.UNDERSCORE_IDENT_RE+")?" -},o={className:"meta",begin:"@"+e.UNDERSCORE_IDENT_RE,contains:[{begin:/\(/, -end:/\)/,contains:[e.inherit(r,{className:"string"}),"self"]}] -},l=ge,c=e.COMMENT("/\\*","\\*/",{contains:[e.C_BLOCK_COMMENT_MODE]}),d={ -variants:[{className:"type",begin:e.UNDERSCORE_IDENT_RE},{begin:/\(/,end:/\)/, -contains:[]}]},g=d;return g.variants[1].contains=[d],d.variants[1].contains=[g], -{name:"Kotlin",aliases:["kt","kts"],keywords:n, -contains:[e.COMMENT("/\\*\\*","\\*/",{relevance:0,contains:[{className:"doctag", -begin:"@[A-Za-z]+"}]}),e.C_LINE_COMMENT_MODE,c,{className:"keyword", -begin:/\b(break|continue|return|this)\b/,starts:{contains:[{className:"symbol", -begin:/@\w+/}]}},t,s,o,{className:"function",beginKeywords:"fun",end:"[(]|$", -returnBegin:!0,excludeEnd:!0,keywords:n,relevance:5,contains:[{ -begin:e.UNDERSCORE_IDENT_RE+"\\s*\\(",returnBegin:!0,relevance:0, -contains:[e.UNDERSCORE_TITLE_MODE]},{className:"type",begin://, -keywords:"reified",relevance:0},{className:"params",begin:/\(/,end:/\)/, -endsParent:!0,keywords:n,relevance:0,contains:[{begin:/:/,end:/[=,\/]/, -endsWithParent:!0,contains:[d,e.C_LINE_COMMENT_MODE,c],relevance:0 -},e.C_LINE_COMMENT_MODE,c,s,o,r,e.C_NUMBER_MODE]},c]},{ -begin:[/class|interface|trait/,/\s+/,e.UNDERSCORE_IDENT_RE],beginScope:{ -3:"title.class"},keywords:"class interface trait",end:/[:\{(]|$/,excludeEnd:!0, -illegal:"extends implements",contains:[{ -beginKeywords:"public protected internal private constructor" -},e.UNDERSCORE_TITLE_MODE,{className:"type",begin://,excludeBegin:!0, -excludeEnd:!0,relevance:0},{className:"type",begin:/[,:]\s*/,end:/[<\(,){\s]|$/, -excludeBegin:!0,returnEnd:!0},s,o]},r,{className:"meta",begin:"^#!/usr/bin/env", -end:"$",illegal:"\n"},l]}},grmr_less:e=>{ -const n=te(e),t=le,a="([\\w-]+|@\\{[\\w-]+\\})",i=[],r=[],s=e=>({ -className:"string",begin:"~?"+e+".*?"+e}),o=(e,n,t)=>({className:e,begin:n, -relevance:t}),l={$pattern:/[a-z-]+/,keyword:"and or not only", -attribute:ie.join(" ")},c={begin:"\\(",end:"\\)",contains:r,keywords:l, -relevance:0} -;r.push(e.C_LINE_COMMENT_MODE,e.C_BLOCK_COMMENT_MODE,s("'"),s('"'),n.CSS_NUMBER_MODE,{ -begin:"(url|data-uri)\\(",starts:{className:"string",end:"[\\)\\n]", -excludeEnd:!0} -},n.HEXCOLOR,c,o("variable","@@?[\\w-]+",10),o("variable","@\\{[\\w-]+\\}"),o("built_in","~?`[^`]*?`"),{ -className:"attribute",begin:"[\\w-]+\\s*:",end:":",returnBegin:!0,excludeEnd:!0 -},n.IMPORTANT,{beginKeywords:"and not"},n.FUNCTION_DISPATCH);const d=r.concat({ -begin:/\{/,end:/\}/,contains:i}),g={beginKeywords:"when",endsWithParent:!0, -contains:[{beginKeywords:"and not"}].concat(r)},u={begin:a+"\\s*:", -returnBegin:!0,end:/[;}]/,relevance:0,contains:[{begin:/-(webkit|moz|ms|o)-/ -},n.CSS_VARIABLE,{className:"attribute",begin:"\\b("+oe.join("|")+")\\b", -end:/(?=:)/,starts:{endsWithParent:!0,illegal:"[<=$]",relevance:0,contains:r}}] -},b={className:"keyword", -begin:"@(import|media|charset|font-face|(-[a-z]+-)?keyframes|supports|document|namespace|page|viewport|host)\\b", -starts:{end:"[;{}]",keywords:l,returnEnd:!0,contains:r,relevance:0}},m={ -className:"variable",variants:[{begin:"@[\\w-]+\\s*:",relevance:15},{ -begin:"@[\\w-]+"}],starts:{end:"[;}]",returnEnd:!0,contains:d}},p={variants:[{ -begin:"[\\.#:&\\[>]",end:"[;{}]"},{begin:a,end:/\{/}],returnBegin:!0, -returnEnd:!0,illegal:"[<='$\"]",relevance:0, -contains:[e.C_LINE_COMMENT_MODE,e.C_BLOCK_COMMENT_MODE,g,o("keyword","all\\b"),o("variable","@\\{[\\w-]+\\}"),{ -begin:"\\b("+ae.join("|")+")\\b",className:"selector-tag" -},n.CSS_NUMBER_MODE,o("selector-tag",a,0),o("selector-id","#"+a),o("selector-class","\\."+a,0),o("selector-tag","&",0),n.ATTRIBUTE_SELECTOR_MODE,{ -className:"selector-pseudo",begin:":("+re.join("|")+")"},{ -className:"selector-pseudo",begin:":(:)?("+se.join("|")+")"},{begin:/\(/, -end:/\)/,relevance:0,contains:d},{begin:"!important"},n.FUNCTION_DISPATCH]},_={ -begin:`[\\w-]+:(:)?(${t.join("|")})`,returnBegin:!0,contains:[p]} -;return i.push(e.C_LINE_COMMENT_MODE,e.C_BLOCK_COMMENT_MODE,b,m,_,u,p,g,n.FUNCTION_DISPATCH), -{name:"Less",case_insensitive:!0,illegal:"[=>'/<($\"]",contains:i}}, -grmr_lua:e=>{const n="\\[=*\\[",t="\\]=*\\]",a={begin:n,end:t,contains:["self"] -},i=[e.COMMENT("--(?!\\[=*\\[)","$"),e.COMMENT("--\\[=*\\[",t,{contains:[a], -relevance:10})];return{name:"Lua",keywords:{$pattern:e.UNDERSCORE_IDENT_RE, -literal:"true false nil", -keyword:"and break do else elseif end for goto if in local not or repeat return then until while", -built_in:"_G _ENV _VERSION __index __newindex __mode __call __metatable __tostring __len __gc __add __sub __mul __div __mod __pow __concat __unm __eq __lt __le assert collectgarbage dofile error getfenv getmetatable ipairs load loadfile loadstring module next pairs pcall print rawequal rawget rawset require select setfenv setmetatable tonumber tostring type unpack xpcall arg self coroutine resume yield status wrap create running debug getupvalue debug sethook getmetatable gethook setmetatable setlocal traceback setfenv getinfo setupvalue getlocal getregistry getfenv io lines write close flush open output type read stderr stdin input stdout popen tmpfile math log max acos huge ldexp pi cos tanh pow deg tan cosh sinh random randomseed frexp ceil floor rad abs sqrt modf asin min mod fmod log10 atan2 exp sin atan os exit setlocale date getenv difftime remove time clock tmpname rename execute package preload loadlib loaded loaders cpath config path seeall string sub upper len gfind rep find match char dump gmatch reverse byte format gsub lower table setn insert getn foreachi maxn foreach concat sort remove" -},contains:i.concat([{className:"function",beginKeywords:"function",end:"\\)", -contains:[e.inherit(e.TITLE_MODE,{ -begin:"([_a-zA-Z]\\w*\\.)*([_a-zA-Z]\\w*:)?[_a-zA-Z]\\w*"}),{className:"params", -begin:"\\(",endsWithParent:!0,contains:i}].concat(i) -},e.C_NUMBER_MODE,e.APOS_STRING_MODE,e.QUOTE_STRING_MODE,{className:"string", -begin:n,end:t,contains:[a],relevance:5}])}},grmr_makefile:e=>{const n={ -className:"variable",variants:[{begin:"\\$\\("+e.UNDERSCORE_IDENT_RE+"\\)", -contains:[e.BACKSLASH_ESCAPE]},{begin:/\$[@%{ -const n=e.regex,t=n.concat(/[\p{L}_]/u,n.optional(/[\p{L}0-9_.-]*:/u),/[\p{L}0-9_.-]*/u),a={ -className:"symbol",begin:/&[a-z]+;|&#[0-9]+;|&#x[a-f0-9]+;/},i={begin:/\s/, -contains:[{className:"keyword",begin:/#?[a-z_][a-z1-9_-]+/,illegal:/\n/}] -},r=e.inherit(i,{begin:/\(/,end:/\)/}),s=e.inherit(e.APOS_STRING_MODE,{ -className:"string"}),o=e.inherit(e.QUOTE_STRING_MODE,{className:"string"}),l={ -endsWithParent:!0,illegal:/`]+/}]}]}]};return{ -name:"HTML, XML", -aliases:["html","xhtml","rss","atom","xjb","xsd","xsl","plist","wsf","svg"], -case_insensitive:!0,unicodeRegex:!0,contains:[{className:"meta",begin://,relevance:10,contains:[i,o,s,r,{begin:/\[/,end:/\]/,contains:[{ -className:"meta",begin://,contains:[i,r,o,s]}]}] -},e.COMMENT(//,{relevance:10}),{begin://, -relevance:10},a,{className:"meta",end:/\?>/,variants:[{begin:/<\?xml/, -relevance:10,contains:[o]},{begin:/<\?[a-z][a-z0-9]+/}]},{className:"tag", -begin:/)/,end:/>/,keywords:{name:"style"},contains:[l],starts:{ -end:/<\/style>/,returnEnd:!0,subLanguage:["css","xml"]}},{className:"tag", -begin:/)/,end:/>/,keywords:{name:"script"},contains:[l],starts:{ -end:/<\/script>/,returnEnd:!0,subLanguage:["javascript","handlebars","xml"]}},{ -className:"tag",begin:/<>|<\/>/},{className:"tag", -begin:n.concat(//,/>/,/\s/)))), -end:/\/?>/,contains:[{className:"name",begin:t,relevance:0,starts:l}]},{ -className:"tag",begin:n.concat(/<\//,n.lookahead(n.concat(t,/>/))),contains:[{ -className:"name",begin:t,relevance:0},{begin:/>/,relevance:0,endsParent:!0}]}]} -},grmr_markdown:e=>{const n={begin:/<\/?[A-Za-z_]/,end:">",subLanguage:"xml", -relevance:0},t={variants:[{begin:/\[.+?\]\[.*?\]/,relevance:0},{ -begin:/\[.+?\]\(((data|javascript|mailto):|(?:http|ftp)s?:\/\/).*?\)/, -relevance:2},{ -begin:e.regex.concat(/\[.+?\]\(/,/[A-Za-z][A-Za-z0-9+.-]*/,/:\/\/.*?\)/), -relevance:2},{begin:/\[.+?\]\([./?&#].*?\)/,relevance:1},{ -begin:/\[.*?\]\(.*?\)/,relevance:0}],returnBegin:!0,contains:[{match:/\[(?=\])/ -},{className:"string",relevance:0,begin:"\\[",end:"\\]",excludeBegin:!0, -returnEnd:!0},{className:"link",relevance:0,begin:"\\]\\(",end:"\\)", -excludeBegin:!0,excludeEnd:!0},{className:"symbol",relevance:0,begin:"\\]\\[", -end:"\\]",excludeBegin:!0,excludeEnd:!0}]},a={className:"strong",contains:[], -variants:[{begin:/_{2}(?!\s)/,end:/_{2}/},{begin:/\*{2}(?!\s)/,end:/\*{2}/}] -},i={className:"emphasis",contains:[],variants:[{begin:/\*(?![*\s])/,end:/\*/},{ -begin:/_(?![_\s])/,end:/_/,relevance:0}]},r=e.inherit(a,{contains:[] -}),s=e.inherit(i,{contains:[]});a.contains.push(s),i.contains.push(r) -;let o=[n,t];return[a,i,r,s].forEach((e=>{e.contains=e.contains.concat(o) -})),o=o.concat(a,i),{name:"Markdown",aliases:["md","mkdown","mkd"],contains:[{ -className:"section",variants:[{begin:"^#{1,6}",end:"$",contains:o},{ -begin:"(?=^.+?\\n[=-]{2,}$)",contains:[{begin:"^[=-]*$"},{begin:"^",end:"\\n", -contains:o}]}]},n,{className:"bullet",begin:"^[ \t]*([*+-]|(\\d+\\.))(?=\\s+)", -end:"\\s+",excludeEnd:!0},a,i,{className:"quote",begin:"^>\\s+",contains:o, -end:"$"},{className:"code",variants:[{begin:"(`{3,})[^`](.|\\n)*?\\1`*[ ]*"},{ -begin:"(~{3,})[^~](.|\\n)*?\\1~*[ ]*"},{begin:"```",end:"```+[ ]*$"},{ -begin:"~~~",end:"~~~+[ ]*$"},{begin:"`.+?`"},{begin:"(?=^( {4}|\\t))", -contains:[{begin:"^( {4}|\\t)",end:"(\\n)$"}],relevance:0}]},{ -begin:"^[-\\*]{3,}",end:"$"},t,{begin:/^\[[^\n]+\]:/,returnBegin:!0,contains:[{ -className:"symbol",begin:/\[/,end:/\]/,excludeBegin:!0,excludeEnd:!0},{ -className:"link",begin:/:\s*/,end:/$/,excludeBegin:!0}]}]}},grmr_objectivec:e=>{ -const n=/[a-zA-Z@][a-zA-Z0-9_]*/,t={$pattern:n, -keyword:["@interface","@class","@protocol","@implementation"]};return{ -name:"Objective-C",aliases:["mm","objc","obj-c","obj-c++","objective-c++"], -keywords:{"variable.language":["this","super"],$pattern:n, -keyword:["while","export","sizeof","typedef","const","struct","for","union","volatile","static","mutable","if","do","return","goto","enum","else","break","extern","asm","case","default","register","explicit","typename","switch","continue","inline","readonly","assign","readwrite","self","@synchronized","id","typeof","nonatomic","IBOutlet","IBAction","strong","weak","copy","in","out","inout","bycopy","byref","oneway","__strong","__weak","__block","__autoreleasing","@private","@protected","@public","@try","@property","@end","@throw","@catch","@finally","@autoreleasepool","@synthesize","@dynamic","@selector","@optional","@required","@encode","@package","@import","@defs","@compatibility_alias","__bridge","__bridge_transfer","__bridge_retained","__bridge_retain","__covariant","__contravariant","__kindof","_Nonnull","_Nullable","_Null_unspecified","__FUNCTION__","__PRETTY_FUNCTION__","__attribute__","getter","setter","retain","unsafe_unretained","nonnull","nullable","null_unspecified","null_resettable","class","instancetype","NS_DESIGNATED_INITIALIZER","NS_UNAVAILABLE","NS_REQUIRES_SUPER","NS_RETURNS_INNER_POINTER","NS_INLINE","NS_AVAILABLE","NS_DEPRECATED","NS_ENUM","NS_OPTIONS","NS_SWIFT_UNAVAILABLE","NS_ASSUME_NONNULL_BEGIN","NS_ASSUME_NONNULL_END","NS_REFINED_FOR_SWIFT","NS_SWIFT_NAME","NS_SWIFT_NOTHROW","NS_DURING","NS_HANDLER","NS_ENDHANDLER","NS_VALUERETURN","NS_VOIDRETURN"], -literal:["false","true","FALSE","TRUE","nil","YES","NO","NULL"], -built_in:["dispatch_once_t","dispatch_queue_t","dispatch_sync","dispatch_async","dispatch_once"], -type:["int","float","char","unsigned","signed","short","long","double","wchar_t","unichar","void","bool","BOOL","id|0","_Bool"] -},illegal:"/,end:/$/,illegal:"\\n" -},e.C_LINE_COMMENT_MODE,e.C_BLOCK_COMMENT_MODE]},{className:"class", -begin:"("+t.keyword.join("|")+")\\b",end:/(\{|$)/,excludeEnd:!0,keywords:t, -contains:[e.UNDERSCORE_TITLE_MODE]},{begin:"\\."+e.UNDERSCORE_IDENT_RE, -relevance:0}]}},grmr_perl:e=>{const n=e.regex,t=/[dualxmsipngr]{0,12}/,a={ -$pattern:/[\w.]+/, -keyword:"abs accept alarm and atan2 bind binmode bless break caller chdir chmod chomp chop chown chr chroot close closedir connect continue cos crypt dbmclose dbmopen defined delete die do dump each else elsif endgrent endhostent endnetent endprotoent endpwent endservent eof eval exec exists exit exp fcntl fileno flock for foreach fork format formline getc getgrent getgrgid getgrnam gethostbyaddr gethostbyname gethostent getlogin getnetbyaddr getnetbyname getnetent getpeername getpgrp getpriority getprotobyname getprotobynumber getprotoent getpwent getpwnam getpwuid getservbyname getservbyport getservent getsockname getsockopt given glob gmtime goto grep gt hex if index int ioctl join keys kill last lc lcfirst length link listen local localtime log lstat lt ma map mkdir msgctl msgget msgrcv msgsnd my ne next no not oct open opendir or ord our pack package pipe pop pos print printf prototype push q|0 qq quotemeta qw qx rand read readdir readline readlink readpipe recv redo ref rename require reset return reverse rewinddir rindex rmdir say scalar seek seekdir select semctl semget semop send setgrent sethostent setnetent setpgrp setpriority setprotoent setpwent setservent setsockopt shift shmctl shmget shmread shmwrite shutdown sin sleep socket socketpair sort splice split sprintf sqrt srand stat state study sub substr symlink syscall sysopen sysread sysseek system syswrite tell telldir tie tied time times tr truncate uc ucfirst umask undef unless unlink unpack unshift untie until use utime values vec wait waitpid wantarray warn when while write x|0 xor y|0" -},i={className:"subst",begin:"[$@]\\{",end:"\\}",keywords:a},r={begin:/->\{/, -end:/\}/},s={variants:[{begin:/\$\d/},{ -begin:n.concat(/[$%@](\^\w\b|#\w+(::\w+)*|\{\w+\}|\w+(::\w*)*)/,"(?![A-Za-z])(?![@$%])") -},{begin:/[$%@][^\s\w{]/,relevance:0}] -},o=[e.BACKSLASH_ESCAPE,i,s],l=[/!/,/\//,/\|/,/\?/,/'/,/"/,/#/],c=(e,a,i="\\1")=>{ -const r="\\1"===i?i:n.concat(i,a) -;return n.concat(n.concat("(?:",e,")"),a,/(?:\\.|[^\\\/])*?/,r,/(?:\\.|[^\\\/])*?/,i,t) -},d=(e,a,i)=>n.concat(n.concat("(?:",e,")"),a,/(?:\\.|[^\\\/])*?/,i,t),g=[s,e.HASH_COMMENT_MODE,e.COMMENT(/^=\w/,/=cut/,{ -endsWithParent:!0}),r,{className:"string",contains:o,variants:[{ -begin:"q[qwxr]?\\s*\\(",end:"\\)",relevance:5},{begin:"q[qwxr]?\\s*\\[", -end:"\\]",relevance:5},{begin:"q[qwxr]?\\s*\\{",end:"\\}",relevance:5},{ -begin:"q[qwxr]?\\s*\\|",end:"\\|",relevance:5},{begin:"q[qwxr]?\\s*<",end:">", -relevance:5},{begin:"qw\\s+q",end:"q",relevance:5},{begin:"'",end:"'", -contains:[e.BACKSLASH_ESCAPE]},{begin:'"',end:'"'},{begin:"`",end:"`", -contains:[e.BACKSLASH_ESCAPE]},{begin:/\{\w+\}/,relevance:0},{ -begin:"-?\\w+\\s*=>",relevance:0}]},{className:"number", -begin:"(\\b0[0-7_]+)|(\\b0x[0-9a-fA-F_]+)|(\\b[1-9][0-9_]*(\\.[0-9_]+)?)|[0_]\\b", -relevance:0},{ -begin:"(\\/\\/|"+e.RE_STARTERS_RE+"|\\b(split|return|print|reverse|grep)\\b)\\s*", -keywords:"split return print reverse grep",relevance:0, -contains:[e.HASH_COMMENT_MODE,{className:"regexp",variants:[{ -begin:c("s|tr|y",n.either(...l,{capture:!0}))},{begin:c("s|tr|y","\\(","\\)")},{ -begin:c("s|tr|y","\\[","\\]")},{begin:c("s|tr|y","\\{","\\}")}],relevance:2},{ -className:"regexp",variants:[{begin:/(m|qr)\/\//,relevance:0},{ -begin:d("(?:m|qr)?",/\//,/\//)},{begin:d("m|qr",n.either(...l,{capture:!0 -}),/\1/)},{begin:d("m|qr",/\(/,/\)/)},{begin:d("m|qr",/\[/,/\]/)},{ -begin:d("m|qr",/\{/,/\}/)}]}]},{className:"function",beginKeywords:"sub", -end:"(\\s*\\(.*?\\))?[;{]",excludeEnd:!0,relevance:5,contains:[e.TITLE_MODE]},{ -begin:"-\\w\\b",relevance:0},{begin:"^__DATA__$",end:"^__END__$", -subLanguage:"mojolicious",contains:[{begin:"^@@.*",end:"$",className:"comment"}] -}];return i.contains=g,r.contains=g,{name:"Perl",aliases:["pl","pm"],keywords:a, -contains:g}},grmr_php:e=>{ -const n=e.regex,t=/(?![A-Za-z0-9])(?![$])/,a=n.concat(/[a-zA-Z_\x7f-\xff][a-zA-Z0-9_\x7f-\xff]*/,t),i=n.concat(/(\\?[A-Z][a-z0-9_\x7f-\xff]+|\\?[A-Z]+(?=[A-Z][a-z0-9_\x7f-\xff])){1,}/,t),r={ -scope:"variable",match:"\\$+"+a},s={scope:"subst",variants:[{begin:/\$\w+/},{ -begin:/\{\$/,end:/\}/}]},o=e.inherit(e.APOS_STRING_MODE,{illegal:null -}),l="[ \t\n]",c={scope:"string",variants:[e.inherit(e.QUOTE_STRING_MODE,{ -illegal:null,contains:e.QUOTE_STRING_MODE.contains.concat(s) -}),o,e.END_SAME_AS_BEGIN({begin:/<<<[ \t]*(\w+)\n/,end:/[ \t]*(\w+)\b/, -contains:e.QUOTE_STRING_MODE.contains.concat(s)})]},d={scope:"number", -variants:[{begin:"\\b0[bB][01]+(?:_[01]+)*\\b"},{ -begin:"\\b0[oO][0-7]+(?:_[0-7]+)*\\b"},{ -begin:"\\b0[xX][\\da-fA-F]+(?:_[\\da-fA-F]+)*\\b"},{ -begin:"(?:\\b\\d+(?:_\\d+)*(\\.(?:\\d+(?:_\\d+)*))?|\\B\\.\\d+)(?:[eE][+-]?\\d+)?" -}],relevance:0 -},g=["false","null","true"],u=["__CLASS__","__DIR__","__FILE__","__FUNCTION__","__COMPILER_HALT_OFFSET__","__LINE__","__METHOD__","__NAMESPACE__","__TRAIT__","die","echo","exit","include","include_once","print","require","require_once","array","abstract","and","as","binary","bool","boolean","break","callable","case","catch","class","clone","const","continue","declare","default","do","double","else","elseif","empty","enddeclare","endfor","endforeach","endif","endswitch","endwhile","enum","eval","extends","final","finally","float","for","foreach","from","global","goto","if","implements","instanceof","insteadof","int","integer","interface","isset","iterable","list","match|0","mixed","new","never","object","or","private","protected","public","readonly","real","return","string","switch","throw","trait","try","unset","use","var","void","while","xor","yield"],b=["Error|0","AppendIterator","ArgumentCountError","ArithmeticError","ArrayIterator","ArrayObject","AssertionError","BadFunctionCallException","BadMethodCallException","CachingIterator","CallbackFilterIterator","CompileError","Countable","DirectoryIterator","DivisionByZeroError","DomainException","EmptyIterator","ErrorException","Exception","FilesystemIterator","FilterIterator","GlobIterator","InfiniteIterator","InvalidArgumentException","IteratorIterator","LengthException","LimitIterator","LogicException","MultipleIterator","NoRewindIterator","OutOfBoundsException","OutOfRangeException","OuterIterator","OverflowException","ParentIterator","ParseError","RangeException","RecursiveArrayIterator","RecursiveCachingIterator","RecursiveCallbackFilterIterator","RecursiveDirectoryIterator","RecursiveFilterIterator","RecursiveIterator","RecursiveIteratorIterator","RecursiveRegexIterator","RecursiveTreeIterator","RegexIterator","RuntimeException","SeekableIterator","SplDoublyLinkedList","SplFileInfo","SplFileObject","SplFixedArray","SplHeap","SplMaxHeap","SplMinHeap","SplObjectStorage","SplObserver","SplPriorityQueue","SplQueue","SplStack","SplSubject","SplTempFileObject","TypeError","UnderflowException","UnexpectedValueException","UnhandledMatchError","ArrayAccess","BackedEnum","Closure","Fiber","Generator","Iterator","IteratorAggregate","Serializable","Stringable","Throwable","Traversable","UnitEnum","WeakReference","WeakMap","Directory","__PHP_Incomplete_Class","parent","php_user_filter","self","static","stdClass"],m={ -keyword:u,literal:(e=>{const n=[];return e.forEach((e=>{ -n.push(e),e.toLowerCase()===e?n.push(e.toUpperCase()):n.push(e.toLowerCase()) -})),n})(g),built_in:b},p=e=>e.map((e=>e.replace(/\|\d+$/,""))),_={variants:[{ -match:[/new/,n.concat(l,"+"),n.concat("(?!",p(b).join("\\b|"),"\\b)"),i],scope:{ -1:"keyword",4:"title.class"}}]},h=n.concat(a,"\\b(?!\\()"),f={variants:[{ -match:[n.concat(/::/,n.lookahead(/(?!class\b)/)),h],scope:{2:"variable.constant" -}},{match:[/::/,/class/],scope:{2:"variable.language"}},{ -match:[i,n.concat(/::/,n.lookahead(/(?!class\b)/)),h],scope:{1:"title.class", -3:"variable.constant"}},{match:[i,n.concat("::",n.lookahead(/(?!class\b)/))], -scope:{1:"title.class"}},{match:[i,/::/,/class/],scope:{1:"title.class", -3:"variable.language"}}]},E={scope:"attr", -match:n.concat(a,n.lookahead(":"),n.lookahead(/(?!::)/))},y={relevance:0, -begin:/\(/,end:/\)/,keywords:m,contains:[E,r,f,e.C_BLOCK_COMMENT_MODE,c,d,_] -},w={relevance:0, -match:[/\b/,n.concat("(?!fn\\b|function\\b|",p(u).join("\\b|"),"|",p(b).join("\\b|"),"\\b)"),a,n.concat(l,"*"),n.lookahead(/(?=\()/)], -scope:{3:"title.function.invoke"},contains:[y]};y.contains.push(w) -;const N=[E,f,e.C_BLOCK_COMMENT_MODE,c,d,_];return{case_insensitive:!1, -keywords:m,contains:[{begin:n.concat(/#\[\s*/,i),beginScope:"meta",end:/]/, -endScope:"meta",keywords:{literal:g,keyword:["new","array"]},contains:[{ -begin:/\[/,end:/]/,keywords:{literal:g,keyword:["new","array"]}, -contains:["self",...N]},...N,{scope:"meta",match:i}] -},e.HASH_COMMENT_MODE,e.COMMENT("//","$"),e.COMMENT("/\\*","\\*/",{contains:[{ -scope:"doctag",match:"@[A-Za-z]+"}]}),{match:/__halt_compiler\(\);/, -keywords:"__halt_compiler",starts:{scope:"comment",end:e.MATCH_NOTHING_RE, -contains:[{match:/\?>/,scope:"meta",endsParent:!0}]}},{scope:"meta",variants:[{ -begin:/<\?php/,relevance:10},{begin:/<\?=/},{begin:/<\?/,relevance:.1},{ -begin:/\?>/}]},{scope:"variable.language",match:/\$this\b/},r,w,f,{ -match:[/const/,/\s/,a],scope:{1:"keyword",3:"variable.constant"}},_,{ -scope:"function",relevance:0,beginKeywords:"fn function",end:/[;{]/, -excludeEnd:!0,illegal:"[$%\\[]",contains:[{beginKeywords:"use" -},e.UNDERSCORE_TITLE_MODE,{begin:"=>",endsParent:!0},{scope:"params", -begin:"\\(",end:"\\)",excludeBegin:!0,excludeEnd:!0,keywords:m, -contains:["self",r,f,e.C_BLOCK_COMMENT_MODE,c,d]}]},{scope:"class",variants:[{ -beginKeywords:"enum",illegal:/[($"]/},{beginKeywords:"class interface trait", -illegal:/[:($"]/}],relevance:0,end:/\{/,excludeEnd:!0,contains:[{ -beginKeywords:"extends implements"},e.UNDERSCORE_TITLE_MODE]},{ -beginKeywords:"namespace",relevance:0,end:";",illegal:/[.']/, -contains:[e.inherit(e.UNDERSCORE_TITLE_MODE,{scope:"title.class"})]},{ -beginKeywords:"use",relevance:0,end:";",contains:[{ -match:/\b(as|const|function)\b/,scope:"keyword"},e.UNDERSCORE_TITLE_MODE]},c,d]} -},grmr_php_template:e=>({name:"PHP template",subLanguage:"xml",contains:[{ -begin:/<\?(php|=)?/,end:/\?>/,subLanguage:"php",contains:[{begin:"/\\*", -end:"\\*/",skip:!0},{begin:'b"',end:'"',skip:!0},{begin:"b'",end:"'",skip:!0 -},e.inherit(e.APOS_STRING_MODE,{illegal:null,className:null,contains:null, -skip:!0}),e.inherit(e.QUOTE_STRING_MODE,{illegal:null,className:null, -contains:null,skip:!0})]}]}),grmr_plaintext:e=>({name:"Plain text", -aliases:["text","txt"],disableAutodetect:!0}),grmr_python:e=>{ -const n=e.regex,t=/[\p{XID_Start}_]\p{XID_Continue}*/u,a=["and","as","assert","async","await","break","case","class","continue","def","del","elif","else","except","finally","for","from","global","if","import","in","is","lambda","match","nonlocal|10","not","or","pass","raise","return","try","while","with","yield"],i={ -$pattern:/[A-Za-z]\w+|__\w+__/,keyword:a, -built_in:["__import__","abs","all","any","ascii","bin","bool","breakpoint","bytearray","bytes","callable","chr","classmethod","compile","complex","delattr","dict","dir","divmod","enumerate","eval","exec","filter","float","format","frozenset","getattr","globals","hasattr","hash","help","hex","id","input","int","isinstance","issubclass","iter","len","list","locals","map","max","memoryview","min","next","object","oct","open","ord","pow","print","property","range","repr","reversed","round","set","setattr","slice","sorted","staticmethod","str","sum","super","tuple","type","vars","zip"], -literal:["__debug__","Ellipsis","False","None","NotImplemented","True"], -type:["Any","Callable","Coroutine","Dict","List","Literal","Generic","Optional","Sequence","Set","Tuple","Type","Union"] -},r={className:"meta",begin:/^(>>>|\.\.\.) /},s={className:"subst",begin:/\{/, -end:/\}/,keywords:i,illegal:/#/},o={begin:/\{\{/,relevance:0},l={ -className:"string",contains:[e.BACKSLASH_ESCAPE],variants:[{ -begin:/([uU]|[bB]|[rR]|[bB][rR]|[rR][bB])?'''/,end:/'''/, -contains:[e.BACKSLASH_ESCAPE,r],relevance:10},{ -begin:/([uU]|[bB]|[rR]|[bB][rR]|[rR][bB])?"""/,end:/"""/, -contains:[e.BACKSLASH_ESCAPE,r],relevance:10},{ -begin:/([fF][rR]|[rR][fF]|[fF])'''/,end:/'''/, -contains:[e.BACKSLASH_ESCAPE,r,o,s]},{begin:/([fF][rR]|[rR][fF]|[fF])"""/, -end:/"""/,contains:[e.BACKSLASH_ESCAPE,r,o,s]},{begin:/([uU]|[rR])'/,end:/'/, -relevance:10},{begin:/([uU]|[rR])"/,end:/"/,relevance:10},{ -begin:/([bB]|[bB][rR]|[rR][bB])'/,end:/'/},{begin:/([bB]|[bB][rR]|[rR][bB])"/, -end:/"/},{begin:/([fF][rR]|[rR][fF]|[fF])'/,end:/'/, -contains:[e.BACKSLASH_ESCAPE,o,s]},{begin:/([fF][rR]|[rR][fF]|[fF])"/,end:/"/, -contains:[e.BACKSLASH_ESCAPE,o,s]},e.APOS_STRING_MODE,e.QUOTE_STRING_MODE] -},c="[0-9](_?[0-9])*",d=`(\\b(${c}))?\\.(${c})|\\b(${c})\\.`,g="\\b|"+a.join("|"),u={ -className:"number",relevance:0,variants:[{ -begin:`(\\b(${c})|(${d}))[eE][+-]?(${c})[jJ]?(?=${g})`},{begin:`(${d})[jJ]?`},{ -begin:`\\b([1-9](_?[0-9])*|0+(_?0)*)[lLjJ]?(?=${g})`},{ -begin:`\\b0[bB](_?[01])+[lL]?(?=${g})`},{begin:`\\b0[oO](_?[0-7])+[lL]?(?=${g})` -},{begin:`\\b0[xX](_?[0-9a-fA-F])+[lL]?(?=${g})`},{begin:`\\b(${c})[jJ](?=${g})` -}]},b={className:"comment",begin:n.lookahead(/# type:/),end:/$/,keywords:i, -contains:[{begin:/# type:/},{begin:/#/,end:/\b\B/,endsWithParent:!0}]},m={ -className:"params",variants:[{className:"",begin:/\(\s*\)/,skip:!0},{begin:/\(/, -end:/\)/,excludeBegin:!0,excludeEnd:!0,keywords:i, -contains:["self",r,u,l,e.HASH_COMMENT_MODE]}]};return s.contains=[l,u,r],{ -name:"Python",aliases:["py","gyp","ipython"],unicodeRegex:!0,keywords:i, -illegal:/(<\/|->|\?)|=>/,contains:[r,u,{begin:/\bself\b/},{beginKeywords:"if", -relevance:0},l,b,e.HASH_COMMENT_MODE,{match:[/\bdef/,/\s+/,t],scope:{ -1:"keyword",3:"title.function"},contains:[m]},{variants:[{ -match:[/\bclass/,/\s+/,t,/\s*/,/\(\s*/,t,/\s*\)/]},{match:[/\bclass/,/\s+/,t]}], -scope:{1:"keyword",3:"title.class",6:"title.class.inherited"}},{ -className:"meta",begin:/^[\t ]*@/,end:/(?=#)|$/,contains:[u,m,l]}]}}, -grmr_python_repl:e=>({aliases:["pycon"],contains:[{className:"meta.prompt", -starts:{end:/ |$/,starts:{end:"$",subLanguage:"python"}},variants:[{ -begin:/^>>>(?=[ ]|$)/},{begin:/^\.\.\.(?=[ ]|$)/}]}]}),grmr_r:e=>{ -const n=e.regex,t=/(?:(?:[a-zA-Z]|\.[._a-zA-Z])[._a-zA-Z0-9]*)|\.(?!\d)/,a=n.either(/0[xX][0-9a-fA-F]+\.[0-9a-fA-F]*[pP][+-]?\d+i?/,/0[xX][0-9a-fA-F]+(?:[pP][+-]?\d+)?[Li]?/,/(?:\d+(?:\.\d*)?|\.\d+)(?:[eE][+-]?\d+)?[Li]?/),i=/[=!<>:]=|\|\||&&|:::?|<-|<<-|->>|->|\|>|[-+*\/?!$&|:<=>@^~]|\*\*/,r=n.either(/[()]/,/[{}]/,/\[\[/,/[[\]]/,/\\/,/,/) -;return{name:"R",keywords:{$pattern:t, -keyword:"function if in break next repeat else for while", -literal:"NULL NA TRUE FALSE Inf NaN NA_integer_|10 NA_real_|10 NA_character_|10 NA_complex_|10", -built_in:"LETTERS letters month.abb month.name pi T F abs acos acosh all any anyNA Arg as.call as.character as.complex as.double as.environment as.integer as.logical as.null.default as.numeric as.raw asin asinh atan atanh attr attributes baseenv browser c call ceiling class Conj cos cosh cospi cummax cummin cumprod cumsum digamma dim dimnames emptyenv exp expression floor forceAndCall gamma gc.time globalenv Im interactive invisible is.array is.atomic is.call is.character is.complex is.double is.environment is.expression is.finite is.function is.infinite is.integer is.language is.list is.logical is.matrix is.na is.name is.nan is.null is.numeric is.object is.pairlist is.raw is.recursive is.single is.symbol lazyLoadDBfetch length lgamma list log max min missing Mod names nargs nzchar oldClass on.exit pos.to.env proc.time prod quote range Re rep retracemem return round seq_along seq_len seq.int sign signif sin sinh sinpi sqrt standardGeneric substitute sum switch tan tanh tanpi tracemem trigamma trunc unclass untracemem UseMethod xtfrm" -},contains:[e.COMMENT(/#'/,/$/,{contains:[{scope:"doctag",match:/@examples/, -starts:{end:n.lookahead(n.either(/\n^#'\s*(?=@[a-zA-Z]+)/,/\n^(?!#')/)), -endsParent:!0}},{scope:"doctag",begin:"@param",end:/$/,contains:[{ -scope:"variable",variants:[{match:t},{match:/`(?:\\.|[^`\\])+`/}],endsParent:!0 -}]},{scope:"doctag",match:/@[a-zA-Z]+/},{scope:"keyword",match:/\\[a-zA-Z]+/}] -}),e.HASH_COMMENT_MODE,{scope:"string",contains:[e.BACKSLASH_ESCAPE], -variants:[e.END_SAME_AS_BEGIN({begin:/[rR]"(-*)\(/,end:/\)(-*)"/ -}),e.END_SAME_AS_BEGIN({begin:/[rR]"(-*)\{/,end:/\}(-*)"/ -}),e.END_SAME_AS_BEGIN({begin:/[rR]"(-*)\[/,end:/\](-*)"/ -}),e.END_SAME_AS_BEGIN({begin:/[rR]'(-*)\(/,end:/\)(-*)'/ -}),e.END_SAME_AS_BEGIN({begin:/[rR]'(-*)\{/,end:/\}(-*)'/ -}),e.END_SAME_AS_BEGIN({begin:/[rR]'(-*)\[/,end:/\](-*)'/}),{begin:'"',end:'"', -relevance:0},{begin:"'",end:"'",relevance:0}]},{relevance:0,variants:[{scope:{ -1:"operator",2:"number"},match:[i,a]},{scope:{1:"operator",2:"number"}, -match:[/%[^%]*%/,a]},{scope:{1:"punctuation",2:"number"},match:[r,a]},{scope:{ -2:"number"},match:[/[^a-zA-Z0-9._]|^/,a]}]},{scope:{3:"operator"}, -match:[t,/\s+/,/<-/,/\s+/]},{scope:"operator",relevance:0,variants:[{match:i},{ -match:/%[^%]*%/}]},{scope:"punctuation",relevance:0,match:r},{begin:"`",end:"`", -contains:[{begin:/\\./}]}]}},grmr_ruby:e=>{ -const n=e.regex,t="([a-zA-Z_]\\w*[!?=]?|[-+~]@|<<|>>|=~|===?|<=>|[<>]=?|\\*\\*|[-/+%^&*~`|]|\\[\\]=?)",a=n.either(/\b([A-Z]+[a-z0-9]+)+/,/\b([A-Z]+[a-z0-9]+)+[A-Z]+/),i=n.concat(a,/(::\w+)*/),r={ -"variable.constant":["__FILE__","__LINE__","__ENCODING__"], -"variable.language":["self","super"], -keyword:["alias","and","begin","BEGIN","break","case","class","defined","do","else","elsif","end","END","ensure","for","if","in","module","next","not","or","redo","require","rescue","retry","return","then","undef","unless","until","when","while","yield","include","extend","prepend","public","private","protected","raise","throw"], -built_in:["proc","lambda","attr_accessor","attr_reader","attr_writer","define_method","private_constant","module_function"], -literal:["true","false","nil"]},s={className:"doctag",begin:"@[A-Za-z]+"},o={ -begin:"#<",end:">"},l=[e.COMMENT("#","$",{contains:[s] -}),e.COMMENT("^=begin","^=end",{contains:[s],relevance:10 -}),e.COMMENT("^__END__",e.MATCH_NOTHING_RE)],c={className:"subst",begin:/#\{/, -end:/\}/,keywords:r},d={className:"string",contains:[e.BACKSLASH_ESCAPE,c], -variants:[{begin:/'/,end:/'/},{begin:/"/,end:/"/},{begin:/`/,end:/`/},{ -begin:/%[qQwWx]?\(/,end:/\)/},{begin:/%[qQwWx]?\[/,end:/\]/},{ -begin:/%[qQwWx]?\{/,end:/\}/},{begin:/%[qQwWx]?/},{begin:/%[qQwWx]?\//, -end:/\//},{begin:/%[qQwWx]?%/,end:/%/},{begin:/%[qQwWx]?-/,end:/-/},{ -begin:/%[qQwWx]?\|/,end:/\|/},{begin:/\B\?(\\\d{1,3})/},{ -begin:/\B\?(\\x[A-Fa-f0-9]{1,2})/},{begin:/\B\?(\\u\{?[A-Fa-f0-9]{1,6}\}?)/},{ -begin:/\B\?(\\M-\\C-|\\M-\\c|\\c\\M-|\\M-|\\C-\\M-)[\x20-\x7e]/},{ -begin:/\B\?\\(c|C-)[\x20-\x7e]/},{begin:/\B\?\\?\S/},{ -begin:n.concat(/<<[-~]?'?/,n.lookahead(/(\w+)(?=\W)[^\n]*\n(?:[^\n]*\n)*?\s*\1\b/)), -contains:[e.END_SAME_AS_BEGIN({begin:/(\w+)/,end:/(\w+)/, -contains:[e.BACKSLASH_ESCAPE,c]})]}]},g="[0-9](_?[0-9])*",u={className:"number", -relevance:0,variants:[{ -begin:`\\b([1-9](_?[0-9])*|0)(\\.(${g}))?([eE][+-]?(${g})|r)?i?\\b`},{ -begin:"\\b0[dD][0-9](_?[0-9])*r?i?\\b"},{begin:"\\b0[bB][0-1](_?[0-1])*r?i?\\b" -},{begin:"\\b0[oO][0-7](_?[0-7])*r?i?\\b"},{ -begin:"\\b0[xX][0-9a-fA-F](_?[0-9a-fA-F])*r?i?\\b"},{ -begin:"\\b0(_?[0-7])+r?i?\\b"}]},b={variants:[{match:/\(\)/},{ -className:"params",begin:/\(/,end:/(?=\))/,excludeBegin:!0,endsParent:!0, -keywords:r}]},m=[d,{variants:[{match:[/class\s+/,i,/\s+<\s+/,i]},{ -match:[/\b(class|module)\s+/,i]}],scope:{2:"title.class", -4:"title.class.inherited"},keywords:r},{match:[/(include|extend)\s+/,i],scope:{ -2:"title.class"},keywords:r},{relevance:0,match:[i,/\.new[. (]/],scope:{ -1:"title.class"}},{relevance:0,match:/\b[A-Z][A-Z_0-9]+\b/, -className:"variable.constant"},{relevance:0,match:a,scope:"title.class"},{ -match:[/def/,/\s+/,t],scope:{1:"keyword",3:"title.function"},contains:[b]},{ -begin:e.IDENT_RE+"::"},{className:"symbol", -begin:e.UNDERSCORE_IDENT_RE+"(!|\\?)?:",relevance:0},{className:"symbol", -begin:":(?!\\s)",contains:[d,{begin:t}],relevance:0},u,{className:"variable", -begin:"(\\$\\W)|((\\$|@@?)(\\w+))(?=[^@$?])(?![A-Za-z])(?![@$?'])"},{ -className:"params",begin:/\|/,end:/\|/,excludeBegin:!0,excludeEnd:!0, -relevance:0,keywords:r},{begin:"("+e.RE_STARTERS_RE+"|unless)\\s*", -keywords:"unless",contains:[{className:"regexp",contains:[e.BACKSLASH_ESCAPE,c], -illegal:/\n/,variants:[{begin:"/",end:"/[a-z]*"},{begin:/%r\{/,end:/\}[a-z]*/},{ -begin:"%r\\(",end:"\\)[a-z]*"},{begin:"%r!",end:"![a-z]*"},{begin:"%r\\[", -end:"\\][a-z]*"}]}].concat(o,l),relevance:0}].concat(o,l) -;c.contains=m,b.contains=m;const p=[{begin:/^\s*=>/,starts:{end:"$",contains:m} -},{className:"meta.prompt", -begin:"^([>?]>|[\\w#]+\\(\\w+\\):\\d+:\\d+[>*]|(\\w+-)?\\d+\\.\\d+\\.\\d+(p\\d+)?[^\\d][^>]+>)(?=[ ])", -starts:{end:"$",keywords:r,contains:m}}];return l.unshift(o),{name:"Ruby", -aliases:["rb","gemspec","podspec","thor","irb"],keywords:r,illegal:/\/\*/, -contains:[e.SHEBANG({binary:"ruby"})].concat(p).concat(l).concat(m)}}, -grmr_rust:e=>{const n=e.regex,t={className:"title.function.invoke",relevance:0, -begin:n.concat(/\b/,/(?!let\b)/,e.IDENT_RE,n.lookahead(/\s*\(/)) -},a="([ui](8|16|32|64|128|size)|f(32|64))?",i=["drop ","Copy","Send","Sized","Sync","Drop","Fn","FnMut","FnOnce","ToOwned","Clone","Debug","PartialEq","PartialOrd","Eq","Ord","AsRef","AsMut","Into","From","Default","Iterator","Extend","IntoIterator","DoubleEndedIterator","ExactSizeIterator","SliceConcatExt","ToString","assert!","assert_eq!","bitflags!","bytes!","cfg!","col!","concat!","concat_idents!","debug_assert!","debug_assert_eq!","env!","panic!","file!","format!","format_args!","include_bytes!","include_str!","line!","local_data_key!","module_path!","option_env!","print!","println!","select!","stringify!","try!","unimplemented!","unreachable!","vec!","write!","writeln!","macro_rules!","assert_ne!","debug_assert_ne!"],r=["i8","i16","i32","i64","i128","isize","u8","u16","u32","u64","u128","usize","f32","f64","str","char","bool","Box","Option","Result","String","Vec"] -;return{name:"Rust",aliases:["rs"],keywords:{$pattern:e.IDENT_RE+"!?",type:r, -keyword:["abstract","as","async","await","become","box","break","const","continue","crate","do","dyn","else","enum","extern","false","final","fn","for","if","impl","in","let","loop","macro","match","mod","move","mut","override","priv","pub","ref","return","self","Self","static","struct","super","trait","true","try","type","typeof","unsafe","unsized","use","virtual","where","while","yield"], -literal:["true","false","Some","None","Ok","Err"],built_in:i},illegal:""},t]}}, -grmr_scss:e=>{const n=te(e),t=se,a=re,i="@[a-z-]+",r={className:"variable", -begin:"(\\$[a-zA-Z-][a-zA-Z0-9_-]*)\\b",relevance:0};return{name:"SCSS", -case_insensitive:!0,illegal:"[=/|']", -contains:[e.C_LINE_COMMENT_MODE,e.C_BLOCK_COMMENT_MODE,n.CSS_NUMBER_MODE,{ -className:"selector-id",begin:"#[A-Za-z0-9_-]+",relevance:0},{ -className:"selector-class",begin:"\\.[A-Za-z0-9_-]+",relevance:0 -},n.ATTRIBUTE_SELECTOR_MODE,{className:"selector-tag", -begin:"\\b("+ae.join("|")+")\\b",relevance:0},{className:"selector-pseudo", -begin:":("+a.join("|")+")"},{className:"selector-pseudo", -begin:":(:)?("+t.join("|")+")"},r,{begin:/\(/,end:/\)/, -contains:[n.CSS_NUMBER_MODE]},n.CSS_VARIABLE,{className:"attribute", -begin:"\\b("+oe.join("|")+")\\b"},{ -begin:"\\b(whitespace|wait|w-resize|visible|vertical-text|vertical-ideographic|uppercase|upper-roman|upper-alpha|underline|transparent|top|thin|thick|text|text-top|text-bottom|tb-rl|table-header-group|table-footer-group|sw-resize|super|strict|static|square|solid|small-caps|separate|se-resize|scroll|s-resize|rtl|row-resize|ridge|right|repeat|repeat-y|repeat-x|relative|progress|pointer|overline|outside|outset|oblique|nowrap|not-allowed|normal|none|nw-resize|no-repeat|no-drop|newspaper|ne-resize|n-resize|move|middle|medium|ltr|lr-tb|lowercase|lower-roman|lower-alpha|loose|list-item|line|line-through|line-edge|lighter|left|keep-all|justify|italic|inter-word|inter-ideograph|inside|inset|inline|inline-block|inherit|inactive|ideograph-space|ideograph-parenthesis|ideograph-numeric|ideograph-alpha|horizontal|hidden|help|hand|groove|fixed|ellipsis|e-resize|double|dotted|distribute|distribute-space|distribute-letter|distribute-all-lines|disc|disabled|default|decimal|dashed|crosshair|collapse|col-resize|circle|char|center|capitalize|break-word|break-all|bottom|both|bolder|bold|block|bidi-override|below|baseline|auto|always|all-scroll|absolute|table|table-cell)\\b" -},{begin:/:/,end:/[;}{]/,relevance:0, -contains:[n.BLOCK_COMMENT,r,n.HEXCOLOR,n.CSS_NUMBER_MODE,e.QUOTE_STRING_MODE,e.APOS_STRING_MODE,n.IMPORTANT,n.FUNCTION_DISPATCH] -},{begin:"@(page|font-face)",keywords:{$pattern:i,keyword:"@page @font-face"}},{ -begin:"@",end:"[{;]",returnBegin:!0,keywords:{$pattern:/[a-z-]+/, -keyword:"and or not only",attribute:ie.join(" ")},contains:[{begin:i, -className:"keyword"},{begin:/[a-z-]+(?=:)/,className:"attribute" -},r,e.QUOTE_STRING_MODE,e.APOS_STRING_MODE,n.HEXCOLOR,n.CSS_NUMBER_MODE] -},n.FUNCTION_DISPATCH]}},grmr_shell:e=>({name:"Shell Session", -aliases:["console","shellsession"],contains:[{className:"meta.prompt", -begin:/^\s{0,3}[/~\w\d[\]()@-]*[>%$#][ ]?/,starts:{end:/[^\\](?=\s*$)/, -subLanguage:"bash"}}]}),grmr_sql:e=>{ -const n=e.regex,t=e.COMMENT("--","$"),a=["true","false","unknown"],i=["bigint","binary","blob","boolean","char","character","clob","date","dec","decfloat","decimal","float","int","integer","interval","nchar","nclob","national","numeric","real","row","smallint","time","timestamp","varchar","varying","varbinary"],r=["abs","acos","array_agg","asin","atan","avg","cast","ceil","ceiling","coalesce","corr","cos","cosh","count","covar_pop","covar_samp","cume_dist","dense_rank","deref","element","exp","extract","first_value","floor","json_array","json_arrayagg","json_exists","json_object","json_objectagg","json_query","json_table","json_table_primitive","json_value","lag","last_value","lead","listagg","ln","log","log10","lower","max","min","mod","nth_value","ntile","nullif","percent_rank","percentile_cont","percentile_disc","position","position_regex","power","rank","regr_avgx","regr_avgy","regr_count","regr_intercept","regr_r2","regr_slope","regr_sxx","regr_sxy","regr_syy","row_number","sin","sinh","sqrt","stddev_pop","stddev_samp","substring","substring_regex","sum","tan","tanh","translate","translate_regex","treat","trim","trim_array","unnest","upper","value_of","var_pop","var_samp","width_bucket"],s=["create table","insert into","primary key","foreign key","not null","alter table","add constraint","grouping sets","on overflow","character set","respect nulls","ignore nulls","nulls first","nulls last","depth first","breadth first"],o=r,l=["abs","acos","all","allocate","alter","and","any","are","array","array_agg","array_max_cardinality","as","asensitive","asin","asymmetric","at","atan","atomic","authorization","avg","begin","begin_frame","begin_partition","between","bigint","binary","blob","boolean","both","by","call","called","cardinality","cascaded","case","cast","ceil","ceiling","char","char_length","character","character_length","check","classifier","clob","close","coalesce","collate","collect","column","commit","condition","connect","constraint","contains","convert","copy","corr","corresponding","cos","cosh","count","covar_pop","covar_samp","create","cross","cube","cume_dist","current","current_catalog","current_date","current_default_transform_group","current_path","current_role","current_row","current_schema","current_time","current_timestamp","current_path","current_role","current_transform_group_for_type","current_user","cursor","cycle","date","day","deallocate","dec","decimal","decfloat","declare","default","define","delete","dense_rank","deref","describe","deterministic","disconnect","distinct","double","drop","dynamic","each","element","else","empty","end","end_frame","end_partition","end-exec","equals","escape","every","except","exec","execute","exists","exp","external","extract","false","fetch","filter","first_value","float","floor","for","foreign","frame_row","free","from","full","function","fusion","get","global","grant","group","grouping","groups","having","hold","hour","identity","in","indicator","initial","inner","inout","insensitive","insert","int","integer","intersect","intersection","interval","into","is","join","json_array","json_arrayagg","json_exists","json_object","json_objectagg","json_query","json_table","json_table_primitive","json_value","lag","language","large","last_value","lateral","lead","leading","left","like","like_regex","listagg","ln","local","localtime","localtimestamp","log","log10","lower","match","match_number","match_recognize","matches","max","member","merge","method","min","minute","mod","modifies","module","month","multiset","national","natural","nchar","nclob","new","no","none","normalize","not","nth_value","ntile","null","nullif","numeric","octet_length","occurrences_regex","of","offset","old","omit","on","one","only","open","or","order","out","outer","over","overlaps","overlay","parameter","partition","pattern","per","percent","percent_rank","percentile_cont","percentile_disc","period","portion","position","position_regex","power","precedes","precision","prepare","primary","procedure","ptf","range","rank","reads","real","recursive","ref","references","referencing","regr_avgx","regr_avgy","regr_count","regr_intercept","regr_r2","regr_slope","regr_sxx","regr_sxy","regr_syy","release","result","return","returns","revoke","right","rollback","rollup","row","row_number","rows","running","savepoint","scope","scroll","search","second","seek","select","sensitive","session_user","set","show","similar","sin","sinh","skip","smallint","some","specific","specifictype","sql","sqlexception","sqlstate","sqlwarning","sqrt","start","static","stddev_pop","stddev_samp","submultiset","subset","substring","substring_regex","succeeds","sum","symmetric","system","system_time","system_user","table","tablesample","tan","tanh","then","time","timestamp","timezone_hour","timezone_minute","to","trailing","translate","translate_regex","translation","treat","trigger","trim","trim_array","true","truncate","uescape","union","unique","unknown","unnest","update","upper","user","using","value","values","value_of","var_pop","var_samp","varbinary","varchar","varying","versioning","when","whenever","where","width_bucket","window","with","within","without","year","add","asc","collation","desc","final","first","last","view"].filter((e=>!r.includes(e))),c={ -begin:n.concat(/\b/,n.either(...o),/\s*\(/),relevance:0,keywords:{built_in:o}} -;return{name:"SQL",case_insensitive:!0,illegal:/[{}]|<\//,keywords:{ -$pattern:/\b[\w\.]+/,keyword:((e,{exceptions:n,when:t}={})=>{const a=t -;return n=n||[],e.map((e=>e.match(/\|\d+$/)||n.includes(e)?e:a(e)?e+"|0":e)) -})(l,{when:e=>e.length<3}),literal:a,type:i, -built_in:["current_catalog","current_date","current_default_transform_group","current_path","current_role","current_schema","current_transform_group_for_type","current_user","session_user","system_time","system_user","current_time","localtime","current_timestamp","localtimestamp"] -},contains:[{begin:n.either(...s),relevance:0,keywords:{$pattern:/[\w\.]+/, -keyword:l.concat(s),literal:a,type:i}},{className:"type", -begin:n.either("double precision","large object","with timezone","without timezone") -},c,{className:"variable",begin:/@[a-z0-9]+/},{className:"string",variants:[{ -begin:/'/,end:/'/,contains:[{begin:/''/}]}]},{begin:/"/,end:/"/,contains:[{ -begin:/""/}]},e.C_NUMBER_MODE,e.C_BLOCK_COMMENT_MODE,t,{className:"operator", -begin:/[-+*/=%^~]|&&?|\|\|?|!=?|<(?:=>?|<|>)?|>[>=]?/,relevance:0}]}}, -grmr_swift:e=>{const n={match:/\s+/,relevance:0},t=e.COMMENT("/\\*","\\*/",{ -contains:["self"]}),a=[e.C_LINE_COMMENT_MODE,t],i={match:[/\./,p(...ve,...Oe)], -className:{2:"keyword"}},r={match:m(/\./,p(...xe)),relevance:0 -},s=xe.filter((e=>"string"==typeof e)).concat(["_|0"]),o={variants:[{ -className:"keyword", -match:p(...xe.filter((e=>"string"!=typeof e)).concat(ke).map(Ne),...Oe)}]},l={ -$pattern:p(/\b\w+/,/#\w+/),keyword:s.concat(Ae),literal:Me},c=[i,r,o],d=[{ -match:m(/\./,p(...Ce)),relevance:0},{className:"built_in", -match:m(/\b/,p(...Ce),/(?=\()/)}],u={match:/->/,relevance:0},b=[u,{ -className:"operator",relevance:0,variants:[{match:De},{match:`\\.(\\.|${Re})+`}] -}],_="([0-9a-fA-F]_*)+",h={className:"number",relevance:0,variants:[{ -match:"\\b(([0-9]_*)+)(\\.(([0-9]_*)+))?([eE][+-]?(([0-9]_*)+))?\\b"},{ -match:`\\b0x(${_})(\\.(${_}))?([pP][+-]?(([0-9]_*)+))?\\b`},{ -match:/\b0o([0-7]_*)+\b/},{match:/\b0b([01]_*)+\b/}]},f=(e="")=>({ -className:"subst",variants:[{match:m(/\\/,e,/[0\\tnr"']/)},{ -match:m(/\\/,e,/u\{[0-9a-fA-F]{1,8}\}/)}]}),E=(e="")=>({className:"subst", -match:m(/\\/,e,/[\t ]*(?:[\r\n]|\r\n)/)}),y=(e="")=>({className:"subst", -label:"interpol",begin:m(/\\/,e,/\(/),end:/\)/}),w=(e="")=>({begin:m(e,/"""/), -end:m(/"""/,e),contains:[f(e),E(e),y(e)]}),N=(e="")=>({begin:m(e,/"/), -end:m(/"/,e),contains:[f(e),y(e)]}),v={className:"string", -variants:[w(),w("#"),w("##"),w("###"),N(),N("#"),N("##"),N("###")]},O={ -match:m(/`/,Be,/`/)},k=[O,{className:"variable",match:/\$\d+/},{ -className:"variable",match:`\\$${Le}+`}],x=[{match:/(@|#(un)?)available/, -className:"keyword",starts:{contains:[{begin:/\(/,end:/\)/,keywords:Fe, -contains:[...b,h,v]}]}},{className:"keyword",match:m(/@/,p(...ze))},{ -className:"meta",match:m(/@/,Be)}],M={match:g(/\b[A-Z]/),relevance:0,contains:[{ -className:"type", -match:m(/(AV|CA|CF|CG|CI|CL|CM|CN|CT|MK|MP|MTK|MTL|NS|SCN|SK|UI|WK|XC)/,Le,"+") -},{className:"type",match:$e,relevance:0},{match:/[?!]+/,relevance:0},{ -match:/\.\.\./,relevance:0},{match:m(/\s+&\s+/,g($e)),relevance:0}]},S={ -begin://,keywords:l,contains:[...a,...c,...x,u,M]};M.contains.push(S) -;const A={begin:/\(/,end:/\)/,relevance:0,keywords:l,contains:["self",{ -match:m(Be,/\s*:/),keywords:"_|0",relevance:0 -},...a,...c,...d,...b,h,v,...k,...x,M]},C={begin://,contains:[...a,M] -},T={begin:/\(/,end:/\)/,keywords:l,contains:[{ -begin:p(g(m(Be,/\s*:/)),g(m(Be,/\s+/,Be,/\s*:/))),end:/:/,relevance:0, -contains:[{className:"keyword",match:/\b_\b/},{className:"params",match:Be}] -},...a,...c,...b,h,v,...x,M,A],endsParent:!0,illegal:/["']/},R={ -match:[/func/,/\s+/,p(O.match,Be,De)],className:{1:"keyword",3:"title.function" -},contains:[C,T,n],illegal:[/\[/,/%/]},D={ -match:[/\b(?:subscript|init[?!]?)/,/\s*(?=[<(])/],className:{1:"keyword"}, -contains:[C,T,n],illegal:/\[|%/},I={match:[/operator/,/\s+/,De],className:{ -1:"keyword",3:"title"}},L={begin:[/precedencegroup/,/\s+/,$e],className:{ -1:"keyword",3:"title"},contains:[M],keywords:[...Se,...Me],end:/}/} -;for(const e of v.variants){const n=e.contains.find((e=>"interpol"===e.label)) -;n.keywords=l;const t=[...c,...d,...b,h,v,...k];n.contains=[...t,{begin:/\(/, -end:/\)/,contains:["self",...t]}]}return{name:"Swift",keywords:l, -contains:[...a,R,D,{beginKeywords:"struct protocol class extension enum actor", -end:"\\{",excludeEnd:!0,keywords:l,contains:[e.inherit(e.TITLE_MODE,{ -className:"title.class",begin:/[A-Za-z$_][\u00C0-\u02B80-9A-Za-z$_]*/}),...c] -},I,L,{beginKeywords:"import",end:/$/,contains:[...a],relevance:0 -},...c,...d,...b,h,v,...k,...x,M,A]}},grmr_typescript:e=>{ -const n=we(e),t=["any","void","number","boolean","string","object","never","symbol","bigint","unknown"],a={ -beginKeywords:"namespace",end:/\{/,excludeEnd:!0, -contains:[n.exports.CLASS_REFERENCE]},i={beginKeywords:"interface",end:/\{/, -excludeEnd:!0,keywords:{keyword:"interface extends",built_in:t}, -contains:[n.exports.CLASS_REFERENCE]},r={$pattern:be, -keyword:me.concat(["type","namespace","interface","public","private","protected","implements","declare","abstract","readonly","enum","override"]), -literal:pe,built_in:ye.concat(t),"variable.language":Ee},s={className:"meta", -begin:"@[A-Za-z$_][0-9A-Za-z$_]*"},o=(e,n,t)=>{ -const a=e.contains.findIndex((e=>e.label===n)) -;if(-1===a)throw Error("can not find mode to replace");e.contains.splice(a,1,t)} -;return Object.assign(n.keywords,r), -n.exports.PARAMS_CONTAINS.push(s),n.contains=n.contains.concat([s,a,i]), -o(n,"shebang",e.SHEBANG()),o(n,"use_strict",{className:"meta",relevance:10, -begin:/^\s*['"]use strict['"]/ -}),n.contains.find((e=>"func.def"===e.label)).relevance=0,Object.assign(n,{ -name:"TypeScript",aliases:["ts","tsx"]}),n},grmr_vbnet:e=>{ -const n=e.regex,t=/\d{1,2}\/\d{1,2}\/\d{4}/,a=/\d{4}-\d{1,2}-\d{1,2}/,i=/(\d|1[012])(:\d+){0,2} *(AM|PM)/,r=/\d{1,2}(:\d{1,2}){1,2}/,s={ -className:"literal",variants:[{begin:n.concat(/# */,n.either(a,t),/ *#/)},{ -begin:n.concat(/# */,r,/ *#/)},{begin:n.concat(/# */,i,/ *#/)},{ -begin:n.concat(/# */,n.either(a,t),/ +/,n.either(i,r),/ *#/)}] -},o=e.COMMENT(/'''/,/$/,{contains:[{className:"doctag",begin:/<\/?/,end:/>/}] -}),l=e.COMMENT(null,/$/,{variants:[{begin:/'/},{begin:/([\t ]|^)REM(?=\s)/}]}) -;return{name:"Visual Basic .NET",aliases:["vb"],case_insensitive:!0, -classNameAliases:{label:"symbol"},keywords:{ -keyword:"addhandler alias aggregate ansi as async assembly auto binary by byref byval call case catch class compare const continue custom declare default delegate dim distinct do each equals else elseif end enum erase error event exit explicit finally for friend from function get global goto group handles if implements imports in inherits interface into iterator join key let lib loop me mid module mustinherit mustoverride mybase myclass namespace narrowing new next notinheritable notoverridable of off on operator option optional order overloads overridable overrides paramarray partial preserve private property protected public raiseevent readonly redim removehandler resume return select set shadows shared skip static step stop structure strict sub synclock take text then throw to try unicode until using when where while widening with withevents writeonly yield", -built_in:"addressof and andalso await directcast gettype getxmlnamespace is isfalse isnot istrue like mod nameof new not or orelse trycast typeof xor cbool cbyte cchar cdate cdbl cdec cint clng cobj csbyte cshort csng cstr cuint culng cushort", -type:"boolean byte char date decimal double integer long object sbyte short single string uinteger ulong ushort", -literal:"true false nothing"}, -illegal:"//|\\{|\\}|endif|gosub|variant|wend|^\\$ ",contains:[{ -className:"string",begin:/"(""|[^/n])"C\b/},{className:"string",begin:/"/, -end:/"/,illegal:/\n/,contains:[{begin:/""/}]},s,{className:"number",relevance:0, -variants:[{begin:/\b\d[\d_]*((\.[\d_]+(E[+-]?[\d_]+)?)|(E[+-]?[\d_]+))[RFD@!#]?/ -},{begin:/\b\d[\d_]*((U?[SIL])|[%&])?/},{begin:/&H[\dA-F_]+((U?[SIL])|[%&])?/},{ -begin:/&O[0-7_]+((U?[SIL])|[%&])?/},{begin:/&B[01_]+((U?[SIL])|[%&])?/}]},{ -className:"label",begin:/^\w+:/},o,l,{className:"meta", -begin:/[\t ]*#(const|disable|else|elseif|enable|end|externalsource|if|region)\b/, -end:/$/,keywords:{ -keyword:"const disable else elseif enable end externalsource if region then"}, -contains:[l]}]}},grmr_wasm:e=>{e.regex;const n=e.COMMENT(/\(;/,/;\)/) -;return n.contains.push("self"),{name:"WebAssembly",keywords:{$pattern:/[\w.]+/, -keyword:["anyfunc","block","br","br_if","br_table","call","call_indirect","data","drop","elem","else","end","export","func","global.get","global.set","local.get","local.set","local.tee","get_global","get_local","global","if","import","local","loop","memory","memory.grow","memory.size","module","mut","nop","offset","param","result","return","select","set_global","set_local","start","table","tee_local","then","type","unreachable"] -},contains:[e.COMMENT(/;;/,/$/),n,{match:[/(?:offset|align)/,/\s*/,/=/], -className:{1:"keyword",3:"operator"}},{className:"variable",begin:/\$[\w_]+/},{ -match:/(\((?!;)|\))+/,className:"punctuation",relevance:0},{ -begin:[/(?:func|call|call_indirect)/,/\s+/,/\$[^\s)]+/],className:{1:"keyword", -3:"title.function"}},e.QUOTE_STRING_MODE,{match:/(i32|i64|f32|f64)(?!\.)/, -className:"type"},{className:"keyword", -match:/\b(f32|f64|i32|i64)(?:\.(?:abs|add|and|ceil|clz|const|convert_[su]\/i(?:32|64)|copysign|ctz|demote\/f64|div(?:_[su])?|eqz?|extend_[su]\/i32|floor|ge(?:_[su])?|gt(?:_[su])?|le(?:_[su])?|load(?:(?:8|16|32)_[su])?|lt(?:_[su])?|max|min|mul|nearest|neg?|or|popcnt|promote\/f32|reinterpret\/[fi](?:32|64)|rem_[su]|rot[lr]|shl|shr_[su]|store(?:8|16|32)?|sqrt|sub|trunc(?:_[su]\/f(?:32|64))?|wrap\/i64|xor))\b/ -},{className:"number",relevance:0, -match:/[+-]?\b(?:\d(?:_?\d)*(?:\.\d(?:_?\d)*)?(?:[eE][+-]?\d(?:_?\d)*)?|0x[\da-fA-F](?:_?[\da-fA-F])*(?:\.[\da-fA-F](?:_?[\da-fA-D])*)?(?:[pP][+-]?\d(?:_?\d)*)?)\b|\binf\b|\bnan(?::0x[\da-fA-F](?:_?[\da-fA-D])*)?\b/ -}]}},grmr_yaml:e=>{ -const n="true false yes no null",t="[\\w#;/?:@&=+$,.~*'()[\\]]+",a={ -className:"string",relevance:0,variants:[{begin:/'/,end:/'/},{begin:/"/,end:/"/ -},{begin:/\S+/}],contains:[e.BACKSLASH_ESCAPE,{className:"template-variable", -variants:[{begin:/\{\{/,end:/\}\}/},{begin:/%\{/,end:/\}/}]}]},i=e.inherit(a,{ -variants:[{begin:/'/,end:/'/},{begin:/"/,end:/"/},{begin:/[^\s,{}[\]]+/}]}),r={ -end:",",endsWithParent:!0,excludeEnd:!0,keywords:n,relevance:0},s={begin:/\{/, -end:/\}/,contains:[r],illegal:"\\n",relevance:0},o={begin:"\\[",end:"\\]", -contains:[r],illegal:"\\n",relevance:0},l=[{className:"attr",variants:[{ -begin:"\\w[\\w :\\/.-]*:(?=[ \t]|$)"},{begin:'"\\w[\\w :\\/.-]*":(?=[ \t]|$)'},{ -begin:"'\\w[\\w :\\/.-]*':(?=[ \t]|$)"}]},{className:"meta",begin:"^---\\s*$", -relevance:10},{className:"string", -begin:"[\\|>]([1-9]?[+-])?[ ]*\\n( +)[^ ][^\\n]*\\n(\\2[^\\n]+\\n?)*"},{ -begin:"<%[%=-]?",end:"[%-]?%>",subLanguage:"ruby",excludeBegin:!0,excludeEnd:!0, -relevance:0},{className:"type",begin:"!\\w+!"+t},{className:"type", -begin:"!<"+t+">"},{className:"type",begin:"!"+t},{className:"type",begin:"!!"+t -},{className:"meta",begin:"&"+e.UNDERSCORE_IDENT_RE+"$"},{className:"meta", -begin:"\\*"+e.UNDERSCORE_IDENT_RE+"$"},{className:"bullet",begin:"-(?=[ ]|$)", -relevance:0},e.HASH_COMMENT_MODE,{beginKeywords:n,keywords:{literal:n}},{ -className:"number", -begin:"\\b[0-9]{4}(-[0-9][0-9]){0,2}([Tt \\t][0-9][0-9]?(:[0-9][0-9]){2})?(\\.[0-9]*)?([ \\t])*(Z|[-+][0-9][0-9]?(:[0-9][0-9])?)?\\b" -},{className:"number",begin:e.C_NUMBER_RE+"\\b",relevance:0},s,o,a],c=[...l] -;return c.pop(),c.push(i),r.contains=c,{name:"YAML",case_insensitive:!0, -aliases:["yml"],contains:l}}});const je=ne;for(const e of Object.keys(Ue)){ -const n=e.replace("grmr_","").replace("_","-");je.registerLanguage(n,Ue[e])} -return je}() -;"object"==typeof exports&&"undefined"!=typeof module&&(module.exports=hljs); \ No newline at end of file diff --git a/spaces/doluvor/faster-whisper-webui/tests/segments_test.py b/spaces/doluvor/faster-whisper-webui/tests/segments_test.py deleted file mode 100644 index d829f1c77f74b3c96513fe4965d532cf2d1dceb4..0000000000000000000000000000000000000000 --- a/spaces/doluvor/faster-whisper-webui/tests/segments_test.py +++ /dev/null @@ -1,48 +0,0 @@ -import sys -import unittest - -sys.path.append('../whisper-webui') - -from src.segments import merge_timestamps - -class TestSegments(unittest.TestCase): - def __init__(self, *args, **kwargs): - super(TestSegments, self).__init__(*args, **kwargs) - - def test_merge_segments(self): - segments = [ - {'start': 10.0, 'end': 20.0}, - {'start': 22.0, 'end': 27.0}, - {'start': 31.0, 'end': 35.0}, - {'start': 45.0, 'end': 60.0}, - {'start': 61.0, 'end': 65.0}, - {'start': 68.0, 'end': 98.0}, - {'start': 100.0, 'end': 102.0}, - {'start': 110.0, 'end': 112.0} - ] - - result = merge_timestamps(segments, merge_window=5, max_merge_size=30, padding_left=1, padding_right=1) - - self.assertListEqual(result, [ - {'start': 9.0, 'end': 36.0}, - {'start': 44.0, 'end': 66.0}, - {'start': 67.0, 'end': 99.0}, - {'start': 99.0, 'end': 103.0}, - {'start': 109.0, 'end': 113.0} - ]) - - def test_overlap_next(self): - segments = [ - {'start': 5.0, 'end': 39.182}, - {'start': 39.986, 'end': 40.814} - ] - - result = merge_timestamps(segments, merge_window=5, max_merge_size=30, padding_left=1, padding_right=1) - - self.assertListEqual(result, [ - {'start': 4.0, 'end': 39.584}, - {'start': 39.584, 'end': 41.814} - ]) - -if __name__ == '__main__': - unittest.main() \ No newline at end of file diff --git a/spaces/dongyi/MMFS/data/deprecated/custom_data.py b/spaces/dongyi/MMFS/data/deprecated/custom_data.py deleted file mode 100644 index cda38558c43690fd2168cc49b0383f7c53bad969..0000000000000000000000000000000000000000 --- a/spaces/dongyi/MMFS/data/deprecated/custom_data.py +++ /dev/null @@ -1,121 +0,0 @@ -import os -import random -import numpy as np -from utils.augmentation import ImagePathToImage -from utils.data_utils import Transforms, check_img_loaded, check_numpy_loaded - - -class CustomData(object): - - def __init__(self, config, shuffle=False): - self.paired_file_groups = [] - self.paired_type_groups = [] - self.len_of_groups = [] - self.landmark_scale = config['dataset']['landmark_scale'] - self.shuffle = shuffle - self.config = config - - data_dict = config['dataset']['custom_' + config['common']['phase'] + '_data'] - if len(data_dict) == 0: - self.len_of_groups.append(0) - return - - for i, group in enumerate(data_dict.values()): # one example: (0, group_1), (1, group_2) - data_types = group['data_types'] # one example: 'image', 'patch' - data_names = group['data_names'] # one example: 'real_A', 'patch_A' - file_list = group['file_list'] # one example: "lmt/data/trainA.txt" - assert(len(data_types) == len(data_names)) - - self.paired_file_groups.append({}) - self.paired_type_groups.append({}) - for data_name, data_type in zip(data_names, data_types): - self.paired_file_groups[i][data_name] = [] - self.paired_type_groups[i][data_name] = data_type - - paired_file = open(file_list, 'rt') - lines = paired_file.readlines() - if self.shuffle: - random.shuffle(lines) - for line in lines: - items = line.strip().split(' ') - if len(items) == len(data_names): - ok = True - for item in items: - ok = ok and os.path.exists(item) and os.path.getsize(item) > 0 - if ok: - for data_name, item in zip(data_names, items): - self.paired_file_groups[i][data_name].append(item) - paired_file.close() - - self.len_of_groups.append(len(self.paired_file_groups[i][data_names[0]])) - - self.transform = Transforms(config) - self.transform.get_transform_from_config() - self.transform.get_transforms().insert(0, ImagePathToImage()) - self.transform = self.transform.compose_transforms() - - def get_len(self): - return max(self.len_of_groups) - - def get_item(self, idx): - return_dict = {} - for i in range(len(self.paired_file_groups)): - inner_idx = idx if idx < self.len_of_groups[i] else random.randint(0, self.len_of_groups[i] - 1) - img_list = [] - img_k_list = [] - for k, v in self.paired_file_groups[i].items(): - if self.paired_type_groups[i][k] == 'image': - # gather images for processing later - img_k_list.append(k) - img_list.append(v[inner_idx]) - elif self.paired_type_groups[i][k] == 'landmark': - # different from images, landmark doesn't use data augmentation. So process them directly here. - lmk = np.load(v[inner_idx]) - lmk[:, 0] *= self.landmark_scale[0] - lmk[:, 1] *= self.landmark_scale[1] - return_dict[k] = lmk - return_dict[k + '_path'] = v[inner_idx] - - # transform all images - if len(img_list) == 1: - return_dict[img_k_list[0]], _ = self.transform(img_list[0], None) - elif len(img_list) > 1: - input1, input2 = img_list[0], img_list[1:] - output1, output2 = self.transform(input1, input2) # output1 is one image. output2 is a list of images. - return_dict[img_k_list[0]] = output1 - for j in range(1, len(img_list)): - return_dict[img_k_list[j]] = output2[j-1] - - return return_dict - - def split_data_into_bins(self, num_bins): - bins = [] - for i in range(0, num_bins): - bins.append([]) - for i in range(0, len(self.paired_file_groups)): - for b in range(0, num_bins): - bins[b].append({}) - for dataname, item_list in self.paired_file_groups[i].items(): - if len(item_list) < self.config['dataset']['n_threads']: - bins[0][i][dataname] = item_list - else: - num_items_in_bin = len(item_list) // num_bins - for j in range(0, len(item_list)): - which_bin = min(j // num_items_in_bin, num_bins - 1) - if dataname not in bins[which_bin][i]: - bins[which_bin][i][dataname] = [] - else: - bins[which_bin][i][dataname].append(item_list[j]) - return bins - - def check_data_helper(self, data): - all_pass = True - for paired_file_group in data: - for k, v in paired_file_group.items(): - if len(v) > 0: - for v1 in v: - if '.npy' in v1: # case: numpy array or landmark - all_pass = all_pass and check_numpy_loaded(v1) - else: # case: image - all_pass = all_pass and check_img_loaded(v1) - return all_pass diff --git a/spaces/ds520/bingo/src/components/tone-selector.tsx b/spaces/ds520/bingo/src/components/tone-selector.tsx deleted file mode 100644 index 5c6e464c91f564b895acd121f0a4a79ed9c5c356..0000000000000000000000000000000000000000 --- a/spaces/ds520/bingo/src/components/tone-selector.tsx +++ /dev/null @@ -1,43 +0,0 @@ -import React from 'react' -import { BingConversationStyle } from '@/lib/bots/bing/types' -import { cn } from '@/lib/utils' - -type ToneItem = { - type: BingConversationStyle, - name: string -} - -const ToneList: ToneItem[] = [ - { name: '有创造力', type: BingConversationStyle.Creative }, - { name: '更平衡', type: BingConversationStyle.Balanced }, - { name: '更精确', type: BingConversationStyle.Precise } -] - -interface ToneSelectorProps { - type: BingConversationStyle | '' - onChange?: (type: BingConversationStyle) => void -} - -export function ToneSelector({ type, onChange }: ToneSelectorProps) { - return ( -
          -
          - 选择对话样式 -
          -
          -
            - { - ToneList.map(tone => ( -
          • onChange?.(tone.type)}> - -
          • - )) - } -
          -
          -
          - ) -} diff --git a/spaces/ds520/bingo/src/components/ui/select.tsx b/spaces/ds520/bingo/src/components/ui/select.tsx deleted file mode 100644 index 77f12c2996f541b97663de4c9e20ab34d4ec2fac..0000000000000000000000000000000000000000 --- a/spaces/ds520/bingo/src/components/ui/select.tsx +++ /dev/null @@ -1,123 +0,0 @@ -'use client' - -import * as React from 'react' -import * as SelectPrimitive from '@radix-ui/react-select' - -import { cn } from '@/lib/utils' -import { - IconArrowDown, - IconCheck, - IconChevronUpDown -} from '@/components/ui/icons' - -const Select = SelectPrimitive.Root - -const SelectGroup = SelectPrimitive.Group - -const SelectValue = SelectPrimitive.Value - -const SelectTrigger = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, children, ...props }, ref) => ( - - {children} - - - - -)) -SelectTrigger.displayName = SelectPrimitive.Trigger.displayName - -const SelectContent = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, children, position = 'popper', ...props }, ref) => ( - - - - {children} - - - -)) -SelectContent.displayName = SelectPrimitive.Content.displayName - -const SelectLabel = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, ...props }, ref) => ( - -)) -SelectLabel.displayName = SelectPrimitive.Label.displayName - -const SelectItem = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, children, ...props }, ref) => ( - - - - - - - {children} - -)) -SelectItem.displayName = SelectPrimitive.Item.displayName - -const SelectSeparator = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, ...props }, ref) => ( - -)) -SelectSeparator.displayName = SelectPrimitive.Separator.displayName - -export { - Select, - SelectGroup, - SelectValue, - SelectTrigger, - SelectContent, - SelectLabel, - SelectItem, - SelectSeparator -} diff --git a/spaces/epexVfeibi/Imagedeblurr/Acrobat Pro DC BASE CRACK UPDATE 2019.008.20071 OCT 2018 Utorrent.md b/spaces/epexVfeibi/Imagedeblurr/Acrobat Pro DC BASE CRACK UPDATE 2019.008.20071 OCT 2018 Utorrent.md deleted file mode 100644 index b3a85f440af824b7c7eaf417d6cc06710ba578c7..0000000000000000000000000000000000000000 --- a/spaces/epexVfeibi/Imagedeblurr/Acrobat Pro DC BASE CRACK UPDATE 2019.008.20071 OCT 2018 Utorrent.md +++ /dev/null @@ -1,6 +0,0 @@ -

          Acrobat Pro DC BASE CRACK UPDATE 2019.008.20071 OCT 2018 utorrent


          Download File ►►► https://jinyurl.com/2uEpGJ



          -
          -adobe acrobat reader dc (continuous track) update, adobe acrobat reader mui dc ... Search for "Adobe Acrobat Pro DC (Continuous Track) BASE RELEASE + CRACK" torren ca8d075f12 ... UPDATE v19.008.20071 (2019.008.20071) (2018-10-02) (OCT 2018). ... Torrent Episode Downloader v0.80 .rar 4d29de3e1b
          -
          -
          -

          diff --git a/spaces/exbert-project/exbert/server/utils/token_processing.py b/spaces/exbert-project/exbert/server/utils/token_processing.py deleted file mode 100644 index 4a6f614133cce14d6333f8af4737c7192c5c6ae7..0000000000000000000000000000000000000000 --- a/spaces/exbert-project/exbert/server/utils/token_processing.py +++ /dev/null @@ -1,48 +0,0 @@ -import numpy as np -from transformers.tokenization_bert import BertTokenizer -from .f import flatten_, assoc, memoize, GetAttr - -from typing import List - -def fix_byte_spaces(toks: List[str]) -> List[str]: - return [t.replace("\u0120", " ").replace("\u010A", "\\n") for t in toks] - -@memoize -def get_bpe(bpe_pretrained_name_or_path): - return BertTokenizer.from_pretrained(bpe_pretrained_name_or_path) - -# [String] -> [String] -def remove_CLS_SEP(toks): - return [t for t in toks if t not in set(["[CLS]", "[SEP]"])] - -# torch.Tensor -> np.Array -def process_hidden_tensors(t): - """Embeddings are returned from the BERT model in a non-ideal embedding shape: - - unnecessary batch dimension - - Undesired second sentence "[SEP]". - - Drop the unnecessary information and just return what we need for the first sentence - """ - # Drop unnecessary batch dim and second sent - t = t.squeeze(0)[:-1] - - # Drop second sentence sep ?? - t = t[1:-1] - - # Convert to numpy - return t.data.numpy() - - -# np.Array -> np.Array -def normalize(a): - """Divide each head by its norm""" - norms = np.linalg.norm(a, axis=-1, keepdims=True) - return a / norms - - -# np.Array: -> np.Array -def reshape(a): - """Combine the last two dimensions of a numpy array""" - all_head_size = a.shape[-2] * a.shape[-1] - new_shape = a.shape[:-2] + (all_head_size,) - return a.reshape(new_shape) \ No newline at end of file diff --git a/spaces/facebook/MusicGen/audiocraft/data/sound_dataset.py b/spaces/facebook/MusicGen/audiocraft/data/sound_dataset.py deleted file mode 100644 index 8b88cbe8016b4bd28c2de749177c9af29f7755fc..0000000000000000000000000000000000000000 --- a/spaces/facebook/MusicGen/audiocraft/data/sound_dataset.py +++ /dev/null @@ -1,330 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. -"""Dataset of audio with a simple description. -""" - -from dataclasses import dataclass, fields, replace -import json -from pathlib import Path -import random -import typing as tp - -import numpy as np -import torch - -from .info_audio_dataset import ( - InfoAudioDataset, - get_keyword_or_keyword_list -) -from ..modules.conditioners import ( - ConditioningAttributes, - SegmentWithAttributes, - WavCondition, -) - - -EPS = torch.finfo(torch.float32).eps -TARGET_LEVEL_LOWER = -35 -TARGET_LEVEL_UPPER = -15 - - -@dataclass -class SoundInfo(SegmentWithAttributes): - """Segment info augmented with Sound metadata. - """ - description: tp.Optional[str] = None - self_wav: tp.Optional[torch.Tensor] = None - - @property - def has_sound_meta(self) -> bool: - return self.description is not None - - def to_condition_attributes(self) -> ConditioningAttributes: - out = ConditioningAttributes() - - for _field in fields(self): - key, value = _field.name, getattr(self, _field.name) - if key == 'self_wav': - out.wav[key] = value - else: - out.text[key] = value - return out - - @staticmethod - def attribute_getter(attribute): - if attribute == 'description': - preprocess_func = get_keyword_or_keyword_list - else: - preprocess_func = None - return preprocess_func - - @classmethod - def from_dict(cls, dictionary: dict, fields_required: bool = False): - _dictionary: tp.Dict[str, tp.Any] = {} - - # allow a subset of attributes to not be loaded from the dictionary - # these attributes may be populated later - post_init_attributes = ['self_wav'] - - for _field in fields(cls): - if _field.name in post_init_attributes: - continue - elif _field.name not in dictionary: - if fields_required: - raise KeyError(f"Unexpected missing key: {_field.name}") - else: - preprocess_func: tp.Optional[tp.Callable] = cls.attribute_getter(_field.name) - value = dictionary[_field.name] - if preprocess_func: - value = preprocess_func(value) - _dictionary[_field.name] = value - return cls(**_dictionary) - - -class SoundDataset(InfoAudioDataset): - """Sound audio dataset: Audio dataset with environmental sound-specific metadata. - - Args: - info_fields_required (bool): Whether all the mandatory metadata fields should be in the loaded metadata. - external_metadata_source (tp.Optional[str]): Folder containing JSON metadata for the corresponding dataset. - The metadata files contained in this folder are expected to match the stem of the audio file with - a json extension. - aug_p (float): Probability of performing audio mixing augmentation on the batch. - mix_p (float): Proportion of batch items that are mixed together when applying audio mixing augmentation. - mix_snr_low (int): Lowerbound for SNR value sampled for mixing augmentation. - mix_snr_high (int): Upperbound for SNR value sampled for mixing augmentation. - mix_min_overlap (float): Minimum overlap between audio files when performing mixing augmentation. - kwargs: Additional arguments for AudioDataset. - - See `audiocraft.data.info_audio_dataset.InfoAudioDataset` for full initialization arguments. - """ - def __init__( - self, - *args, - info_fields_required: bool = True, - external_metadata_source: tp.Optional[str] = None, - aug_p: float = 0., - mix_p: float = 0., - mix_snr_low: int = -5, - mix_snr_high: int = 5, - mix_min_overlap: float = 0.5, - **kwargs - ): - kwargs['return_info'] = True # We require the info for each song of the dataset. - super().__init__(*args, **kwargs) - self.info_fields_required = info_fields_required - self.external_metadata_source = external_metadata_source - self.aug_p = aug_p - self.mix_p = mix_p - if self.aug_p > 0: - assert self.mix_p > 0, "Expecting some mixing proportion mix_p if aug_p > 0" - assert self.channels == 1, "SoundDataset with audio mixing considers only monophonic audio" - self.mix_snr_low = mix_snr_low - self.mix_snr_high = mix_snr_high - self.mix_min_overlap = mix_min_overlap - - def _get_info_path(self, path: tp.Union[str, Path]) -> Path: - """Get path of JSON with metadata (description, etc.). - If there exists a JSON with the same name as 'path.name', then it will be used. - Else, such JSON will be searched for in an external json source folder if it exists. - """ - info_path = Path(path).with_suffix('.json') - if Path(info_path).exists(): - return info_path - elif self.external_metadata_source and (Path(self.external_metadata_source) / info_path.name).exists(): - return Path(self.external_metadata_source) / info_path.name - else: - raise Exception(f"Unable to find a metadata JSON for path: {path}") - - def __getitem__(self, index): - wav, info = super().__getitem__(index) - info_data = info.to_dict() - info_path = self._get_info_path(info.meta.path) - if Path(info_path).exists(): - with open(info_path, 'r') as json_file: - sound_data = json.load(json_file) - sound_data.update(info_data) - sound_info = SoundInfo.from_dict(sound_data, fields_required=self.info_fields_required) - # if there are multiple descriptions, sample one randomly - if isinstance(sound_info.description, list): - sound_info.description = random.choice(sound_info.description) - else: - sound_info = SoundInfo.from_dict(info_data, fields_required=False) - - sound_info.self_wav = WavCondition( - wav=wav[None], length=torch.tensor([info.n_frames]), - sample_rate=[sound_info.sample_rate], path=[info.meta.path], seek_time=[info.seek_time]) - - return wav, sound_info - - def collater(self, samples): - # when training, audio mixing is performed in the collate function - wav, sound_info = super().collater(samples) # SoundDataset always returns infos - if self.aug_p > 0: - wav, sound_info = mix_samples(wav, sound_info, self.aug_p, self.mix_p, - snr_low=self.mix_snr_low, snr_high=self.mix_snr_high, - min_overlap=self.mix_min_overlap) - return wav, sound_info - - -def rms_f(x: torch.Tensor) -> torch.Tensor: - return (x ** 2).mean(1).pow(0.5) - - -def normalize(audio: torch.Tensor, target_level: int = -25) -> torch.Tensor: - """Normalize the signal to the target level.""" - rms = rms_f(audio) - scalar = 10 ** (target_level / 20) / (rms + EPS) - audio = audio * scalar.unsqueeze(1) - return audio - - -def is_clipped(audio: torch.Tensor, clipping_threshold: float = 0.99) -> torch.Tensor: - return (abs(audio) > clipping_threshold).any(1) - - -def mix_pair(src: torch.Tensor, dst: torch.Tensor, min_overlap: float) -> torch.Tensor: - start = random.randint(0, int(src.shape[1] * (1 - min_overlap))) - remainder = src.shape[1] - start - if dst.shape[1] > remainder: - src[:, start:] = src[:, start:] + dst[:, :remainder] - else: - src[:, start:start+dst.shape[1]] = src[:, start:start+dst.shape[1]] + dst - return src - - -def snr_mixer(clean: torch.Tensor, noise: torch.Tensor, snr: int, min_overlap: float, - target_level: int = -25, clipping_threshold: float = 0.99) -> torch.Tensor: - """Function to mix clean speech and noise at various SNR levels. - - Args: - clean (torch.Tensor): Clean audio source to mix, of shape [B, T]. - noise (torch.Tensor): Noise audio source to mix, of shape [B, T]. - snr (int): SNR level when mixing. - min_overlap (float): Minimum overlap between the two mixed sources. - target_level (int): Gain level in dB. - clipping_threshold (float): Threshold for clipping the audio. - Returns: - torch.Tensor: The mixed audio, of shape [B, T]. - """ - if clean.shape[1] > noise.shape[1]: - noise = torch.nn.functional.pad(noise, (0, clean.shape[1] - noise.shape[1])) - else: - noise = noise[:, :clean.shape[1]] - - # normalizing to -25 dB FS - clean = clean / (clean.max(1)[0].abs().unsqueeze(1) + EPS) - clean = normalize(clean, target_level) - rmsclean = rms_f(clean) - - noise = noise / (noise.max(1)[0].abs().unsqueeze(1) + EPS) - noise = normalize(noise, target_level) - rmsnoise = rms_f(noise) - - # set the noise level for a given SNR - noisescalar = (rmsclean / (10 ** (snr / 20)) / (rmsnoise + EPS)).unsqueeze(1) - noisenewlevel = noise * noisescalar - - # mix noise and clean speech - noisyspeech = mix_pair(clean, noisenewlevel, min_overlap) - - # randomly select RMS value between -15 dBFS and -35 dBFS and normalize noisyspeech with that value - # there is a chance of clipping that might happen with very less probability, which is not a major issue. - noisy_rms_level = np.random.randint(TARGET_LEVEL_LOWER, TARGET_LEVEL_UPPER) - rmsnoisy = rms_f(noisyspeech) - scalarnoisy = (10 ** (noisy_rms_level / 20) / (rmsnoisy + EPS)).unsqueeze(1) - noisyspeech = noisyspeech * scalarnoisy - clean = clean * scalarnoisy - noisenewlevel = noisenewlevel * scalarnoisy - - # final check to see if there are any amplitudes exceeding +/- 1. If so, normalize all the signals accordingly - clipped = is_clipped(noisyspeech) - if clipped.any(): - noisyspeech_maxamplevel = noisyspeech[clipped].max(1)[0].abs().unsqueeze(1) / (clipping_threshold - EPS) - noisyspeech[clipped] = noisyspeech[clipped] / noisyspeech_maxamplevel - - return noisyspeech - - -def snr_mix(src: torch.Tensor, dst: torch.Tensor, snr_low: int, snr_high: int, min_overlap: float): - if snr_low == snr_high: - snr = snr_low - else: - snr = np.random.randint(snr_low, snr_high) - mix = snr_mixer(src, dst, snr, min_overlap) - return mix - - -def mix_text(src_text: str, dst_text: str): - """Mix text from different sources by concatenating them.""" - if src_text == dst_text: - return src_text - return src_text + " " + dst_text - - -def mix_samples(wavs: torch.Tensor, infos: tp.List[SoundInfo], aug_p: float, mix_p: float, - snr_low: int, snr_high: int, min_overlap: float): - """Mix samples within a batch, summing the waveforms and concatenating the text infos. - - Args: - wavs (torch.Tensor): Audio tensors of shape [B, C, T]. - infos (list[SoundInfo]): List of SoundInfo items corresponding to the audio. - aug_p (float): Augmentation probability. - mix_p (float): Proportion of items in the batch to mix (and merge) together. - snr_low (int): Lowerbound for sampling SNR. - snr_high (int): Upperbound for sampling SNR. - min_overlap (float): Minimum overlap between mixed samples. - Returns: - tuple[torch.Tensor, list[SoundInfo]]: A tuple containing the mixed wavs - and mixed SoundInfo for the given batch. - """ - # no mixing to perform within the batch - if mix_p == 0: - return wavs, infos - - if random.uniform(0, 1) < aug_p: - # perform all augmentations on waveforms as [B, T] - # randomly picking pairs of audio to mix - assert wavs.size(1) == 1, f"Mix samples requires monophonic audio but C={wavs.size(1)}" - wavs = wavs.mean(dim=1, keepdim=False) - B, T = wavs.shape - k = int(mix_p * B) - mixed_sources_idx = torch.randperm(B)[:k] - mixed_targets_idx = torch.randperm(B)[:k] - aug_wavs = snr_mix( - wavs[mixed_sources_idx], - wavs[mixed_targets_idx], - snr_low, - snr_high, - min_overlap, - ) - # mixing textual descriptions in metadata - descriptions = [info.description for info in infos] - aug_infos = [] - for i, j in zip(mixed_sources_idx, mixed_targets_idx): - text = mix_text(descriptions[i], descriptions[j]) - m = replace(infos[i]) - m.description = text - aug_infos.append(m) - - # back to [B, C, T] - aug_wavs = aug_wavs.unsqueeze(1) - assert aug_wavs.shape[0] > 0, "Samples mixing returned empty batch." - assert aug_wavs.dim() == 3, f"Returned wav should be [B, C, T] but dim = {aug_wavs.dim()}" - assert aug_wavs.shape[0] == len(aug_infos), "Mismatch between number of wavs and infos in the batch" - - return aug_wavs, aug_infos # [B, C, T] - else: - # randomly pick samples in the batch to match - # the batch size when performing audio mixing - B, C, T = wavs.shape - k = int(mix_p * B) - wav_idx = torch.randperm(B)[:k] - wavs = wavs[wav_idx] - infos = [infos[i] for i in wav_idx] - assert wavs.shape[0] == len(infos), "Mismatch between number of wavs and infos in the batch" - - return wavs, infos # [B, C, T] diff --git a/spaces/falterWliame/Face_Mask_Detection/PATCHED Adobe Acrobat XI Pro 11.0.30 Multilingual Crack [TOP].md b/spaces/falterWliame/Face_Mask_Detection/PATCHED Adobe Acrobat XI Pro 11.0.30 Multilingual Crack [TOP].md deleted file mode 100644 index 2432f7501e052a5ba4214ef51d72bd82d8009a98..0000000000000000000000000000000000000000 --- a/spaces/falterWliame/Face_Mask_Detection/PATCHED Adobe Acrobat XI Pro 11.0.30 Multilingual Crack [TOP].md +++ /dev/null @@ -1,6 +0,0 @@ -

          PATCHED Adobe Acrobat XI Pro 11.0.30 Multilingual Crack


          Download Zip 🗹 https://urlca.com/2uDbPm



          -
          -... 2.1.6 MAC OS (Patched) · Adobe Acrobat XI Pro 11.0.30 FINAL + Crack [TechTools] ... Hotspot Shield VPN Elite 9.03.1 Multilingual + Patch 1fdad05405
          -
          -
          -

          diff --git a/spaces/falterWliame/Face_Mask_Detection/Protesis Fija Contemporanea Rosenstiel Pdf BEST Downloadgolkes.md b/spaces/falterWliame/Face_Mask_Detection/Protesis Fija Contemporanea Rosenstiel Pdf BEST Downloadgolkes.md deleted file mode 100644 index cb8056feb429cb0ce0d7454a5af3d1e7481e045e..0000000000000000000000000000000000000000 --- a/spaces/falterWliame/Face_Mask_Detection/Protesis Fija Contemporanea Rosenstiel Pdf BEST Downloadgolkes.md +++ /dev/null @@ -1,38 +0,0 @@ -

          protesis fija contemporanea rosenstiel pdf downloadgolkes


          Download ✪✪✪ https://urlca.com/2uDdCP



          -
          -The method $getAttribute takes a boolean parameter, so if it is true, it will return a result. - -Here is an example. - -public function inversediff($commentator) { - - if ($commentator['type']) { - - $comment = Comment::where('status', Comment::STATUS_DRAFT)->where('id', $commentator['id'])->first(); - - if ($comment) { - - $field = Comment::getAttribute('field', false); - - $field = $this->field->getAttribute('field', false); - - if ($field == $comment->field) { - - $comment = Comment::where('id', $comment->id)->where('status', Comment::STATUS_APPROVED)->first(); - - if ($comment) { - - $diff = $comment->field->diff($comment->field, $comment->value, $comment->type); - - if ($diff) { - - $output = array_map(function ($value) { - - return array_filter(array_map(function ($field) - - return $field->getAttribute('link', false)? "getAttribute('link').">$value" : $value; - - , $diff)); 4fefd39f24
          -
          -
          -

          diff --git a/spaces/fatiXbelha/sd/Cmo disfrutar de Instagram estilo iPhone en tu Android con esta APK con emojis de iOS y fuentes variadas.md b/spaces/fatiXbelha/sd/Cmo disfrutar de Instagram estilo iPhone en tu Android con esta APK con emojis de iOS y fuentes variadas.md deleted file mode 100644 index 9d51d89b4f1cc10b30c91956c6fcd6d2e90ea759..0000000000000000000000000000000000000000 --- a/spaces/fatiXbelha/sd/Cmo disfrutar de Instagram estilo iPhone en tu Android con esta APK con emojis de iOS y fuentes variadas.md +++ /dev/null @@ -1,133 +0,0 @@ - -

          How to Get Instagram Style iPhone APK on Your Android Device

          -

          If you are an avid user of Instagram, you might have noticed that the app looks different on iOS devices than on Android devices. The iOS version of Instagram has some features and options that are not available on the Android version, such as the latest emojis, the ability to download content directly from the app, and more. If you want to enjoy these features on your Android device, you might be interested in downloading Instagram style iPhone APK.

          -

          Instagram style iPhone APK is a modified version of the original Instagram app that mimics the appearance and functionality of the iOS version. It allows you to use iOS emojis, download videos and images from Instagram, customize your profile and settings, and access hidden features and modes. In this article, we will show you how to download and install Instagram style iPhone APK safely, what are its features, how it compares to the original Instagram app, and some FAQs that you might have.

          -

          instagram estilo iphone apk


          Download File ››› https://urllie.com/2uNBvz



          -

          Features of Instagram Style iPhone APK

          -

          Instagram style iPhone APK has many features that make it stand out from the original Instagram app. Here are some of them:

          -

          iOS Emojis

          -

          One of the most noticeable features of Instagram style iPhone APK is that it allows you to use iOS emojis instead of Android emojis. iOS emojis are more expressive, diverse, and updated than Android emojis. They also look better on Instagram posts and stories. To use iOS emojis on Instagram style iPhone APK, you just need to tap on the emoji icon on the keyboard and select the emoji that you want. You can also see how they look like on this table:

          - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
          iOS EmojiAndroid EmojiDescription
          \uD83E\uDD73\uD83D\uDE0ESmiling face with sunglasses
          \uD83E\uDD70\uD83D\uDE0DSmiling face with heart-eyes
          \uD83E\uDD29\uD83D\uDE31Face screaming in fear
          \uD83E\uDD2A\uD83D\uDE2CFace with tongue
          \uD83E\uDD28\uD83D\uDE0BFace savoring food
          -

          Download Content

          -

          Another feature of Instagram style iPhone APK is that it allows you to download videos and images from Instagram directly from the app. This is very convenient if you want to save some content that you like or share it with others. To download content from Instagram style iPhone APK, you just need to tap on the three dots icon on the top right corner of the post or story and select the download option. You can also choose the quality and format of the content that you want to download. The downloaded content will be saved to your device's gallery or file manager.

          -

          Other Options

          -

          Instagram style iPhone APK also has some other options that let you customize your profile and settings, and access hidden features and modes. For example, you can change your profile picture, bio, name, and theme color. You can also enable or disable notifications, sounds, vibrations, and auto-play videos. You can also access some hidden features and modes, such as dark mode, story mode, reel mode, and live mode. To access these options, you just need to tap on the menu icon on the top left corner of the app and select the option that you want.

          -

          Comparison of Instagram Style iPhone APK and Original Instagram App

          -

          Instagram style iPhone APK is not the same as the original Instagram app. There are some pros and cons, similarities and differences between them. Here are some of them:

          -

          Pros and Cons

          -

          Instagram style iPhone APK has some advantages and disadvantages over the original Instagram app. Here are some of them:

          - - - - - - - - - -
          ProsCons
          - More features and options than the original Instagram app - More attractive and updated emojis - Ability to download content directly from the app - Ability to customize profile and settings - Access to hidden features and modes - Not an official app from Instagram - May not be compatible with some Android devices - May not be updated regularly - May pose some security risks - May violate Instagram's terms of service
          -

          Similarities and Differences

          -

          Instagram style iPhone APK and the original Instagram app have some similarities and differences. Here are some of them:

          -

          instagram estilo iphone con imojis 14.5 actualizado
          -instagram estilo iphone en android 2023 con nuevos emojis
          -instagram estilo iphone para android itodoplay
          -instagram estilo iphone con fuentes personalizadas
          -instagram estilo iphone modificado con funciones extras
          -instagram estilo iphone descargar gratis apk
          -instagram estilo iphone con emojis de ios 15
          -instagram estilo iphone sin root ni permisos
          -instagram estilo iphone ultima version 2023
          -instagram estilo iphone con modo oscuro
          -instagram estilo iphone con stickers animados
          -instagram estilo iphone con descarga de contenidos
          -instagram estilo iphone con efectos y filtros
          -instagram estilo iphone con historias personalizadas
          -instagram estilo iphone con transiciones y sonidos
          -instagram estilo iphone con chat privado y seguro
          -instagram estilo iphone con reacciones y encuestas
          -instagram estilo iphone con temas y colores
          -instagram estilo iphone con stickers de whatsapp
          -instagram estilo iphone con fondos de pantalla
          -instagram estilo iphone con edicion de fotos y videos
          -instagram estilo iphone con musica y letras
          -instagram estilo iphone con seguidores y likes
          -instagram estilo iphone con hashtags y etiquetas
          -instagram estilo iphone con comentarios y menciones
          -instagram estilo iphone con directos y reels
          -instagram estilo iphone con guias y colecciones
          -instagram estilo iphone con compras y negocios
          -instagram estilo iphone con explorar y buscar
          -instagram estilo iphone con notificaciones y ajustes
          -instagram estilo iphone con verificacion y seguridad
          -instagram estilo iphone con ayuda y soporte
          -instagram estilo iphone con sugerencias y recomendaciones
          -instagram estilo iphone con amigos y familiares
          -instagram estilo iphone con celebridades e influencers
          -instagram estilo iphone con deportes y entretenimiento
          -instagram estilo iphone con arte y cultura
          -instagram estilo iphone con moda y belleza
          -instagram estilo iphone con viajes y turismo
          -instagram estilo iphone con comida y salud
          -instagram estilo iphone con humor y diversion
          -instagram estilo iphone con educacion y ciencia
          -instagram estilo iphone con noticias y actualidad
          -instagram estilo iphone con mascotas y naturaleza
          -instagram estilo iphone con juegos y tecnologia
          -instagram estilo iphone con amor y relaciones
          -instagram estilo iphone con inspiracion y motivacion
          -instagram estilo iphone con creatividad e innovacion

          - - - - - - - - - -
          SimilaritiesDifferences
          - Both apps allow you to use Instagram's social media platform - Both apps have similar interfaces and layouts - Both apps allow you to post, like, comment, follow, unfollow, message, and explore content on Instagram - Both apps require an Instagram account to use them - Both apps are free to download and use - Instagram style iPhone APK mimics the iOS version of Instagram, while the original Instagram app is the Android version - Instagram style iPhone APK has more features and options than the original Instagram app - Instagram style iPhone APK uses iOS emojis, while the original Instagram app uses Android emojis - Instagram style iPhone APK allows you to download content directly from the app, while the original Instagram app does not - Instagram style iPhone APK is a modified version of the original Instagram app, while the original Instagram app is an official app from Instagram
          -

          Conclusion

          -

          In conclusion, Instagram style iPhone APK is a modified version of the original Instagram app that mimics the appearance and functionality of the iOS version. It allows you to use iOS emojis, download videos and images from Instagram, customize your profile and settings, and access hidden features and modes. However, it also has some drawbacks, such as compatibility issues, security risks, and possible violation of Instagram's terms of service. Therefore, you should use it at your own risk and discretion.

          -

          If you want to try out Instagram style iPhone APK on your Android device, you can download it from this link: [text]. Make sure that you enable unknown sources on your device's settings before installing it. Also, make sure that you backup your data and scan the file for viruses before using it. We hope that this article has helped you understand what is Instagram style iPhone APK and how to use it safely. If you have any questions or feedback, please let us know in the comments below.

          -

          FAQs

          -

          Is Instagram style iPhone APK legal and safe?

          -

          Instagram style iPhone APK is not a legal or official app from Instagram. It is a modified version of the original Instagram app that may violate Instagram's terms of service. Therefore, using it may result in your account being banned or suspended by Instagram. Moreover, Instagram style iPhone APK may not be safe to use, as it may contain malware, spyware, or adware that may harm your device or compromise your privacy. Therefore, you should use it at your own risk and discretion, and only download it from trusted sources.

          -

          Will I lose my Instagram account or data if I use Instagram style iPhone APK?

          -

          There is a possibility that you may lose your Instagram account or data if you use Instagram style iPhone APK. This is because Instagram style iPhone APK may not be compatible with the latest updates or features of the original Instagram app, and may cause errors, crashes, or glitches on your device. Moreover, Instagram style iPhone APK may not be secure or encrypted, and may expose your account or data to hackers, scammers, or third parties. Therefore, you should backup your data and use a secondary account if you want to use Instagram style iPhone APK.

          -

          Can I update Instagram style iPhone APK regularly?

          -

          Instagram style iPhone APK may not be updated regularly, as it depends on the developers who created it. Therefore, you may not be able to enjoy the latest features or improvements of the original Instagram app if you use Instagram style iPhone APK. Moreover, updating Instagram style iPhone APK may not be easy or automatic, as you may need to uninstall the previous version and install the new version manually. Therefore, you should check the source of the app for any updates or news before updating it.

          -

          Does Instagram style iPhone APK work on all Android devices?

          -

          Instagram style iPhone APK may not work on all Android devices, as it may require some specific specifications or settings to run properly. For example, you may need to have a certain Android version, RAM size, storage space, or screen resolution to use Instagram style iPhone APK. Moreover, some Android devices may not support the iOS emojis or features that Instagram style iPhone APK offers. Therefore, you should check the compatibility of your device before downloading and installing Instagram style iPhone APK.

          -

          Where can I find more information or support for Instagram style iPhone APK?

          -

          If you need more information or support for Instagram style iPhone APK, you can visit the website or social media pages of the developers who created it. You can also search for online forums, blogs, reviews, or videos that discuss or demonstrate how to use Instagram style iPhone APK. However, you should be careful and cautious when looking for information or support for Instagram style iPhone APK, as some sources may be unreliable, outdated, or misleading. You should also avoid clicking on any suspicious links or downloading any unknown files that may harm your device or compromise your privacy.

          401be4b1e0
          -
          -
          \ No newline at end of file diff --git a/spaces/fatiXbelha/sd/Download Never Have I Ever Party Games APK for Android.md b/spaces/fatiXbelha/sd/Download Never Have I Ever Party Games APK for Android.md deleted file mode 100644 index 4fd5b5ba5da1ee1f47b509db659c0b823ba9b8d2..0000000000000000000000000000000000000000 --- a/spaces/fatiXbelha/sd/Download Never Have I Ever Party Games APK for Android.md +++ /dev/null @@ -1,93 +0,0 @@ - -

          Never Have I Ever Game APK: The Ultimate Party Game for Android

          -

          Are you looking for a fun and exciting way to spice up your parties? Do you want to get to know your friends better and reveal their secrets? Do you enjoy laughing at yourself and others? If you answered yes to any of these questions, then you should try Never Have I Ever Game APK, the official app of the popular party game that will turn strangers into best friends within minutes!

          -

          never have i ever game apk


          Download ❤❤❤ https://urllie.com/2uNvaQ



          -

          What is Never Have I Ever Game APK?

          -

          A fun and embarrassing party game for everyone

          -

          Never Have I Ever Game APK is a mobile app that lets you play the classic party game of Never Have I Ever on your Android device. The game is simple: you read a statement that starts with "Never have I ever..." and then you answer honestly if you have done it or not. For example, "Never have I ever kissed someone on the first date". If you have done it, you say "I have" and if you haven't, you say "I have not". You can also use your fingers or drinks to keep score.

          -

          How to play Never Have I Ever Game APK?

          -

          Choose a category and a mode

          -

          The app has hundreds of questions divided into different categories, such as Funny, Dirty, Love, School, Travel, Party, etc. You can choose any category you like or mix them up for more variety. You can also choose between two modes: Normal or Extreme. Normal mode has more general and mild questions, while Extreme mode has more daring and naughty questions.

          -

          Read the statements aloud and answer honestly

          -

          Once you have chosen a category and a mode, you can start playing with your friends. You can take turns reading the statements aloud or let the app read them for you. You have to answer honestly and reveal if you have done the thing or not. You can also ask for more details or share your stories if you want to make it more interesting.

          -

          Keep track of your score and see who wins

          -

          You can use your fingers or drinks to keep track of your score. Every time you say "I have", you lose a finger or take a sip of your drink. The game ends when someone runs out of fingers or drinks, or when you decide to stop. The person with the most fingers or drinks left is the winner. You can also play without keeping score and just have fun.

          -

          Why download Never Have I Ever Game APK?

          -

          It's free and easy to use

          -

          Never Have I Ever Game APK is a free app that you can download from the Google Play Store or from this link. It has a simple and user-friendly interface that lets you choose your settings and start playing in seconds. You don't need any special equipment or cards, just your phone and your friends.

          -

          It has hundreds of hilarious and naughty questions

          -

          The app has over 500 questions that will make you laugh, blush, and cringe. You will discover things about your friends that you never knew before, and they will discover things about you too. You will also learn new facts and trivia that will surprise you. The questions are updated regularly, so you will never run out of things to say.

          -

          It can turn strangers into best friends within minutes

          -

          Never Have I Ever Game APK is a great icebreaker that can help you get to know new people and make new friends. It can also strengthen your bond with your existing friends and spice up your relationships. The game will make you feel more comfortable and confident with each other, as you share your secrets and experiences. You will also have a lot of fun and laughter along the way.

          -

          How to download Never Have I Ever Game APK?

          -

          Follow these simple steps to get the app on your device

          -

          Step 1: Go to the Google Play Store or click on this link

          -

          The first step is to go to the Google Play Store on your Android device or click on this link that will take you directly to the app page. You will see the app icon, name, rating, and description.

          -

          Step 2: Tap on the Install button and wait for the download to finish

          -

          The next step is to tap on the green Install button that will start the download process. You will see a progress bar that shows how much time is left until the download is complete. The app size is about 20 MB, so it won't take long.

          -

          never have i ever game app download
          -never have i ever game android apk
          -never have i ever game apk mod
          -never have i ever game apk offline
          -never have i ever game apk latest version
          -never have i ever game apk for pc
          -never have i ever game apk free
          -never have i ever game apk online
          -never have i ever game apk pure
          -never have i ever game apk hack
          -never have i ever game apk full
          -never have i ever game apk premium
          -never have i ever game apk no ads
          -never have i ever game apk unlimited
          -never have i ever game apk pro
          -never have i ever game apk cracked
          -never have i ever game apk mirror
          -never have i ever game apk update
          -never have i ever game apk old version
          -never have i ever game apk 2023
          -never have i ever game apk fun
          -never have i ever game apk dirty
          -never have i ever game apk adult
          -never have i ever game apk kids
          -never have i ever game apk family
          -never have i ever game apk couples
          -never have i ever game apk friends
          -never have i ever game apk party
          -never have i ever game apk trivia
          -never have i ever game apk questions
          -never have i ever game apk answers
          -never have i ever game apk challenges
          -never have i ever game apk dares
          -never have i ever game apk secrets
          -never have i ever game apk stories
          -never have i ever game apk funny
          -never have i ever game apk hilarious
          -never have i ever game apk naughty
          -never have i ever game apk spicy
          -never have i ever game apk hot

          -

          Step 3: Open the app and start playing with your friends

          -

          The final step is to open the app and start playing with your friends. You will see a welcome screen that gives you some instructions and tips on how to play. You can also access the settings menu where you can change the language, sound, voice, and other options. Then, you can choose a category and a mode, and start reading the statements aloud. Have fun!

          -

          Conclusion

          -

          Never Have I Ever Game APK is the ultimate party game for Android that will make you laugh, blush, and cringe with your friends. It's a fun and easy way to get to know each other better and reveal your secrets. It's also free and easy to download from the Google Play Store or from this link. So what are you waiting for? Download Never Have I Ever Game APK today and start playing with your friends!

          -

          FAQs

          -

          Here are some frequently asked questions about Never Have I Ever Game APK:

          -
            -
          • Q: How many people can play Never Have I Ever Game APK?
          • -
          • A: There is no limit to how many people can play Never Have I Ever Game APK, as long as everyone can hear the statements and answer them. The more people, the more fun!
          • -
          • Q: Can I play Never Have I Ever Game APK online with other people?
          • -
          • A: Yes, you can play Never Have I Ever Game APK online with other people using video chat apps like Zoom, Skype, or Google Meet. Just share your screen with them and let them see the statements and answer them.
          • -
          • Q: Can I create my own questions for Never Have I Ever Game APK?
          • -
          • A: Yes, you can create your own questions for Never Have I Ever Game APK by using the Custom Mode. You can type in any statement you want and add it to the game. You can also edit or delete your custom questions anytime.
          • -
          • Q: Is Never Have I Ever Game APK safe and secure?
          • -
          • A: Yes, Never Have I Ever Game APK is safe and secure to use. It does not collect any personal information or data from you or your device. It also does not contain any viruses, malware, or spyware that could harm your device.
          • -
          • Q: What are some tips and tricks for playing Never Have I Ever Game APK?
          • -
          • A: Here are some tips and tricks for playing Never Have I Ever Game APK:
          • -
              -
            • Be honest and don't lie about your answers. The game is more fun when everyone is truthful and authentic.
            • -
            • Be respectful and don't judge others for their answers. The game is meant to be a safe and friendly space where everyone can share their experiences without fear or shame.
            • -
            • Be creative and don't be afraid to ask for more details or stories. The game is a great opportunity to learn more about your friends and their lives.
            • -
            • Be adventurous and don't be shy to try new things. The game is a chance to challenge yourself and explore new possibilities.
            • -
            -

          197e85843d
          -
          -
          \ No newline at end of file diff --git a/spaces/feregVcuzo/sanity-test-midi/Japamala Prayer Malayalam.pdf [BETTER].md b/spaces/feregVcuzo/sanity-test-midi/Japamala Prayer Malayalam.pdf [BETTER].md deleted file mode 100644 index 1cf9f0e28f2617205d1c09c2ce25cdc1b915e6db..0000000000000000000000000000000000000000 --- a/spaces/feregVcuzo/sanity-test-midi/Japamala Prayer Malayalam.pdf [BETTER].md +++ /dev/null @@ -1,70 +0,0 @@ -## Japamala Prayer Malayalam.pdf - - - - - - - - - -**Download >>>>> [https://amniacagou.blogspot.com/?c=2txvJ0](https://amniacagou.blogspot.com/?c=2txvJ0)** - - - - - - - - - - - - ``` - -# How to Pray the Japamala in Malayalam - - - -The Japamala is a form of rosary prayer that is popular among the Syro-Malabar Catholics of Kerala, India. The word Japamala means "garland of prayers" in Sanskrit, and it consists of 53 beads that represent the Hail Marys, and 6 additional beads that represent the Our Fathers. The Japamala is divided into four sections, each corresponding to a different mystery of the life of Jesus Christ: Joyful, Sorrowful, Glorious, and Luminous. The Japamala is usually prayed in Malayalam, the native language of Kerala, but it can also be prayed in other languages such as English or Latin. - - - -To pray the Japamala in Malayalam, you will need a Japamala bead necklace, a Bible or a prayer book, and a quiet place to meditate. You can follow these steps: - - - -1. Begin by making the sign of the cross and saying: "Njangalude pithave swargasthanaaya naamathil (+) pithaavum puthranum parisudhaathmavumaya eka daivathil njangalude vishwasathode koodi cheyunna prarthana." This means: "In the name of the Father (+) and of the Son and of the Holy Spirit, we join together in prayer with faith in the one God." - -2. On the first bead after the cross, say the Apostles' Creed in Malayalam: "Njan vishwasikkunnu daivaputhranaya eesho kristhuvinu njangalude rakshakanu ennu. Avan daivathinte eka puthranu. Avante pithavu avane garbhamadakkunnu parisudhaathmavine kondu. Avan daivakanyaka maariyayil ninnu pirannu. Avan ponchus pilaaatussinte adiyil dukhappedunnu. Kurishil marichu. Poomarichu. Muppathaam divasathe naale avan uyirthezhunnettu. Swargathil poyi avante valathu bhagathil daivathinte singhasanathil irikkunnu. Avan vannu bhoomiyilekku marikkappedunnathinu sesham vazhchakku marikkunnathinu vicharippikkunnu ennu njan vishwasikkunnu." - -3. On the next three beads, say one Our Father, one Hail Mary, and one Glory Be in Malayalam: "Njangalude pithaave swargasthithane, angayude naamam poornamaakatte. Angayude rajyam varatte. Angayude thirumanasu swargathile pole bhoomiyilum aakatte. Njangalkku iniyaappam tharunna pole njangal nalkiya iniyaappam njangalude iniyaappakkaranmarkkum tharname. Njangalude paapangale kshamikkaname njangal paapikkarodu cheythirikuna pole. Njangale pralobhanthil ninnum rakshikkaname doshavum papavum ninnum vimuktharaka name. Amen." This means: "Our Father who art in heaven, hallowed be thy name. Thy kingdom come. Thy will be done on earth as it is in heaven. Give us this day our daily bread and forgive us our trespasses as we forgive those who trespass against us. And lead us not into temptation but deliver us from evil and sin. Amen." - -4. "Njangalude daivamaathaave mariyame daivaputhra eeshoye garbhamadakkunnu parisudha kanaya nee ange anugrahathaal njangalkku abhishtappedunnavaril orupadu abhishtappedunnavare aanu nee ange anugrahathaal njangalkku marikkunnathinum marikkappedunnathinum ange sahaayam cheyyaname amen." This means: "Hail - -``` - -5. "Mahimayum sthuthiyum balavaanaya daivathinu njangalude pithaavinte puthranodu parisudhaathmavodu koode eppozhum nithyavum amen." This means: "Glory be to the Father and to the Son and to the Holy Spirit as it was in the beginning is now and ever shall be world without end amen." - -6. On the next bead, announce the first mystery of the section you are praying. For example, if you are praying the Joyful Mysteries on Monday or Saturday, you can say: "Aadyamaya santhosharahasyam: parisudha kanyaka mariyam daivaduthanaya gabriyele avarodu sambhaashikkunnu ennu." This means: "The first Joyful Mystery: The Annunciation of the angel Gabriel to the Virgin Mary." - -7. On the next ten beads, say ten Hail Marys in Malayalam, meditating on the mystery. - -8. On the next bead, say one Glory Be in Malayalam, and optionally, one Fatima Prayer: "Ente eeshoye njangalude paapangale kshamikkaname njangalude sahodharanmarude paapangale kshamikkaname njangalude marikkunnathinum marikkappedunnathinum ange anugraham cheyyaname amen." This means: "O my Jesus forgive us our sins save us from the fires of hell lead all souls to heaven especially those who have most need of thy mercy amen." - -9. Repeat steps 6 to 9 for the remaining four mysteries of the section you are praying. - -10. At the end of the Japamala, say one Hail Holy Queen in Malayalam: "Njangalude rakshakiya daivamaathaave mariyame nee ange anugrahathaal swarggathile pole bhoomiyilum sthuthyappedunnavare aanu nee ange anugrahathaal njangalkku abhishtappedunnavaril orupadu abhishtappedunnavare aanu nee ange anugrahathaal njangalkku marikkunnathinum marikkappedunnathinum ange sahaayam cheyyaname nee ange anugrahathaal njangalkku daivaputhranodu koode jeevikkunna daivathinte pithaavine kandu njangale sthuthi cheyyan sahaayikkaname amen." This means: "Hail holy queen mother of mercy our life our sweetness and our hope to thee do we cry poor banished children of Eve to thee do we send up our sighs mourning and weeping in this valley of tears turn then most gracious advocate thine eyes of mercy toward us and after this our exile show unto us the blessed fruit of thy womb Jesus O clement O loving O sweet virgin Mary pray for us O holy mother of God that we may be made worthy of the promises of Christ amen." - -11. Conclude by making the sign of the cross and saying: "Njangalude pithave swargasthanaaya naamathil (+) pithaavum puthranum parisudhaathmavumaya eka daivathil njangalude vishwasathode koodi cheyunna prarthana." This means: "In the name of the Father (+) and of the Son and of the Holy Spirit, we join together in prayer with faith in the one God." - - - -You have now completed praying the Japamala in Malayalam. You can offer your prayers for your personal intentions, for your family and friends, for the Church and the world, and for the souls in purgatory. You can also thank God for his blessings and ask for his guidance and protection. May God bless you and Mary keep you always. - - ``` dfd1c89656 - - - - - diff --git a/spaces/fffiloni/controlnet-animation-doodle/node_modules/debug/karma.conf.js b/spaces/fffiloni/controlnet-animation-doodle/node_modules/debug/karma.conf.js deleted file mode 100644 index 103a82d15bd72b3cdf9ba4108272985f7e0bfdb3..0000000000000000000000000000000000000000 --- a/spaces/fffiloni/controlnet-animation-doodle/node_modules/debug/karma.conf.js +++ /dev/null @@ -1,70 +0,0 @@ -// Karma configuration -// Generated on Fri Dec 16 2016 13:09:51 GMT+0000 (UTC) - -module.exports = function(config) { - config.set({ - - // base path that will be used to resolve all patterns (eg. files, exclude) - basePath: '', - - - // frameworks to use - // available frameworks: https://npmjs.org/browse/keyword/karma-adapter - frameworks: ['mocha', 'chai', 'sinon'], - - - // list of files / patterns to load in the browser - files: [ - 'dist/debug.js', - 'test/*spec.js' - ], - - - // list of files to exclude - exclude: [ - 'src/node.js' - ], - - - // preprocess matching files before serving them to the browser - // available preprocessors: https://npmjs.org/browse/keyword/karma-preprocessor - preprocessors: { - }, - - // test results reporter to use - // possible values: 'dots', 'progress' - // available reporters: https://npmjs.org/browse/keyword/karma-reporter - reporters: ['progress'], - - - // web server port - port: 9876, - - - // enable / disable colors in the output (reporters and logs) - colors: true, - - - // level of logging - // possible values: config.LOG_DISABLE || config.LOG_ERROR || config.LOG_WARN || config.LOG_INFO || config.LOG_DEBUG - logLevel: config.LOG_INFO, - - - // enable / disable watching file and executing tests whenever any file changes - autoWatch: true, - - - // start these browsers - // available browser launchers: https://npmjs.org/browse/keyword/karma-launcher - browsers: ['PhantomJS'], - - - // Continuous Integration mode - // if true, Karma captures browsers, runs the tests and exits - singleRun: false, - - // Concurrency level - // how many browser should be started simultaneous - concurrency: Infinity - }) -} diff --git a/spaces/fffiloni/controlnet-animation-doodle/node_modules/socket.io-parser/node_modules/ms/readme.md b/spaces/fffiloni/controlnet-animation-doodle/node_modules/socket.io-parser/node_modules/ms/readme.md deleted file mode 100644 index 9a1996b17e0de6854dd1cf10c5f2ee642e494085..0000000000000000000000000000000000000000 --- a/spaces/fffiloni/controlnet-animation-doodle/node_modules/socket.io-parser/node_modules/ms/readme.md +++ /dev/null @@ -1,60 +0,0 @@ -# ms - -[![Build Status](https://travis-ci.org/zeit/ms.svg?branch=master)](https://travis-ci.org/zeit/ms) -[![Join the community on Spectrum](https://withspectrum.github.io/badge/badge.svg)](https://spectrum.chat/zeit) - -Use this package to easily convert various time formats to milliseconds. - -## Examples - -```js -ms('2 days') // 172800000 -ms('1d') // 86400000 -ms('10h') // 36000000 -ms('2.5 hrs') // 9000000 -ms('2h') // 7200000 -ms('1m') // 60000 -ms('5s') // 5000 -ms('1y') // 31557600000 -ms('100') // 100 -ms('-3 days') // -259200000 -ms('-1h') // -3600000 -ms('-200') // -200 -``` - -### Convert from Milliseconds - -```js -ms(60000) // "1m" -ms(2 * 60000) // "2m" -ms(-3 * 60000) // "-3m" -ms(ms('10 hours')) // "10h" -``` - -### Time Format Written-Out - -```js -ms(60000, { long: true }) // "1 minute" -ms(2 * 60000, { long: true }) // "2 minutes" -ms(-3 * 60000, { long: true }) // "-3 minutes" -ms(ms('10 hours'), { long: true }) // "10 hours" -``` - -## Features - -- Works both in [Node.js](https://nodejs.org) and in the browser -- If a number is supplied to `ms`, a string with a unit is returned -- If a string that contains the number is supplied, it returns it as a number (e.g.: it returns `100` for `'100'`) -- If you pass a string with a number and a valid unit, the number of equivalent milliseconds is returned - -## Related Packages - -- [ms.macro](https://github.com/knpwrs/ms.macro) - Run `ms` as a macro at build-time. - -## Caught a Bug? - -1. [Fork](https://help.github.com/articles/fork-a-repo/) this repository to your own GitHub account and then [clone](https://help.github.com/articles/cloning-a-repository/) it to your local device -2. Link the package to the global module directory: `npm link` -3. Within the module you want to test your local development instance of ms, just link it to the dependencies: `npm link ms`. Instead of the default one from npm, Node.js will now use your clone of ms! - -As always, you can run the tests using: `npm test` diff --git a/spaces/firdavsyorkulov/delivery_project_fastapi/models.py b/spaces/firdavsyorkulov/delivery_project_fastapi/models.py deleted file mode 100644 index c8e7d9a1168d30479c437089a77f0cd74288c802..0000000000000000000000000000000000000000 --- a/spaces/firdavsyorkulov/delivery_project_fastapi/models.py +++ /dev/null @@ -1,48 +0,0 @@ -from database import Base -from sqlalchemy import Column, Integer, Boolean, Text, ForeignKey, String -from sqlalchemy.orm import relationship -from sqlalchemy_utils.types import ChoiceType - - -class User(Base): - __tablename__ = "user" - id = Column(Integer, primary_key=True) - username = Column(String(25), unique=True) - email = Column(String(50), unique=True) - password = Column(Text, nullable=True) - is_staff = Column(Boolean, default=False) - is_active = Column(Boolean, default=False) - orders = relationship("Order", back_populates="user") - - def __repr__(self): - return f"Click Me: " - + radio - + "", # HTML - os.path.join(os.path.dirname(__file__), "files/titanic.csv"), - df1, # Dataframe - np.random.randint(0, 10, (4, 4)), # Dataframe - df2, # Timeseries - ) - - -demo = gr.Interface( - fn, - inputs=[ - gr.Textbox(value="Lorem ipsum", label="Textbox"), - gr.Textbox(lines=3, placeholder="Type here..", label="Textbox 2"), - gr.Number(label="Number", value=42), - gr.Slider(10, 20, value=15, label="Slider: 10 - 20"), - gr.Slider(maximum=20, step=0.04, label="Slider: step @ 0.04"), - gr.Checkbox(label="Checkbox"), - gr.CheckboxGroup( - label="CheckboxGroup", choices=CHOICES, value=CHOICES[0:2] - ), - gr.Radio(label="Radio", choices=CHOICES, value=CHOICES[2]), - gr.Dropdown(label="Dropdown", choices=CHOICES), - gr.Image(label="Image"), - gr.Image(label="Image w/ Cropper", tool="select"), - gr.Image(label="Sketchpad", source="canvas"), - gr.Image(label="Webcam", source="webcam"), - gr.Video(label="Video"), - gr.Audio(label="Audio"), - gr.Audio(label="Microphone", source="microphone"), - gr.File(label="File"), - gr.Dataframe(label="Dataframe", headers=["Name", "Age", "Gender"]), - gr.Timeseries(x="time", y=["price", "value"], colors=["pink", "purple"]), - ], - outputs=[ - gr.Textbox(label="Textbox"), - gr.Label(label="Label"), - gr.Audio(label="Audio"), - gr.Image(label="Image"), - gr.Video(label="Video"), - gr.HighlightedText( - label="HighlightedText", color_map={"punc": "pink", "test 0": "blue"} - ), - gr.HighlightedText(label="HighlightedText", show_legend=True), - gr.JSON(label="JSON"), - gr.HTML(label="HTML"), - gr.File(label="File"), - gr.Dataframe(label="Dataframe"), - gr.Dataframe(label="Numpy"), - gr.Timeseries(x="time", y=["price", "value"], label="Timeseries"), - ], - examples=[ - [ - "the quick brown fox", - "jumps over the lazy dog", - 10, - 12, - 4, - True, - ["foo", "baz"], - "baz", - "bar", - os.path.join(os.path.dirname(__file__), "files/cheetah1.jpg"), - os.path.join(os.path.dirname(__file__), "files/cheetah1.jpg"), - os.path.join(os.path.dirname(__file__), "files/cheetah1.jpg"), - os.path.join(os.path.dirname(__file__), "files/cheetah1.jpg"), - os.path.join(os.path.dirname(__file__), "files/world.mp4"), - os.path.join(os.path.dirname(__file__), "files/cantina.wav"), - os.path.join(os.path.dirname(__file__), "files/cantina.wav"), - os.path.join(os.path.dirname(__file__), "files/titanic.csv"), - [[1, 2, 3], [3, 4, 5]], - os.path.join(os.path.dirname(__file__), "files/time.csv"), - ] - ] - * 3, - theme="default", - title="Kitchen Sink", - cache_examples=False, - description="Try out all the components!", - article="Learn more about [Gradio](http://gradio.app)", -) - -if __name__ == "__main__": - demo.launch() diff --git a/spaces/fuckyoudeki/AutoGPT/.github/PULL_REQUEST_TEMPLATE.md b/spaces/fuckyoudeki/AutoGPT/.github/PULL_REQUEST_TEMPLATE.md deleted file mode 100644 index a4f28a3d27d66d79cb95f2b8b847832172bb5f11..0000000000000000000000000000000000000000 --- a/spaces/fuckyoudeki/AutoGPT/.github/PULL_REQUEST_TEMPLATE.md +++ /dev/null @@ -1,40 +0,0 @@ - - - - -### Background - - -### Changes - - -### Documentation - - -### Test Plan - - -### PR Quality Checklist -- [ ] My pull request is atomic and focuses on a single change. -- [ ] I have thoroughly tested my changes with multiple different prompts. -- [ ] I have considered potential risks and mitigations for my changes. -- [ ] I have documented my changes clearly and comprehensively. -- [ ] I have not snuck in any "extra" small tweaks changes - - - - diff --git a/spaces/georgefen/Face-Landmark-ControlNet/annotator/uniformer/configs/_base_/models/ocrnet_hr18.py b/spaces/georgefen/Face-Landmark-ControlNet/annotator/uniformer/configs/_base_/models/ocrnet_hr18.py deleted file mode 100644 index c60f62a7cdf3f5c5096a7a7e725e8268fddcb057..0000000000000000000000000000000000000000 --- a/spaces/georgefen/Face-Landmark-ControlNet/annotator/uniformer/configs/_base_/models/ocrnet_hr18.py +++ /dev/null @@ -1,68 +0,0 @@ -# model settings -norm_cfg = dict(type='SyncBN', requires_grad=True) -model = dict( - type='CascadeEncoderDecoder', - num_stages=2, - pretrained='open-mmlab://msra/hrnetv2_w18', - backbone=dict( - type='HRNet', - norm_cfg=norm_cfg, - norm_eval=False, - extra=dict( - stage1=dict( - num_modules=1, - num_branches=1, - block='BOTTLENECK', - num_blocks=(4, ), - num_channels=(64, )), - stage2=dict( - num_modules=1, - num_branches=2, - block='BASIC', - num_blocks=(4, 4), - num_channels=(18, 36)), - stage3=dict( - num_modules=4, - num_branches=3, - block='BASIC', - num_blocks=(4, 4, 4), - num_channels=(18, 36, 72)), - stage4=dict( - num_modules=3, - num_branches=4, - block='BASIC', - num_blocks=(4, 4, 4, 4), - num_channels=(18, 36, 72, 144)))), - decode_head=[ - dict( - type='FCNHead', - in_channels=[18, 36, 72, 144], - channels=sum([18, 36, 72, 144]), - in_index=(0, 1, 2, 3), - input_transform='resize_concat', - kernel_size=1, - num_convs=1, - concat_input=False, - dropout_ratio=-1, - num_classes=19, - norm_cfg=norm_cfg, - align_corners=False, - loss_decode=dict( - type='CrossEntropyLoss', use_sigmoid=False, loss_weight=0.4)), - dict( - type='OCRHead', - in_channels=[18, 36, 72, 144], - in_index=(0, 1, 2, 3), - input_transform='resize_concat', - channels=512, - ocr_channels=256, - dropout_ratio=-1, - num_classes=19, - norm_cfg=norm_cfg, - align_corners=False, - loss_decode=dict( - type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0)), - ], - # model training and testing settings - train_cfg=dict(), - test_cfg=dict(mode='whole')) diff --git a/spaces/gradio/theme_builder_main/README.md b/spaces/gradio/theme_builder_main/README.md deleted file mode 100644 index 6d0a516afce304376383d8e01faaa3bb93bde1eb..0000000000000000000000000000000000000000 --- a/spaces/gradio/theme_builder_main/README.md +++ /dev/null @@ -1,12 +0,0 @@ - ---- -title: theme_builder_main -emoji: 🔥 -colorFrom: indigo -colorTo: indigo -sdk: gradio -sdk_version: 4.1.2 -app_file: run.py -pinned: false -hf_oauth: true ---- diff --git a/spaces/gulabpatel/Real-ESRGAN/scripts/extract_subimages.py b/spaces/gulabpatel/Real-ESRGAN/scripts/extract_subimages.py deleted file mode 100644 index 9b969ae0d4adff403f2ad362b9afaaaee58e2cef..0000000000000000000000000000000000000000 --- a/spaces/gulabpatel/Real-ESRGAN/scripts/extract_subimages.py +++ /dev/null @@ -1,135 +0,0 @@ -import argparse -import cv2 -import numpy as np -import os -import sys -from basicsr.utils import scandir -from multiprocessing import Pool -from os import path as osp -from tqdm import tqdm - - -def main(args): - """A multi-thread tool to crop large images to sub-images for faster IO. - - opt (dict): Configuration dict. It contains: - n_thread (int): Thread number. - compression_level (int): CV_IMWRITE_PNG_COMPRESSION from 0 to 9. A higher value means a smaller size - and longer compression time. Use 0 for faster CPU decompression. Default: 3, same in cv2. - input_folder (str): Path to the input folder. - save_folder (str): Path to save folder. - crop_size (int): Crop size. - step (int): Step for overlapped sliding window. - thresh_size (int): Threshold size. Patches whose size is lower than thresh_size will be dropped. - - Usage: - For each folder, run this script. - Typically, there are GT folder and LQ folder to be processed for DIV2K dataset. - After process, each sub_folder should have the same number of subimages. - Remember to modify opt configurations according to your settings. - """ - - opt = {} - opt['n_thread'] = args.n_thread - opt['compression_level'] = args.compression_level - opt['input_folder'] = args.input - opt['save_folder'] = args.output - opt['crop_size'] = args.crop_size - opt['step'] = args.step - opt['thresh_size'] = args.thresh_size - extract_subimages(opt) - - -def extract_subimages(opt): - """Crop images to subimages. - - Args: - opt (dict): Configuration dict. It contains: - input_folder (str): Path to the input folder. - save_folder (str): Path to save folder. - n_thread (int): Thread number. - """ - input_folder = opt['input_folder'] - save_folder = opt['save_folder'] - if not osp.exists(save_folder): - os.makedirs(save_folder) - print(f'mkdir {save_folder} ...') - else: - print(f'Folder {save_folder} already exists. Exit.') - sys.exit(1) - - # scan all images - img_list = list(scandir(input_folder, full_path=True)) - - pbar = tqdm(total=len(img_list), unit='image', desc='Extract') - pool = Pool(opt['n_thread']) - for path in img_list: - pool.apply_async(worker, args=(path, opt), callback=lambda arg: pbar.update(1)) - pool.close() - pool.join() - pbar.close() - print('All processes done.') - - -def worker(path, opt): - """Worker for each process. - - Args: - path (str): Image path. - opt (dict): Configuration dict. It contains: - crop_size (int): Crop size. - step (int): Step for overlapped sliding window. - thresh_size (int): Threshold size. Patches whose size is lower than thresh_size will be dropped. - save_folder (str): Path to save folder. - compression_level (int): for cv2.IMWRITE_PNG_COMPRESSION. - - Returns: - process_info (str): Process information displayed in progress bar. - """ - crop_size = opt['crop_size'] - step = opt['step'] - thresh_size = opt['thresh_size'] - img_name, extension = osp.splitext(osp.basename(path)) - - # remove the x2, x3, x4 and x8 in the filename for DIV2K - img_name = img_name.replace('x2', '').replace('x3', '').replace('x4', '').replace('x8', '') - - img = cv2.imread(path, cv2.IMREAD_UNCHANGED) - - h, w = img.shape[0:2] - h_space = np.arange(0, h - crop_size + 1, step) - if h - (h_space[-1] + crop_size) > thresh_size: - h_space = np.append(h_space, h - crop_size) - w_space = np.arange(0, w - crop_size + 1, step) - if w - (w_space[-1] + crop_size) > thresh_size: - w_space = np.append(w_space, w - crop_size) - - index = 0 - for x in h_space: - for y in w_space: - index += 1 - cropped_img = img[x:x + crop_size, y:y + crop_size, ...] - cropped_img = np.ascontiguousarray(cropped_img) - cv2.imwrite( - osp.join(opt['save_folder'], f'{img_name}_s{index:03d}{extension}'), cropped_img, - [cv2.IMWRITE_PNG_COMPRESSION, opt['compression_level']]) - process_info = f'Processing {img_name} ...' - return process_info - - -if __name__ == '__main__': - parser = argparse.ArgumentParser() - parser.add_argument('--input', type=str, default='datasets/DF2K/DF2K_HR', help='Input folder') - parser.add_argument('--output', type=str, default='datasets/DF2K/DF2K_HR_sub', help='Output folder') - parser.add_argument('--crop_size', type=int, default=480, help='Crop size') - parser.add_argument('--step', type=int, default=240, help='Step for overlapped sliding window') - parser.add_argument( - '--thresh_size', - type=int, - default=0, - help='Threshold size. Patches whose size is lower than thresh_size will be dropped.') - parser.add_argument('--n_thread', type=int, default=20, help='Thread number.') - parser.add_argument('--compression_level', type=int, default=3, help='Compression level') - args = parser.parse_args() - - main(args) diff --git a/spaces/gwang-kim/DATID-3D/pose_estimation/models/arcface_torch/eval/__init__.py b/spaces/gwang-kim/DATID-3D/pose_estimation/models/arcface_torch/eval/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/gylleus/icongen/torch_utils/ops/upfirdn2d.cpp b/spaces/gylleus/icongen/torch_utils/ops/upfirdn2d.cpp deleted file mode 100644 index 2d7177fc60040751d20e9a8da0301fa3ab64968a..0000000000000000000000000000000000000000 --- a/spaces/gylleus/icongen/torch_utils/ops/upfirdn2d.cpp +++ /dev/null @@ -1,103 +0,0 @@ -// Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved. -// -// NVIDIA CORPORATION and its licensors retain all intellectual property -// and proprietary rights in and to this software, related documentation -// and any modifications thereto. Any use, reproduction, disclosure or -// distribution of this software and related documentation without an express -// license agreement from NVIDIA CORPORATION is strictly prohibited. - -#include -#include -#include -#include "upfirdn2d.h" - -//------------------------------------------------------------------------ - -static torch::Tensor upfirdn2d(torch::Tensor x, torch::Tensor f, int upx, int upy, int downx, int downy, int padx0, int padx1, int pady0, int pady1, bool flip, float gain) -{ - // Validate arguments. - TORCH_CHECK(x.is_cuda(), "x must reside on CUDA device"); - TORCH_CHECK(f.device() == x.device(), "f must reside on the same device as x"); - TORCH_CHECK(f.dtype() == torch::kFloat, "f must be float32"); - TORCH_CHECK(x.numel() <= INT_MAX, "x is too large"); - TORCH_CHECK(f.numel() <= INT_MAX, "f is too large"); - TORCH_CHECK(x.dim() == 4, "x must be rank 4"); - TORCH_CHECK(f.dim() == 2, "f must be rank 2"); - TORCH_CHECK(f.size(0) >= 1 && f.size(1) >= 1, "f must be at least 1x1"); - TORCH_CHECK(upx >= 1 && upy >= 1, "upsampling factor must be at least 1"); - TORCH_CHECK(downx >= 1 && downy >= 1, "downsampling factor must be at least 1"); - - // Create output tensor. - const at::cuda::OptionalCUDAGuard device_guard(device_of(x)); - int outW = ((int)x.size(3) * upx + padx0 + padx1 - (int)f.size(1) + downx) / downx; - int outH = ((int)x.size(2) * upy + pady0 + pady1 - (int)f.size(0) + downy) / downy; - TORCH_CHECK(outW >= 1 && outH >= 1, "output must be at least 1x1"); - torch::Tensor y = torch::empty({x.size(0), x.size(1), outH, outW}, x.options(), x.suggest_memory_format()); - TORCH_CHECK(y.numel() <= INT_MAX, "output is too large"); - - // Initialize CUDA kernel parameters. - upfirdn2d_kernel_params p; - p.x = x.data_ptr(); - p.f = f.data_ptr(); - p.y = y.data_ptr(); - p.up = make_int2(upx, upy); - p.down = make_int2(downx, downy); - p.pad0 = make_int2(padx0, pady0); - p.flip = (flip) ? 1 : 0; - p.gain = gain; - p.inSize = make_int4((int)x.size(3), (int)x.size(2), (int)x.size(1), (int)x.size(0)); - p.inStride = make_int4((int)x.stride(3), (int)x.stride(2), (int)x.stride(1), (int)x.stride(0)); - p.filterSize = make_int2((int)f.size(1), (int)f.size(0)); - p.filterStride = make_int2((int)f.stride(1), (int)f.stride(0)); - p.outSize = make_int4((int)y.size(3), (int)y.size(2), (int)y.size(1), (int)y.size(0)); - p.outStride = make_int4((int)y.stride(3), (int)y.stride(2), (int)y.stride(1), (int)y.stride(0)); - p.sizeMajor = (p.inStride.z == 1) ? p.inSize.w : p.inSize.w * p.inSize.z; - p.sizeMinor = (p.inStride.z == 1) ? p.inSize.z : 1; - - // Choose CUDA kernel. - upfirdn2d_kernel_spec spec; - AT_DISPATCH_FLOATING_TYPES_AND_HALF(x.scalar_type(), "upfirdn2d_cuda", [&] - { - spec = choose_upfirdn2d_kernel(p); - }); - - // Set looping options. - p.loopMajor = (p.sizeMajor - 1) / 16384 + 1; - p.loopMinor = spec.loopMinor; - p.loopX = spec.loopX; - p.launchMinor = (p.sizeMinor - 1) / p.loopMinor + 1; - p.launchMajor = (p.sizeMajor - 1) / p.loopMajor + 1; - - // Compute grid size. - dim3 blockSize, gridSize; - if (spec.tileOutW < 0) // large - { - blockSize = dim3(4, 32, 1); - gridSize = dim3( - ((p.outSize.y - 1) / blockSize.x + 1) * p.launchMinor, - (p.outSize.x - 1) / (blockSize.y * p.loopX) + 1, - p.launchMajor); - } - else // small - { - blockSize = dim3(256, 1, 1); - gridSize = dim3( - ((p.outSize.y - 1) / spec.tileOutH + 1) * p.launchMinor, - (p.outSize.x - 1) / (spec.tileOutW * p.loopX) + 1, - p.launchMajor); - } - - // Launch CUDA kernel. - void* args[] = {&p}; - AT_CUDA_CHECK(cudaLaunchKernel(spec.kernel, gridSize, blockSize, args, 0, at::cuda::getCurrentCUDAStream())); - return y; -} - -//------------------------------------------------------------------------ - -PYBIND11_MODULE(TORCH_EXTENSION_NAME, m) -{ - m.def("upfirdn2d", &upfirdn2d); -} - -//------------------------------------------------------------------------ diff --git a/spaces/haakohu/deep_privacy2/dp2/data/transforms/transforms.py b/spaces/haakohu/deep_privacy2/dp2/data/transforms/transforms.py deleted file mode 100644 index 5fd43e7a515deacca4be7242d065b4f3ccb6800e..0000000000000000000000000000000000000000 --- a/spaces/haakohu/deep_privacy2/dp2/data/transforms/transforms.py +++ /dev/null @@ -1,277 +0,0 @@ -from pathlib import Path -from typing import Dict, List -import torchvision -import torch -import tops -import torchvision.transforms.functional as F -from .functional import hflip -import numpy as np -from dp2.utils.vis_utils import get_coco_keypoints -from PIL import Image, ImageDraw -from typing import Tuple - - -class RandomHorizontalFlip(torch.nn.Module): - - def __init__(self, p: float, flip_map=None, **kwargs): - super().__init__() - self.flip_ratio = p - self.flip_map = flip_map - if self.flip_ratio is None: - self.flip_ratio = 0.5 - assert 0 <= self.flip_ratio <= 1 - - def forward(self, container: Dict[str, torch.Tensor]) -> Dict[str, torch.Tensor]: - if torch.rand(1) > self.flip_ratio: - return container - return hflip(container, self.flip_map) - - -class CenterCrop(torch.nn.Module): - """ - Performs the transform on the image. - NOTE: Does not transform the mask to improve runtime. - """ - - def __init__(self, size: List[int]): - super().__init__() - self.size = tuple(size) - - def forward(self, container: Dict[str, torch.Tensor]) -> Dict[str, torch.Tensor]: - min_size = min(container["img"].shape[1], container["img"].shape[2]) - if min_size < self.size[0]: - container["img"] = F.center_crop(container["img"], min_size) - container["img"] = F.resize(container["img"], self.size) - return container - container["img"] = F.center_crop(container["img"], self.size) - return container - - -class Resize(torch.nn.Module): - """ - Performs the transform on the image. - NOTE: Does not transform the mask to improve runtime. - """ - - def __init__(self, size, interpolation=F.InterpolationMode.BILINEAR): - super().__init__() - self.size = tuple(size) - self.interpolation = interpolation - - def forward(self, container: Dict[str, torch.Tensor]) -> Dict[str, torch.Tensor]: - container["img"] = F.resize(container["img"], self.size, self.interpolation, antialias=True) - if "semantic_mask" in container: - container["semantic_mask"] = F.resize( - container["semantic_mask"], self.size, F.InterpolationMode.NEAREST) - if "embedding" in container: - container["embedding"] = F.resize( - container["embedding"], self.size, self.interpolation) - if "mask" in container: - container["mask"] = F.resize( - container["mask"], self.size, F.InterpolationMode.NEAREST) - if "E_mask" in container: - container["E_mask"] = F.resize( - container["E_mask"], self.size, F.InterpolationMode.NEAREST) - if "maskrcnn_mask" in container: - container["maskrcnn_mask"] = F.resize( - container["maskrcnn_mask"], self.size, F.InterpolationMode.NEAREST) - if "vertices" in container: - container["vertices"] = F.resize( - container["vertices"], self.size, F.InterpolationMode.NEAREST) - return container - - def __repr__(self): - repr = super().__repr__() - vars_ = dict(size=self.size, interpolation=self.interpolation) - return repr + " " + " ".join([f"{k}: {v}" for k, v in vars_.items()]) - - -class Normalize(torch.nn.Module): - """ - Performs the transform on the image. - NOTE: Does not transform the mask to improve runtime. - """ - - def __init__(self, mean, std, inplace, keys=["img"]): - super().__init__() - self.mean = mean - self.std = std - self.inplace = inplace - self.keys = keys - - def forward(self, container: Dict[str, torch.Tensor]) -> Dict[str, torch.Tensor]: - for key in self.keys: - container[key] = F.normalize(container[key], self.mean, self.std, self.inplace) - return container - - def __repr__(self): - repr = super().__repr__() - vars_ = dict(mean=self.mean, std=self.std, inplace=self.inplace) - return repr + " " + " ".join([f"{k}: {v}" for k, v in vars_.items()]) - - -class ToFloat(torch.nn.Module): - - def __init__(self, keys=["img"], norm=True) -> None: - super().__init__() - self.keys = keys - self.gain = 255 if norm else 1 - - def forward(self, container: Dict[str, torch.Tensor]) -> Dict[str, torch.Tensor]: - for key in self.keys: - container[key] = container[key].float() / self.gain - return container - - -class RandomCrop(torchvision.transforms.RandomCrop): - """ - Performs the transform on the image. - NOTE: Does not transform the mask to improve runtime. - """ - - def forward(self, container: Dict[str, torch.Tensor]) -> Dict[str, torch.Tensor]: - container["img"] = super().forward(container["img"]) - return container - - -class CreateCondition(torch.nn.Module): - - def forward(self, container: Dict[str, torch.Tensor]) -> Dict[str, torch.Tensor]: - if container["img"].dtype == torch.uint8: - container["condition"] = container["img"] * container["mask"].byte() + (1-container["mask"].byte()) * 127 - return container - container["condition"] = container["img"] * container["mask"] - return container - - -class CreateEmbedding(torch.nn.Module): - - def __init__(self, embed_path: Path, cuda=True) -> None: - super().__init__() - self.embed_map = torch.load(embed_path, map_location=torch.device("cpu")) - if cuda: - self.embed_map = tops.to_cuda(self.embed_map) - - def forward(self, container: Dict[str, torch.Tensor]) -> Dict[str, torch.Tensor]: - vertices = container["vertices"] - if vertices.ndim == 3: - embedding = self.embed_map[vertices.long()].squeeze(dim=0) - embedding = embedding.permute(2, 0, 1) * container["E_mask"] - pass - else: - assert vertices.ndim == 4 - embedding = self.embed_map[vertices.long()].squeeze(dim=1) - embedding = embedding.permute(0, 3, 1, 2) * container["E_mask"] - container["embedding"] = embedding - container["embed_map"] = self.embed_map.clone() - return container - - -class InsertJointMap(torch.nn.Module): - - def __init__(self, imsize: Tuple) -> None: - super().__init__() - self.imsize = imsize - knames = get_coco_keypoints()[0] - knames = knames + ["neck", "mid_hip"] - connectivity = { - "nose": ["left_eye", "right_eye", "neck"], - "left_eye": ["right_eye", "left_ear"], - "right_eye": ["right_ear"], - "left_shoulder": ["right_shoulder", "left_elbow", "left_hip"], - "right_shoulder": ["right_elbow", "right_hip"], - "left_elbow": ["left_wrist"], - "right_elbow": ["right_wrist"], - "left_hip": ["right_hip", "left_knee"], - "right_hip": ["right_knee"], - "left_knee": ["left_ankle"], - "right_knee": ["right_ankle"], - "neck": ["mid_hip", "nose"], - } - category = { - ("nose", "left_eye"): 0, # head - ("nose", "right_eye"): 0, # head - ("nose", "neck"): 0, # head - ("left_eye", "right_eye"): 0, # head - ("left_eye", "left_ear"): 0, # head - ("right_eye", "right_ear"): 0, # head - ("left_shoulder", "left_elbow"): 1, # left arm - ("left_elbow", "left_wrist"): 1, # left arm - ("right_shoulder", "right_elbow"): 2, # right arm - ("right_elbow", "right_wrist"): 2, # right arm - ("left_shoulder", "right_shoulder"): 3, # body - ("left_shoulder", "left_hip"): 3, # body - ("right_shoulder", "right_hip"): 3, # body - ("left_hip", "right_hip"): 3, # body - ("left_hip", "left_knee"): 4, # left leg - ("left_knee", "left_ankle"): 4, # left leg - ("right_hip", "right_knee"): 5, # right leg - ("right_knee", "right_ankle"): 5, # right leg - ("neck", "mid_hip"): 3, # body - ("neck", "nose"): 0, # head - } - self.indices2category = { - tuple([knames.index(n) for n in k]): v for k, v in category.items() - } - self.connectivity_indices = { - knames.index(k): [knames.index(v_) for v_ in v] - for k, v in connectivity.items() - } - self.l_shoulder = knames.index("left_shoulder") - self.r_shoulder = knames.index("right_shoulder") - self.l_hip = knames.index("left_hip") - self.r_hip = knames.index("right_hip") - self.l_eye = knames.index("left_eye") - self.r_eye = knames.index("right_eye") - self.nose = knames.index("nose") - self.neck = knames.index("neck") - - def create_joint_map(self, N, H, W, keypoints): - joint_maps = np.zeros((N, H, W), dtype=np.uint8) - for bidx, keypoints in enumerate(keypoints): - assert keypoints.shape == (17, 3), keypoints.shape - keypoints = torch.cat((keypoints, torch.zeros(2, 3))) - visible = keypoints[:, -1] > 0 - - if visible[self.l_shoulder] and visible[self.r_shoulder]: - neck = (keypoints[self.l_shoulder] - + (keypoints[self.r_shoulder] - keypoints[self.l_shoulder]) / 2) - keypoints[-2] = neck - visible[-2] = 1 - if visible[self.l_hip] and visible[self.r_hip]: - mhip = (keypoints[self.l_hip] - + (keypoints[self.r_hip] - keypoints[self.l_hip]) / 2 - ) - keypoints[-1] = mhip - visible[-1] = 1 - - keypoints[:, 0] *= W - keypoints[:, 1] *= H - joint_map = Image.fromarray(np.zeros((H, W), dtype=np.uint8)) - draw = ImageDraw.Draw(joint_map) - for fidx in self.connectivity_indices.keys(): - for tidx in self.connectivity_indices[fidx]: - if visible[fidx] == 0 or visible[tidx] == 0: - continue - c = self.indices2category[(fidx, tidx)] - s = tuple(keypoints[fidx, :2].round().long().numpy().tolist()) - e = tuple(keypoints[tidx, :2].round().long().numpy().tolist()) - draw.line((s, e), width=1, fill=c + 1) - if visible[self.nose] == 0 and visible[self.neck] == 1: - m_eye = ( - keypoints[self.l_eye] - + (keypoints[self.r_eye] - keypoints[self.l_eye]) / 2 - ) - s = tuple(m_eye[:2].round().long().numpy().tolist()) - e = tuple(keypoints[self.neck, :2].round().long().numpy().tolist()) - c = self.indices2category[(self.nose, self.neck)] - draw.line((s, e), width=1, fill=c + 1) - joint_map = np.array(joint_map) - - joint_maps[bidx] = np.array(joint_map) - return joint_maps[:, None] - - def forward(self, batch): - batch["joint_map"] = torch.from_numpy(self.create_joint_map( - batch["img"].shape[0], *self.imsize, batch["keypoints"])) - return batch diff --git a/spaces/hamacojr/CAT-Seg/eval.sh b/spaces/hamacojr/CAT-Seg/eval.sh deleted file mode 100644 index 450a72857d3e7eff0e81d933cc4e95378f90e086..0000000000000000000000000000000000000000 --- a/spaces/hamacojr/CAT-Seg/eval.sh +++ /dev/null @@ -1,100 +0,0 @@ -#!/bin/sh - -gpus=4 -config=$1 -output=$2 - -if [ -z $config ] -then - echo "No config file found! Run with "sh run.sh [CONFIG_FILE] [OUTPUT_DIR] [OPTS]"" - exit 0 -fi - -if [ -z $output ] -then - echo "No output directory found! Run with "sh run.sh [CONFIG_FILE] [OUTPUT_DIR] [OPTS]"" - exit 0 -fi - -shift 2 -opts=${@} - -#ADE20k-150 -python train_net.py --config $config \ - --num-gpus $gpus \ - --dist-url "auto" \ - --eval-only \ - OUTPUT_DIR $output/eval \ - MODEL.SEM_SEG_HEAD.TEST_CLASS_JSON "datasets/ADE_20k/ADE20K_150_class.json" \ - DATASETS.TEST \(\"ade20k_150_test_sem_seg\"\,\) \ - TEST.SLIDING_WINDOW "True" \ - MODEL.SEM_SEG_HEAD.POOLING_SIZES "[1,1]" \ - MODEL.WEIGHTS $output/model_final.pth \ - $opts - -#ADE20k-847 -python train_net.py --config $config \ - --num-gpus $gpus \ - --dist-url "auto" \ - --eval-only \ - OUTPUT_DIR $output/eval \ - MODEL.SEM_SEG_HEAD.TEST_CLASS_JSON "datasets/ADE_20k/ADE20K_847_pure_class.json" \ - DATASETS.TEST \(\"ade20k_full_sem_seg_freq_val_all\"\,\) \ - TEST.SLIDING_WINDOW "True" \ - MODEL.SEM_SEG_HEAD.POOLING_SIZES "[1,1]" \ - MODEL.WEIGHTS $output/model_final.pth \ - $opts - -#Pascal VOC -python train_net.py --config $config \ - --num-gpus $gpus \ - --dist-url "auto" \ - --eval-only \ - OUTPUT_DIR $output/eval \ - MODEL.SEM_SEG_HEAD.TEST_CLASS_JSON "datasets/pascal-voc20/VOC_20_class.json" \ - DATASETS.TEST \(\"voc_2012_test_sem_seg\"\,\) \ - TEST.SLIDING_WINDOW "True" \ - MODEL.SEM_SEG_HEAD.POOLING_SIZES "[1,1]" \ - MODEL.WEIGHTS $output/model_final.pth \ - $opts - -#Pascal VOC-b -python train_net.py --config $config \ - --num-gpus $gpus \ - --dist-url "auto" \ - --eval-only \ - OUTPUT_DIR $output/eval \ - MODEL.SEM_SEG_HEAD.TEST_CLASS_JSON "datasets/pascal-voc20/VOC_20_class_59.json" \ - DATASETS.TEST \(\"voc_2012_test_openseg_sem_seg\"\,\) \ - TEST.SLIDING_WINDOW "True" \ - MODEL.SEM_SEG_HEAD.POOLING_SIZES "[1,1]" \ - MODEL.WEIGHTS $output/model_final.pth \ - $opts - -#Pascal Context 59 -python train_net.py --config $config \ - --num-gpus $gpus \ - --dist-url "auto" \ - --eval-only \ - OUTPUT_DIR $output/eval \ - MODEL.SEM_SEG_HEAD.TEST_CLASS_JSON "datasets/pascal-context/pas59.json" \ - DATASETS.TEST \(\"context_59_test_sem_seg\"\,\) \ - TEST.SLIDING_WINDOW "True" \ - MODEL.SEM_SEG_HEAD.POOLING_SIZES "[1,1]" \ - MODEL.WEIGHTS $output/model_final.pth \ - $opts - -#Pascal Context 459 -python train_net.py --config $config \ - --num-gpus $gpus \ - --dist-url "auto" \ - --eval-only \ - OUTPUT_DIR $output/eval \ - MODEL.SEM_SEG_HEAD.TEST_CLASS_JSON "datasets/pascal-context/pas459.json" \ - DATASETS.TEST \(\"context_459_test_sem_seg\"\,\) \ - TEST.SLIDING_WINDOW "True" \ - MODEL.SEM_SEG_HEAD.POOLING_SIZES "[1,1]" \ - MODEL.WEIGHTS $output/model_final.pth \ - $opts - -cat $output/eval/log.txt | grep copypaste \ No newline at end of file diff --git a/spaces/hamacojr/SAM-CAT-Seg/cat_seg/test_time_augmentation.py b/spaces/hamacojr/SAM-CAT-Seg/cat_seg/test_time_augmentation.py deleted file mode 100644 index 8d250b6bb7792b54ddeaaab62cc6c170d74d3bb9..0000000000000000000000000000000000000000 --- a/spaces/hamacojr/SAM-CAT-Seg/cat_seg/test_time_augmentation.py +++ /dev/null @@ -1,113 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -import copy -from itertools import count - -import numpy as np -import torch -from fvcore.transforms import HFlipTransform -from torch import nn -from torch.nn.parallel import DistributedDataParallel - -from detectron2.data.detection_utils import read_image -from detectron2.modeling import DatasetMapperTTA - -__all__ = [ - "SemanticSegmentorWithTTA", -] - - -class SemanticSegmentorWithTTA(nn.Module): - """ - A SemanticSegmentor with test-time augmentation enabled. - Its :meth:`__call__` method has the same interface as :meth:`SemanticSegmentor.forward`. - """ - - def __init__(self, cfg, model, tta_mapper=None, batch_size=1): - """ - Args: - cfg (CfgNode): - model (SemanticSegmentor): a SemanticSegmentor to apply TTA on. - tta_mapper (callable): takes a dataset dict and returns a list of - augmented versions of the dataset dict. Defaults to - `DatasetMapperTTA(cfg)`. - batch_size (int): batch the augmented images into this batch size for inference. - """ - super().__init__() - if isinstance(model, DistributedDataParallel): - model = model.module - self.cfg = cfg.clone() - - self.model = model - - if tta_mapper is None: - tta_mapper = DatasetMapperTTA(cfg) - self.tta_mapper = tta_mapper - self.batch_size = batch_size - - def _batch_inference(self, batched_inputs): - """ - Execute inference on a list of inputs, - using batch size = self.batch_size, instead of the length of the list. - Inputs & outputs have the same format as :meth:`SemanticSegmentor.forward` - """ - outputs = [] - inputs = [] - for idx, input in zip(count(), batched_inputs): - inputs.append(input) - if len(inputs) == self.batch_size or idx == len(batched_inputs) - 1: - with torch.no_grad(): - outputs.extend(self.model(inputs)) - inputs = [] - return outputs - - def __call__(self, batched_inputs): - """ - Same input/output format as :meth:`SemanticSegmentor.forward` - """ - - def _maybe_read_image(dataset_dict): - ret = copy.copy(dataset_dict) - if "image" not in ret: - image = read_image(ret.pop("file_name"), self.model.input_format) - image = torch.from_numpy(np.ascontiguousarray(image.transpose(2, 0, 1))) # CHW - ret["image"] = image - if "height" not in ret and "width" not in ret: - ret["height"] = image.shape[1] - ret["width"] = image.shape[2] - return ret - - return [self._inference_one_image(_maybe_read_image(x)) for x in batched_inputs] - - def _inference_one_image(self, input): - """ - Args: - input (dict): one dataset dict with "image" field being a CHW tensor - Returns: - dict: one output dict - """ - augmented_inputs, tfms = self._get_augmented_inputs(input) - # 1: forward with all augmented images - outputs = self._batch_inference(augmented_inputs) - # Delete now useless variables to avoid being out of memory - del augmented_inputs - # 2: merge the results - # handle flip specially - new_outputs = [] - for output, tfm in zip(outputs, tfms): - if any(isinstance(t, HFlipTransform) for t in tfm.transforms): - new_outputs.append(output.pop("sem_seg").flip(dims=[2])) - else: - new_outputs.append(output.pop("sem_seg")) - del outputs - # to avoid OOM with torch.stack - final_predictions = new_outputs[0] - for i in range(1, len(new_outputs)): - final_predictions += new_outputs[i] - final_predictions = final_predictions / len(new_outputs) - del new_outputs - return {"sem_seg": final_predictions} - - def _get_augmented_inputs(self, input): - augmented_inputs = self.tta_mapper(input) - tfms = [x.pop("transforms") for x in augmented_inputs] - return augmented_inputs, tfms diff --git a/spaces/hamacojr/SAM-CAT-Seg/open_clip/src/open_clip/__init__.py b/spaces/hamacojr/SAM-CAT-Seg/open_clip/src/open_clip/__init__.py deleted file mode 100644 index 3cf72e9280c90bdfeaced30750650ef0f9021c3d..0000000000000000000000000000000000000000 --- a/spaces/hamacojr/SAM-CAT-Seg/open_clip/src/open_clip/__init__.py +++ /dev/null @@ -1,11 +0,0 @@ -from .constants import OPENAI_DATASET_MEAN, OPENAI_DATASET_STD -from .factory import create_model, create_model_and_transforms, create_model_from_pretrained, get_tokenizer -from .factory import list_models, add_model_config, get_model_config, load_checkpoint -from .loss import ClipLoss -from .model import CLIP, CustomTextCLIP, CLIPTextCfg, CLIPVisionCfg,\ - convert_weights_to_lp, convert_weights_to_fp16, trace_model, get_cast_dtype -from .openai import load_openai_model, list_openai_models -from .pretrained import list_pretrained, list_pretrained_models_by_tag, list_pretrained_tags_by_model,\ - get_pretrained_url, download_pretrained_from_url, is_pretrained_cfg, get_pretrained_cfg, download_pretrained -from .tokenizer import SimpleTokenizer, tokenize -from .transform import image_transform, AugmentationCfg diff --git a/spaces/hank1996/yolopv2/lib/core/postprocess.py b/spaces/hank1996/yolopv2/lib/core/postprocess.py deleted file mode 100644 index f1f3f1843e61d079cb7b02f61bd6da3b2cfc2cad..0000000000000000000000000000000000000000 --- a/spaces/hank1996/yolopv2/lib/core/postprocess.py +++ /dev/null @@ -1,223 +0,0 @@ - - -import torch -from lib.utils import is_parallel -import numpy as np -np.set_printoptions(threshold=np.inf) -import cv2 -from sklearn.cluster import DBSCAN - - -def build_targets(cfg, predictions, targets, model): - ''' - predictions - [16, 3, 32, 32, 85] - [16, 3, 16, 16, 85] - [16, 3, 8, 8, 85] - torch.tensor(predictions[i].shape)[[3, 2, 3, 2]] - [32,32,32,32] - [16,16,16,16] - [8,8,8,8] - targets[3,x,7] - t [index, class, x, y, w, h, head_index] - ''' - # Build targets for compute_loss(), input targets(image,class,x,y,w,h) - det = model.module.model[model.module.detector_index] if is_parallel(model) \ - else model.model[model.detector_index] # Detect() module - # print(type(model)) - # det = model.model[model.detector_index] - # print(type(det)) - na, nt = det.na, targets.shape[0] # number of anchors, targets - tcls, tbox, indices, anch = [], [], [], [] - gain = torch.ones(7, device=targets.device) # normalized to gridspace gain - ai = torch.arange(na, device=targets.device).float().view(na, 1).repeat(1, nt) # same as .repeat_interleave(nt) - targets = torch.cat((targets.repeat(na, 1, 1), ai[:, :, None]), 2) # append anchor indices - - g = 0.5 # bias - off = torch.tensor([[0, 0], - [1, 0], [0, 1], [-1, 0], [0, -1], # j,k,l,m - # [1, 1], [1, -1], [-1, 1], [-1, -1], # jk,jm,lk,lm - ], device=targets.device).float() * g # offsets - - for i in range(det.nl): - anchors = det.anchors[i] #[3,2] - gain[2:6] = torch.tensor(predictions[i].shape)[[3, 2, 3, 2]] # xyxy gain - # Match targets to anchors - t = targets * gain - - if nt: - # Matches - r = t[:, :, 4:6] / anchors[:, None] # wh ratio - j = torch.max(r, 1. / r).max(2)[0] < cfg.TRAIN.ANCHOR_THRESHOLD # compare - # j = wh_iou(anchors, t[:, 4:6]) > model.hyp['iou_t'] # iou(3,n)=wh_iou(anchors(3,2), gwh(n,2)) - t = t[j] # filter - - # Offsets - gxy = t[:, 2:4] # grid xy - gxi = gain[[2, 3]] - gxy # inverse - j, k = ((gxy % 1. < g) & (gxy > 1.)).T - l, m = ((gxi % 1. < g) & (gxi > 1.)).T - j = torch.stack((torch.ones_like(j), j, k, l, m)) - t = t.repeat((5, 1, 1))[j] - offsets = (torch.zeros_like(gxy)[None] + off[:, None])[j] - else: - t = targets[0] - offsets = 0 - - # Define - b, c = t[:, :2].long().T # image, class - gxy = t[:, 2:4] # grid xy - gwh = t[:, 4:6] # grid wh - gij = (gxy - offsets).long() - gi, gj = gij.T # grid xy indices - - # Append - a = t[:, 6].long() # anchor indices - indices.append((b, a, gj.clamp_(0, gain[3] - 1), gi.clamp_(0, gain[2] - 1))) # image, anchor, grid indices - tbox.append(torch.cat((gxy - gij, gwh), 1)) # box - anch.append(anchors[a]) # anchors - tcls.append(c) # class - - return tcls, tbox, indices, anch - -def morphological_process(image, kernel_size=5, func_type=cv2.MORPH_CLOSE): - """ - morphological process to fill the hole in the binary segmentation result - :param image: - :param kernel_size: - :return: - """ - if len(image.shape) == 3: - raise ValueError('Binary segmentation result image should be a single channel image') - - if image.dtype is not np.uint8: - image = np.array(image, np.uint8) - - kernel = cv2.getStructuringElement(shape=cv2.MORPH_ELLIPSE, ksize=(kernel_size, kernel_size)) - - # close operation fille hole - closing = cv2.morphologyEx(image, func_type, kernel, iterations=1) - - return closing - -def connect_components_analysis(image): - """ - connect components analysis to remove the small components - :param image: - :return: - """ - if len(image.shape) == 3: - gray_image = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY) - else: - gray_image = image - # print(gray_image.dtype) - return cv2.connectedComponentsWithStats(gray_image, connectivity=8, ltype=cv2.CV_32S) - -def if_y(samples_x): - for sample_x in samples_x: - if len(sample_x): - # if len(sample_x) != (sample_x[-1] - sample_x[0] + 1) or sample_x[-1] == sample_x[0]: - if sample_x[-1] == sample_x[0]: - return False - return True - -def fitlane(mask, sel_labels, labels, stats): - H, W = mask.shape - for label_group in sel_labels: - states = [stats[k] for k in label_group] - x, y, w, h, _ = states[0] - # if len(label_group) > 1: - # print('in') - # for m in range(len(label_group)-1): - # labels[labels == label_group[m+1]] = label_group[0] - t = label_group[0] - # samples_y = np.linspace(y, H-1, 30) - # else: - samples_y = np.linspace(y, y+h-1, 30) - - samples_x = [np.where(labels[int(sample_y)]==t)[0] for sample_y in samples_y] - - if if_y(samples_x): - samples_x = [int(np.mean(sample_x)) if len(sample_x) else -1 for sample_x in samples_x] - samples_x = np.array(samples_x) - samples_y = np.array(samples_y) - samples_y = samples_y[samples_x != -1] - samples_x = samples_x[samples_x != -1] - func = np.polyfit(samples_y, samples_x, 2) - x_limits = np.polyval(func, H-1) - # if (y_max + h - 1) >= 720: - if x_limits < 0 or x_limits > W: - # if (y_max + h - 1) > 720: - # draw_y = np.linspace(y, 720-1, 720-y) - draw_y = np.linspace(y, y+h-1, h) - else: - # draw_y = np.linspace(y, y+h-1, y+h-y) - draw_y = np.linspace(y, H-1, H-y) - draw_x = np.polyval(func, draw_y) - # draw_y = draw_y[draw_x < W] - # draw_x = draw_x[draw_x < W] - draw_points = (np.asarray([draw_x, draw_y]).T).astype(np.int32) - cv2.polylines(mask, [draw_points], False, 1, thickness=15) - else: - # if ( + w - 1) >= 1280: - samples_x = np.linspace(x, W-1, 30) - # else: - # samples_x = np.linspace(x, x_max+w-1, 30) - samples_y = [np.where(labels[:, int(sample_x)]==t)[0] for sample_x in samples_x] - samples_y = [int(np.mean(sample_y)) if len(sample_y) else -1 for sample_y in samples_y] - samples_x = np.array(samples_x) - samples_y = np.array(samples_y) - samples_x = samples_x[samples_y != -1] - samples_y = samples_y[samples_y != -1] - try: - func = np.polyfit(samples_x, samples_y, 2) - except: - pass - # y_limits = np.polyval(func, 0) - # if y_limits > 720 or y_limits < 0: - # if (x + w - 1) >= 1280: - # draw_x = np.linspace(x, 1280-1, 1280-x) - # else: - y_limits = np.polyval(func, 0) - if y_limits >= H or y_limits < 0: - draw_x = np.linspace(x, x+w-1, w+x-x) - else: - y_limits = np.polyval(func, W-1) - if y_limits >= H or y_limits < 0: - draw_x = np.linspace(x, x+w-1, w+x-x) - # if x+w-1 < 640: - # draw_x = np.linspace(0, x+w-1, w+x-x) - else: - draw_x = np.linspace(x, W-1, W-x) - draw_y = np.polyval(func, draw_x) - draw_points = (np.asarray([draw_x, draw_y]).T).astype(np.int32) - cv2.polylines(mask, [draw_points], False, 1, thickness=15) - return mask - -def connect_lane(image, shadow_height=0): - if len(image.shape) == 3: - gray_image = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY) - else: - gray_image = image - if shadow_height: - image[:shadow_height] = 0 - mask = np.zeros((image.shape[0], image.shape[1]), np.uint8) - - num_labels, labels, stats, centers = cv2.connectedComponentsWithStats(gray_image, connectivity=8, ltype=cv2.CV_32S) - # ratios = [] - selected_label = [] - - for t in range(1, num_labels, 1): - _, _, _, _, area = stats[t] - if area > 400: - selected_label.append(t) - if len(selected_label) == 0: - return mask - else: - split_labels = [[label,] for label in selected_label] - mask_post = fitlane(mask, split_labels, labels, stats) - return mask_post - - - - diff --git a/spaces/hank1996/yolopv2/utils/loss.py b/spaces/hank1996/yolopv2/utils/loss.py deleted file mode 100644 index 96ba09011f5d0e69966b293baa79dedc173c7ecb..0000000000000000000000000000000000000000 --- a/spaces/hank1996/yolopv2/utils/loss.py +++ /dev/null @@ -1,1158 +0,0 @@ - - -import torch -import torch.nn as nn -import torch.nn.functional as F - -from utils.general import bbox_iou, bbox_alpha_iou, box_iou, box_giou, box_diou, box_ciou, xywh2xyxy -from utils.torch_utils import is_parallel - - -def smooth_BCE(eps=0.1): # https://github.com/ultralytics/yolov3/issues/238#issuecomment-598028441 - # return positive, negative label smoothing BCE targets - return 1.0 - 0.5 * eps, 0.5 * eps - - -class BCEBlurWithLogitsLoss(nn.Module): - # BCEwithLogitLoss() with reduced missing label effects. - def __init__(self, alpha=0.05): - super(BCEBlurWithLogitsLoss, self).__init__() - self.loss_fcn = nn.BCEWithLogitsLoss(reduction='none') # must be nn.BCEWithLogitsLoss() - self.alpha = alpha - - def forward(self, pred, true): - loss = self.loss_fcn(pred, true) - pred = torch.sigmoid(pred) # prob from logits - dx = pred - true # reduce only missing label effects - # dx = (pred - true).abs() # reduce missing label and false label effects - alpha_factor = 1 - torch.exp((dx - 1) / (self.alpha + 1e-4)) - loss *= alpha_factor - return loss.mean() - - -class SigmoidBin(nn.Module): - stride = None # strides computed during build - export = False # onnx export - - def __init__(self, bin_count=10, min=0.0, max=1.0, reg_scale = 2.0, use_loss_regression=True, use_fw_regression=True, BCE_weight=1.0, smooth_eps=0.0): - super(SigmoidBin, self).__init__() - - self.bin_count = bin_count - self.length = bin_count + 1 - self.min = min - self.max = max - self.scale = float(max - min) - self.shift = self.scale / 2.0 - - self.use_loss_regression = use_loss_regression - self.use_fw_regression = use_fw_regression - self.reg_scale = reg_scale - self.BCE_weight = BCE_weight - - start = min + (self.scale/2.0) / self.bin_count - end = max - (self.scale/2.0) / self.bin_count - step = self.scale / self.bin_count - self.step = step - #print(f" start = {start}, end = {end}, step = {step} ") - - bins = torch.range(start, end + 0.0001, step).float() - self.register_buffer('bins', bins) - - - self.cp = 1.0 - 0.5 * smooth_eps - self.cn = 0.5 * smooth_eps - - self.BCEbins = nn.BCEWithLogitsLoss(pos_weight=torch.Tensor([BCE_weight])) - self.MSELoss = nn.MSELoss() - - def get_length(self): - return self.length - - def forward(self, pred): - assert pred.shape[-1] == self.length, 'pred.shape[-1]=%d is not equal to self.length=%d' % (pred.shape[-1], self.length) - - pred_reg = (pred[..., 0] * self.reg_scale - self.reg_scale/2.0) * self.step - pred_bin = pred[..., 1:(1+self.bin_count)] - - _, bin_idx = torch.max(pred_bin, dim=-1) - bin_bias = self.bins[bin_idx] - - if self.use_fw_regression: - result = pred_reg + bin_bias - else: - result = bin_bias - result = result.clamp(min=self.min, max=self.max) - - return result - - - def training_loss(self, pred, target): - assert pred.shape[-1] == self.length, 'pred.shape[-1]=%d is not equal to self.length=%d' % (pred.shape[-1], self.length) - assert pred.shape[0] == target.shape[0], 'pred.shape=%d is not equal to the target.shape=%d' % (pred.shape[0], target.shape[0]) - device = pred.device - - pred_reg = (pred[..., 0].sigmoid() * self.reg_scale - self.reg_scale/2.0) * self.step - pred_bin = pred[..., 1:(1+self.bin_count)] - - diff_bin_target = torch.abs(target[..., None] - self.bins) - _, bin_idx = torch.min(diff_bin_target, dim=-1) - - bin_bias = self.bins[bin_idx] - bin_bias.requires_grad = False - result = pred_reg + bin_bias - - target_bins = torch.full_like(pred_bin, self.cn, device=device) # targets - n = pred.shape[0] - target_bins[range(n), bin_idx] = self.cp - - loss_bin = self.BCEbins(pred_bin, target_bins) # BCE - - if self.use_loss_regression: - loss_regression = self.MSELoss(result, target) # MSE - loss = loss_bin + loss_regression - else: - loss = loss_bin - - out_result = result.clamp(min=self.min, max=self.max) - - return loss, out_result - - -class FocalLoss(nn.Module): - # Wraps focal loss around existing loss_fcn(), i.e. criteria = FocalLoss(nn.BCEWithLogitsLoss(), gamma=1.5) - def __init__(self, loss_fcn, gamma=1.5, alpha=0.25): - super(FocalLoss, self).__init__() - self.loss_fcn = loss_fcn # must be nn.BCEWithLogitsLoss() - self.gamma = gamma - self.alpha = alpha - self.reduction = loss_fcn.reduction - self.loss_fcn.reduction = 'none' # required to apply FL to each element - - def forward(self, pred, true): - loss = self.loss_fcn(pred, true) - # p_t = torch.exp(-loss) - # loss *= self.alpha * (1.000001 - p_t) ** self.gamma # non-zero power for gradient stability - - # TF implementation https://github.com/tensorflow/addons/blob/v0.7.1/tensorflow_addons/losses/focal_loss.py - pred_prob = torch.sigmoid(pred) # prob from logits - p_t = true * pred_prob + (1 - true) * (1 - pred_prob) - alpha_factor = true * self.alpha + (1 - true) * (1 - self.alpha) - modulating_factor = (1.0 - p_t) ** self.gamma - loss *= alpha_factor * modulating_factor - - if self.reduction == 'mean': - return loss.mean() - elif self.reduction == 'sum': - return loss.sum() - else: # 'none' - return loss - - -class QFocalLoss(nn.Module): - # Wraps Quality focal loss around existing loss_fcn(), i.e. criteria = FocalLoss(nn.BCEWithLogitsLoss(), gamma=1.5) - def __init__(self, loss_fcn, gamma=1.5, alpha=0.25): - super(QFocalLoss, self).__init__() - self.loss_fcn = loss_fcn # must be nn.BCEWithLogitsLoss() - self.gamma = gamma - self.alpha = alpha - self.reduction = loss_fcn.reduction - self.loss_fcn.reduction = 'none' # required to apply FL to each element - - def forward(self, pred, true): - loss = self.loss_fcn(pred, true) - - pred_prob = torch.sigmoid(pred) # prob from logits - alpha_factor = true * self.alpha + (1 - true) * (1 - self.alpha) - modulating_factor = torch.abs(true - pred_prob) ** self.gamma - loss *= alpha_factor * modulating_factor - - if self.reduction == 'mean': - return loss.mean() - elif self.reduction == 'sum': - return loss.sum() - else: # 'none' - return loss - -class RankSort(torch.autograd.Function): - @staticmethod - def forward(ctx, logits, targets, delta_RS=0.50, eps=1e-10): - - classification_grads=torch.zeros(logits.shape).cuda() - - #Filter fg logits - fg_labels = (targets > 0.) - fg_logits = logits[fg_labels] - fg_targets = targets[fg_labels] - fg_num = len(fg_logits) - - #Do not use bg with scores less than minimum fg logit - #since changing its score does not have an effect on precision - threshold_logit = torch.min(fg_logits)-delta_RS - relevant_bg_labels=((targets==0) & (logits>=threshold_logit)) - - relevant_bg_logits = logits[relevant_bg_labels] - relevant_bg_grad=torch.zeros(len(relevant_bg_logits)).cuda() - sorting_error=torch.zeros(fg_num).cuda() - ranking_error=torch.zeros(fg_num).cuda() - fg_grad=torch.zeros(fg_num).cuda() - - #sort the fg logits - order=torch.argsort(fg_logits) - #Loops over each positive following the order - for ii in order: - # Difference Transforms (x_ij) - fg_relations=fg_logits-fg_logits[ii] - bg_relations=relevant_bg_logits-fg_logits[ii] - - if delta_RS > 0: - fg_relations=torch.clamp(fg_relations/(2*delta_RS)+0.5,min=0,max=1) - bg_relations=torch.clamp(bg_relations/(2*delta_RS)+0.5,min=0,max=1) - else: - fg_relations = (fg_relations >= 0).float() - bg_relations = (bg_relations >= 0).float() - - # Rank of ii among pos and false positive number (bg with larger scores) - rank_pos=torch.sum(fg_relations) - FP_num=torch.sum(bg_relations) - - # Rank of ii among all examples - rank=rank_pos+FP_num - - # Ranking error of example ii. target_ranking_error is always 0. (Eq. 7) - ranking_error[ii]=FP_num/rank - - # Current sorting error of example ii. (Eq. 7) - current_sorting_error = torch.sum(fg_relations*(1-fg_targets))/rank_pos - - #Find examples in the target sorted order for example ii - iou_relations = (fg_targets >= fg_targets[ii]) - target_sorted_order = iou_relations * fg_relations - - #The rank of ii among positives in sorted order - rank_pos_target = torch.sum(target_sorted_order) - - #Compute target sorting error. (Eq. 8) - #Since target ranking error is 0, this is also total target error - target_sorting_error= torch.sum(target_sorted_order*(1-fg_targets))/rank_pos_target - - #Compute sorting error on example ii - sorting_error[ii] = current_sorting_error - target_sorting_error - - #Identity Update for Ranking Error - if FP_num > eps: - #For ii the update is the ranking error - fg_grad[ii] -= ranking_error[ii] - #For negatives, distribute error via ranking pmf (i.e. bg_relations/FP_num) - relevant_bg_grad += (bg_relations*(ranking_error[ii]/FP_num)) - - #Find the positives that are misranked (the cause of the error) - #These are the ones with smaller IoU but larger logits - missorted_examples = (~ iou_relations) * fg_relations - - #Denominotor of sorting pmf - sorting_pmf_denom = torch.sum(missorted_examples) - - #Identity Update for Sorting Error - if sorting_pmf_denom > eps: - #For ii the update is the sorting error - fg_grad[ii] -= sorting_error[ii] - #For positives, distribute error via sorting pmf (i.e. missorted_examples/sorting_pmf_denom) - fg_grad += (missorted_examples*(sorting_error[ii]/sorting_pmf_denom)) - - #Normalize gradients by number of positives - classification_grads[fg_labels]= (fg_grad/fg_num) - classification_grads[relevant_bg_labels]= (relevant_bg_grad/fg_num) - - ctx.save_for_backward(classification_grads) - - return ranking_error.mean(), sorting_error.mean() - - @staticmethod - def backward(ctx, out_grad1, out_grad2): - g1, =ctx.saved_tensors - return g1*out_grad1, None, None, None - -class aLRPLoss(torch.autograd.Function): - @staticmethod - def forward(ctx, logits, targets, regression_losses, delta=1., eps=1e-5): - classification_grads=torch.zeros(logits.shape).cuda() - - #Filter fg logits - fg_labels = (targets == 1) - fg_logits = logits[fg_labels] - fg_num = len(fg_logits) - - #Do not use bg with scores less than minimum fg logit - #since changing its score does not have an effect on precision - threshold_logit = torch.min(fg_logits)-delta - - #Get valid bg logits - relevant_bg_labels=((targets==0)&(logits>=threshold_logit)) - relevant_bg_logits=logits[relevant_bg_labels] - relevant_bg_grad=torch.zeros(len(relevant_bg_logits)).cuda() - rank=torch.zeros(fg_num).cuda() - prec=torch.zeros(fg_num).cuda() - fg_grad=torch.zeros(fg_num).cuda() - - max_prec=0 - #sort the fg logits - order=torch.argsort(fg_logits) - #Loops over each positive following the order - for ii in order: - #x_ij s as score differences with fgs - fg_relations=fg_logits-fg_logits[ii] - #Apply piecewise linear function and determine relations with fgs - fg_relations=torch.clamp(fg_relations/(2*delta)+0.5,min=0,max=1) - #Discard i=j in the summation in rank_pos - fg_relations[ii]=0 - - #x_ij s as score differences with bgs - bg_relations=relevant_bg_logits-fg_logits[ii] - #Apply piecewise linear function and determine relations with bgs - bg_relations=torch.clamp(bg_relations/(2*delta)+0.5,min=0,max=1) - - #Compute the rank of the example within fgs and number of bgs with larger scores - rank_pos=1+torch.sum(fg_relations) - FP_num=torch.sum(bg_relations) - #Store the total since it is normalizer also for aLRP Regression error - rank[ii]=rank_pos+FP_num - - #Compute precision for this example to compute classification loss - prec[ii]=rank_pos/rank[ii] - #For stability, set eps to a infinitesmall value (e.g. 1e-6), then compute grads - if FP_num > eps: - fg_grad[ii] = -(torch.sum(fg_relations*regression_losses)+FP_num)/rank[ii] - relevant_bg_grad += (bg_relations*(-fg_grad[ii]/FP_num)) - - #aLRP with grad formulation fg gradient - classification_grads[fg_labels]= fg_grad - #aLRP with grad formulation bg gradient - classification_grads[relevant_bg_labels]= relevant_bg_grad - - classification_grads /= (fg_num) - - cls_loss=1-prec.mean() - ctx.save_for_backward(classification_grads) - - return cls_loss, rank, order - - @staticmethod - def backward(ctx, out_grad1, out_grad2, out_grad3): - g1, =ctx.saved_tensors - return g1*out_grad1, None, None, None, None - - -class APLoss(torch.autograd.Function): - @staticmethod - def forward(ctx, logits, targets, delta=1.): - classification_grads=torch.zeros(logits.shape).cuda() - - #Filter fg logits - fg_labels = (targets == 1) - fg_logits = logits[fg_labels] - fg_num = len(fg_logits) - - #Do not use bg with scores less than minimum fg logit - #since changing its score does not have an effect on precision - threshold_logit = torch.min(fg_logits)-delta - - #Get valid bg logits - relevant_bg_labels=((targets==0)&(logits>=threshold_logit)) - relevant_bg_logits=logits[relevant_bg_labels] - relevant_bg_grad=torch.zeros(len(relevant_bg_logits)).cuda() - rank=torch.zeros(fg_num).cuda() - prec=torch.zeros(fg_num).cuda() - fg_grad=torch.zeros(fg_num).cuda() - - max_prec=0 - #sort the fg logits - order=torch.argsort(fg_logits) - #Loops over each positive following the order - for ii in order: - #x_ij s as score differences with fgs - fg_relations=fg_logits-fg_logits[ii] - #Apply piecewise linear function and determine relations with fgs - fg_relations=torch.clamp(fg_relations/(2*delta)+0.5,min=0,max=1) - #Discard i=j in the summation in rank_pos - fg_relations[ii]=0 - - #x_ij s as score differences with bgs - bg_relations=relevant_bg_logits-fg_logits[ii] - #Apply piecewise linear function and determine relations with bgs - bg_relations=torch.clamp(bg_relations/(2*delta)+0.5,min=0,max=1) - - #Compute the rank of the example within fgs and number of bgs with larger scores - rank_pos=1+torch.sum(fg_relations) - FP_num=torch.sum(bg_relations) - #Store the total since it is normalizer also for aLRP Regression error - rank[ii]=rank_pos+FP_num - - #Compute precision for this example - current_prec=rank_pos/rank[ii] - - #Compute interpolated AP and store gradients for relevant bg examples - if (max_prec<=current_prec): - max_prec=current_prec - relevant_bg_grad += (bg_relations/rank[ii]) - else: - relevant_bg_grad += (bg_relations/rank[ii])*(((1-max_prec)/(1-current_prec))) - - #Store fg gradients - fg_grad[ii]=-(1-max_prec) - prec[ii]=max_prec - - #aLRP with grad formulation fg gradient - classification_grads[fg_labels]= fg_grad - #aLRP with grad formulation bg gradient - classification_grads[relevant_bg_labels]= relevant_bg_grad - - classification_grads /= fg_num - - cls_loss=1-prec.mean() - ctx.save_for_backward(classification_grads) - - return cls_loss - - @staticmethod - def backward(ctx, out_grad1): - g1, =ctx.saved_tensors - return g1*out_grad1, None, None - - -class ComputeLoss: - # Compute losses - def __init__(self, model, autobalance=False): - super(ComputeLoss, self).__init__() - device = next(model.parameters()).device # get model device - h = model.hyp # hyperparameters - - # Define criteria - BCEcls = nn.BCEWithLogitsLoss(pos_weight=torch.tensor([h['cls_pw']], device=device)) - BCEobj = nn.BCEWithLogitsLoss(pos_weight=torch.tensor([h['obj_pw']], device=device)) - - # Class label smoothing https://arxiv.org/pdf/1902.04103.pdf eqn 3 - self.cp, self.cn = smooth_BCE(eps=h.get('label_smoothing', 0.0)) # positive, negative BCE targets - - # Focal loss - g = h['fl_gamma'] # focal loss gamma - if g > 0: - BCEcls, BCEobj = FocalLoss(BCEcls, g), FocalLoss(BCEobj, g) - - det = model.module.model[-1] if is_parallel(model) else model.model[-1] # Detect() module - self.balance = {3: [4.0, 1.0, 0.4]}.get(det.nl, [4.0, 1.0, 0.25, 0.06, .02]) # P3-P7 - #self.balance = {3: [4.0, 1.0, 0.4]}.get(det.nl, [4.0, 1.0, 0.25, 0.1, .05]) # P3-P7 - #self.balance = {3: [4.0, 1.0, 0.4]}.get(det.nl, [4.0, 1.0, 0.5, 0.4, .1]) # P3-P7 - self.ssi = list(det.stride).index(16) if autobalance else 0 # stride 16 index - self.BCEcls, self.BCEobj, self.gr, self.hyp, self.autobalance = BCEcls, BCEobj, model.gr, h, autobalance - for k in 'na', 'nc', 'nl', 'anchors': - setattr(self, k, getattr(det, k)) - - def __call__(self, p, targets): # predictions, targets, model - device = targets.device - lcls, lbox, lobj = torch.zeros(1, device=device), torch.zeros(1, device=device), torch.zeros(1, device=device) - tcls, tbox, indices, anchors = self.build_targets(p, targets) # targets - - # Losses - for i, pi in enumerate(p): # layer index, layer predictions - b, a, gj, gi = indices[i] # image, anchor, gridy, gridx - tobj = torch.zeros_like(pi[..., 0], device=device) # target obj - - n = b.shape[0] # number of targets - if n: - ps = pi[b, a, gj, gi] # prediction subset corresponding to targets - - # Regression - pxy = ps[:, :2].sigmoid() * 2. - 0.5 - pwh = (ps[:, 2:4].sigmoid() * 2) ** 2 * anchors[i] - pbox = torch.cat((pxy, pwh), 1) # predicted box - iou = bbox_iou(pbox.T, tbox[i], x1y1x2y2=False, CIoU=True) # iou(prediction, target) - lbox += (1.0 - iou).mean() # iou loss - - # Objectness - tobj[b, a, gj, gi] = (1.0 - self.gr) + self.gr * iou.detach().clamp(0).type(tobj.dtype) # iou ratio - - # Classification - if self.nc > 1: # cls loss (only if multiple classes) - t = torch.full_like(ps[:, 5:], self.cn, device=device) # targets - t[range(n), tcls[i]] = self.cp - #t[t==self.cp] = iou.detach().clamp(0).type(t.dtype) - lcls += self.BCEcls(ps[:, 5:], t) # BCE - - # Append targets to text file - # with open('targets.txt', 'a') as file: - # [file.write('%11.5g ' * 4 % tuple(x) + '\n') for x in torch.cat((txy[i], twh[i]), 1)] - - obji = self.BCEobj(pi[..., 4], tobj) - lobj += obji * self.balance[i] # obj loss - if self.autobalance: - self.balance[i] = self.balance[i] * 0.9999 + 0.0001 / obji.detach().item() - - if self.autobalance: - self.balance = [x / self.balance[self.ssi] for x in self.balance] - lbox *= self.hyp['box'] - lobj *= self.hyp['obj'] - lcls *= self.hyp['cls'] - bs = tobj.shape[0] # batch size - - loss = lbox + lobj + lcls - return loss * bs, torch.cat((lbox, lobj, lcls, loss)).detach() - - def build_targets(self, p, targets): - # Build targets for compute_loss(), input targets(image,class,x,y,w,h) - na, nt = self.na, targets.shape[0] # number of anchors, targets - tcls, tbox, indices, anch = [], [], [], [] - gain = torch.ones(7, device=targets.device) # normalized to gridspace gain - ai = torch.arange(na, device=targets.device).float().view(na, 1).repeat(1, nt) # same as .repeat_interleave(nt) - targets = torch.cat((targets.repeat(na, 1, 1), ai[:, :, None]), 2) # append anchor indices - - g = 0.5 # bias - off = torch.tensor([[0, 0], - [1, 0], [0, 1], [-1, 0], [0, -1], # j,k,l,m - # [1, 1], [1, -1], [-1, 1], [-1, -1], # jk,jm,lk,lm - ], device=targets.device).float() * g # offsets - - for i in range(self.nl): - anchors = self.anchors[i] - gain[2:6] = torch.tensor(p[i].shape)[[3, 2, 3, 2]] # xyxy gain - - # Match targets to anchors - t = targets * gain - if nt: - # Matches - r = t[:, :, 4:6] / anchors[:, None] # wh ratio - j = torch.max(r, 1. / r).max(2)[0] < self.hyp['anchor_t'] # compare - # j = wh_iou(anchors, t[:, 4:6]) > model.hyp['iou_t'] # iou(3,n)=wh_iou(anchors(3,2), gwh(n,2)) - t = t[j] # filter - - # Offsets - gxy = t[:, 2:4] # grid xy - gxi = gain[[2, 3]] - gxy # inverse - j, k = ((gxy % 1. < g) & (gxy > 1.)).T - l, m = ((gxi % 1. < g) & (gxi > 1.)).T - j = torch.stack((torch.ones_like(j), j, k, l, m)) - t = t.repeat((5, 1, 1))[j] - offsets = (torch.zeros_like(gxy)[None] + off[:, None])[j] - else: - t = targets[0] - offsets = 0 - - # Define - b, c = t[:, :2].long().T # image, class - gxy = t[:, 2:4] # grid xy - gwh = t[:, 4:6] # grid wh - gij = (gxy - offsets).long() - gi, gj = gij.T # grid xy indices - - # Append - a = t[:, 6].long() # anchor indices - indices.append((b, a, gj.clamp_(0, gain[3] - 1), gi.clamp_(0, gain[2] - 1))) # image, anchor, grid indices - tbox.append(torch.cat((gxy - gij, gwh), 1)) # box - anch.append(anchors[a]) # anchors - tcls.append(c) # class - - return tcls, tbox, indices, anch - - -class ComputeLossOTA: - # Compute losses - def __init__(self, model, autobalance=False): - super(ComputeLossOTA, self).__init__() - device = next(model.parameters()).device # get model device - h = model.hyp # hyperparameters - - # Define criteria - BCEcls = nn.BCEWithLogitsLoss(pos_weight=torch.tensor([h['cls_pw']], device=device)) - BCEobj = nn.BCEWithLogitsLoss(pos_weight=torch.tensor([h['obj_pw']], device=device)) - - # Class label smoothing https://arxiv.org/pdf/1902.04103.pdf eqn 3 - self.cp, self.cn = smooth_BCE(eps=h.get('label_smoothing', 0.0)) # positive, negative BCE targets - - # Focal loss - g = h['fl_gamma'] # focal loss gamma - if g > 0: - BCEcls, BCEobj = FocalLoss(BCEcls, g), FocalLoss(BCEobj, g) - - det = model.module.model[-1] if is_parallel(model) else model.model[-1] # Detect() module - self.balance = {3: [4.0, 1.0, 0.4]}.get(det.nl, [4.0, 1.0, 0.25, 0.06, .02]) # P3-P7 - self.ssi = list(det.stride).index(16) if autobalance else 0 # stride 16 index - self.BCEcls, self.BCEobj, self.gr, self.hyp, self.autobalance = BCEcls, BCEobj, model.gr, h, autobalance - for k in 'na', 'nc', 'nl', 'anchors', 'stride': - setattr(self, k, getattr(det, k)) - - def __call__(self, p, targets, imgs): # predictions, targets, model - device = targets.device - lcls, lbox, lobj = torch.zeros(1, device=device), torch.zeros(1, device=device), torch.zeros(1, device=device) - bs, as_, gjs, gis, targets, anchors = self.build_targets(p, targets, imgs) - pre_gen_gains = [torch.tensor(pp.shape, device=device)[[3, 2, 3, 2]] for pp in p] - - - # Losses - for i, pi in enumerate(p): # layer index, layer predictions - b, a, gj, gi = bs[i], as_[i], gjs[i], gis[i] # image, anchor, gridy, gridx - tobj = torch.zeros_like(pi[..., 0], device=device) # target obj - - n = b.shape[0] # number of targets - if n: - ps = pi[b, a, gj, gi] # prediction subset corresponding to targets - - # Regression - grid = torch.stack([gi, gj], dim=1) - pxy = ps[:, :2].sigmoid() * 2. - 0.5 - #pxy = ps[:, :2].sigmoid() * 3. - 1. - pwh = (ps[:, 2:4].sigmoid() * 2) ** 2 * anchors[i] - pbox = torch.cat((pxy, pwh), 1) # predicted box - selected_tbox = targets[i][:, 2:6] * pre_gen_gains[i] - selected_tbox[:, :2] -= grid - iou = bbox_iou(pbox.T, selected_tbox, x1y1x2y2=False, CIoU=True) # iou(prediction, target) - lbox += (1.0 - iou).mean() # iou loss - - # Objectness - tobj[b, a, gj, gi] = (1.0 - self.gr) + self.gr * iou.detach().clamp(0).type(tobj.dtype) # iou ratio - - # Classification - selected_tcls = targets[i][:, 1].long() - if self.nc > 1: # cls loss (only if multiple classes) - t = torch.full_like(ps[:, 5:], self.cn, device=device) # targets - t[range(n), selected_tcls] = self.cp - lcls += self.BCEcls(ps[:, 5:], t) # BCE - - # Append targets to text file - # with open('targets.txt', 'a') as file: - # [file.write('%11.5g ' * 4 % tuple(x) + '\n') for x in torch.cat((txy[i], twh[i]), 1)] - - obji = self.BCEobj(pi[..., 4], tobj) - lobj += obji * self.balance[i] # obj loss - if self.autobalance: - self.balance[i] = self.balance[i] * 0.9999 + 0.0001 / obji.detach().item() - - if self.autobalance: - self.balance = [x / self.balance[self.ssi] for x in self.balance] - lbox *= self.hyp['box'] - lobj *= self.hyp['obj'] - lcls *= self.hyp['cls'] - bs = tobj.shape[0] # batch size - - loss = lbox + lobj + lcls - return loss * bs, torch.cat((lbox, lobj, lcls, loss)).detach() - - def build_targets(self, p, targets, imgs): - - #indices, anch = self.find_positive(p, targets) - indices, anch = self.find_3_positive(p, targets) - #indices, anch = self.find_4_positive(p, targets) - #indices, anch = self.find_5_positive(p, targets) - #indices, anch = self.find_9_positive(p, targets) - - matching_bs = [[] for pp in p] - matching_as = [[] for pp in p] - matching_gjs = [[] for pp in p] - matching_gis = [[] for pp in p] - matching_targets = [[] for pp in p] - matching_anchs = [[] for pp in p] - - nl = len(p) - - for batch_idx in range(p[0].shape[0]): - - b_idx = targets[:, 0]==batch_idx - this_target = targets[b_idx] - if this_target.shape[0] == 0: - continue - - txywh = this_target[:, 2:6] * imgs[batch_idx].shape[1] - txyxy = xywh2xyxy(txywh) - - pxyxys = [] - p_cls = [] - p_obj = [] - from_which_layer = [] - all_b = [] - all_a = [] - all_gj = [] - all_gi = [] - all_anch = [] - - for i, pi in enumerate(p): - - b, a, gj, gi = indices[i] - idx = (b == batch_idx) - b, a, gj, gi = b[idx], a[idx], gj[idx], gi[idx] - all_b.append(b) - all_a.append(a) - all_gj.append(gj) - all_gi.append(gi) - all_anch.append(anch[i][idx]) - from_which_layer.append(torch.ones(size=(len(b),)) * i) - - fg_pred = pi[b, a, gj, gi] - p_obj.append(fg_pred[:, 4:5]) - p_cls.append(fg_pred[:, 5:]) - - grid = torch.stack([gi, gj], dim=1) - pxy = (fg_pred[:, :2].sigmoid() * 2. - 0.5 + grid) * self.stride[i] #/ 8. - #pxy = (fg_pred[:, :2].sigmoid() * 3. - 1. + grid) * self.stride[i] - pwh = (fg_pred[:, 2:4].sigmoid() * 2) ** 2 * anch[i][idx] * self.stride[i] #/ 8. - pxywh = torch.cat([pxy, pwh], dim=-1) - pxyxy = xywh2xyxy(pxywh) - pxyxys.append(pxyxy) - - pxyxys = torch.cat(pxyxys, dim=0) - if pxyxys.shape[0] == 0: - continue - p_obj = torch.cat(p_obj, dim=0) - p_cls = torch.cat(p_cls, dim=0) - from_which_layer = torch.cat(from_which_layer, dim=0) - all_b = torch.cat(all_b, dim=0) - all_a = torch.cat(all_a, dim=0) - all_gj = torch.cat(all_gj, dim=0) - all_gi = torch.cat(all_gi, dim=0) - all_anch = torch.cat(all_anch, dim=0) - - pair_wise_iou = box_iou(txyxy, pxyxys) - - pair_wise_iou_loss = -torch.log(pair_wise_iou + 1e-8) - - top_k, _ = torch.topk(pair_wise_iou, min(10, pair_wise_iou.shape[1]), dim=1) - dynamic_ks = torch.clamp(top_k.sum(1).int(), min=1) - - gt_cls_per_image = ( - F.one_hot(this_target[:, 1].to(torch.int64), self.nc) - .float() - .unsqueeze(1) - .repeat(1, pxyxys.shape[0], 1) - ) - - num_gt = this_target.shape[0] - cls_preds_ = ( - p_cls.float().unsqueeze(0).repeat(num_gt, 1, 1).sigmoid_() - * p_obj.unsqueeze(0).repeat(num_gt, 1, 1).sigmoid_() - ) - - y = cls_preds_.sqrt_() - pair_wise_cls_loss = F.binary_cross_entropy_with_logits( - torch.log(y/(1-y)) , gt_cls_per_image, reduction="none" - ).sum(-1) - del cls_preds_ - - cost = ( - pair_wise_cls_loss - + 3.0 * pair_wise_iou_loss - ) - - matching_matrix = torch.zeros_like(cost) - - for gt_idx in range(num_gt): - _, pos_idx = torch.topk( - cost[gt_idx], k=dynamic_ks[gt_idx].item(), largest=False - ) - matching_matrix[gt_idx][pos_idx] = 1.0 - - del top_k, dynamic_ks - anchor_matching_gt = matching_matrix.sum(0) - if (anchor_matching_gt > 1).sum() > 0: - _, cost_argmin = torch.min(cost[:, anchor_matching_gt > 1], dim=0) - matching_matrix[:, anchor_matching_gt > 1] *= 0.0 - matching_matrix[cost_argmin, anchor_matching_gt > 1] = 1.0 - fg_mask_inboxes = matching_matrix.sum(0) > 0.0 - matched_gt_inds = matching_matrix[:, fg_mask_inboxes].argmax(0) - - from_which_layer = from_which_layer[fg_mask_inboxes] - all_b = all_b[fg_mask_inboxes] - all_a = all_a[fg_mask_inboxes] - all_gj = all_gj[fg_mask_inboxes] - all_gi = all_gi[fg_mask_inboxes] - all_anch = all_anch[fg_mask_inboxes] - - this_target = this_target[matched_gt_inds] - - for i in range(nl): - layer_idx = from_which_layer == i - matching_bs[i].append(all_b[layer_idx]) - matching_as[i].append(all_a[layer_idx]) - matching_gjs[i].append(all_gj[layer_idx]) - matching_gis[i].append(all_gi[layer_idx]) - matching_targets[i].append(this_target[layer_idx]) - matching_anchs[i].append(all_anch[layer_idx]) - - for i in range(nl): - matching_bs[i] = torch.cat(matching_bs[i], dim=0) - matching_as[i] = torch.cat(matching_as[i], dim=0) - matching_gjs[i] = torch.cat(matching_gjs[i], dim=0) - matching_gis[i] = torch.cat(matching_gis[i], dim=0) - matching_targets[i] = torch.cat(matching_targets[i], dim=0) - matching_anchs[i] = torch.cat(matching_anchs[i], dim=0) - - return matching_bs, matching_as, matching_gjs, matching_gis, matching_targets, matching_anchs - - def find_3_positive(self, p, targets): - # Build targets for compute_loss(), input targets(image,class,x,y,w,h) - na, nt = self.na, targets.shape[0] # number of anchors, targets - indices, anch = [], [] - gain = torch.ones(7, device=targets.device) # normalized to gridspace gain - ai = torch.arange(na, device=targets.device).float().view(na, 1).repeat(1, nt) # same as .repeat_interleave(nt) - targets = torch.cat((targets.repeat(na, 1, 1), ai[:, :, None]), 2) # append anchor indices - - g = 0.5 # bias - off = torch.tensor([[0, 0], - [1, 0], [0, 1], [-1, 0], [0, -1], # j,k,l,m - # [1, 1], [1, -1], [-1, 1], [-1, -1], # jk,jm,lk,lm - ], device=targets.device).float() * g # offsets - - for i in range(self.nl): - anchors = self.anchors[i] - gain[2:6] = torch.tensor(p[i].shape)[[3, 2, 3, 2]] # xyxy gain - - # Match targets to anchors - t = targets * gain - if nt: - # Matches - r = t[:, :, 4:6] / anchors[:, None] # wh ratio - j = torch.max(r, 1. / r).max(2)[0] < self.hyp['anchor_t'] # compare - # j = wh_iou(anchors, t[:, 4:6]) > model.hyp['iou_t'] # iou(3,n)=wh_iou(anchors(3,2), gwh(n,2)) - t = t[j] # filter - - # Offsets - gxy = t[:, 2:4] # grid xy - gxi = gain[[2, 3]] - gxy # inverse - j, k = ((gxy % 1. < g) & (gxy > 1.)).T - l, m = ((gxi % 1. < g) & (gxi > 1.)).T - j = torch.stack((torch.ones_like(j), j, k, l, m)) - t = t.repeat((5, 1, 1))[j] - offsets = (torch.zeros_like(gxy)[None] + off[:, None])[j] - else: - t = targets[0] - offsets = 0 - - # Define - b, c = t[:, :2].long().T # image, class - gxy = t[:, 2:4] # grid xy - gwh = t[:, 4:6] # grid wh - gij = (gxy - offsets).long() - gi, gj = gij.T # grid xy indices - - # Append - a = t[:, 6].long() # anchor indices - indices.append((b, a, gj.clamp_(0, gain[3] - 1), gi.clamp_(0, gain[2] - 1))) # image, anchor, grid indices - anch.append(anchors[a]) # anchors - - return indices, anch - - -class ComputeLossBinOTA: - # Compute losses - def __init__(self, model, autobalance=False): - super(ComputeLossBinOTA, self).__init__() - device = next(model.parameters()).device # get model device - h = model.hyp # hyperparameters - - # Define criteria - BCEcls = nn.BCEWithLogitsLoss(pos_weight=torch.tensor([h['cls_pw']], device=device)) - BCEobj = nn.BCEWithLogitsLoss(pos_weight=torch.tensor([h['obj_pw']], device=device)) - #MSEangle = nn.MSELoss().to(device) - - # Class label smoothing https://arxiv.org/pdf/1902.04103.pdf eqn 3 - self.cp, self.cn = smooth_BCE(eps=h.get('label_smoothing', 0.0)) # positive, negative BCE targets - - # Focal loss - g = h['fl_gamma'] # focal loss gamma - if g > 0: - BCEcls, BCEobj = FocalLoss(BCEcls, g), FocalLoss(BCEobj, g) - - det = model.module.model[-1] if is_parallel(model) else model.model[-1] # Detect() module - self.balance = {3: [4.0, 1.0, 0.4]}.get(det.nl, [4.0, 1.0, 0.25, 0.06, .02]) # P3-P7 - self.ssi = list(det.stride).index(16) if autobalance else 0 # stride 16 index - self.BCEcls, self.BCEobj, self.gr, self.hyp, self.autobalance = BCEcls, BCEobj, model.gr, h, autobalance - for k in 'na', 'nc', 'nl', 'anchors', 'stride', 'bin_count': - setattr(self, k, getattr(det, k)) - - #xy_bin_sigmoid = SigmoidBin(bin_count=11, min=-0.5, max=1.5, use_loss_regression=False).to(device) - wh_bin_sigmoid = SigmoidBin(bin_count=self.bin_count, min=0.0, max=4.0, use_loss_regression=False).to(device) - #angle_bin_sigmoid = SigmoidBin(bin_count=31, min=-1.1, max=1.1, use_loss_regression=False).to(device) - self.wh_bin_sigmoid = wh_bin_sigmoid - - def __call__(self, p, targets, imgs): # predictions, targets, model - device = targets.device - lcls, lbox, lobj = torch.zeros(1, device=device), torch.zeros(1, device=device), torch.zeros(1, device=device) - bs, as_, gjs, gis, targets, anchors = self.build_targets(p, targets, imgs) - pre_gen_gains = [torch.tensor(pp.shape, device=device)[[3, 2, 3, 2]] for pp in p] - - - # Losses - for i, pi in enumerate(p): # layer index, layer predictions - b, a, gj, gi = bs[i], as_[i], gjs[i], gis[i] # image, anchor, gridy, gridx - tobj = torch.zeros_like(pi[..., 0], device=device) # target obj - - obj_idx = self.wh_bin_sigmoid.get_length()*2 + 2 # x,y, w-bce, h-bce # xy_bin_sigmoid.get_length()*2 - - n = b.shape[0] # number of targets - if n: - ps = pi[b, a, gj, gi] # prediction subset corresponding to targets - - # Regression - grid = torch.stack([gi, gj], dim=1) - selected_tbox = targets[i][:, 2:6] * pre_gen_gains[i] - selected_tbox[:, :2] -= grid - - #pxy = ps[:, :2].sigmoid() * 2. - 0.5 - ##pxy = ps[:, :2].sigmoid() * 3. - 1. - #pwh = (ps[:, 2:4].sigmoid() * 2) ** 2 * anchors[i] - #pbox = torch.cat((pxy, pwh), 1) # predicted box - - #x_loss, px = xy_bin_sigmoid.training_loss(ps[..., 0:12], tbox[i][..., 0]) - #y_loss, py = xy_bin_sigmoid.training_loss(ps[..., 12:24], tbox[i][..., 1]) - w_loss, pw = self.wh_bin_sigmoid.training_loss(ps[..., 2:(3+self.bin_count)], selected_tbox[..., 2] / anchors[i][..., 0]) - h_loss, ph = self.wh_bin_sigmoid.training_loss(ps[..., (3+self.bin_count):obj_idx], selected_tbox[..., 3] / anchors[i][..., 1]) - - pw *= anchors[i][..., 0] - ph *= anchors[i][..., 1] - - px = ps[:, 0].sigmoid() * 2. - 0.5 - py = ps[:, 1].sigmoid() * 2. - 0.5 - - lbox += w_loss + h_loss # + x_loss + y_loss - - #print(f"\n px = {px.shape}, py = {py.shape}, pw = {pw.shape}, ph = {ph.shape} \n") - - pbox = torch.cat((px.unsqueeze(1), py.unsqueeze(1), pw.unsqueeze(1), ph.unsqueeze(1)), 1).to(device) # predicted box - - - - - iou = bbox_iou(pbox.T, selected_tbox, x1y1x2y2=False, CIoU=True) # iou(prediction, target) - lbox += (1.0 - iou).mean() # iou loss - - # Objectness - tobj[b, a, gj, gi] = (1.0 - self.gr) + self.gr * iou.detach().clamp(0).type(tobj.dtype) # iou ratio - - # Classification - selected_tcls = targets[i][:, 1].long() - if self.nc > 1: # cls loss (only if multiple classes) - t = torch.full_like(ps[:, (1+obj_idx):], self.cn, device=device) # targets - t[range(n), selected_tcls] = self.cp - lcls += self.BCEcls(ps[:, (1+obj_idx):], t) # BCE - - # Append targets to text file - # with open('targets.txt', 'a') as file: - # [file.write('%11.5g ' * 4 % tuple(x) + '\n') for x in torch.cat((txy[i], twh[i]), 1)] - - obji = self.BCEobj(pi[..., obj_idx], tobj) - lobj += obji * self.balance[i] # obj loss - if self.autobalance: - self.balance[i] = self.balance[i] * 0.9999 + 0.0001 / obji.detach().item() - - if self.autobalance: - self.balance = [x / self.balance[self.ssi] for x in self.balance] - lbox *= self.hyp['box'] - lobj *= self.hyp['obj'] - lcls *= self.hyp['cls'] - bs = tobj.shape[0] # batch size - - loss = lbox + lobj + lcls - return loss * bs, torch.cat((lbox, lobj, lcls, loss)).detach() - - def build_targets(self, p, targets, imgs): - - #indices, anch = self.find_positive(p, targets) - indices, anch = self.find_3_positive(p, targets) - #indices, anch = self.find_4_positive(p, targets) - #indices, anch = self.find_5_positive(p, targets) - #indices, anch = self.find_9_positive(p, targets) - - matching_bs = [[] for pp in p] - matching_as = [[] for pp in p] - matching_gjs = [[] for pp in p] - matching_gis = [[] for pp in p] - matching_targets = [[] for pp in p] - matching_anchs = [[] for pp in p] - - nl = len(p) - - for batch_idx in range(p[0].shape[0]): - - b_idx = targets[:, 0]==batch_idx - this_target = targets[b_idx] - if this_target.shape[0] == 0: - continue - - txywh = this_target[:, 2:6] * imgs[batch_idx].shape[1] - txyxy = xywh2xyxy(txywh) - - pxyxys = [] - p_cls = [] - p_obj = [] - from_which_layer = [] - all_b = [] - all_a = [] - all_gj = [] - all_gi = [] - all_anch = [] - - for i, pi in enumerate(p): - - obj_idx = self.wh_bin_sigmoid.get_length()*2 + 2 - - b, a, gj, gi = indices[i] - idx = (b == batch_idx) - b, a, gj, gi = b[idx], a[idx], gj[idx], gi[idx] - all_b.append(b) - all_a.append(a) - all_gj.append(gj) - all_gi.append(gi) - all_anch.append(anch[i][idx]) - from_which_layer.append(torch.ones(size=(len(b),)) * i) - - fg_pred = pi[b, a, gj, gi] - p_obj.append(fg_pred[:, obj_idx:(obj_idx+1)]) - p_cls.append(fg_pred[:, (obj_idx+1):]) - - grid = torch.stack([gi, gj], dim=1) - pxy = (fg_pred[:, :2].sigmoid() * 2. - 0.5 + grid) * self.stride[i] #/ 8. - #pwh = (fg_pred[:, 2:4].sigmoid() * 2) ** 2 * anch[i][idx] * self.stride[i] #/ 8. - pw = self.wh_bin_sigmoid.forward(fg_pred[..., 2:(3+self.bin_count)].sigmoid()) * anch[i][idx][:, 0] * self.stride[i] - ph = self.wh_bin_sigmoid.forward(fg_pred[..., (3+self.bin_count):obj_idx].sigmoid()) * anch[i][idx][:, 1] * self.stride[i] - - pxywh = torch.cat([pxy, pw.unsqueeze(1), ph.unsqueeze(1)], dim=-1) - pxyxy = xywh2xyxy(pxywh) - pxyxys.append(pxyxy) - - pxyxys = torch.cat(pxyxys, dim=0) - if pxyxys.shape[0] == 0: - continue - p_obj = torch.cat(p_obj, dim=0) - p_cls = torch.cat(p_cls, dim=0) - from_which_layer = torch.cat(from_which_layer, dim=0) - all_b = torch.cat(all_b, dim=0) - all_a = torch.cat(all_a, dim=0) - all_gj = torch.cat(all_gj, dim=0) - all_gi = torch.cat(all_gi, dim=0) - all_anch = torch.cat(all_anch, dim=0) - - pair_wise_iou = box_iou(txyxy, pxyxys) - - pair_wise_iou_loss = -torch.log(pair_wise_iou + 1e-8) - - top_k, _ = torch.topk(pair_wise_iou, min(10, pair_wise_iou.shape[1]), dim=1) - dynamic_ks = torch.clamp(top_k.sum(1).int(), min=1) - - gt_cls_per_image = ( - F.one_hot(this_target[:, 1].to(torch.int64), self.nc) - .float() - .unsqueeze(1) - .repeat(1, pxyxys.shape[0], 1) - ) - - num_gt = this_target.shape[0] - cls_preds_ = ( - p_cls.float().unsqueeze(0).repeat(num_gt, 1, 1).sigmoid_() - * p_obj.unsqueeze(0).repeat(num_gt, 1, 1).sigmoid_() - ) - - y = cls_preds_.sqrt_() - pair_wise_cls_loss = F.binary_cross_entropy_with_logits( - torch.log(y/(1-y)) , gt_cls_per_image, reduction="none" - ).sum(-1) - del cls_preds_ - - cost = ( - pair_wise_cls_loss - + 3.0 * pair_wise_iou_loss - ) - - matching_matrix = torch.zeros_like(cost) - - for gt_idx in range(num_gt): - _, pos_idx = torch.topk( - cost[gt_idx], k=dynamic_ks[gt_idx].item(), largest=False - ) - matching_matrix[gt_idx][pos_idx] = 1.0 - - del top_k, dynamic_ks - anchor_matching_gt = matching_matrix.sum(0) - if (anchor_matching_gt > 1).sum() > 0: - _, cost_argmin = torch.min(cost[:, anchor_matching_gt > 1], dim=0) - matching_matrix[:, anchor_matching_gt > 1] *= 0.0 - matching_matrix[cost_argmin, anchor_matching_gt > 1] = 1.0 - fg_mask_inboxes = matching_matrix.sum(0) > 0.0 - matched_gt_inds = matching_matrix[:, fg_mask_inboxes].argmax(0) - - from_which_layer = from_which_layer[fg_mask_inboxes] - all_b = all_b[fg_mask_inboxes] - all_a = all_a[fg_mask_inboxes] - all_gj = all_gj[fg_mask_inboxes] - all_gi = all_gi[fg_mask_inboxes] - all_anch = all_anch[fg_mask_inboxes] - - this_target = this_target[matched_gt_inds] - - for i in range(nl): - layer_idx = from_which_layer == i - matching_bs[i].append(all_b[layer_idx]) - matching_as[i].append(all_a[layer_idx]) - matching_gjs[i].append(all_gj[layer_idx]) - matching_gis[i].append(all_gi[layer_idx]) - matching_targets[i].append(this_target[layer_idx]) - matching_anchs[i].append(all_anch[layer_idx]) - - for i in range(nl): - matching_bs[i] = torch.cat(matching_bs[i], dim=0) - matching_as[i] = torch.cat(matching_as[i], dim=0) - matching_gjs[i] = torch.cat(matching_gjs[i], dim=0) - matching_gis[i] = torch.cat(matching_gis[i], dim=0) - matching_targets[i] = torch.cat(matching_targets[i], dim=0) - matching_anchs[i] = torch.cat(matching_anchs[i], dim=0) - - return matching_bs, matching_as, matching_gjs, matching_gis, matching_targets, matching_anchs - - def find_3_positive(self, p, targets): - # Build targets for compute_loss(), input targets(image,class,x,y,w,h) - na, nt = self.na, targets.shape[0] # number of anchors, targets - indices, anch = [], [] - gain = torch.ones(7, device=targets.device) # normalized to gridspace gain - ai = torch.arange(na, device=targets.device).float().view(na, 1).repeat(1, nt) # same as .repeat_interleave(nt) - targets = torch.cat((targets.repeat(na, 1, 1), ai[:, :, None]), 2) # append anchor indices - - g = 0.5 # bias - off = torch.tensor([[0, 0], - [1, 0], [0, 1], [-1, 0], [0, -1], # j,k,l,m - # [1, 1], [1, -1], [-1, 1], [-1, -1], # jk,jm,lk,lm - ], device=targets.device).float() * g # offsets - - for i in range(self.nl): - anchors = self.anchors[i] - gain[2:6] = torch.tensor(p[i].shape)[[3, 2, 3, 2]] # xyxy gain - - # Match targets to anchors - t = targets * gain - if nt: - # Matches - r = t[:, :, 4:6] / anchors[:, None] # wh ratio - j = torch.max(r, 1. / r).max(2)[0] < self.hyp['anchor_t'] # compare - # j = wh_iou(anchors, t[:, 4:6]) > model.hyp['iou_t'] # iou(3,n)=wh_iou(anchors(3,2), gwh(n,2)) - t = t[j] # filter - - # Offsets - gxy = t[:, 2:4] # grid xy - gxi = gain[[2, 3]] - gxy # inverse - j, k = ((gxy % 1. < g) & (gxy > 1.)).T - l, m = ((gxi % 1. < g) & (gxi > 1.)).T - j = torch.stack((torch.ones_like(j), j, k, l, m)) - t = t.repeat((5, 1, 1))[j] - offsets = (torch.zeros_like(gxy)[None] + off[:, None])[j] - else: - t = targets[0] - offsets = 0 - - # Define - b, c = t[:, :2].long().T # image, class - gxy = t[:, 2:4] # grid xy - gwh = t[:, 4:6] # grid wh - gij = (gxy - offsets).long() - gi, gj = gij.T # grid xy indices - - # Append - a = t[:, 6].long() # anchor indices - indices.append((b, a, gj.clamp_(0, gain[3] - 1), gi.clamp_(0, gain[2] - 1))) # image, anchor, grid indices - anch.append(anchors[a]) # anchors - - return indices, anch - diff --git a/spaces/hareshhecker/prompthero-openjourney-v2v3/app.py b/spaces/hareshhecker/prompthero-openjourney-v2v3/app.py deleted file mode 100644 index 4fa45eda1d4a0af263ec59b35e375b837fe1ecf1..0000000000000000000000000000000000000000 --- a/spaces/hareshhecker/prompthero-openjourney-v2v3/app.py +++ /dev/null @@ -1,3 +0,0 @@ -import gradio as gr - -gr.Interface.load("models/prompthero/openjourney-v2").launch() \ No newline at end of file diff --git a/spaces/hasibzunair/fifa-tryon-demo/Self-Correction-Human-Parsing-for-ACGPN/mhp_extension/detectron2/detectron2/data/datasets/builtin_meta.py b/spaces/hasibzunair/fifa-tryon-demo/Self-Correction-Human-Parsing-for-ACGPN/mhp_extension/detectron2/detectron2/data/datasets/builtin_meta.py deleted file mode 100644 index 74c79863a9d1ef5df9b5ce64f97d6be8e4e37d59..0000000000000000000000000000000000000000 --- a/spaces/hasibzunair/fifa-tryon-demo/Self-Correction-Human-Parsing-for-ACGPN/mhp_extension/detectron2/detectron2/data/datasets/builtin_meta.py +++ /dev/null @@ -1,267 +0,0 @@ -# -*- coding: utf-8 -*- -# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved - - -# All coco categories, together with their nice-looking visualization colors -# It's from https://github.com/cocodataset/panopticapi/blob/master/panoptic_coco_categories.json -COCO_CATEGORIES = [ - {"color": [220, 20, 60], "isthing": 1, "id": 1, "name": "person"}, - {"color": [119, 11, 32], "isthing": 1, "id": 2, "name": "bicycle"}, - {"color": [0, 0, 142], "isthing": 1, "id": 3, "name": "car"}, - {"color": [0, 0, 230], "isthing": 1, "id": 4, "name": "motorcycle"}, - {"color": [106, 0, 228], "isthing": 1, "id": 5, "name": "airplane"}, - {"color": [0, 60, 100], "isthing": 1, "id": 6, "name": "bus"}, - {"color": [0, 80, 100], "isthing": 1, "id": 7, "name": "train"}, - {"color": [0, 0, 70], "isthing": 1, "id": 8, "name": "truck"}, - {"color": [0, 0, 192], "isthing": 1, "id": 9, "name": "boat"}, - {"color": [250, 170, 30], "isthing": 1, "id": 10, "name": "traffic light"}, - {"color": [100, 170, 30], "isthing": 1, "id": 11, "name": "fire hydrant"}, - {"color": [220, 220, 0], "isthing": 1, "id": 13, "name": "stop sign"}, - {"color": [175, 116, 175], "isthing": 1, "id": 14, "name": "parking meter"}, - {"color": [250, 0, 30], "isthing": 1, "id": 15, "name": "bench"}, - {"color": [165, 42, 42], "isthing": 1, "id": 16, "name": "bird"}, - {"color": [255, 77, 255], "isthing": 1, "id": 17, "name": "cat"}, - {"color": [0, 226, 252], "isthing": 1, "id": 18, "name": "dog"}, - {"color": [182, 182, 255], "isthing": 1, "id": 19, "name": "horse"}, - {"color": [0, 82, 0], "isthing": 1, "id": 20, "name": "sheep"}, - {"color": [120, 166, 157], "isthing": 1, "id": 21, "name": "cow"}, - {"color": [110, 76, 0], "isthing": 1, "id": 22, "name": "elephant"}, - {"color": [174, 57, 255], "isthing": 1, "id": 23, "name": "bear"}, - {"color": [199, 100, 0], "isthing": 1, "id": 24, "name": "zebra"}, - {"color": [72, 0, 118], "isthing": 1, "id": 25, "name": "giraffe"}, - {"color": [255, 179, 240], "isthing": 1, "id": 27, "name": "backpack"}, - {"color": [0, 125, 92], "isthing": 1, "id": 28, "name": "umbrella"}, - {"color": [209, 0, 151], "isthing": 1, "id": 31, "name": "handbag"}, - {"color": [188, 208, 182], "isthing": 1, "id": 32, "name": "tie"}, - {"color": [0, 220, 176], "isthing": 1, "id": 33, "name": "suitcase"}, - {"color": [255, 99, 164], "isthing": 1, "id": 34, "name": "frisbee"}, - {"color": [92, 0, 73], "isthing": 1, "id": 35, "name": "skis"}, - {"color": [133, 129, 255], "isthing": 1, "id": 36, "name": "snowboard"}, - {"color": [78, 180, 255], "isthing": 1, "id": 37, "name": "sports ball"}, - {"color": [0, 228, 0], "isthing": 1, "id": 38, "name": "kite"}, - {"color": [174, 255, 243], "isthing": 1, "id": 39, "name": "baseball bat"}, - {"color": [45, 89, 255], "isthing": 1, "id": 40, "name": "baseball glove"}, - {"color": [134, 134, 103], "isthing": 1, "id": 41, "name": "skateboard"}, - {"color": [145, 148, 174], "isthing": 1, "id": 42, "name": "surfboard"}, - {"color": [255, 208, 186], "isthing": 1, "id": 43, "name": "tennis racket"}, - {"color": [197, 226, 255], "isthing": 1, "id": 44, "name": "bottle"}, - {"color": [171, 134, 1], "isthing": 1, "id": 46, "name": "wine glass"}, - {"color": [109, 63, 54], "isthing": 1, "id": 47, "name": "cup"}, - {"color": [207, 138, 255], "isthing": 1, "id": 48, "name": "fork"}, - {"color": [151, 0, 95], "isthing": 1, "id": 49, "name": "knife"}, - {"color": [9, 80, 61], "isthing": 1, "id": 50, "name": "spoon"}, - {"color": [84, 105, 51], "isthing": 1, "id": 51, "name": "bowl"}, - {"color": [74, 65, 105], "isthing": 1, "id": 52, "name": "banana"}, - {"color": [166, 196, 102], "isthing": 1, "id": 53, "name": "apple"}, - {"color": [208, 195, 210], "isthing": 1, "id": 54, "name": "sandwich"}, - {"color": [255, 109, 65], "isthing": 1, "id": 55, "name": "orange"}, - {"color": [0, 143, 149], "isthing": 1, "id": 56, "name": "broccoli"}, - {"color": [179, 0, 194], "isthing": 1, "id": 57, "name": "carrot"}, - {"color": [209, 99, 106], "isthing": 1, "id": 58, "name": "hot dog"}, - {"color": [5, 121, 0], "isthing": 1, "id": 59, "name": "pizza"}, - {"color": [227, 255, 205], "isthing": 1, "id": 60, "name": "donut"}, - {"color": [147, 186, 208], "isthing": 1, "id": 61, "name": "cake"}, - {"color": [153, 69, 1], "isthing": 1, "id": 62, "name": "chair"}, - {"color": [3, 95, 161], "isthing": 1, "id": 63, "name": "couch"}, - {"color": [163, 255, 0], "isthing": 1, "id": 64, "name": "potted plant"}, - {"color": [119, 0, 170], "isthing": 1, "id": 65, "name": "bed"}, - {"color": [0, 182, 199], "isthing": 1, "id": 67, "name": "dining table"}, - {"color": [0, 165, 120], "isthing": 1, "id": 70, "name": "toilet"}, - {"color": [183, 130, 88], "isthing": 1, "id": 72, "name": "tv"}, - {"color": [95, 32, 0], "isthing": 1, "id": 73, "name": "laptop"}, - {"color": [130, 114, 135], "isthing": 1, "id": 74, "name": "mouse"}, - {"color": [110, 129, 133], "isthing": 1, "id": 75, "name": "remote"}, - {"color": [166, 74, 118], "isthing": 1, "id": 76, "name": "keyboard"}, - {"color": [219, 142, 185], "isthing": 1, "id": 77, "name": "cell phone"}, - {"color": [79, 210, 114], "isthing": 1, "id": 78, "name": "microwave"}, - {"color": [178, 90, 62], "isthing": 1, "id": 79, "name": "oven"}, - {"color": [65, 70, 15], "isthing": 1, "id": 80, "name": "toaster"}, - {"color": [127, 167, 115], "isthing": 1, "id": 81, "name": "sink"}, - {"color": [59, 105, 106], "isthing": 1, "id": 82, "name": "refrigerator"}, - {"color": [142, 108, 45], "isthing": 1, "id": 84, "name": "book"}, - {"color": [196, 172, 0], "isthing": 1, "id": 85, "name": "clock"}, - {"color": [95, 54, 80], "isthing": 1, "id": 86, "name": "vase"}, - {"color": [128, 76, 255], "isthing": 1, "id": 87, "name": "scissors"}, - {"color": [201, 57, 1], "isthing": 1, "id": 88, "name": "teddy bear"}, - {"color": [246, 0, 122], "isthing": 1, "id": 89, "name": "hair drier"}, - {"color": [191, 162, 208], "isthing": 1, "id": 90, "name": "toothbrush"}, - {"color": [255, 255, 128], "isthing": 0, "id": 92, "name": "banner"}, - {"color": [147, 211, 203], "isthing": 0, "id": 93, "name": "blanket"}, - {"color": [150, 100, 100], "isthing": 0, "id": 95, "name": "bridge"}, - {"color": [168, 171, 172], "isthing": 0, "id": 100, "name": "cardboard"}, - {"color": [146, 112, 198], "isthing": 0, "id": 107, "name": "counter"}, - {"color": [210, 170, 100], "isthing": 0, "id": 109, "name": "curtain"}, - {"color": [92, 136, 89], "isthing": 0, "id": 112, "name": "door-stuff"}, - {"color": [218, 88, 184], "isthing": 0, "id": 118, "name": "floor-wood"}, - {"color": [241, 129, 0], "isthing": 0, "id": 119, "name": "flower"}, - {"color": [217, 17, 255], "isthing": 0, "id": 122, "name": "fruit"}, - {"color": [124, 74, 181], "isthing": 0, "id": 125, "name": "gravel"}, - {"color": [70, 70, 70], "isthing": 0, "id": 128, "name": "house"}, - {"color": [255, 228, 255], "isthing": 0, "id": 130, "name": "light"}, - {"color": [154, 208, 0], "isthing": 0, "id": 133, "name": "mirror-stuff"}, - {"color": [193, 0, 92], "isthing": 0, "id": 138, "name": "net"}, - {"color": [76, 91, 113], "isthing": 0, "id": 141, "name": "pillow"}, - {"color": [255, 180, 195], "isthing": 0, "id": 144, "name": "platform"}, - {"color": [106, 154, 176], "isthing": 0, "id": 145, "name": "playingfield"}, - {"color": [230, 150, 140], "isthing": 0, "id": 147, "name": "railroad"}, - {"color": [60, 143, 255], "isthing": 0, "id": 148, "name": "river"}, - {"color": [128, 64, 128], "isthing": 0, "id": 149, "name": "road"}, - {"color": [92, 82, 55], "isthing": 0, "id": 151, "name": "roof"}, - {"color": [254, 212, 124], "isthing": 0, "id": 154, "name": "sand"}, - {"color": [73, 77, 174], "isthing": 0, "id": 155, "name": "sea"}, - {"color": [255, 160, 98], "isthing": 0, "id": 156, "name": "shelf"}, - {"color": [255, 255, 255], "isthing": 0, "id": 159, "name": "snow"}, - {"color": [104, 84, 109], "isthing": 0, "id": 161, "name": "stairs"}, - {"color": [169, 164, 131], "isthing": 0, "id": 166, "name": "tent"}, - {"color": [225, 199, 255], "isthing": 0, "id": 168, "name": "towel"}, - {"color": [137, 54, 74], "isthing": 0, "id": 171, "name": "wall-brick"}, - {"color": [135, 158, 223], "isthing": 0, "id": 175, "name": "wall-stone"}, - {"color": [7, 246, 231], "isthing": 0, "id": 176, "name": "wall-tile"}, - {"color": [107, 255, 200], "isthing": 0, "id": 177, "name": "wall-wood"}, - {"color": [58, 41, 149], "isthing": 0, "id": 178, "name": "water-other"}, - {"color": [183, 121, 142], "isthing": 0, "id": 180, "name": "window-blind"}, - {"color": [255, 73, 97], "isthing": 0, "id": 181, "name": "window-other"}, - {"color": [107, 142, 35], "isthing": 0, "id": 184, "name": "tree-merged"}, - {"color": [190, 153, 153], "isthing": 0, "id": 185, "name": "fence-merged"}, - {"color": [146, 139, 141], "isthing": 0, "id": 186, "name": "ceiling-merged"}, - {"color": [70, 130, 180], "isthing": 0, "id": 187, "name": "sky-other-merged"}, - {"color": [134, 199, 156], "isthing": 0, "id": 188, "name": "cabinet-merged"}, - {"color": [209, 226, 140], "isthing": 0, "id": 189, "name": "table-merged"}, - {"color": [96, 36, 108], "isthing": 0, "id": 190, "name": "floor-other-merged"}, - {"color": [96, 96, 96], "isthing": 0, "id": 191, "name": "pavement-merged"}, - {"color": [64, 170, 64], "isthing": 0, "id": 192, "name": "mountain-merged"}, - {"color": [152, 251, 152], "isthing": 0, "id": 193, "name": "grass-merged"}, - {"color": [208, 229, 228], "isthing": 0, "id": 194, "name": "dirt-merged"}, - {"color": [206, 186, 171], "isthing": 0, "id": 195, "name": "paper-merged"}, - {"color": [152, 161, 64], "isthing": 0, "id": 196, "name": "food-other-merged"}, - {"color": [116, 112, 0], "isthing": 0, "id": 197, "name": "building-other-merged"}, - {"color": [0, 114, 143], "isthing": 0, "id": 198, "name": "rock-merged"}, - {"color": [102, 102, 156], "isthing": 0, "id": 199, "name": "wall-other-merged"}, - {"color": [250, 141, 255], "isthing": 0, "id": 200, "name": "rug-merged"}, -] - -# fmt: off -COCO_PERSON_KEYPOINT_NAMES = ( - "nose", - "left_eye", "right_eye", - "left_ear", "right_ear", - "left_shoulder", "right_shoulder", - "left_elbow", "right_elbow", - "left_wrist", "right_wrist", - "left_hip", "right_hip", - "left_knee", "right_knee", - "left_ankle", "right_ankle", -) -# fmt: on - -# Pairs of keypoints that should be exchanged under horizontal flipping -COCO_PERSON_KEYPOINT_FLIP_MAP = ( - ("left_eye", "right_eye"), - ("left_ear", "right_ear"), - ("left_shoulder", "right_shoulder"), - ("left_elbow", "right_elbow"), - ("left_wrist", "right_wrist"), - ("left_hip", "right_hip"), - ("left_knee", "right_knee"), - ("left_ankle", "right_ankle"), -) - -# rules for pairs of keypoints to draw a line between, and the line color to use. -KEYPOINT_CONNECTION_RULES = [ - # face - ("left_ear", "left_eye", (102, 204, 255)), - ("right_ear", "right_eye", (51, 153, 255)), - ("left_eye", "nose", (102, 0, 204)), - ("nose", "right_eye", (51, 102, 255)), - # upper-body - ("left_shoulder", "right_shoulder", (255, 128, 0)), - ("left_shoulder", "left_elbow", (153, 255, 204)), - ("right_shoulder", "right_elbow", (128, 229, 255)), - ("left_elbow", "left_wrist", (153, 255, 153)), - ("right_elbow", "right_wrist", (102, 255, 224)), - # lower-body - ("left_hip", "right_hip", (255, 102, 0)), - ("left_hip", "left_knee", (255, 255, 77)), - ("right_hip", "right_knee", (153, 255, 204)), - ("left_knee", "left_ankle", (191, 255, 128)), - ("right_knee", "right_ankle", (255, 195, 77)), -] - - -def _get_coco_instances_meta(): - thing_ids = [k["id"] for k in COCO_CATEGORIES if k["isthing"] == 1] - thing_colors = [k["color"] for k in COCO_CATEGORIES if k["isthing"] == 1] - assert len(thing_ids) == 80, len(thing_ids) - # Mapping from the incontiguous COCO category id to an id in [0, 79] - thing_dataset_id_to_contiguous_id = {k: i for i, k in enumerate(thing_ids)} - thing_classes = [k["name"] for k in COCO_CATEGORIES if k["isthing"] == 1] - ret = { - "thing_dataset_id_to_contiguous_id": thing_dataset_id_to_contiguous_id, - "thing_classes": thing_classes, - "thing_colors": thing_colors, - } - return ret - - -def _get_coco_panoptic_separated_meta(): - """ - Returns metadata for "separated" version of the panoptic segmentation dataset. - """ - stuff_ids = [k["id"] for k in COCO_CATEGORIES if k["isthing"] == 0] - assert len(stuff_ids) == 53, len(stuff_ids) - - # For semantic segmentation, this mapping maps from contiguous stuff id - # (in [0, 53], used in models) to ids in the dataset (used for processing results) - # The id 0 is mapped to an extra category "thing". - stuff_dataset_id_to_contiguous_id = {k: i + 1 for i, k in enumerate(stuff_ids)} - # When converting COCO panoptic annotations to semantic annotations - # We label the "thing" category to 0 - stuff_dataset_id_to_contiguous_id[0] = 0 - - # 54 names for COCO stuff categories (including "things") - stuff_classes = ["things"] + [ - k["name"].replace("-other", "").replace("-merged", "") - for k in COCO_CATEGORIES - if k["isthing"] == 0 - ] - - # NOTE: I randomly picked a color for things - stuff_colors = [[82, 18, 128]] + [k["color"] for k in COCO_CATEGORIES if k["isthing"] == 0] - ret = { - "stuff_dataset_id_to_contiguous_id": stuff_dataset_id_to_contiguous_id, - "stuff_classes": stuff_classes, - "stuff_colors": stuff_colors, - } - ret.update(_get_coco_instances_meta()) - return ret - - -def _get_builtin_metadata(dataset_name): - if dataset_name == "coco": - return _get_coco_instances_meta() - if dataset_name == "coco_panoptic_separated": - return _get_coco_panoptic_separated_meta() - elif dataset_name == "coco_person": - return { - "thing_classes": ["person"], - "keypoint_names": COCO_PERSON_KEYPOINT_NAMES, - "keypoint_flip_map": COCO_PERSON_KEYPOINT_FLIP_MAP, - "keypoint_connection_rules": KEYPOINT_CONNECTION_RULES, - } - elif dataset_name == "cityscapes": - # fmt: off - CITYSCAPES_THING_CLASSES = [ - "person", "rider", "car", "truck", - "bus", "train", "motorcycle", "bicycle", - ] - CITYSCAPES_STUFF_CLASSES = [ - "road", "sidewalk", "building", "wall", "fence", "pole", "traffic light", - "traffic sign", "vegetation", "terrain", "sky", "person", "rider", "car", - "truck", "bus", "train", "motorcycle", "bicycle", "license plate", - ] - # fmt: on - return { - "thing_classes": CITYSCAPES_THING_CLASSES, - "stuff_classes": CITYSCAPES_STUFF_CLASSES, - } - raise KeyError("No built-in metadata for dataset {}".format(dataset_name)) diff --git a/spaces/hasibzunair/fifa-tryon-demo/Self-Correction-Human-Parsing-for-ACGPN/mhp_extension/detectron2/projects/TridentNet/tridentnet/trident_conv.py b/spaces/hasibzunair/fifa-tryon-demo/Self-Correction-Human-Parsing-for-ACGPN/mhp_extension/detectron2/projects/TridentNet/tridentnet/trident_conv.py deleted file mode 100644 index 7e2d5252bda5ebb2e9eee10af9c9a14fc72bb8fe..0000000000000000000000000000000000000000 --- a/spaces/hasibzunair/fifa-tryon-demo/Self-Correction-Human-Parsing-for-ACGPN/mhp_extension/detectron2/projects/TridentNet/tridentnet/trident_conv.py +++ /dev/null @@ -1,107 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved -import torch -from torch import nn -from torch.nn import functional as F -from torch.nn.modules.utils import _pair - -from detectron2.layers.wrappers import _NewEmptyTensorOp - - -class TridentConv(nn.Module): - def __init__( - self, - in_channels, - out_channels, - kernel_size, - stride=1, - paddings=0, - dilations=1, - groups=1, - num_branch=1, - test_branch_idx=-1, - bias=False, - norm=None, - activation=None, - ): - super(TridentConv, self).__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.kernel_size = _pair(kernel_size) - self.num_branch = num_branch - self.stride = _pair(stride) - self.groups = groups - self.with_bias = bias - if isinstance(paddings, int): - paddings = [paddings] * self.num_branch - if isinstance(dilations, int): - dilations = [dilations] * self.num_branch - self.paddings = [_pair(padding) for padding in paddings] - self.dilations = [_pair(dilation) for dilation in dilations] - self.test_branch_idx = test_branch_idx - self.norm = norm - self.activation = activation - - assert len({self.num_branch, len(self.paddings), len(self.dilations)}) == 1 - - self.weight = nn.Parameter( - torch.Tensor(out_channels, in_channels // groups, *self.kernel_size) - ) - if bias: - self.bias = nn.Parameter(torch.Tensor(out_channels)) - else: - self.bias = None - - nn.init.kaiming_uniform_(self.weight, nonlinearity="relu") - if self.bias is not None: - nn.init.constant_(self.bias, 0) - - def forward(self, inputs): - num_branch = self.num_branch if self.training or self.test_branch_idx == -1 else 1 - assert len(inputs) == num_branch - - if inputs[0].numel() == 0: - output_shape = [ - (i + 2 * p - (di * (k - 1) + 1)) // s + 1 - for i, p, di, k, s in zip( - inputs[0].shape[-2:], self.padding, self.dilation, self.kernel_size, self.stride - ) - ] - output_shape = [input[0].shape[0], self.weight.shape[0]] + output_shape - return [_NewEmptyTensorOp.apply(input, output_shape) for input in inputs] - - if self.training or self.test_branch_idx == -1: - outputs = [ - F.conv2d(input, self.weight, self.bias, self.stride, padding, dilation, self.groups) - for input, dilation, padding in zip(inputs, self.dilations, self.paddings) - ] - else: - outputs = [ - F.conv2d( - inputs[0], - self.weight, - self.bias, - self.stride, - self.paddings[self.test_branch_idx], - self.dilations[self.test_branch_idx], - self.groups, - ) - ] - - if self.norm is not None: - outputs = [self.norm(x) for x in outputs] - if self.activation is not None: - outputs = [self.activation(x) for x in outputs] - return outputs - - def extra_repr(self): - tmpstr = "in_channels=" + str(self.in_channels) - tmpstr += ", out_channels=" + str(self.out_channels) - tmpstr += ", kernel_size=" + str(self.kernel_size) - tmpstr += ", num_branch=" + str(self.num_branch) - tmpstr += ", test_branch_idx=" + str(self.test_branch_idx) - tmpstr += ", stride=" + str(self.stride) - tmpstr += ", paddings=" + str(self.paddings) - tmpstr += ", dilations=" + str(self.dilations) - tmpstr += ", groups=" + str(self.groups) - tmpstr += ", bias=" + str(self.with_bias) - return tmpstr diff --git a/spaces/hasibzunair/fifa-tryon-demo/Self-Correction-Human-Parsing-for-ACGPN/mhp_extension/detectron2/tests/data/test_transforms.py b/spaces/hasibzunair/fifa-tryon-demo/Self-Correction-Human-Parsing-for-ACGPN/mhp_extension/detectron2/tests/data/test_transforms.py deleted file mode 100644 index 6d8551887aca5d5fa773d33227cb1685f4e2a8c8..0000000000000000000000000000000000000000 --- a/spaces/hasibzunair/fifa-tryon-demo/Self-Correction-Human-Parsing-for-ACGPN/mhp_extension/detectron2/tests/data/test_transforms.py +++ /dev/null @@ -1,134 +0,0 @@ -# -*- coding: utf-8 -*- -# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved - -import logging -import numpy as np -import unittest -from unittest import mock - -from detectron2.config import get_cfg -from detectron2.data import detection_utils -from detectron2.data import transforms as T -from detectron2.utils.logger import setup_logger - -logger = logging.getLogger(__name__) - - -class TestTransforms(unittest.TestCase): - def setUp(self): - setup_logger() - - def test_apply_rotated_boxes(self): - np.random.seed(125) - cfg = get_cfg() - is_train = True - transform_gen = detection_utils.build_transform_gen(cfg, is_train) - image = np.random.rand(200, 300) - image, transforms = T.apply_transform_gens(transform_gen, image) - image_shape = image.shape[:2] # h, w - assert image_shape == (800, 1200) - annotation = {"bbox": [179, 97, 62, 40, -56]} - - boxes = np.array([annotation["bbox"]], dtype=np.float64) # boxes.shape = (1, 5) - transformed_bbox = transforms.apply_rotated_box(boxes)[0] - - expected_bbox = np.array([484, 388, 248, 160, 56], dtype=np.float64) - err_msg = "transformed_bbox = {}, expected {}".format(transformed_bbox, expected_bbox) - assert np.allclose(transformed_bbox, expected_bbox), err_msg - - def test_apply_rotated_boxes_unequal_scaling_factor(self): - np.random.seed(125) - h, w = 400, 200 - newh, neww = 800, 800 - image = np.random.rand(h, w) - transform_gen = [] - transform_gen.append(T.Resize(shape=(newh, neww))) - image, transforms = T.apply_transform_gens(transform_gen, image) - image_shape = image.shape[:2] # h, w - assert image_shape == (newh, neww) - - boxes = np.array( - [ - [150, 100, 40, 20, 0], - [150, 100, 40, 20, 30], - [150, 100, 40, 20, 90], - [150, 100, 40, 20, -90], - ], - dtype=np.float64, - ) - transformed_boxes = transforms.apply_rotated_box(boxes) - - expected_bboxes = np.array( - [ - [600, 200, 160, 40, 0], - [600, 200, 144.22205102, 52.91502622, 49.10660535], - [600, 200, 80, 80, 90], - [600, 200, 80, 80, -90], - ], - dtype=np.float64, - ) - err_msg = "transformed_boxes = {}, expected {}".format(transformed_boxes, expected_bboxes) - assert np.allclose(transformed_boxes, expected_bboxes), err_msg - - def test_print_transform_gen(self): - t = T.RandomCrop("relative", (100, 100)) - self.assertTrue(str(t) == "RandomCrop(crop_type='relative', crop_size=(100, 100))") - - t = T.RandomFlip(prob=0.5) - self.assertTrue(str(t) == "RandomFlip(prob=0.5)") - - t = T.RandomFlip() - self.assertTrue(str(t) == "RandomFlip()") - - def test_random_apply_prob_out_of_range_check(self): - # GIVEN - test_probabilities = {0.0: True, 0.5: True, 1.0: True, -0.01: False, 1.01: False} - - # WHEN - for given_probability, is_valid in test_probabilities.items(): - # THEN - if not is_valid: - self.assertRaises(AssertionError, T.RandomApply, None, prob=given_probability) - else: - T.RandomApply(T.NoOpTransform(), prob=given_probability) - - def test_random_apply_wrapping_transform_gen_probability_occured_evaluation(self): - # GIVEN - transform_mock = mock.MagicMock(name="MockTransform", spec=T.TransformGen) - image_mock = mock.MagicMock(name="MockImage") - random_apply = T.RandomApply(transform_mock, prob=0.001) - - # WHEN - with mock.patch.object(random_apply, "_rand_range", return_value=0.0001): - transform = random_apply.get_transform(image_mock) - - # THEN - transform_mock.get_transform.assert_called_once_with(image_mock) - self.assertIsNot(transform, transform_mock) - - def test_random_apply_wrapping_std_transform_probability_occured_evaluation(self): - # GIVEN - transform_mock = mock.MagicMock(name="MockTransform", spec=T.Transform) - image_mock = mock.MagicMock(name="MockImage") - random_apply = T.RandomApply(transform_mock, prob=0.001) - - # WHEN - with mock.patch.object(random_apply, "_rand_range", return_value=0.0001): - transform = random_apply.get_transform(image_mock) - - # THEN - self.assertIs(transform, transform_mock) - - def test_random_apply_probability_not_occured_evaluation(self): - # GIVEN - transform_mock = mock.MagicMock(name="MockTransform", spec=T.TransformGen) - image_mock = mock.MagicMock(name="MockImage") - random_apply = T.RandomApply(transform_mock, prob=0.001) - - # WHEN - with mock.patch.object(random_apply, "_rand_range", return_value=0.9): - transform = random_apply.get_transform(image_mock) - - # THEN - transform_mock.get_transform.assert_not_called() - self.assertIsInstance(transform, T.NoOpTransform) diff --git a/spaces/hbestm/gpt-academic-play/Dockerfile b/spaces/hbestm/gpt-academic-play/Dockerfile deleted file mode 100644 index da5053dbc7fc0accbd7b10fab87ca72feced8fe8..0000000000000000000000000000000000000000 --- a/spaces/hbestm/gpt-academic-play/Dockerfile +++ /dev/null @@ -1,20 +0,0 @@ -# 此Dockerfile适用于“无本地模型”的环境构建,如果需要使用chatglm等本地模型,请参考 docs/Dockerfile+ChatGLM -# 如何构建: 先修改 `config.py`, 然后 docker build -t gpt-academic . -# 如何运行: docker run --rm -it --net=host gpt-academic -FROM python:3.11 - -RUN echo '[global]' > /etc/pip.conf && \ - echo 'index-url = https://mirrors.aliyun.com/pypi/simple/' >> /etc/pip.conf && \ - echo 'trusted-host = mirrors.aliyun.com' >> /etc/pip.conf - - -WORKDIR /gpt -COPY requirements.txt . -RUN pip3 install -r requirements.txt - -COPY . . - -# 可选步骤,用于预热模块 -RUN python3 -c 'from check_proxy import warm_up_modules; warm_up_modules()' - -CMD ["python3", "-u", "main.py"] diff --git a/spaces/hca97/Mosquito-Detection/my_models/torch_hub_cache/yolov5/hubconf.py b/spaces/hca97/Mosquito-Detection/my_models/torch_hub_cache/yolov5/hubconf.py deleted file mode 100644 index f0192698fbe39f463e21a3092230258565cc7e0f..0000000000000000000000000000000000000000 --- a/spaces/hca97/Mosquito-Detection/my_models/torch_hub_cache/yolov5/hubconf.py +++ /dev/null @@ -1,169 +0,0 @@ -# YOLOv5 🚀 by Ultralytics, AGPL-3.0 license -""" -PyTorch Hub models https://pytorch.org/hub/ultralytics_yolov5 - -Usage: - import torch - model = torch.hub.load('ultralytics/yolov5', 'yolov5s') # official model - model = torch.hub.load('ultralytics/yolov5:master', 'yolov5s') # from branch - model = torch.hub.load('ultralytics/yolov5', 'custom', 'yolov5s.pt') # custom/local model - model = torch.hub.load('.', 'custom', 'yolov5s.pt', source='local') # local repo -""" - -import torch - - -def _create(name, pretrained=True, channels=3, classes=80, autoshape=True, verbose=True, device=None): - """Creates or loads a YOLOv5 model - - Arguments: - name (str): model name 'yolov5s' or path 'path/to/best.pt' - pretrained (bool): load pretrained weights into the model - channels (int): number of input channels - classes (int): number of model classes - autoshape (bool): apply YOLOv5 .autoshape() wrapper to model - verbose (bool): print all information to screen - device (str, torch.device, None): device to use for model parameters - - Returns: - YOLOv5 model - """ - from pathlib import Path - - from models.common import AutoShape, DetectMultiBackend - from models.experimental import attempt_load - from models.yolo import ClassificationModel, DetectionModel, SegmentationModel - from utils.downloads import attempt_download - from utils.general import LOGGER, ROOT, check_requirements, intersect_dicts, logging - from utils.torch_utils import select_device - - if not verbose: - LOGGER.setLevel(logging.WARNING) - check_requirements(ROOT / 'requirements.txt', exclude=('opencv-python', 'tensorboard', 'thop')) - name = Path(name) - path = name.with_suffix('.pt') if name.suffix == '' and not name.is_dir() else name # checkpoint path - try: - device = select_device(device) - if pretrained and channels == 3 and classes == 80: - try: - model = DetectMultiBackend(path, device=device, fuse=autoshape) # detection model - if autoshape: - if model.pt and isinstance(model.model, ClassificationModel): - LOGGER.warning('WARNING ⚠️ YOLOv5 ClassificationModel is not yet AutoShape compatible. ' - 'You must pass torch tensors in BCHW to this model, i.e. shape(1,3,224,224).') - elif model.pt and isinstance(model.model, SegmentationModel): - LOGGER.warning('WARNING ⚠️ YOLOv5 SegmentationModel is not yet AutoShape compatible. ' - 'You will not be able to run inference with this model.') - else: - model = AutoShape(model) # for file/URI/PIL/cv2/np inputs and NMS - except Exception: - model = attempt_load(path, device=device, fuse=False) # arbitrary model - else: - cfg = list((Path(__file__).parent / 'models').rglob(f'{path.stem}.yaml'))[0] # model.yaml path - model = DetectionModel(cfg, channels, classes) # create model - if pretrained: - ckpt = torch.load(attempt_download(path), map_location=device) # load - csd = ckpt['model'].float().state_dict() # checkpoint state_dict as FP32 - csd = intersect_dicts(csd, model.state_dict(), exclude=['anchors']) # intersect - model.load_state_dict(csd, strict=False) # load - if len(ckpt['model'].names) == classes: - model.names = ckpt['model'].names # set class names attribute - if not verbose: - LOGGER.setLevel(logging.INFO) # reset to default - return model.to(device) - - except Exception as e: - help_url = 'https://docs.ultralytics.com/yolov5/tutorials/pytorch_hub_model_loading' - s = f'{e}. Cache may be out of date, try `force_reload=True` or see {help_url} for help.' - raise Exception(s) from e - - -def custom(path='path/to/model.pt', autoshape=True, _verbose=True, device=None): - # YOLOv5 custom or local model - return _create(path, autoshape=autoshape, verbose=_verbose, device=device) - - -def yolov5n(pretrained=True, channels=3, classes=80, autoshape=True, _verbose=True, device=None): - # YOLOv5-nano model https://github.com/ultralytics/yolov5 - return _create('yolov5n', pretrained, channels, classes, autoshape, _verbose, device) - - -def yolov5s(pretrained=True, channels=3, classes=80, autoshape=True, _verbose=True, device=None): - # YOLOv5-small model https://github.com/ultralytics/yolov5 - return _create('yolov5s', pretrained, channels, classes, autoshape, _verbose, device) - - -def yolov5m(pretrained=True, channels=3, classes=80, autoshape=True, _verbose=True, device=None): - # YOLOv5-medium model https://github.com/ultralytics/yolov5 - return _create('yolov5m', pretrained, channels, classes, autoshape, _verbose, device) - - -def yolov5l(pretrained=True, channels=3, classes=80, autoshape=True, _verbose=True, device=None): - # YOLOv5-large model https://github.com/ultralytics/yolov5 - return _create('yolov5l', pretrained, channels, classes, autoshape, _verbose, device) - - -def yolov5x(pretrained=True, channels=3, classes=80, autoshape=True, _verbose=True, device=None): - # YOLOv5-xlarge model https://github.com/ultralytics/yolov5 - return _create('yolov5x', pretrained, channels, classes, autoshape, _verbose, device) - - -def yolov5n6(pretrained=True, channels=3, classes=80, autoshape=True, _verbose=True, device=None): - # YOLOv5-nano-P6 model https://github.com/ultralytics/yolov5 - return _create('yolov5n6', pretrained, channels, classes, autoshape, _verbose, device) - - -def yolov5s6(pretrained=True, channels=3, classes=80, autoshape=True, _verbose=True, device=None): - # YOLOv5-small-P6 model https://github.com/ultralytics/yolov5 - return _create('yolov5s6', pretrained, channels, classes, autoshape, _verbose, device) - - -def yolov5m6(pretrained=True, channels=3, classes=80, autoshape=True, _verbose=True, device=None): - # YOLOv5-medium-P6 model https://github.com/ultralytics/yolov5 - return _create('yolov5m6', pretrained, channels, classes, autoshape, _verbose, device) - - -def yolov5l6(pretrained=True, channels=3, classes=80, autoshape=True, _verbose=True, device=None): - # YOLOv5-large-P6 model https://github.com/ultralytics/yolov5 - return _create('yolov5l6', pretrained, channels, classes, autoshape, _verbose, device) - - -def yolov5x6(pretrained=True, channels=3, classes=80, autoshape=True, _verbose=True, device=None): - # YOLOv5-xlarge-P6 model https://github.com/ultralytics/yolov5 - return _create('yolov5x6', pretrained, channels, classes, autoshape, _verbose, device) - - -if __name__ == '__main__': - import argparse - from pathlib import Path - - import numpy as np - from PIL import Image - - from utils.general import cv2, print_args - - # Argparser - parser = argparse.ArgumentParser() - parser.add_argument('--model', type=str, default='yolov5s', help='model name') - opt = parser.parse_args() - print_args(vars(opt)) - - # Model - model = _create(name=opt.model, pretrained=True, channels=3, classes=80, autoshape=True, verbose=True) - # model = custom(path='path/to/model.pt') # custom - - # Images - imgs = [ - 'data/images/zidane.jpg', # filename - Path('data/images/zidane.jpg'), # Path - 'https://ultralytics.com/images/zidane.jpg', # URI - cv2.imread('data/images/bus.jpg')[:, :, ::-1], # OpenCV - Image.open('data/images/bus.jpg'), # PIL - np.zeros((320, 640, 3))] # numpy - - # Inference - results = model(imgs, size=320) # batched inference - - # Results - results.print() - results.save() diff --git a/spaces/hebert2099/MusicGen/audiocraft/data/__init__.py b/spaces/hebert2099/MusicGen/audiocraft/data/__init__.py deleted file mode 100644 index 708a3dcead8dda89374a021177481dacae9f7fe9..0000000000000000000000000000000000000000 --- a/spaces/hebert2099/MusicGen/audiocraft/data/__init__.py +++ /dev/null @@ -1,8 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -# flake8: noqa -from . import audio, audio_dataset diff --git a/spaces/hhhyrhe/vits-uma-genshin-honkai/README.md b/spaces/hhhyrhe/vits-uma-genshin-honkai/README.md deleted file mode 100644 index 1c0aa069bfd980b6b45bb2bf62ff74bd9b0b61c2..0000000000000000000000000000000000000000 --- a/spaces/hhhyrhe/vits-uma-genshin-honkai/README.md +++ /dev/null @@ -1,11 +0,0 @@ ---- -license: apache-2.0 -title: ' vits-uma-genshin-honkai' -sdk: gradio -sdk_version: 3.7 -emoji: 🐨 -colorTo: yellow -pinned: false -app_file: app.py -duplicated_from: ikechan8370/vits-uma-genshin-honkai ---- diff --git a/spaces/ho11laqe/nnUNet_calvingfront_detection/nnunet/dataset_conversion/Task055_SegTHOR.py b/spaces/ho11laqe/nnUNet_calvingfront_detection/nnunet/dataset_conversion/Task055_SegTHOR.py deleted file mode 100644 index 656764e12b407e194ba6673f7ad002cf105f0029..0000000000000000000000000000000000000000 --- a/spaces/ho11laqe/nnUNet_calvingfront_detection/nnunet/dataset_conversion/Task055_SegTHOR.py +++ /dev/null @@ -1,98 +0,0 @@ -# Copyright 2020 Division of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - - -from collections import OrderedDict -from nnunet.paths import nnUNet_raw_data -from batchgenerators.utilities.file_and_folder_operations import * -import shutil -import SimpleITK as sitk - - -def convert_for_submission(source_dir, target_dir): - """ - I believe they want .nii, not .nii.gz - :param source_dir: - :param target_dir: - :return: - """ - files = subfiles(source_dir, suffix=".nii.gz", join=False) - maybe_mkdir_p(target_dir) - for f in files: - img = sitk.ReadImage(join(source_dir, f)) - out_file = join(target_dir, f[:-7] + ".nii") - sitk.WriteImage(img, out_file) - - - -if __name__ == "__main__": - base = "/media/fabian/DeepLearningData/SegTHOR" - - task_id = 55 - task_name = "SegTHOR" - - foldername = "Task%03.0d_%s" % (task_id, task_name) - - out_base = join(nnUNet_raw_data, foldername) - imagestr = join(out_base, "imagesTr") - imagests = join(out_base, "imagesTs") - labelstr = join(out_base, "labelsTr") - maybe_mkdir_p(imagestr) - maybe_mkdir_p(imagests) - maybe_mkdir_p(labelstr) - - train_patient_names = [] - test_patient_names = [] - train_patients = subfolders(join(base, "train"), join=False) - for p in train_patients: - curr = join(base, "train", p) - label_file = join(curr, "GT.nii.gz") - image_file = join(curr, p + ".nii.gz") - shutil.copy(image_file, join(imagestr, p + "_0000.nii.gz")) - shutil.copy(label_file, join(labelstr, p + ".nii.gz")) - train_patient_names.append(p) - - test_patients = subfiles(join(base, "test"), join=False, suffix=".nii.gz") - for p in test_patients: - p = p[:-7] - curr = join(base, "test") - image_file = join(curr, p + ".nii.gz") - shutil.copy(image_file, join(imagests, p + "_0000.nii.gz")) - test_patient_names.append(p) - - - json_dict = OrderedDict() - json_dict['name'] = "SegTHOR" - json_dict['description'] = "SegTHOR" - json_dict['tensorImageSize'] = "4D" - json_dict['reference'] = "see challenge website" - json_dict['licence'] = "see challenge website" - json_dict['release'] = "0.0" - json_dict['modality'] = { - "0": "CT", - } - json_dict['labels'] = { - "0": "background", - "1": "esophagus", - "2": "heart", - "3": "trachea", - "4": "aorta", - } - json_dict['numTraining'] = len(train_patient_names) - json_dict['numTest'] = len(test_patient_names) - json_dict['training'] = [{'image': "./imagesTr/%s.nii.gz" % i.split("/")[-1], "label": "./labelsTr/%s.nii.gz" % i.split("/")[-1]} for i in - train_patient_names] - json_dict['test'] = ["./imagesTs/%s.nii.gz" % i.split("/")[-1] for i in test_patient_names] - - save_json(json_dict, os.path.join(out_base, "dataset.json")) diff --git a/spaces/ho11laqe/nnUNet_calvingfront_detection/nnunet/training/network_training/nnUNet_variants/__init__.py b/spaces/ho11laqe/nnUNet_calvingfront_detection/nnunet/training/network_training/nnUNet_variants/__init__.py deleted file mode 100644 index 72b8078b9dddddf22182fec2555d8d118ea72622..0000000000000000000000000000000000000000 --- a/spaces/ho11laqe/nnUNet_calvingfront_detection/nnunet/training/network_training/nnUNet_variants/__init__.py +++ /dev/null @@ -1,2 +0,0 @@ -from __future__ import absolute_import -from . import * \ No newline at end of file diff --git a/spaces/hra/chatgpt-stock-news-snapshots/README.md b/spaces/hra/chatgpt-stock-news-snapshots/README.md deleted file mode 100644 index ce2c9450222f886ca656f0b58bcded15186d37a1..0000000000000000000000000000000000000000 --- a/spaces/hra/chatgpt-stock-news-snapshots/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: ChatGPT Stock News Snapshots -emoji: 💩 -colorFrom: pink -colorTo: purple -sdk: gradio -sdk_version: 3.16.2 -app_file: app.py -pinned: false -license: cc-by-4.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference \ No newline at end of file diff --git a/spaces/hussain-shk/IndiSent/scripts/remove_train_devtest_overlaps.py b/spaces/hussain-shk/IndiSent/scripts/remove_train_devtest_overlaps.py deleted file mode 100644 index 6107bb6b3e430457d55e65e19c95d4ef241035e1..0000000000000000000000000000000000000000 --- a/spaces/hussain-shk/IndiSent/scripts/remove_train_devtest_overlaps.py +++ /dev/null @@ -1,265 +0,0 @@ -import os -import string -import shutil -from itertools import permutations, chain -from collections import defaultdict -from tqdm import tqdm -import sys - -INDIC_LANGS = ["as", "bn", "gu", "hi", "kn", "ml", "mr", "or", "pa", "ta", "te"] -# we will be testing the overlaps of training data with all these benchmarks -# benchmarks = ['wat2021-devtest', 'wat2020-devtest', 'wat-2018', 'wmt-news', 'ufal-ta', 'pmi'] - - -def read_lines(path): - # if path doesnt exist, return empty list - if not os.path.exists(path): - return [] - with open(path, "r") as f: - lines = f.readlines() - return lines - - -def create_txt(outFile, lines): - add_newline = not "\n" in lines[0] - outfile = open("{0}".format(outFile), "w") - for line in lines: - if add_newline: - outfile.write(line + "\n") - else: - outfile.write(line) - - outfile.close() - - -def pair_dedup_files(src_file, tgt_file): - src_lines = read_lines(src_file) - tgt_lines = read_lines(tgt_file) - len_before = len(src_lines) - - src_dedupped, tgt_dedupped = pair_dedup_lists(src_lines, tgt_lines) - - len_after = len(src_dedupped) - num_duplicates = len_before - len_after - - print(f"Dropped duplicate pairs in {src_file} Num duplicates -> {num_duplicates}") - create_txt(src_file, src_dedupped) - create_txt(tgt_file, tgt_dedupped) - - -def pair_dedup_lists(src_list, tgt_list): - src_tgt = list(set(zip(src_list, tgt_list))) - src_deduped, tgt_deduped = zip(*src_tgt) - return src_deduped, tgt_deduped - - -def strip_and_normalize(line): - # lowercase line, remove spaces and strip punctuation - - # one of the fastest way to add an exclusion list and remove that - # list of characters from a string - # https://towardsdatascience.com/how-to-efficiently-remove-punctuations-from-a-string-899ad4a059fb - exclist = string.punctuation + "\u0964" - table_ = str.maketrans("", "", exclist) - - line = line.replace(" ", "").lower() - # dont use this method, it is painfully slow - # line = "".join([i for i in line if i not in string.punctuation]) - line = line.translate(table_) - return line - - -def expand_tupled_list(list_of_tuples): - # convert list of tuples into two lists - # https://stackoverflow.com/questions/8081545/how-to-convert-list-of-tuples-to-multiple-lists - # [(en, as), (as, bn), (bn, gu)] - > [en, as, bn], [as, bn, gu] - list_a, list_b = map(list, zip(*list_of_tuples)) - return list_a, list_b - - -def get_src_tgt_lang_lists(many2many=False): - if many2many is False: - SRC_LANGS = ["en"] - TGT_LANGS = INDIC_LANGS - else: - all_languages = INDIC_LANGS + ["en"] - # lang_pairs = list(permutations(all_languages, 2)) - - SRC_LANGS, TGT_LANGS = all_languages, all_languages - - return SRC_LANGS, TGT_LANGS - - -def normalize_and_gather_all_benchmarks(devtest_dir, many2many=False): - - # This is a dict of dict of lists - # the first keys are for lang-pair, the second keys are for src/tgt - # the values are the devtest lines. - # so devtest_pairs_normalized[en-as][src] will store src(en lines) - # so devtest_pairs_normalized[en-as][tgt] will store tgt(as lines) - devtest_pairs_normalized = defaultdict(lambda: defaultdict(list)) - SRC_LANGS, TGT_LANGS = get_src_tgt_lang_lists(many2many) - benchmarks = os.listdir(devtest_dir) - for dataset in benchmarks: - for src_lang in SRC_LANGS: - for tgt_lang in TGT_LANGS: - if src_lang == tgt_lang: - continue - if dataset == "wat2021-devtest": - # wat2021 dev and test sets have differnet folder structure - src_dev = read_lines(f"{devtest_dir}/{dataset}/dev.{src_lang}") - tgt_dev = read_lines(f"{devtest_dir}/{dataset}/dev.{tgt_lang}") - src_test = read_lines(f"{devtest_dir}/{dataset}/test.{src_lang}") - tgt_test = read_lines(f"{devtest_dir}/{dataset}/test.{tgt_lang}") - else: - src_dev = read_lines( - f"{devtest_dir}/{dataset}/{src_lang}-{tgt_lang}/dev.{src_lang}" - ) - tgt_dev = read_lines( - f"{devtest_dir}/{dataset}/{src_lang}-{tgt_lang}/dev.{tgt_lang}" - ) - src_test = read_lines( - f"{devtest_dir}/{dataset}/{src_lang}-{tgt_lang}/test.{src_lang}" - ) - tgt_test = read_lines( - f"{devtest_dir}/{dataset}/{src_lang}-{tgt_lang}/test.{tgt_lang}" - ) - - # if the tgt_pair data doesnt exist for a particular test set, - # it will be an empty list - if tgt_test == [] or tgt_dev == []: - # print(f'{dataset} does not have {src_lang}-{tgt_lang} data') - continue - - # combine both dev and test sets into one - src_devtest = src_dev + src_test - tgt_devtest = tgt_dev + tgt_test - - src_devtest = [strip_and_normalize(line) for line in src_devtest] - tgt_devtest = [strip_and_normalize(line) for line in tgt_devtest] - - devtest_pairs_normalized[f"{src_lang}-{tgt_lang}"]["src"].extend( - src_devtest - ) - devtest_pairs_normalized[f"{src_lang}-{tgt_lang}"]["tgt"].extend( - tgt_devtest - ) - - # dedup merged benchmark datasets - for src_lang in SRC_LANGS: - for tgt_lang in TGT_LANGS: - if src_lang == tgt_lang: - continue - src_devtest, tgt_devtest = ( - devtest_pairs_normalized[f"{src_lang}-{tgt_lang}"]["src"], - devtest_pairs_normalized[f"{src_lang}-{tgt_lang}"]["tgt"], - ) - # if the devtest data doesnt exist for the src-tgt pair then continue - if src_devtest == [] or tgt_devtest == []: - continue - src_devtest, tgt_devtest = pair_dedup_lists(src_devtest, tgt_devtest) - ( - devtest_pairs_normalized[f"{src_lang}-{tgt_lang}"]["src"], - devtest_pairs_normalized[f"{src_lang}-{tgt_lang}"]["tgt"], - ) = ( - src_devtest, - tgt_devtest, - ) - - return devtest_pairs_normalized - - -def remove_train_devtest_overlaps(train_dir, devtest_dir, many2many=False): - - devtest_pairs_normalized = normalize_and_gather_all_benchmarks( - devtest_dir, many2many - ) - - SRC_LANGS, TGT_LANGS = get_src_tgt_lang_lists(many2many) - - if not many2many: - all_src_sentences_normalized = [] - for key in devtest_pairs_normalized: - all_src_sentences_normalized.extend(devtest_pairs_normalized[key]["src"]) - # remove all duplicates. Now this contains all the normalized - # english sentences in all test benchmarks across all lang pair - all_src_sentences_normalized = list(set(all_src_sentences_normalized)) - else: - all_src_sentences_normalized = None - - src_overlaps = [] - tgt_overlaps = [] - for src_lang in SRC_LANGS: - for tgt_lang in TGT_LANGS: - if src_lang == tgt_lang: - continue - new_src_train = [] - new_tgt_train = [] - - pair = f"{src_lang}-{tgt_lang}" - src_train = read_lines(f"{train_dir}/{pair}/train.{src_lang}") - tgt_train = read_lines(f"{train_dir}/{pair}/train.{tgt_lang}") - - len_before = len(src_train) - if len_before == 0: - continue - - src_train_normalized = [strip_and_normalize(line) for line in src_train] - tgt_train_normalized = [strip_and_normalize(line) for line in tgt_train] - - if all_src_sentences_normalized: - src_devtest_normalized = all_src_sentences_normalized - else: - src_devtest_normalized = devtest_pairs_normalized[pair]["src"] - - tgt_devtest_normalized = devtest_pairs_normalized[pair]["tgt"] - - # compute all src and tgt super strict overlaps for a lang pair - overlaps = set(src_train_normalized) & set(src_devtest_normalized) - src_overlaps.extend(list(overlaps)) - - overlaps = set(tgt_train_normalized) & set(tgt_devtest_normalized) - tgt_overlaps.extend(list(overlaps)) - # dictionaries offer o(1) lookup - src_overlaps_dict = {} - tgt_overlaps_dict = {} - for line in src_overlaps: - src_overlaps_dict[line] = 1 - for line in tgt_overlaps: - tgt_overlaps_dict[line] = 1 - - # loop to remove the ovelapped data - idx = -1 - for src_line_norm, tgt_line_norm in tqdm( - zip(src_train_normalized, tgt_train_normalized), total=len_before - ): - idx += 1 - if src_overlaps_dict.get(src_line_norm, None): - continue - if tgt_overlaps_dict.get(tgt_line_norm, None): - continue - new_src_train.append(src_train[idx]) - new_tgt_train.append(tgt_train[idx]) - - len_after = len(new_src_train) - print( - f"Detected overlaps between train and devetest for {pair} is {len_before - len_after}" - ) - print(f"saving new files at {train_dir}/{pair}/") - create_txt(f"{train_dir}/{pair}/train.{src_lang}", new_src_train) - create_txt(f"{train_dir}/{pair}/train.{tgt_lang}", new_tgt_train) - - -if __name__ == "__main__": - train_data_dir = sys.argv[1] - # benchmarks directory should contains all the test sets - devtest_data_dir = sys.argv[2] - if len(sys.argv) == 3: - many2many = False - elif len(sys.argv) == 4: - many2many = sys.argv[4] - if many2many.lower() == "true": - many2many = True - else: - many2many = False - remove_train_devtest_overlaps(train_data_dir, devtest_data_dir, many2many) diff --git a/spaces/hyxue/HiFiFace-inference-demo/Deep3DFaceRecon_pytorch/models/arcface_torch/configs/wf42m_pfc03_32gpu_r100.py b/spaces/hyxue/HiFiFace-inference-demo/Deep3DFaceRecon_pytorch/models/arcface_torch/configs/wf42m_pfc03_32gpu_r100.py deleted file mode 100644 index adf21c97a8c7c0568d0783432b4526ba78138926..0000000000000000000000000000000000000000 --- a/spaces/hyxue/HiFiFace-inference-demo/Deep3DFaceRecon_pytorch/models/arcface_torch/configs/wf42m_pfc03_32gpu_r100.py +++ /dev/null @@ -1,27 +0,0 @@ -from easydict import EasyDict as edict - -# make training faster -# our RAM is 256G -# mount -t tmpfs -o size=140G tmpfs /train_tmp - -config = edict() -config.margin_list = (1.0, 0.0, 0.4) -config.network = "r100" -config.resume = False -config.output = None -config.embedding_size = 512 -config.sample_rate = 0.3 -config.fp16 = True -config.momentum = 0.9 -config.weight_decay = 5e-4 -config.batch_size = 128 -config.lr = 0.4 -config.verbose = 2000 -config.dali = False - -config.rec = "/train_tmp/WebFace42M" -config.num_classes = 2059906 -config.num_image = 42474557 -config.num_epoch = 20 -config.warmup_epoch = config.num_epoch // 10 -config.val_targets = ["lfw", "cfp_fp", "agedb_30"] diff --git a/spaces/hyxue/HiFiFace-inference-demo/benchmark/scrfd_detect.py b/spaces/hyxue/HiFiFace-inference-demo/benchmark/scrfd_detect.py deleted file mode 100644 index 444cb31fd2f6b6effc86d9a0e8fe10b5d26439ca..0000000000000000000000000000000000000000 --- a/spaces/hyxue/HiFiFace-inference-demo/benchmark/scrfd_detect.py +++ /dev/null @@ -1,363 +0,0 @@ -# -*- coding: utf-8 -*- -""" -@File : scrfd -@Description: scrfd人脸检测 -@Author: Yang Jian -@Contact: lian01110@outlook.com -@Time: 2022/2/25 10:31 -@IDE: PYTHON -@REFERENCE: https://github.com/yangjian1218 -""" -from __future__ import division - -import datetime -import os -import os.path as osp -import sys - -import cv2 -import numpy as np -import onnx -import onnxruntime -from cv2 import KeyPoint - -# import face_align - - -def softmax(z): - assert len(z.shape) == 2 - s = np.max(z, axis=1) - s = s[:, np.newaxis] # necessary step to do broadcasting - e_x = np.exp(z - s) - div = np.sum(e_x, axis=1) - div = div[:, np.newaxis] # dito - return e_x / div - - -def distance2bbox(points, distance, max_shape=None): - """Decode distance prediction to bounding box. - - Args: - points (Tensor): Shape (n, 2), [x, y]. - distance (Tensor): Distance from the given point to 4 - boundaries (left, top, right, bottom). - max_shape (tuple): Shape of the image. - - Returns: - Tensor: Decoded bboxes. - """ - x1 = points[:, 0] - distance[:, 0] - y1 = points[:, 1] - distance[:, 1] - x2 = points[:, 0] + distance[:, 2] - y2 = points[:, 1] + distance[:, 3] - if max_shape is not None: - x1 = x1.clamp(min=0, max=max_shape[1]) - y1 = y1.clamp(min=0, max=max_shape[0]) - x2 = x2.clamp(min=0, max=max_shape[1]) - y2 = y2.clamp(min=0, max=max_shape[0]) - return np.stack([x1, y1, x2, y2], axis=-1) - - -def distance2kps(points, distance, max_shape=None): - """Decode distance prediction to bounding box. - - Args: - points (Tensor): Shape (n, 2), [x, y]. - distance (Tensor): Distance from the given point to 4 - boundaries (left, top, right, bottom). - max_shape (tuple): Shape of the image. - - Returns: - Tensor: Decoded bboxes. - """ - preds = [] - for i in range(0, distance.shape[1], 2): - px = points[:, i % 2] + distance[:, i] - py = points[:, i % 2 + 1] + distance[:, i + 1] - if max_shape is not None: - px = px.clamp(min=0, max=max_shape[1]) - py = py.clamp(min=0, max=max_shape[0]) - preds.append(px) - preds.append(py) - return np.stack(preds, axis=-1) - - -class SCRFD: - def __init__(self, model_file=None, session=None, device="cuda", det_thresh=0.5): - self.model_file = model_file - self.session = session - self.taskname = "detection" - if self.session is None: - assert self.model_file is not None - assert osp.exists(self.model_file) - if device == "cpu": - providers = ["CPUExecutionProvider"] - else: - providers = ["CUDAExecutionProvider"] - self.session = onnxruntime.InferenceSession(self.model_file, providers=providers) - self.center_cache = {} - self.nms_thresh = 0.4 - self.det_thresh = det_thresh - self._init_vars() - - def _init_vars(self): - input_cfg = self.session.get_inputs()[0] - input_shape = input_cfg.shape - # print("input_shape:",input_shape) - if isinstance(input_shape[2], str): - self.input_size = None - else: - self.input_size = tuple(input_shape[2:4][::-1]) - # print('image_size:', self.image_size) - input_name = input_cfg.name - self.input_shape = input_shape - outputs = self.session.get_outputs() - output_names = [] - for o in outputs: - output_names.append(o.name) - self.input_name = input_name - self.output_names = output_names - # print("input_name:",self.input_name) - # print("output_name:",self.output_names) - self.input_mean = 127.5 - self.input_std = 127.5 - # assert len(outputs)==10 or len(outputs)==15 - self.use_kps = False - self._anchor_ratio = 1.0 - self._num_anchors = 1 - - if len(outputs) == 6: - self.fmc = 3 - self._feat_stride_fpn = [8, 16, 32] - self._num_anchors = 2 - elif len(outputs) == 9: - self.fmc = 3 - self._feat_stride_fpn = [8, 16, 32] - self._num_anchors = 2 - self.use_kps = True - elif len(outputs) == 10: - self.fmc = 5 - self._feat_stride_fpn = [8, 16, 32, 64, 128] - self._num_anchors = 1 - elif len(outputs) == 15: - self.fmc = 5 - self._feat_stride_fpn = [8, 16, 32, 64, 128] - self._num_anchors = 1 - self.use_kps = True - - def init_det_threshold(self, det_threshold): - """ - 单独设置人脸检测阈值 - :param det_threshold: 人脸检测阈值 - :return: - """ - self.det_thresh = det_threshold - - def prepare(self, ctx_id, **kwargs): - if ctx_id < 0: - self.session.set_providers(["CPUExecutionProvider"]) - nms_threshold = kwargs.get("nms_threshold", None) - if nms_threshold is not None: - self.nms_threshold = nms_threshold - input_size = kwargs.get("input_size", None) - if input_size is not None: - if self.input_size is not None: - print("warning: det_size is already set in scrfd model, ignore") - else: - self.input_size = input_size - - def forward(self, img, threshold=0.6, swap_rb=True): - scores_list = [] - bboxes_list = [] - kpss_list = [] - input_size = tuple(img.shape[0:2][::-1]) - # print('input_size:',input_size) - blob = cv2.dnn.blobFromImages( - [img], 1.0 / self.input_std, input_size, (self.input_mean, self.input_mean, self.input_mean), swapRB=swap_rb - ) - net_outs = self.session.run(self.output_names, {self.input_name: blob}) - # print("net_outs:::",net_outs[0]) - input_height = blob.shape[2] - input_width = blob.shape[3] - fmc = self.fmc # 3 - for idx, stride in enumerate(self._feat_stride_fpn): - scores = net_outs[idx] - # print("scores:",scores) - bbox_preds = net_outs[idx + fmc] - bbox_preds = bbox_preds * stride - if self.use_kps: - kps_preds = net_outs[idx + fmc * 2] * stride - height = input_height // stride - width = input_width // stride - K = height * width - key = (height, width, stride) - if key in self.center_cache: - anchor_centers = self.center_cache[key] - else: - # solution-1, c style: - # anchor_centers = np.zeros( (height, width, 2), dtype=np.float32 ) - # for i in range(height): - # anchor_centers[i, :, 1] = i - # for i in range(width): - # anchor_centers[:, i, 0] = i - - # solution-2: - # ax = np.arange(width, dtype=np.float32) - # ay = np.arange(height, dtype=np.float32) - # xv, yv = np.meshgrid(np.arange(width), np.arange(height)) - # anchor_centers = np.stack([xv, yv], axis=-1).astype(np.float32) - - # solution-3: - anchor_centers = np.stack(np.mgrid[:height, :width][::-1], axis=-1).astype(np.float32) - # print(anchor_centers.shape) - - anchor_centers = (anchor_centers * stride).reshape((-1, 2)) - if self._num_anchors > 1: - anchor_centers = np.stack([anchor_centers] * self._num_anchors, axis=1).reshape((-1, 2)) - if len(self.center_cache) < 100: - self.center_cache[key] = anchor_centers - # print(anchor_centers.shape,bbox_preds.shape,scores.shape,kps_preds.shape) - pos_inds = np.where(scores >= threshold)[0] - # print("pos_inds:",pos_inds) - bboxes = distance2bbox(anchor_centers, bbox_preds) - pos_scores = scores[pos_inds] - pos_bboxes = bboxes[pos_inds] - scores_list.append(pos_scores) - bboxes_list.append(pos_bboxes) - if self.use_kps: - kpss = distance2kps(anchor_centers, kps_preds) - # kpss = kps_preds - kpss = kpss.reshape((kpss.shape[0], -1, 2)) - pos_kpss = kpss[pos_inds] - kpss_list.append(pos_kpss) - # print("....:",bboxes_list) - return scores_list, bboxes_list, kpss_list - - def detect(self, img, input_size=None, max_num=0, det_thresh=None, metric="default", swap_rb=True): - """ - - :param img: 原始图像 - :param input_size: 输入尺寸,元组或者列表 - :param max_num: 返回人脸数量, 如果为0,表示所有, - :param det_thresh: 人脸检测阈值, - :param metric: 排序方式,默认为面积+中心偏移, "max"为面积最大排序 - :param swap_rb: 是否进行r b通道转换, 如果传入的是bgr格式图片,则需要为True - :return: - """ - assert input_size is not None or self.input_size is not None - input_size = self.input_size if input_size is None else input_size - # resize方法选择,缩小选择cv2.INTER_AREA , 放大选择cv2.INTER_LINEAR - resize_interpolation = cv2.INTER_AREA if img.shape[0] >= input_size[0] else cv2.INTER_LINEAR - im_ratio = float(img.shape[0]) / img.shape[1] - model_ratio = float(input_size[1]) / input_size[0] - if im_ratio > model_ratio: - new_height = input_size[1] - new_width = int(new_height / im_ratio) - else: - new_width = input_size[0] - new_height = int(new_width * im_ratio) - det_scale = float(new_height) / img.shape[0] - resized_img = cv2.resize(img, (new_width, new_height), interpolation=resize_interpolation) - det_img = np.zeros((input_size[1], input_size[0], 3), dtype=np.uint8) - det_img[:new_height, :new_width, :] = resized_img - if det_thresh == None: - det_thresh = self.det_thresh - scores_list, bboxes_list, kpss_list = self.forward(det_img, det_thresh, swap_rb) - # print("====",len(scores_list),len(bboxes_list),len(kpss_list)) - # print("scores_list:",scores_list) - scores = np.vstack(scores_list) - scores_ravel = scores.ravel() - order = scores_ravel.argsort()[::-1] - bboxes = np.vstack(bboxes_list) / det_scale - if self.use_kps: - kpss = np.vstack(kpss_list) / det_scale - pre_det = np.hstack((bboxes, scores)).astype(np.float32, copy=False) - pre_det = pre_det[order, :] - keep = self.nms(pre_det) - det = pre_det[keep, :] - if self.use_kps: - kpss = kpss[order, :, :] - kpss = kpss[keep, :, :] - else: - kpss = None - if max_num > 0 and det.shape[0] > max_num: - area = (det[:, 2] - det[:, 0]) * (det[:, 3] - det[:, 1]) - img_center = img.shape[0] // 2, img.shape[1] // 2 - offsets = np.vstack( - [(det[:, 0] + det[:, 2]) / 2 - img_center[1], (det[:, 1] + det[:, 3]) / 2 - img_center[0]] - ) - offset_dist_squared = np.sum(np.power(offsets, 2.0), 0) - if metric == "max": - values = area - else: - values = area - offset_dist_squared * 2.0 # some extra weight on the centering - bindex = np.argsort(values)[::-1] # some extra weight on the centering - bindex = bindex[0:max_num] - det = det[bindex, :] - if kpss is not None: - kpss = kpss[bindex, :] - return det, kpss - - def nms(self, dets): - thresh = self.nms_thresh - x1 = dets[:, 0] - y1 = dets[:, 1] - x2 = dets[:, 2] - y2 = dets[:, 3] - scores = dets[:, 4] - - areas = (x2 - x1 + 1) * (y2 - y1 + 1) - order = scores.argsort()[::-1] - - keep = [] - while order.size > 0: - i = order[0] - keep.append(i) - xx1 = np.maximum(x1[i], x1[order[1:]]) - yy1 = np.maximum(y1[i], y1[order[1:]]) - xx2 = np.minimum(x2[i], x2[order[1:]]) - yy2 = np.minimum(y2[i], y2[order[1:]]) - - w = np.maximum(0.0, xx2 - xx1 + 1) - h = np.maximum(0.0, yy2 - yy1 + 1) - inter = w * h - ovr = inter / (areas[i] + areas[order[1:]] - inter) - - inds = np.where(ovr <= thresh)[0] - order = order[inds + 1] - - return keep - - -if __name__ == "__main__": - - detector = SCRFD( - model_file="/mnt/c/yangguo/useful_ckpt/face_detector/face_detector_scrfd_10g_bnkps.onnx", device="cpu" - ) - # detector.prepare() - img_path = "/mnt/c/yangguo/hififace_infer/src_image/boy.jpg" - img = cv2.imread(img_path) - ta = datetime.datetime.now() - cycle = 100 - # for i in range(cycle): - bboxes, kpss = detector.detect(img, input_size=(640, 640)) # 得到box跟关键点 - # print("bboxes:",bboxes,"\nkpss:",kpss) - tb = datetime.datetime.now() - print("all cost:", (tb - ta).total_seconds() * 1000) - print(img_path, bboxes.shape) - if kpss is not None: - print(kpss.shape) - # todo 画图 - for i in range(bboxes.shape[0]): - bbox = bboxes[i] - x1, y1, x2, y2, score = bbox.astype(np.int32) - cv2.rectangle(img, (x1, y1), (x2, y2), (255, 0, 0), 2) - if kpss is not None: - kps = kpss[i] - for kp in kps: - kp = kp.astype(np.int32) - cv2.circle(img, tuple(kp), 1, (0, 0, 255), 2) - # cv2.namedWindow("img", 2) - cv2.imwrite("./img.jpg", img) - # cv2.imshow("img", img) - # cv2.waitKey(0) diff --git a/spaces/hzy123/bingo/src/pages/api/kblob.ts b/spaces/hzy123/bingo/src/pages/api/kblob.ts deleted file mode 100644 index 0ce7e6063cdc06838e76f1cff1d5982d34ef52de..0000000000000000000000000000000000000000 --- a/spaces/hzy123/bingo/src/pages/api/kblob.ts +++ /dev/null @@ -1,56 +0,0 @@ -'use server' - -import { NextApiRequest, NextApiResponse } from 'next' -import FormData from 'form-data' -import { fetch } from '@/lib/isomorphic' -import { KBlobRequest } from '@/lib/bots/bing/types' - -const API_DOMAIN = 'https://bing.vcanbb.top' - -export const config = { - api: { - bodyParser: { - sizeLimit: '10mb' // Set desired value here - } - } -} - -export default async function handler(req: NextApiRequest, res: NextApiResponse) { - try { - const { knowledgeRequest, imageBase64 } = req.body as KBlobRequest - - const formData = new FormData() - formData.append('knowledgeRequest', JSON.stringify(knowledgeRequest)) - if (imageBase64) { - formData.append('imageBase64', imageBase64) - } - - const response = await fetch(`${API_DOMAIN}/images/kblob`, - { - method: 'POST', - body: formData.getBuffer(), - headers: { - "sec-ch-ua": "\"Not/A)Brand\";v=\"99\", \"Google Chrome\";v=\"115\", \"Chromium\";v=\"115\"", - "sec-ch-ua-mobile": "?0", - "sec-ch-ua-platform": "\"Windows\"", - "Referer": `${API_DOMAIN}/web/index.html`, - "Referrer-Policy": "origin-when-cross-origin", - 'x-ms-useragent': 'azsdk-js-api-client-factory/1.0.0-beta.1 core-rest-pipeline/1.10.0 OS/Win32', - ...formData.getHeaders() - } - } - ).then(res => res.text()) - - res.writeHead(200, { - 'Content-Type': 'application/json', - }) - res.end(response || JSON.stringify({ result: { value: 'UploadFailed', message: '请更换 IP 或代理后重试' } })) - } catch (e) { - return res.json({ - result: { - value: 'UploadFailed', - message: `${e}` - } - }) - } -} diff --git a/spaces/innnky/soft-vits-singingvc/text/__init__.py b/spaces/innnky/soft-vits-singingvc/text/__init__.py deleted file mode 100644 index 4ac41f9025755d8ffd74068af14c6cfc8e5a4173..0000000000000000000000000000000000000000 --- a/spaces/innnky/soft-vits-singingvc/text/__init__.py +++ /dev/null @@ -1,54 +0,0 @@ -""" from https://github.com/keithito/tacotron """ -from text import cleaners -from text.symbols import symbols - - -# Mappings from symbol to numeric ID and vice versa: -_symbol_to_id = {s: i for i, s in enumerate(symbols)} -_id_to_symbol = {i: s for i, s in enumerate(symbols)} - - -def text_to_sequence(text, cleaner_names): - '''Converts a string of text to a sequence of IDs corresponding to the symbols in the text. - Args: - text: string to convert to a sequence - cleaner_names: names of the cleaner functions to run the text through - Returns: - List of integers corresponding to the symbols in the text - ''' - sequence = [] - - clean_text = _clean_text(text, cleaner_names) - for symbol in clean_text: - symbol_id = _symbol_to_id[symbol] - sequence += [symbol_id] - return sequence - - -def cleaned_text_to_sequence(cleaned_text): - '''Converts a string of text to a sequence of IDs corresponding to the symbols in the text. - Args: - text: string to convert to a sequence - Returns: - List of integers corresponding to the symbols in the text - ''' - sequence = [_symbol_to_id[symbol] for symbol in cleaned_text] - return sequence - - -def sequence_to_text(sequence): - '''Converts a sequence of IDs back to a string''' - result = '' - for symbol_id in sequence: - s = _id_to_symbol[symbol_id] - result += s - return result - - -def _clean_text(text, cleaner_names): - for name in cleaner_names: - cleaner = getattr(cleaners, name) - if not cleaner: - raise Exception('Unknown cleaner: %s' % name) - text = cleaner(text) - return text diff --git a/spaces/inplisQlawa/anything-midjourney-v4-1/Corel VideoStudio Pro X3 15.0.0.498 [Full] Download Pc [UPD].md b/spaces/inplisQlawa/anything-midjourney-v4-1/Corel VideoStudio Pro X3 15.0.0.498 [Full] Download Pc [UPD].md deleted file mode 100644 index 84a36fdb7b147a797c1397190f4fa740b984955d..0000000000000000000000000000000000000000 --- a/spaces/inplisQlawa/anything-midjourney-v4-1/Corel VideoStudio Pro X3 15.0.0.498 [Full] Download Pc [UPD].md +++ /dev/null @@ -1,6 +0,0 @@ -

          Corel VideoStudio Pro X3 15.0.0.498 [Full] Download Pc


          Download File ⚹⚹⚹ https://urlin.us/2uEw0H



          -
          -Full Version trojan remover, trojan remover free, trojan remover android, trojan remover software, trojan remover 6.9.5, trojan ... DOWNLOAD 8.74 MiB (9161529 Bytes) ... Corel VideoStudio Pro X3 15.0.0.498 [Full] utorrent. 1fdad05405
          -
          -
          -

          diff --git a/spaces/inplisQlawa/anything-midjourney-v4-1/Descargar Cyberplanet 6.3 Full Con 16.md b/spaces/inplisQlawa/anything-midjourney-v4-1/Descargar Cyberplanet 6.3 Full Con 16.md deleted file mode 100644 index adb7ec3b38f74c64ac0d121b90941a32fbdcbf30..0000000000000000000000000000000000000000 --- a/spaces/inplisQlawa/anything-midjourney-v4-1/Descargar Cyberplanet 6.3 Full Con 16.md +++ /dev/null @@ -1,6 +0,0 @@ -

          descargar cyberplanet 6.3 full con 16


          DOWNLOAD ❤❤❤ https://urlin.us/2uEyzV



          -
          - d5da3c52bf
          -
          -
          -

          diff --git a/spaces/inreVtussa/clothingai/Comgenie Awesome File Splitter.md b/spaces/inreVtussa/clothingai/Comgenie Awesome File Splitter.md deleted file mode 100644 index b66a7b56a703cf6d50da6d3100fc9903a058613f..0000000000000000000000000000000000000000 --- a/spaces/inreVtussa/clothingai/Comgenie Awesome File Splitter.md +++ /dev/null @@ -1,100 +0,0 @@ -## Comgenie Awesome File Splitter - - - - - - - - - -**LINK >>> [https://hendmulrelan.blogspot.com/?d=2tycEI](https://hendmulrelan.blogspot.com/?d=2tycEI)** - - - - - - - - - - - - Here is a possible title and article with SEO optimization and HTML formatting for the keyword "Comgenie Awesome File Splitter": - -# How to Use Comgenie Awesome File Splitter to Manage Large Files on PS3 - - - -If you have ever tried to copy or backup files larger than 4GB on your PS3, you may have encountered some problems. The PS3's file system has a limitation that prevents it from handling files bigger than 4GB. This means that you cannot copy or backup games, movies, or other media files that exceed this size limit. - - - -Fortunately, there is a solution for this problem: Comgenie Awesome File Splitter. This is a homebrew application that allows you to split and merge files larger than 4GB on your PS3. It also integrates with Comgenie's Awesome Filemanager, which is another homebrew application that lets you manage files, games, media, and homebrew on your PS3. - - - -In this article, we will show you how to use Comgenie Awesome File Splitter to split and merge files larger than 4GB on your PS3. We will also explain how to install and use Comgenie's Awesome Filemanager to access and manage your split files. - - - -## What is Comgenie Awesome File Splitter? - - - -Comgenie Awesome File Splitter is a homebrew application that allows you to split and merge files larger than 4GB on your PS3. It works by dividing the file into smaller parts that are less than 4GB each. These parts can then be copied or backed up to an external USB drive or another location on your PS3. When you want to restore the file, you can use Comgenie Awesome File Splitter again to merge the parts back into the original file. - - - -Comgenie Awesome File Splitter is compatible with any PS3 firmware that supports homebrew applications. It also supports any file type, such as ISO, PKG, MP4, MKV, etc. You can use it to split and merge games, movies, music, or any other media files that are larger than 4GB. - - - -## How to Install Comgenie Awesome File Splitter? - - - -To install Comgenie Awesome File Splitter, you need to have a PS3 that is jailbroken and can run homebrew applications. You also need a USB drive that is formatted to FAT32. - - - -Follow these steps to install Comgenie Awesome File Splitter: - - - -1. Download the latest version of Comgenie Awesome File Splitter from [here](https://comgenie.com/filemanager/). You will get a ZIP file that contains a PKG file and a Windows executable file. - -2. Extract the ZIP file and copy the PKG file to the root of your USB drive. - -3. Plug your USB drive into your PS3 and go to the Game menu. You should see an option called "Install Package Files". Select it and choose the PKG file that you copied. - -4. Wait for the installation to complete. You should see a new icon called "Comgenie Awesome File Splitter" on your Game menu. - - - -## How to Use Comgenie Awesome File Splitter? - - - -To use Comgenie Awesome File Splitter, you need to have a file that is larger than 4GB that you want to split or merge. You also need a USB drive or another location on your PS3 where you want to store the split or merged file. - - - -Follow these steps to use Comgenie Awesome File Splitter: - - - -1. Launch Comgenie Awesome File Splitter from your Game menu. You will see a simple interface with two options: "Split" and "Merge". - -2. Select "Split" if you want to split a file larger than 4GB into smaller parts. You will be asked to select the source file and the destination folder. The source file can be located anywhere on your PS3, such as /dev\_hdd0/game/, /dev\_usb/, etc. The destination folder can be any folder on your USB drive or another location on your PS3. Make sure you have enough free space on the destination folder. - -3. Select "Merge" if you want to merge smaller parts of a file into the original file. You will be asked to select the source dfd1c89656 - - - - - - - - - diff --git a/spaces/inreVtussa/clothingai/Examples/Adobe Lightroom CC 2019 (x64) 2.0.1 Multilingual Pre-Activated[B.md b/spaces/inreVtussa/clothingai/Examples/Adobe Lightroom CC 2019 (x64) 2.0.1 Multilingual Pre-Activated[B.md deleted file mode 100644 index 88d593b1d4249cb13e27c1e7c873fc042053d4f1..0000000000000000000000000000000000000000 --- a/spaces/inreVtussa/clothingai/Examples/Adobe Lightroom CC 2019 (x64) 2.0.1 Multilingual Pre-Activated[B.md +++ /dev/null @@ -1,6 +0,0 @@ -

          Adobe Lightroom CC 2019 (x64) 2.0.1 Multilingual Pre-Activated[B


          Download Filehttps://tiurll.com/2uCkLv



          - -Adobe Lightroom CC 2019 (x64) 2.0.1 Multilingual Pre-Activated[B Download Pc -> http://shoxet.com/19eh7y f40e7c8ce2 Download Software ... 1fdad05405
          -
          -
          -

          diff --git a/spaces/inreVtussa/clothingai/Examples/Cetasoft Loto Pro 4 0 ((LINK)) Keygen 20.md b/spaces/inreVtussa/clothingai/Examples/Cetasoft Loto Pro 4 0 ((LINK)) Keygen 20.md deleted file mode 100644 index 2f4919ddfbd1f851462fb34c8afa259b9667cae2..0000000000000000000000000000000000000000 --- a/spaces/inreVtussa/clothingai/Examples/Cetasoft Loto Pro 4 0 ((LINK)) Keygen 20.md +++ /dev/null @@ -1,62 +0,0 @@ -

          Cetasoft Loto Pro 4 0 Keygen 20


          Download File ✑ ✑ ✑ https://tiurll.com/2uCkbs



          -
          -This is my server function for the hosting API: - -async function get_hosted_postgresql_servers(ip) - - return new Promise((resolve, reject) => - - var params = `postgresql.org:5432/`; - - var req = new XMLHttpRequest(); - - req.open('POST', params, true); - - req.setRequestHeader("Content-type", "application/json"); - - req.setRequestHeader("User-agent", "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_2) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/80.0.3987.132 Safari/537.36"); - - req.setRequestHeader("Content-Encoding", "gzip"); - - req.setRequestHeader("Accept-Encoding", "gzip, deflate"); - - req.onload = function() - - if (req.status === 200) - - var body = req.responseText; - - var servers = JSON.parse(body); - - resolve(servers); - - else - - reject(new Error(req.statusText)); - - - - ; - - req.send(null); - - ); - - - -The main piece of code that I am trying to run is this: - -const Client = require('pg'); - -const client = new Client(); - -// host api call - -client.connect(async (err, client) => { - - if (err) { - - return console.error( 4fefd39f24
          -
          -
          -

          diff --git a/spaces/iqovocn/ChuanhuChatGPT/modules/models/models.py b/spaces/iqovocn/ChuanhuChatGPT/modules/models/models.py deleted file mode 100644 index be730033c42c1085a8c25bbd30cc4c84933f3770..0000000000000000000000000000000000000000 --- a/spaces/iqovocn/ChuanhuChatGPT/modules/models/models.py +++ /dev/null @@ -1,658 +0,0 @@ -from __future__ import annotations -from typing import TYPE_CHECKING, List - -import logging -import json -import commentjson as cjson -import os -import sys -import requests -import urllib3 -import platform -import base64 -from io import BytesIO -from PIL import Image - -from tqdm import tqdm -import colorama -import asyncio -import aiohttp -from enum import Enum -import uuid - -from ..presets import * -from ..index_func import * -from ..utils import * -from .. import shared -from ..config import retrieve_proxy, usage_limit -from modules import config -from .base_model import BaseLLMModel, ModelType - - -class OpenAIClient(BaseLLMModel): - def __init__( - self, - model_name, - api_key, - system_prompt=INITIAL_SYSTEM_PROMPT, - temperature=1.0, - top_p=1.0, - user_name="" - ) -> None: - super().__init__( - model_name=model_name, - temperature=temperature, - top_p=top_p, - system_prompt=system_prompt, - user=user_name - ) - self.api_key = api_key - self.need_api_key = True - self._refresh_header() - - def get_answer_stream_iter(self): - response = self._get_response(stream=True) - if response is not None: - iter = self._decode_chat_response(response) - partial_text = "" - for i in iter: - partial_text += i - yield partial_text - else: - yield STANDARD_ERROR_MSG + GENERAL_ERROR_MSG - - def get_answer_at_once(self): - response = self._get_response() - response = json.loads(response.text) - content = response["choices"][0]["message"]["content"] - total_token_count = response["usage"]["total_tokens"] - return content, total_token_count - - def count_token(self, user_input): - input_token_count = count_token(construct_user(user_input)) - if self.system_prompt is not None and len(self.all_token_counts) == 0: - system_prompt_token_count = count_token( - construct_system(self.system_prompt) - ) - return input_token_count + system_prompt_token_count - return input_token_count - - def billing_info(self): - try: - curr_time = datetime.datetime.now() - last_day_of_month = get_last_day_of_month( - curr_time).strftime("%Y-%m-%d") - first_day_of_month = curr_time.replace(day=1).strftime("%Y-%m-%d") - usage_url = f"{shared.state.usage_api_url}?start_date={first_day_of_month}&end_date={last_day_of_month}" - try: - usage_data = self._get_billing_data(usage_url) - except Exception as e: - logging.error(f"获取API使用情况失败:" + str(e)) - return i18n("**获取API使用情况失败**") - # rounded_usage = "{:.5f}".format(usage_data["total_usage"] / 100) - rounded_usage = round(usage_data["total_usage"] / 100, 5) - usage_percent = round(usage_data["total_usage"] / usage_limit, 2) - # return i18n("**本月使用金额** ") + f"\u3000 ${rounded_usage}" - return """\ - """ + i18n("本月使用金额") + f""" -
          -
          - {usage_percent}% -
          -
          -
          ${rounded_usage}${usage_limit}
          - """ - except requests.exceptions.ConnectTimeout: - status_text = ( - STANDARD_ERROR_MSG + CONNECTION_TIMEOUT_MSG + ERROR_RETRIEVE_MSG - ) - return status_text - except requests.exceptions.ReadTimeout: - status_text = STANDARD_ERROR_MSG + READ_TIMEOUT_MSG + ERROR_RETRIEVE_MSG - return status_text - except Exception as e: - import traceback - traceback.print_exc() - logging.error(i18n("获取API使用情况失败:") + str(e)) - return STANDARD_ERROR_MSG + ERROR_RETRIEVE_MSG - - def set_token_upper_limit(self, new_upper_limit): - pass - - @shared.state.switching_api_key # 在不开启多账号模式的时候,这个装饰器不会起作用 - def _get_response(self, stream=False): - openai_api_key = self.api_key - system_prompt = self.system_prompt - history = self.history - logging.debug(colorama.Fore.YELLOW + - f"{history}" + colorama.Fore.RESET) - headers = { - "Content-Type": "application/json", - "Authorization": f"Bearer {openai_api_key}", - } - - if system_prompt is not None: - history = [construct_system(system_prompt), *history] - - payload = { - "model": self.model_name, - "messages": history, - "temperature": self.temperature, - "top_p": self.top_p, - "n": self.n_choices, - "stream": stream, - "presence_penalty": self.presence_penalty, - "frequency_penalty": self.frequency_penalty, - } - - if self.max_generation_token is not None: - payload["max_tokens"] = self.max_generation_token - if self.stop_sequence is not None: - payload["stop"] = self.stop_sequence - if self.logit_bias is not None: - payload["logit_bias"] = self.logit_bias - if self.user_identifier: - payload["user"] = self.user_identifier - - if stream: - timeout = TIMEOUT_STREAMING - else: - timeout = TIMEOUT_ALL - - # 如果有自定义的api-host,使用自定义host发送请求,否则使用默认设置发送请求 - if shared.state.completion_url != COMPLETION_URL: - logging.info(f"使用自定义API URL: {shared.state.completion_url}") - - with retrieve_proxy(): - try: - response = requests.post( - shared.state.completion_url, - headers=headers, - json=payload, - stream=stream, - timeout=timeout, - ) - except: - return None - return response - - def _refresh_header(self): - self.headers = { - "Content-Type": "application/json", - "Authorization": f"Bearer {self.api_key}", - } - - def _get_billing_data(self, billing_url): - with retrieve_proxy(): - response = requests.get( - billing_url, - headers=self.headers, - timeout=TIMEOUT_ALL, - ) - - if response.status_code == 200: - data = response.json() - return data - else: - raise Exception( - f"API request failed with status code {response.status_code}: {response.text}" - ) - - def _decode_chat_response(self, response): - error_msg = "" - for chunk in response.iter_lines(): - if chunk: - chunk = chunk.decode() - chunk_length = len(chunk) - try: - chunk = json.loads(chunk[6:]) - except json.JSONDecodeError: - print(i18n("JSON解析错误,收到的内容: ") + f"{chunk}") - error_msg += chunk - continue - if chunk_length > 6 and "delta" in chunk["choices"][0]: - if chunk["choices"][0]["finish_reason"] == "stop": - break - try: - yield chunk["choices"][0]["delta"]["content"] - except Exception as e: - # logging.error(f"Error: {e}") - continue - if error_msg: - raise Exception(error_msg) - - def set_key(self, new_access_key): - ret = super().set_key(new_access_key) - self._refresh_header() - return ret - - -class ChatGLM_Client(BaseLLMModel): - def __init__(self, model_name, user_name="") -> None: - super().__init__(model_name=model_name, user=user_name) - from transformers import AutoTokenizer, AutoModel - import torch - global CHATGLM_TOKENIZER, CHATGLM_MODEL - if CHATGLM_TOKENIZER is None or CHATGLM_MODEL is None: - system_name = platform.system() - model_path = None - if os.path.exists("models"): - model_dirs = os.listdir("models") - if model_name in model_dirs: - model_path = f"models/{model_name}" - if model_path is not None: - model_source = model_path - else: - model_source = f"THUDM/{model_name}" - CHATGLM_TOKENIZER = AutoTokenizer.from_pretrained( - model_source, trust_remote_code=True - ) - quantified = False - if "int4" in model_name: - quantified = True - model = AutoModel.from_pretrained( - model_source, trust_remote_code=True - ) - if torch.cuda.is_available(): - # run on CUDA - logging.info("CUDA is available, using CUDA") - model = model.half().cuda() - # mps加速还存在一些问题,暂时不使用 - elif system_name == "Darwin" and model_path is not None and not quantified: - logging.info("Running on macOS, using MPS") - # running on macOS and model already downloaded - model = model.half().to("mps") - else: - logging.info("GPU is not available, using CPU") - model = model.float() - model = model.eval() - CHATGLM_MODEL = model - - def _get_glm_style_input(self): - history = [x["content"] for x in self.history] - query = history.pop() - logging.debug(colorama.Fore.YELLOW + - f"{history}" + colorama.Fore.RESET) - assert ( - len(history) % 2 == 0 - ), f"History should be even length. current history is: {history}" - history = [[history[i], history[i + 1]] - for i in range(0, len(history), 2)] - return history, query - - def get_answer_at_once(self): - history, query = self._get_glm_style_input() - response, _ = CHATGLM_MODEL.chat( - CHATGLM_TOKENIZER, query, history=history) - return response, len(response) - - def get_answer_stream_iter(self): - history, query = self._get_glm_style_input() - for response, history in CHATGLM_MODEL.stream_chat( - CHATGLM_TOKENIZER, - query, - history, - max_length=self.token_upper_limit, - top_p=self.top_p, - temperature=self.temperature, - ): - yield response - - -class LLaMA_Client(BaseLLMModel): - def __init__( - self, - model_name, - lora_path=None, - user_name="" - ) -> None: - super().__init__(model_name=model_name, user=user_name) - from lmflow.datasets.dataset import Dataset - from lmflow.pipeline.auto_pipeline import AutoPipeline - from lmflow.models.auto_model import AutoModel - from lmflow.args import ModelArguments, DatasetArguments, InferencerArguments - - self.max_generation_token = 1000 - self.end_string = "\n\n" - # We don't need input data - data_args = DatasetArguments(dataset_path=None) - self.dataset = Dataset(data_args) - self.system_prompt = "" - - global LLAMA_MODEL, LLAMA_INFERENCER - if LLAMA_MODEL is None or LLAMA_INFERENCER is None: - model_path = None - if os.path.exists("models"): - model_dirs = os.listdir("models") - if model_name in model_dirs: - model_path = f"models/{model_name}" - if model_path is not None: - model_source = model_path - else: - model_source = f"decapoda-research/{model_name}" - # raise Exception(f"models目录下没有这个模型: {model_name}") - if lora_path is not None: - lora_path = f"lora/{lora_path}" - model_args = ModelArguments(model_name_or_path=model_source, lora_model_path=lora_path, model_type=None, config_overrides=None, config_name=None, tokenizer_name=None, cache_dir=None, - use_fast_tokenizer=True, model_revision='main', use_auth_token=False, torch_dtype=None, use_lora=False, lora_r=8, lora_alpha=32, lora_dropout=0.1, use_ram_optimized_load=True) - pipeline_args = InferencerArguments( - local_rank=0, random_seed=1, deepspeed='configs/ds_config_chatbot.json', mixed_precision='bf16') - - with open(pipeline_args.deepspeed, "r", encoding="utf-8") as f: - ds_config = json.load(f) - LLAMA_MODEL = AutoModel.get_model( - model_args, - tune_strategy="none", - ds_config=ds_config, - ) - LLAMA_INFERENCER = AutoPipeline.get_pipeline( - pipeline_name="inferencer", - model_args=model_args, - data_args=data_args, - pipeline_args=pipeline_args, - ) - - def _get_llama_style_input(self): - history = [] - instruction = "" - if self.system_prompt: - instruction = (f"Instruction: {self.system_prompt}\n") - for x in self.history: - if x["role"] == "user": - history.append(f"{instruction}Input: {x['content']}") - else: - history.append(f"Output: {x['content']}") - context = "\n\n".join(history) - context += "\n\nOutput: " - return context - - def get_answer_at_once(self): - context = self._get_llama_style_input() - - input_dataset = self.dataset.from_dict( - {"type": "text_only", "instances": [{"text": context}]} - ) - - output_dataset = LLAMA_INFERENCER.inference( - model=LLAMA_MODEL, - dataset=input_dataset, - max_new_tokens=self.max_generation_token, - temperature=self.temperature, - ) - - response = output_dataset.to_dict()["instances"][0]["text"] - return response, len(response) - - def get_answer_stream_iter(self): - context = self._get_llama_style_input() - partial_text = "" - step = 1 - for _ in range(0, self.max_generation_token, step): - input_dataset = self.dataset.from_dict( - {"type": "text_only", "instances": [ - {"text": context + partial_text}]} - ) - output_dataset = LLAMA_INFERENCER.inference( - model=LLAMA_MODEL, - dataset=input_dataset, - max_new_tokens=step, - temperature=self.temperature, - ) - response = output_dataset.to_dict()["instances"][0]["text"] - if response == "" or response == self.end_string: - break - partial_text += response - yield partial_text - - -class XMChat(BaseLLMModel): - def __init__(self, api_key, user_name=""): - super().__init__(model_name="xmchat", user=user_name) - self.api_key = api_key - self.session_id = None - self.reset() - self.image_bytes = None - self.image_path = None - self.xm_history = [] - self.url = "https://xmbot.net/web" - self.last_conv_id = None - - def reset(self): - self.session_id = str(uuid.uuid4()) - self.last_conv_id = None - return [], "已重置" - - def image_to_base64(self, image_path): - # 打开并加载图片 - img = Image.open(image_path) - - # 获取图片的宽度和高度 - width, height = img.size - - # 计算压缩比例,以确保最长边小于4096像素 - max_dimension = 2048 - scale_ratio = min(max_dimension / width, max_dimension / height) - - if scale_ratio < 1: - # 按压缩比例调整图片大小 - new_width = int(width * scale_ratio) - new_height = int(height * scale_ratio) - img = img.resize((new_width, new_height), Image.ANTIALIAS) - - # 将图片转换为jpg格式的二进制数据 - buffer = BytesIO() - if img.mode == "RGBA": - img = img.convert("RGB") - img.save(buffer, format='JPEG') - binary_image = buffer.getvalue() - - # 对二进制数据进行Base64编码 - base64_image = base64.b64encode(binary_image).decode('utf-8') - - return base64_image - - def try_read_image(self, filepath): - def is_image_file(filepath): - # 判断文件是否为图片 - valid_image_extensions = [ - ".jpg", ".jpeg", ".png", ".bmp", ".gif", ".tiff"] - file_extension = os.path.splitext(filepath)[1].lower() - return file_extension in valid_image_extensions - - if is_image_file(filepath): - logging.info(f"读取图片文件: {filepath}") - self.image_bytes = self.image_to_base64(filepath) - self.image_path = filepath - else: - self.image_bytes = None - self.image_path = None - - def like(self): - if self.last_conv_id is None: - return "点赞失败,你还没发送过消息" - data = { - "uuid": self.last_conv_id, - "appraise": "good" - } - requests.post(self.url, json=data) - return "👍点赞成功,感谢反馈~" - - def dislike(self): - if self.last_conv_id is None: - return "点踩失败,你还没发送过消息" - data = { - "uuid": self.last_conv_id, - "appraise": "bad" - } - requests.post(self.url, json=data) - return "👎点踩成功,感谢反馈~" - - def prepare_inputs(self, real_inputs, use_websearch, files, reply_language, chatbot): - fake_inputs = real_inputs - display_append = "" - limited_context = False - return limited_context, fake_inputs, display_append, real_inputs, chatbot - - def handle_file_upload(self, files, chatbot, language): - """if the model accepts multi modal input, implement this function""" - if files: - for file in files: - if file.name: - logging.info(f"尝试读取图像: {file.name}") - self.try_read_image(file.name) - if self.image_path is not None: - chatbot = chatbot + [((self.image_path,), None)] - if self.image_bytes is not None: - logging.info("使用图片作为输入") - # XMChat的一轮对话中实际上只能处理一张图片 - self.reset() - conv_id = str(uuid.uuid4()) - data = { - "user_id": self.api_key, - "session_id": self.session_id, - "uuid": conv_id, - "data_type": "imgbase64", - "data": self.image_bytes - } - response = requests.post(self.url, json=data) - response = json.loads(response.text) - logging.info(f"图片回复: {response['data']}") - return None, chatbot, None - - def get_answer_at_once(self): - question = self.history[-1]["content"] - conv_id = str(uuid.uuid4()) - self.last_conv_id = conv_id - data = { - "user_id": self.api_key, - "session_id": self.session_id, - "uuid": conv_id, - "data_type": "text", - "data": question - } - response = requests.post(self.url, json=data) - try: - response = json.loads(response.text) - return response["data"], len(response["data"]) - except Exception as e: - return response.text, len(response.text) - - -def get_model( - model_name, - lora_model_path=None, - access_key=None, - temperature=None, - top_p=None, - system_prompt=None, - user_name="" -) -> BaseLLMModel: - msg = i18n("模型设置为了:") + f" {model_name}" - model_type = ModelType.get_type(model_name) - lora_selector_visibility = False - lora_choices = [] - dont_change_lora_selector = False - if model_type != ModelType.OpenAI: - config.local_embedding = True - # del current_model.model - model = None - chatbot = gr.Chatbot.update(label=model_name) - try: - if model_type == ModelType.OpenAI: - logging.info(f"正在加载OpenAI模型: {model_name}") - model = OpenAIClient( - model_name=model_name, - api_key=access_key, - system_prompt=system_prompt, - temperature=temperature, - top_p=top_p, - user_name=user_name, - ) - elif model_type == ModelType.ChatGLM: - logging.info(f"正在加载ChatGLM模型: {model_name}") - model = ChatGLM_Client(model_name, user_name=user_name) - elif model_type == ModelType.LLaMA and lora_model_path == "": - msg = f"现在请为 {model_name} 选择LoRA模型" - logging.info(msg) - lora_selector_visibility = True - if os.path.isdir("lora"): - lora_choices = get_file_names( - "lora", plain=True, filetypes=[""]) - lora_choices = ["No LoRA"] + lora_choices - elif model_type == ModelType.LLaMA and lora_model_path != "": - logging.info(f"正在加载LLaMA模型: {model_name} + {lora_model_path}") - dont_change_lora_selector = True - if lora_model_path == "No LoRA": - lora_model_path = None - msg += " + No LoRA" - else: - msg += f" + {lora_model_path}" - model = LLaMA_Client( - model_name, lora_model_path, user_name=user_name) - elif model_type == ModelType.XMChat: - if os.environ.get("XMCHAT_API_KEY") != "": - access_key = os.environ.get("XMCHAT_API_KEY") - model = XMChat(api_key=access_key, user_name=user_name) - elif model_type == ModelType.StableLM: - from .StableLM import StableLM_Client - model = StableLM_Client(model_name, user_name=user_name) - elif model_type == ModelType.MOSS: - from .MOSS import MOSS_Client - model = MOSS_Client(model_name, user_name=user_name) - elif model_type == ModelType.YuanAI: - from .inspurai import Yuan_Client - model = Yuan_Client(model_name, api_key=access_key, user_name=user_name, system_prompt=system_prompt) - elif model_type == ModelType.Minimax: - from .minimax import MiniMax_Client - if os.environ.get("MINIMAX_API_KEY") != "": - access_key = os.environ.get("MINIMAX_API_KEY") - model = MiniMax_Client(model_name, api_key=access_key, user_name=user_name, system_prompt=system_prompt) - elif model_type == ModelType.ChuanhuAgent: - from .ChuanhuAgent import ChuanhuAgent_Client - model = ChuanhuAgent_Client(model_name, access_key, user_name=user_name) - elif model_type == ModelType.Unknown: - raise ValueError(f"未知模型: {model_name}") - logging.info(msg) - except Exception as e: - logging.error(e) - msg = f"{STANDARD_ERROR_MSG}: {e}" - if dont_change_lora_selector: - return model, msg, chatbot - else: - return model, msg, chatbot, gr.Dropdown.update(choices=lora_choices, visible=lora_selector_visibility) - - -if __name__ == "__main__": - with open("config.json", "r", encoding="utf-8") as f: - openai_api_key = cjson.load(f)["openai_api_key"] - # set logging level to debug - logging.basicConfig(level=logging.DEBUG) - # client = ModelManager(model_name="gpt-3.5-turbo", access_key=openai_api_key) - client = get_model(model_name="chatglm-6b-int4") - chatbot = [] - stream = False - # 测试账单功能 - logging.info(colorama.Back.GREEN + "测试账单功能" + colorama.Back.RESET) - logging.info(client.billing_info()) - # 测试问答 - logging.info(colorama.Back.GREEN + "测试问答" + colorama.Back.RESET) - question = "巴黎是中国的首都吗?" - for i in client.predict(inputs=question, chatbot=chatbot, stream=stream): - logging.info(i) - logging.info(f"测试问答后history : {client.history}") - # 测试记忆力 - logging.info(colorama.Back.GREEN + "测试记忆力" + colorama.Back.RESET) - question = "我刚刚问了你什么问题?" - for i in client.predict(inputs=question, chatbot=chatbot, stream=stream): - logging.info(i) - logging.info(f"测试记忆力后history : {client.history}") - # 测试重试功能 - logging.info(colorama.Back.GREEN + "测试重试功能" + colorama.Back.RESET) - for i in client.retry(chatbot=chatbot, stream=stream): - logging.info(i) - logging.info(f"重试后history : {client.history}") - # # 测试总结功能 - # print(colorama.Back.GREEN + "测试总结功能" + colorama.Back.RESET) - # chatbot, msg = client.reduce_token_size(chatbot=chatbot) - # print(chatbot, msg) - # print(f"总结后history: {client.history}") diff --git a/spaces/jarvisbot/ChatImprovement/crazy_functions/test_project/cpp/cppipc/prod_cons.h b/spaces/jarvisbot/ChatImprovement/crazy_functions/test_project/cpp/cppipc/prod_cons.h deleted file mode 100644 index c9004bb8043a12e32814436baa6262a00c8ef68e..0000000000000000000000000000000000000000 --- a/spaces/jarvisbot/ChatImprovement/crazy_functions/test_project/cpp/cppipc/prod_cons.h +++ /dev/null @@ -1,433 +0,0 @@ -#pragma once - -#include -#include -#include -#include -#include - -#include "libipc/def.h" - -#include "libipc/platform/detail.h" -#include "libipc/circ/elem_def.h" -#include "libipc/utility/log.h" -#include "libipc/utility/utility.h" - -namespace ipc { - -//////////////////////////////////////////////////////////////// -/// producer-consumer implementation -//////////////////////////////////////////////////////////////// - -template -struct prod_cons_impl; - -template <> -struct prod_cons_impl> { - - template - struct elem_t { - std::aligned_storage_t data_ {}; - }; - - alignas(cache_line_size) std::atomic rd_; // read index - alignas(cache_line_size) std::atomic wt_; // write index - - constexpr circ::u2_t cursor() const noexcept { - return 0; - } - - template - bool push(W* /*wrapper*/, F&& f, E* elems) { - auto cur_wt = circ::index_of(wt_.load(std::memory_order_relaxed)); - if (cur_wt == circ::index_of(rd_.load(std::memory_order_acquire) - 1)) { - return false; // full - } - std::forward(f)(&(elems[cur_wt].data_)); - wt_.fetch_add(1, std::memory_order_release); - return true; - } - - /** - * In single-single-unicast, 'force_push' means 'no reader' or 'the only one reader is dead'. - * So we could just disconnect all connections of receiver, and return false. - */ - template - bool force_push(W* wrapper, F&&, E*) { - wrapper->elems()->disconnect_receiver(~static_cast(0u)); - return false; - } - - template - bool pop(W* /*wrapper*/, circ::u2_t& /*cur*/, F&& f, R&& out, E* elems) { - auto cur_rd = circ::index_of(rd_.load(std::memory_order_relaxed)); - if (cur_rd == circ::index_of(wt_.load(std::memory_order_acquire))) { - return false; // empty - } - std::forward(f)(&(elems[cur_rd].data_)); - std::forward(out)(true); - rd_.fetch_add(1, std::memory_order_release); - return true; - } -}; - -template <> -struct prod_cons_impl> - : prod_cons_impl> { - - template - bool force_push(W* wrapper, F&&, E*) { - wrapper->elems()->disconnect_receiver(1); - return false; - } - - template class E, std::size_t DS, std::size_t AS> - bool pop(W* /*wrapper*/, circ::u2_t& /*cur*/, F&& f, R&& out, E* elems) { - byte_t buff[DS]; - for (unsigned k = 0;;) { - auto cur_rd = rd_.load(std::memory_order_relaxed); - if (circ::index_of(cur_rd) == - circ::index_of(wt_.load(std::memory_order_acquire))) { - return false; // empty - } - std::memcpy(buff, &(elems[circ::index_of(cur_rd)].data_), sizeof(buff)); - if (rd_.compare_exchange_weak(cur_rd, cur_rd + 1, std::memory_order_release)) { - std::forward(f)(buff); - std::forward(out)(true); - return true; - } - ipc::yield(k); - } - } -}; - -template <> -struct prod_cons_impl> - : prod_cons_impl> { - - using flag_t = std::uint64_t; - - template - struct elem_t { - std::aligned_storage_t data_ {}; - std::atomic f_ct_ { 0 }; // commit flag - }; - - alignas(cache_line_size) std::atomic ct_; // commit index - - template - bool push(W* /*wrapper*/, F&& f, E* elems) { - circ::u2_t cur_ct, nxt_ct; - for (unsigned k = 0;;) { - cur_ct = ct_.load(std::memory_order_relaxed); - if (circ::index_of(nxt_ct = cur_ct + 1) == - circ::index_of(rd_.load(std::memory_order_acquire))) { - return false; // full - } - if (ct_.compare_exchange_weak(cur_ct, nxt_ct, std::memory_order_acq_rel)) { - break; - } - ipc::yield(k); - } - auto* el = elems + circ::index_of(cur_ct); - std::forward(f)(&(el->data_)); - // set flag & try update wt - el->f_ct_.store(~static_cast(cur_ct), std::memory_order_release); - while (1) { - auto cac_ct = el->f_ct_.load(std::memory_order_acquire); - if (cur_ct != wt_.load(std::memory_order_relaxed)) { - return true; - } - if ((~cac_ct) != cur_ct) { - return true; - } - if (!el->f_ct_.compare_exchange_strong(cac_ct, 0, std::memory_order_relaxed)) { - return true; - } - wt_.store(nxt_ct, std::memory_order_release); - cur_ct = nxt_ct; - nxt_ct = cur_ct + 1; - el = elems + circ::index_of(cur_ct); - } - return true; - } - - template - bool force_push(W* wrapper, F&&, E*) { - wrapper->elems()->disconnect_receiver(1); - return false; - } - - template class E, std::size_t DS, std::size_t AS> - bool pop(W* /*wrapper*/, circ::u2_t& /*cur*/, F&& f, R&& out, E* elems) { - byte_t buff[DS]; - for (unsigned k = 0;;) { - auto cur_rd = rd_.load(std::memory_order_relaxed); - auto cur_wt = wt_.load(std::memory_order_acquire); - auto id_rd = circ::index_of(cur_rd); - auto id_wt = circ::index_of(cur_wt); - if (id_rd == id_wt) { - auto* el = elems + id_wt; - auto cac_ct = el->f_ct_.load(std::memory_order_acquire); - if ((~cac_ct) != cur_wt) { - return false; // empty - } - if (el->f_ct_.compare_exchange_weak(cac_ct, 0, std::memory_order_relaxed)) { - wt_.store(cur_wt + 1, std::memory_order_release); - } - k = 0; - } - else { - std::memcpy(buff, &(elems[circ::index_of(cur_rd)].data_), sizeof(buff)); - if (rd_.compare_exchange_weak(cur_rd, cur_rd + 1, std::memory_order_release)) { - std::forward(f)(buff); - std::forward(out)(true); - return true; - } - ipc::yield(k); - } - } - } -}; - -template <> -struct prod_cons_impl> { - - using rc_t = std::uint64_t; - - enum : rc_t { - ep_mask = 0x00000000ffffffffull, - ep_incr = 0x0000000100000000ull - }; - - template - struct elem_t { - std::aligned_storage_t data_ {}; - std::atomic rc_ { 0 }; // read-counter - }; - - alignas(cache_line_size) std::atomic wt_; // write index - alignas(cache_line_size) rc_t epoch_ { 0 }; // only one writer - - circ::u2_t cursor() const noexcept { - return wt_.load(std::memory_order_acquire); - } - - template - bool push(W* wrapper, F&& f, E* elems) { - E* el; - for (unsigned k = 0;;) { - circ::cc_t cc = wrapper->elems()->connections(std::memory_order_relaxed); - if (cc == 0) return false; // no reader - el = elems + circ::index_of(wt_.load(std::memory_order_relaxed)); - // check all consumers have finished reading this element - auto cur_rc = el->rc_.load(std::memory_order_acquire); - circ::cc_t rem_cc = cur_rc & ep_mask; - if ((cc & rem_cc) && ((cur_rc & ~ep_mask) == epoch_)) { - return false; // has not finished yet - } - // consider rem_cc to be 0 here - if (el->rc_.compare_exchange_weak( - cur_rc, epoch_ | static_cast(cc), std::memory_order_release)) { - break; - } - ipc::yield(k); - } - std::forward(f)(&(el->data_)); - wt_.fetch_add(1, std::memory_order_release); - return true; - } - - template - bool force_push(W* wrapper, F&& f, E* elems) { - E* el; - epoch_ += ep_incr; - for (unsigned k = 0;;) { - circ::cc_t cc = wrapper->elems()->connections(std::memory_order_relaxed); - if (cc == 0) return false; // no reader - el = elems + circ::index_of(wt_.load(std::memory_order_relaxed)); - // check all consumers have finished reading this element - auto cur_rc = el->rc_.load(std::memory_order_acquire); - circ::cc_t rem_cc = cur_rc & ep_mask; - if (cc & rem_cc) { - ipc::log("force_push: k = %u, cc = %u, rem_cc = %u\n", k, cc, rem_cc); - cc = wrapper->elems()->disconnect_receiver(rem_cc); // disconnect all invalid readers - if (cc == 0) return false; // no reader - } - // just compare & exchange - if (el->rc_.compare_exchange_weak( - cur_rc, epoch_ | static_cast(cc), std::memory_order_release)) { - break; - } - ipc::yield(k); - } - std::forward(f)(&(el->data_)); - wt_.fetch_add(1, std::memory_order_release); - return true; - } - - template - bool pop(W* wrapper, circ::u2_t& cur, F&& f, R&& out, E* elems) { - if (cur == cursor()) return false; // acquire - auto* el = elems + circ::index_of(cur++); - std::forward(f)(&(el->data_)); - for (unsigned k = 0;;) { - auto cur_rc = el->rc_.load(std::memory_order_acquire); - if ((cur_rc & ep_mask) == 0) { - std::forward(out)(true); - return true; - } - auto nxt_rc = cur_rc & ~static_cast(wrapper->connected_id()); - if (el->rc_.compare_exchange_weak(cur_rc, nxt_rc, std::memory_order_release)) { - std::forward(out)((nxt_rc & ep_mask) == 0); - return true; - } - ipc::yield(k); - } - } -}; - -template <> -struct prod_cons_impl> { - - using rc_t = std::uint64_t; - using flag_t = std::uint64_t; - - enum : rc_t { - rc_mask = 0x00000000ffffffffull, - ep_mask = 0x00ffffffffffffffull, - ep_incr = 0x0100000000000000ull, - ic_mask = 0xff000000ffffffffull, - ic_incr = 0x0000000100000000ull - }; - - template - struct elem_t { - std::aligned_storage_t data_ {}; - std::atomic rc_ { 0 }; // read-counter - std::atomic f_ct_ { 0 }; // commit flag - }; - - alignas(cache_line_size) std::atomic ct_; // commit index - alignas(cache_line_size) std::atomic epoch_ { 0 }; - - circ::u2_t cursor() const noexcept { - return ct_.load(std::memory_order_acquire); - } - - constexpr static rc_t inc_rc(rc_t rc) noexcept { - return (rc & ic_mask) | ((rc + ic_incr) & ~ic_mask); - } - - constexpr static rc_t inc_mask(rc_t rc) noexcept { - return inc_rc(rc) & ~rc_mask; - } - - template - bool push(W* wrapper, F&& f, E* elems) { - E* el; - circ::u2_t cur_ct; - rc_t epoch = epoch_.load(std::memory_order_acquire); - for (unsigned k = 0;;) { - circ::cc_t cc = wrapper->elems()->connections(std::memory_order_relaxed); - if (cc == 0) return false; // no reader - el = elems + circ::index_of(cur_ct = ct_.load(std::memory_order_relaxed)); - // check all consumers have finished reading this element - auto cur_rc = el->rc_.load(std::memory_order_relaxed); - circ::cc_t rem_cc = cur_rc & rc_mask; - if ((cc & rem_cc) && ((cur_rc & ~ep_mask) == epoch)) { - return false; // has not finished yet - } - else if (!rem_cc) { - auto cur_fl = el->f_ct_.load(std::memory_order_acquire); - if ((cur_fl != cur_ct) && cur_fl) { - return false; // full - } - } - // consider rem_cc to be 0 here - if (el->rc_.compare_exchange_weak( - cur_rc, inc_mask(epoch | (cur_rc & ep_mask)) | static_cast(cc), std::memory_order_relaxed) && - epoch_.compare_exchange_weak(epoch, epoch, std::memory_order_acq_rel)) { - break; - } - ipc::yield(k); - } - // only one thread/process would touch here at one time - ct_.store(cur_ct + 1, std::memory_order_release); - std::forward(f)(&(el->data_)); - // set flag & try update wt - el->f_ct_.store(~static_cast(cur_ct), std::memory_order_release); - return true; - } - - template - bool force_push(W* wrapper, F&& f, E* elems) { - E* el; - circ::u2_t cur_ct; - rc_t epoch = epoch_.fetch_add(ep_incr, std::memory_order_release) + ep_incr; - for (unsigned k = 0;;) { - circ::cc_t cc = wrapper->elems()->connections(std::memory_order_relaxed); - if (cc == 0) return false; // no reader - el = elems + circ::index_of(cur_ct = ct_.load(std::memory_order_relaxed)); - // check all consumers have finished reading this element - auto cur_rc = el->rc_.load(std::memory_order_acquire); - circ::cc_t rem_cc = cur_rc & rc_mask; - if (cc & rem_cc) { - ipc::log("force_push: k = %u, cc = %u, rem_cc = %u\n", k, cc, rem_cc); - cc = wrapper->elems()->disconnect_receiver(rem_cc); // disconnect all invalid readers - if (cc == 0) return false; // no reader - } - // just compare & exchange - if (el->rc_.compare_exchange_weak( - cur_rc, inc_mask(epoch | (cur_rc & ep_mask)) | static_cast(cc), std::memory_order_relaxed)) { - if (epoch == epoch_.load(std::memory_order_acquire)) { - break; - } - else if (push(wrapper, std::forward(f), elems)) { - return true; - } - epoch = epoch_.fetch_add(ep_incr, std::memory_order_release) + ep_incr; - } - ipc::yield(k); - } - // only one thread/process would touch here at one time - ct_.store(cur_ct + 1, std::memory_order_release); - std::forward(f)(&(el->data_)); - // set flag & try update wt - el->f_ct_.store(~static_cast(cur_ct), std::memory_order_release); - return true; - } - - template - bool pop(W* wrapper, circ::u2_t& cur, F&& f, R&& out, E(& elems)[N]) { - auto* el = elems + circ::index_of(cur); - auto cur_fl = el->f_ct_.load(std::memory_order_acquire); - if (cur_fl != ~static_cast(cur)) { - return false; // empty - } - ++cur; - std::forward(f)(&(el->data_)); - for (unsigned k = 0;;) { - auto cur_rc = el->rc_.load(std::memory_order_acquire); - if ((cur_rc & rc_mask) == 0) { - std::forward(out)(true); - el->f_ct_.store(cur + N - 1, std::memory_order_release); - return true; - } - auto nxt_rc = inc_rc(cur_rc) & ~static_cast(wrapper->connected_id()); - bool last_one = false; - if ((last_one = (nxt_rc & rc_mask) == 0)) { - el->f_ct_.store(cur + N - 1, std::memory_order_release); - } - if (el->rc_.compare_exchange_weak(cur_rc, nxt_rc, std::memory_order_release)) { - std::forward(out)(last_one); - return true; - } - ipc::yield(k); - } - } -}; - -} // namespace ipc diff --git a/spaces/jarvisbot/ChatImprovement/crazy_functions/test_project/latex/attention/introduction.tex b/spaces/jarvisbot/ChatImprovement/crazy_functions/test_project/latex/attention/introduction.tex deleted file mode 100644 index 1baa8915f4cf7aec2520894a87470fc9436d954b..0000000000000000000000000000000000000000 --- a/spaces/jarvisbot/ChatImprovement/crazy_functions/test_project/latex/attention/introduction.tex +++ /dev/null @@ -1,18 +0,0 @@ -Recurrent neural networks, long short-term memory \citep{hochreiter1997} and gated recurrent \citep{gruEval14} neural networks in particular, have been firmly established as state of the art approaches in sequence modeling and transduction problems such as language modeling and machine translation \citep{sutskever14, bahdanau2014neural, cho2014learning}. Numerous efforts have since continued to push the boundaries of recurrent language models and encoder-decoder architectures \citep{wu2016google,luong2015effective,jozefowicz2016exploring}. - -Recurrent models typically factor computation along the symbol positions of the input and output sequences. Aligning the positions to steps in computation time, they generate a sequence of hidden states $h_t$, as a function of the previous hidden state $h_{t-1}$ and the input for position $t$. This inherently sequential nature precludes parallelization within training examples, which becomes critical at longer sequence lengths, as memory constraints limit batching across examples. -%\marginpar{not sure if the memory constraints are understandable here} -Recent work has achieved significant improvements in computational efficiency through factorization tricks \citep{Kuchaiev2017Factorization} and conditional computation \citep{shazeer2017outrageously}, while also improving model performance in case of the latter. The fundamental constraint of sequential computation, however, remains. - -%\marginpar{@all: there is work on analyzing what attention really does in seq2seq models, couldn't find it right away} - -Attention mechanisms have become an integral part of compelling sequence modeling and transduction models in various tasks, allowing modeling of dependencies without regard to their distance in the input or output sequences \citep{bahdanau2014neural, structuredAttentionNetworks}. In all but a few cases \citep{decomposableAttnModel}, however, such attention mechanisms are used in conjunction with a recurrent network. - -%\marginpar{not sure if "cross-positional communication" is understandable without explanation} -%\marginpar{insert exact training times and stats for the model that reaches sota earliest, maybe even a single GPU model?} - -In this work we propose the Transformer, a model architecture eschewing recurrence and instead relying entirely on an attention mechanism to draw global dependencies between input and output. The Transformer allows for significantly more parallelization and can reach a new state of the art in translation quality after being trained for as little as twelve hours on eight P100 GPUs. -%\marginpar{you removed the constant number of repetitions part. I wrote it because I wanted to make it clear that the model does not only perform attention once, while it's also not recurrent. I thought that might be important to get across early.} - -% Just a standard paragraph with citations, rewrite. -%After the seminal papers of \citep{sutskever14}, \citep{bahdanau2014neural}, and \citep{cho2014learning}, recurrent models have become the dominant solution for both sequence modeling and sequence-to-sequence transduction. Many efforts such as \citep{wu2016google,luong2015effective,jozefowicz2016exploring} have pushed the boundaries of machine translation and language modeling with recurrent sequence models. Recent effort \citep{shazeer2017outrageously} has combined the power of conditional computation with sequence models to train very large models for machine translation, pushing SOTA at lower computational cost. Recurrent models compute a vector of hidden states $h_t$, for each time step $t$ of computation. $h_t$ is a function of both the input at time $t$ and the previous hidden state $h_t$. This dependence on the previous hidden state encumbers recurrnet models to process multiple inputs at once, and their time complexity is a linear function of the length of the input and output, both during training and inference. [What I want to say here is that although this is fine during decoding, at training time, we are given both input and output and this linear nature does not allow the RNN to process all inputs and outputs simultaneously and haven't been used on datasets that are the of the scale of the web. What's the largest dataset we have ? . Talk about Nividia and possibly other's effors to speed up things, and possibly other efforts that alleviate this, but are still limited by it's comptuational nature]. Rest of the intro: What if you could construct the state based on the actual inputs and outputs, then you could construct them all at once. This has been the foundation of many promising recent efforts, bytenet,facenet (Also talk about quasi rnn here). Now we talk about attention!! Along with cell architectures such as long short-term meory (LSTM) \citep{hochreiter1997}, and gated recurrent units (GRUs) \citep{cho2014learning}, attention has emerged as an essential ingredient in successful sequence models, in particular for machine translation. In recent years, many, if not all, state-of-the-art (SOTA) results in machine translation have been achieved with attention-based sequence models \citep{wu2016google,luong2015effective,jozefowicz2016exploring}. Talk about the neon work on how it played with attention to do self attention! Then talk about what we do. \ No newline at end of file diff --git a/spaces/jbilcke-hf/AnimateDiff/download_bashscripts/1-ToonYou.sh b/spaces/jbilcke-hf/AnimateDiff/download_bashscripts/1-ToonYou.sh deleted file mode 100644 index 6b7c3b6deddca1279d945a218f8a3f77486486fa..0000000000000000000000000000000000000000 --- a/spaces/jbilcke-hf/AnimateDiff/download_bashscripts/1-ToonYou.sh +++ /dev/null @@ -1,2 +0,0 @@ -#!/bin/bash -wget https://civitai.com/api/download/models/78775 -P models/DreamBooth_LoRA/ --content-disposition --no-check-certificate \ No newline at end of file diff --git a/spaces/jhtonyKoo/music_mixing_style_transfer/mixing_style_transfer/data_loader/__init__.py b/spaces/jhtonyKoo/music_mixing_style_transfer/mixing_style_transfer/data_loader/__init__.py deleted file mode 100644 index dd5f27a7d2742aaf3301599d1c5c9a8b58aa3ef4..0000000000000000000000000000000000000000 --- a/spaces/jhtonyKoo/music_mixing_style_transfer/mixing_style_transfer/data_loader/__init__.py +++ /dev/null @@ -1,2 +0,0 @@ -from .data_loader import * -from .loader_utils import * \ No newline at end of file diff --git a/spaces/jhtonyKoo/music_mixing_style_transfer/mixing_style_transfer/data_loader/loader_utils.py b/spaces/jhtonyKoo/music_mixing_style_transfer/mixing_style_transfer/data_loader/loader_utils.py deleted file mode 100644 index ed63ce98c5e3f6e56d82f47ac7e38bfc7b76e8f2..0000000000000000000000000000000000000000 --- a/spaces/jhtonyKoo/music_mixing_style_transfer/mixing_style_transfer/data_loader/loader_utils.py +++ /dev/null @@ -1,71 +0,0 @@ -""" Utility file for loaders """ - -import numpy as np -import soundfile as sf -import wave - - - -# Function to convert frame level audio into atomic time -def frames_to_time(total_length, sr=44100): - in_time = total_length / sr - hour = int(in_time / 3600) - minute = int((in_time - hour*3600) / 60) - second = int(in_time - hour*3600 - minute*60) - return f"{hour:02d}:{minute:02d}:{second:02d}" - - -# Function to convert atomic labeled time into frames or seconds -def time_to_frames(input_time, to_frames=True, sr=44100): - hour, minute, second = input_time.split(':') - total_seconds = int(hour)*3600 + int(minute)*60 + int(second) - return total_seconds*sr if to_frames else total_seconds - - -# Function to convert seconds to atomic labeled time -def sec_to_time(input_time): - return frames_to_time(input_time, sr=1) - - -# Function to load total trainable raw audio lengths -def get_total_audio_length(audio_paths): - total_length = 0 - for cur_audio_path in audio_paths: - cur_wav = wave.open(cur_audio_path, 'r') - total_length += cur_wav.getnframes() # here, length = # of frames - return total_length - - -# Function to load length of an input wav audio -def load_wav_length(audio_path): - pt_wav = wave.open(audio_path, 'r') - length = pt_wav.getnframes() - return length - - -# Function to load only selected 16 bit, stereo wav audio segment from an input wav audio -def load_wav_segment(audio_path, start_point=None, duration=None, axis=1, sample_rate=44100): - start_point = 0 if start_point==None else start_point - duration = load_wav_length(audio_path) if duration==None else duration - pt_wav = wave.open(audio_path, 'r') - - if pt_wav.getframerate()!=sample_rate: - raise ValueError(f"ValueError: input audio's sample rate should be {sample_rate}") - pt_wav.setpos(start_point) - x = pt_wav.readframes(duration) - if pt_wav.getsampwidth()==2: - x = np.frombuffer(x, dtype=np.int16) - X = x / float(2**15) # needs to be 16 bit format - elif pt_wav.getsampwidth()==4: - x = np.frombuffer(x, dtype=np.int32) - X = x / float(2**31) # needs to be 32 bit format - else: - raise ValueError("ValueError: input audio's bit depth should be 16 or 32-bit") - - # exception for stereo channels - if pt_wav.getnchannels()==2: - X_l = np.expand_dims(X[::2], axis=axis) - X_r = np.expand_dims(X[1::2], axis=axis) - X = np.concatenate((X_l, X_r), axis=axis) - return X - diff --git a/spaces/jhwen/bingo/src/lib/hooks/use-copy-to-clipboard.tsx b/spaces/jhwen/bingo/src/lib/hooks/use-copy-to-clipboard.tsx deleted file mode 100644 index 62f7156dca246c46b213151af003a3a177977ccf..0000000000000000000000000000000000000000 --- a/spaces/jhwen/bingo/src/lib/hooks/use-copy-to-clipboard.tsx +++ /dev/null @@ -1,33 +0,0 @@ -'use client' - -import * as React from 'react' - -export interface useCopyToClipboardProps { - timeout?: number -} - -export function useCopyToClipboard({ - timeout = 2000 -}: useCopyToClipboardProps) { - const [isCopied, setIsCopied] = React.useState(false) - - const copyToClipboard = (value: string) => { - if (typeof window === 'undefined' || !navigator.clipboard?.writeText) { - return - } - - if (!value) { - return - } - - navigator.clipboard.writeText(value).then(() => { - setIsCopied(true) - - setTimeout(() => { - setIsCopied(false) - }, timeout) - }) - } - - return { isCopied, copyToClipboard } -} diff --git a/spaces/jmesikto/whisper-webui/src/utils.py b/spaces/jmesikto/whisper-webui/src/utils.py deleted file mode 100644 index 7f4ef3d71260034f655d6362f92e866b8777d16d..0000000000000000000000000000000000000000 --- a/spaces/jmesikto/whisper-webui/src/utils.py +++ /dev/null @@ -1,135 +0,0 @@ -import textwrap -import unicodedata -import re - -import zlib -from typing import Iterator, TextIO -import tqdm - -import urllib3 - - -def exact_div(x, y): - assert x % y == 0 - return x // y - - -def str2bool(string): - str2val = {"True": True, "False": False} - if string in str2val: - return str2val[string] - else: - raise ValueError(f"Expected one of {set(str2val.keys())}, got {string}") - - -def optional_int(string): - return None if string == "None" else int(string) - - -def optional_float(string): - return None if string == "None" else float(string) - - -def compression_ratio(text) -> float: - return len(text) / len(zlib.compress(text.encode("utf-8"))) - - -def format_timestamp(seconds: float, always_include_hours: bool = False, fractionalSeperator: str = '.'): - assert seconds >= 0, "non-negative timestamp expected" - milliseconds = round(seconds * 1000.0) - - hours = milliseconds // 3_600_000 - milliseconds -= hours * 3_600_000 - - minutes = milliseconds // 60_000 - milliseconds -= minutes * 60_000 - - seconds = milliseconds // 1_000 - milliseconds -= seconds * 1_000 - - hours_marker = f"{hours:02d}:" if always_include_hours or hours > 0 else "" - return f"{hours_marker}{minutes:02d}:{seconds:02d}{fractionalSeperator}{milliseconds:03d}" - - -def write_txt(transcript: Iterator[dict], file: TextIO): - for segment in transcript: - print(segment['text'].strip(), file=file, flush=True) - - -def write_vtt(transcript: Iterator[dict], file: TextIO, maxLineWidth=None): - print("WEBVTT\n", file=file) - for segment in transcript: - text = process_text(segment['text'], maxLineWidth).replace('-->', '->') - - print( - f"{format_timestamp(segment['start'])} --> {format_timestamp(segment['end'])}\n" - f"{text}\n", - file=file, - flush=True, - ) - - -def write_srt(transcript: Iterator[dict], file: TextIO, maxLineWidth=None): - """ - Write a transcript to a file in SRT format. - Example usage: - from pathlib import Path - from whisper.utils import write_srt - result = transcribe(model, audio_path, temperature=temperature, **args) - # save SRT - audio_basename = Path(audio_path).stem - with open(Path(output_dir) / (audio_basename + ".srt"), "w", encoding="utf-8") as srt: - write_srt(result["segments"], file=srt) - """ - for i, segment in enumerate(transcript, start=1): - text = process_text(segment['text'].strip(), maxLineWidth).replace('-->', '->') - - # write srt lines - print( - f"{i}\n" - f"{format_timestamp(segment['start'], always_include_hours=True, fractionalSeperator=',')} --> " - f"{format_timestamp(segment['end'], always_include_hours=True, fractionalSeperator=',')}\n" - f"{text}\n", - file=file, - flush=True, - ) - -def process_text(text: str, maxLineWidth=None): - if (maxLineWidth is None or maxLineWidth < 0): - return text - - lines = textwrap.wrap(text, width=maxLineWidth, tabsize=4) - return '\n'.join(lines) - -def slugify(value, allow_unicode=False): - """ - Taken from https://github.com/django/django/blob/master/django/utils/text.py - Convert to ASCII if 'allow_unicode' is False. Convert spaces or repeated - dashes to single dashes. Remove characters that aren't alphanumerics, - underscores, or hyphens. Convert to lowercase. Also strip leading and - trailing whitespace, dashes, and underscores. - """ - value = str(value) - if allow_unicode: - value = unicodedata.normalize('NFKC', value) - else: - value = unicodedata.normalize('NFKD', value).encode('ascii', 'ignore').decode('ascii') - value = re.sub(r'[^\w\s-]', '', value.lower()) - return re.sub(r'[-\s]+', '-', value).strip('-_') - -def download_file(url: str, destination: str): - with urllib3.request.urlopen(url) as source, open(destination, "wb") as output: - with tqdm( - total=int(source.info().get("Content-Length")), - ncols=80, - unit="iB", - unit_scale=True, - unit_divisor=1024, - ) as loop: - while True: - buffer = source.read(8192) - if not buffer: - break - - output.write(buffer) - loop.update(len(buffer)) \ No newline at end of file diff --git a/spaces/joaomaia/football_probs/app.py b/spaces/joaomaia/football_probs/app.py deleted file mode 100644 index 8af56060f1db2437eccb95814131f5221dc354d0..0000000000000000000000000000000000000000 --- a/spaces/joaomaia/football_probs/app.py +++ /dev/null @@ -1,73 +0,0 @@ -import gradio as gr -import pandas as pd -import numpy as np -import seaborn as sns -import matplotlib.pyplot as plt -from xgboost import XGBClassifier -from sklearn.metrics import confusion_matrix -#from imblearn.over_sampling import BorderlineSMOTE -from sklearn.model_selection import train_test_split -#from imblearn.over_sampling import SMOTE, RandomOverSampler -import datetime as dt -import warnings -warnings.filterwarnings("ignore") -#import config - -def check_probs(choice, - cutoff, - posse_bola_casa, - posse_bola_visitante, - tentativas_gol_casa, - tentativas_gol_visitante, - finalizacao_casa, - finalizacao_visitante, - chutes_fora_casa, - chutes_fora_visitante, - chutes_bloqueados_casa, - chutes_bloqueados_visitante, - faltas_cobradas_casa, - faltas_cobradas_visitante, - escanteios_casa, - escanteios_visitante, - impedimentos_casa, - impedimentos_visitante, - laterais_cobrados_casa, - laterais_cobrados_visitante, - defesas_goleiro_casa, - defesas_goleiro_visitante, - faltas_casa, - faltas_visitante, - cartao_amarelo_casa, - cartao_amarelo_visitante, - cartao_vermelho_casa, - cartao_vermelho_visitante, - total_passes_casa, - total_passes_visitante, - passes_completados_casa, - passes_completados_visitante, - desarmes_casa, - desarmes_visitante, - ataques_casa, - ataques_visitante, - ataques_perigoso_casa, - ataques_perigoso_visitante - ): - #clf = load(os.path.abspath('sales_forecasting_5.joblib')) - teste=pd.DataFrame() - #teste.to_excel('teste.xlsx',index=False) - return teste - -demo = gr.Interface( - fn=check_probs, - inputs=[gr.CheckboxGroup(choices=['menos de 3 gols','mais de 1 gol','entre 1 e 3 gols']), - gr.Slider(minimum=0, maximum=1, value=0.9, step=0.05),gr.Number(),gr.Number(),gr.Number(),gr.Number(),gr.Number(), - gr.Number(),gr.Number(),gr.Number(),gr.Number(),gr.Number(),gr.Number(), - gr.Number(),gr.Number(),gr.Number(),gr.Number(),gr.Number(),gr.Number(), - gr.Number(),gr.Number(),gr.Number(),gr.Number(),gr.Number(),gr.Number(), - gr.Number(),gr.Number(),gr.Number(),gr.Number(),gr.Number(),gr.Number(), - gr.Number(),gr.Number(),gr.Number(),gr.Number(),gr.Number(),gr.Number(), - gr.Number()], - outputs=['dataframe']) - -if __name__ == "__main__": - demo.launch() \ No newline at end of file diff --git a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/Crypto/Math/_IntegerCustom.py b/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/Crypto/Math/_IntegerCustom.py deleted file mode 100644 index d6f6f751a848ed2b6285f3aeaa1313f7c82aa64b..0000000000000000000000000000000000000000 --- a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/Crypto/Math/_IntegerCustom.py +++ /dev/null @@ -1,118 +0,0 @@ -# =================================================================== -# -# Copyright (c) 2018, Helder Eijs -# All rights reserved. -# -# Redistribution and use in source and binary forms, with or without -# modification, are permitted provided that the following conditions -# are met: -# -# 1. Redistributions of source code must retain the above copyright -# notice, this list of conditions and the following disclaimer. -# 2. Redistributions in binary form must reproduce the above copyright -# notice, this list of conditions and the following disclaimer in -# the documentation and/or other materials provided with the -# distribution. -# -# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS -# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT -# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS -# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE -# COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, -# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, -# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; -# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER -# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT -# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN -# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE -# POSSIBILITY OF SUCH DAMAGE. -# =================================================================== - -from ._IntegerNative import IntegerNative - -from Crypto.Util.number import long_to_bytes, bytes_to_long - -from Crypto.Util._raw_api import (load_pycryptodome_raw_lib, - create_string_buffer, - get_raw_buffer, backend, - c_size_t, c_ulonglong) - - -from Crypto.Random.random import getrandbits - -c_defs = """ -int monty_pow(const uint8_t *base, - const uint8_t *exp, - const uint8_t *modulus, - uint8_t *out, - size_t len, - uint64_t seed); -""" - - -_raw_montgomery = load_pycryptodome_raw_lib("Crypto.Math._modexp", c_defs) -implementation = {"library": "custom", "api": backend} - - -class IntegerCustom(IntegerNative): - - @staticmethod - def from_bytes(byte_string, byteorder='big'): - if byteorder == 'big': - pass - elif byteorder == 'little': - byte_string = bytearray(byte_string) - byte_string.reverse() - else: - raise ValueError("Incorrect byteorder") - return IntegerCustom(bytes_to_long(byte_string)) - - def inplace_pow(self, exponent, modulus=None): - exp_value = int(exponent) - if exp_value < 0: - raise ValueError("Exponent must not be negative") - - # No modular reduction - if modulus is None: - self._value = pow(self._value, exp_value) - return self - - # With modular reduction - mod_value = int(modulus) - if mod_value < 0: - raise ValueError("Modulus must be positive") - if mod_value == 0: - raise ZeroDivisionError("Modulus cannot be zero") - - # C extension only works with odd moduli - if (mod_value & 1) == 0: - self._value = pow(self._value, exp_value, mod_value) - return self - - # C extension only works with bases smaller than modulus - if self._value >= mod_value: - self._value %= mod_value - - max_len = len(long_to_bytes(max(self._value, exp_value, mod_value))) - - base_b = long_to_bytes(self._value, max_len) - exp_b = long_to_bytes(exp_value, max_len) - modulus_b = long_to_bytes(mod_value, max_len) - - out = create_string_buffer(max_len) - - error = _raw_montgomery.monty_pow( - out, - base_b, - exp_b, - modulus_b, - c_size_t(max_len), - c_ulonglong(getrandbits(64)) - ) - - if error: - raise ValueError("monty_pow failed with error: %d" % error) - - result = bytes_to_long(get_raw_buffer(out)) - self._value = result - return self diff --git a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/aiohttp/web_log.py b/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/aiohttp/web_log.py deleted file mode 100644 index bc6e3b5a8a280347d606e91374517fef223fa441..0000000000000000000000000000000000000000 --- a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/aiohttp/web_log.py +++ /dev/null @@ -1,208 +0,0 @@ -import datetime -import functools -import logging -import os -import re -from collections import namedtuple -from typing import Any, Callable, Dict, Iterable, List, Tuple # noqa - -from .abc import AbstractAccessLogger -from .web_request import BaseRequest -from .web_response import StreamResponse - -KeyMethod = namedtuple("KeyMethod", "key method") - - -class AccessLogger(AbstractAccessLogger): - """Helper object to log access. - - Usage: - log = logging.getLogger("spam") - log_format = "%a %{User-Agent}i" - access_logger = AccessLogger(log, log_format) - access_logger.log(request, response, time) - - Format: - %% The percent sign - %a Remote IP-address (IP-address of proxy if using reverse proxy) - %t Time when the request was started to process - %P The process ID of the child that serviced the request - %r First line of request - %s Response status code - %b Size of response in bytes, including HTTP headers - %T Time taken to serve the request, in seconds - %Tf Time taken to serve the request, in seconds with floating fraction - in .06f format - %D Time taken to serve the request, in microseconds - %{FOO}i request.headers['FOO'] - %{FOO}o response.headers['FOO'] - %{FOO}e os.environ['FOO'] - - """ - - LOG_FORMAT_MAP = { - "a": "remote_address", - "t": "request_start_time", - "P": "process_id", - "r": "first_request_line", - "s": "response_status", - "b": "response_size", - "T": "request_time", - "Tf": "request_time_frac", - "D": "request_time_micro", - "i": "request_header", - "o": "response_header", - } - - LOG_FORMAT = '%a %t "%r" %s %b "%{Referer}i" "%{User-Agent}i"' - FORMAT_RE = re.compile(r"%(\{([A-Za-z0-9\-_]+)\}([ioe])|[atPrsbOD]|Tf?)") - CLEANUP_RE = re.compile(r"(%[^s])") - _FORMAT_CACHE: Dict[str, Tuple[str, List[KeyMethod]]] = {} - - def __init__(self, logger: logging.Logger, log_format: str = LOG_FORMAT) -> None: - """Initialise the logger. - - logger is a logger object to be used for logging. - log_format is a string with apache compatible log format description. - - """ - super().__init__(logger, log_format=log_format) - - _compiled_format = AccessLogger._FORMAT_CACHE.get(log_format) - if not _compiled_format: - _compiled_format = self.compile_format(log_format) - AccessLogger._FORMAT_CACHE[log_format] = _compiled_format - - self._log_format, self._methods = _compiled_format - - def compile_format(self, log_format: str) -> Tuple[str, List[KeyMethod]]: - """Translate log_format into form usable by modulo formatting - - All known atoms will be replaced with %s - Also methods for formatting of those atoms will be added to - _methods in appropriate order - - For example we have log_format = "%a %t" - This format will be translated to "%s %s" - Also contents of _methods will be - [self._format_a, self._format_t] - These method will be called and results will be passed - to translated string format. - - Each _format_* method receive 'args' which is list of arguments - given to self.log - - Exceptions are _format_e, _format_i and _format_o methods which - also receive key name (by functools.partial) - - """ - # list of (key, method) tuples, we don't use an OrderedDict as users - # can repeat the same key more than once - methods = list() - - for atom in self.FORMAT_RE.findall(log_format): - if atom[1] == "": - format_key1 = self.LOG_FORMAT_MAP[atom[0]] - m = getattr(AccessLogger, "_format_%s" % atom[0]) - key_method = KeyMethod(format_key1, m) - else: - format_key2 = (self.LOG_FORMAT_MAP[atom[2]], atom[1]) - m = getattr(AccessLogger, "_format_%s" % atom[2]) - key_method = KeyMethod(format_key2, functools.partial(m, atom[1])) - - methods.append(key_method) - - log_format = self.FORMAT_RE.sub(r"%s", log_format) - log_format = self.CLEANUP_RE.sub(r"%\1", log_format) - return log_format, methods - - @staticmethod - def _format_i( - key: str, request: BaseRequest, response: StreamResponse, time: float - ) -> str: - if request is None: - return "(no headers)" - - # suboptimal, make istr(key) once - return request.headers.get(key, "-") - - @staticmethod - def _format_o( - key: str, request: BaseRequest, response: StreamResponse, time: float - ) -> str: - # suboptimal, make istr(key) once - return response.headers.get(key, "-") - - @staticmethod - def _format_a(request: BaseRequest, response: StreamResponse, time: float) -> str: - if request is None: - return "-" - ip = request.remote - return ip if ip is not None else "-" - - @staticmethod - def _format_t(request: BaseRequest, response: StreamResponse, time: float) -> str: - now = datetime.datetime.utcnow() - start_time = now - datetime.timedelta(seconds=time) - return start_time.strftime("[%d/%b/%Y:%H:%M:%S +0000]") - - @staticmethod - def _format_P(request: BaseRequest, response: StreamResponse, time: float) -> str: - return "<%s>" % os.getpid() - - @staticmethod - def _format_r(request: BaseRequest, response: StreamResponse, time: float) -> str: - if request is None: - return "-" - return "{} {} HTTP/{}.{}".format( - request.method, - request.path_qs, - request.version.major, - request.version.minor, - ) - - @staticmethod - def _format_s(request: BaseRequest, response: StreamResponse, time: float) -> int: - return response.status - - @staticmethod - def _format_b(request: BaseRequest, response: StreamResponse, time: float) -> int: - return response.body_length - - @staticmethod - def _format_T(request: BaseRequest, response: StreamResponse, time: float) -> str: - return str(round(time)) - - @staticmethod - def _format_Tf(request: BaseRequest, response: StreamResponse, time: float) -> str: - return "%06f" % time - - @staticmethod - def _format_D(request: BaseRequest, response: StreamResponse, time: float) -> str: - return str(round(time * 1000000)) - - def _format_line( - self, request: BaseRequest, response: StreamResponse, time: float - ) -> Iterable[Tuple[str, Callable[[BaseRequest, StreamResponse, float], str]]]: - return [(key, method(request, response, time)) for key, method in self._methods] - - def log(self, request: BaseRequest, response: StreamResponse, time: float) -> None: - try: - fmt_info = self._format_line(request, response, time) - - values = list() - extra = dict() - for key, value in fmt_info: - values.append(value) - - if key.__class__ is str: - extra[key] = value - else: - k1, k2 = key # type: ignore[misc] - dct = extra.get(k1, {}) # type: ignore[var-annotated,has-type] - dct[k2] = value # type: ignore[index,has-type] - extra[k1] = dct # type: ignore[has-type,assignment] - - self.logger.info(self._log_format % tuple(values), extra=extra) - except Exception: - self.logger.exception("Error in logging") diff --git a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/aiohttp/worker.py b/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/aiohttp/worker.py deleted file mode 100644 index f1302899f2f0e078613e69d9a8103ecc00bae95d..0000000000000000000000000000000000000000 --- a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/aiohttp/worker.py +++ /dev/null @@ -1,269 +0,0 @@ -"""Async gunicorn worker for aiohttp.web""" - -import asyncio -import os -import re -import signal -import sys -from types import FrameType -from typing import Any, Awaitable, Callable, Optional, Union # noqa - -from gunicorn.config import AccessLogFormat as GunicornAccessLogFormat -from gunicorn.workers import base - -from aiohttp import web - -from .helpers import set_result -from .web_app import Application -from .web_log import AccessLogger - -try: - import ssl - - SSLContext = ssl.SSLContext -except ImportError: # pragma: no cover - ssl = None # type: ignore[assignment] - SSLContext = object # type: ignore[misc,assignment] - - -__all__ = ("GunicornWebWorker", "GunicornUVLoopWebWorker", "GunicornTokioWebWorker") - - -class GunicornWebWorker(base.Worker): # type: ignore[misc,no-any-unimported] - - DEFAULT_AIOHTTP_LOG_FORMAT = AccessLogger.LOG_FORMAT - DEFAULT_GUNICORN_LOG_FORMAT = GunicornAccessLogFormat.default - - def __init__(self, *args: Any, **kw: Any) -> None: # pragma: no cover - super().__init__(*args, **kw) - - self._task: Optional[asyncio.Task[None]] = None - self.exit_code = 0 - self._notify_waiter: Optional[asyncio.Future[bool]] = None - - def init_process(self) -> None: - # create new event_loop after fork - asyncio.get_event_loop().close() - - self.loop = asyncio.new_event_loop() - asyncio.set_event_loop(self.loop) - - super().init_process() - - def run(self) -> None: - self._task = self.loop.create_task(self._run()) - - try: # ignore all finalization problems - self.loop.run_until_complete(self._task) - except Exception: - self.log.exception("Exception in gunicorn worker") - self.loop.run_until_complete(self.loop.shutdown_asyncgens()) - self.loop.close() - - sys.exit(self.exit_code) - - async def _run(self) -> None: - runner = None - if isinstance(self.wsgi, Application): - app = self.wsgi - elif asyncio.iscoroutinefunction(self.wsgi): - wsgi = await self.wsgi() - if isinstance(wsgi, web.AppRunner): - runner = wsgi - app = runner.app - else: - app = wsgi - else: - raise RuntimeError( - "wsgi app should be either Application or " - "async function returning Application, got {}".format(self.wsgi) - ) - - if runner is None: - access_log = self.log.access_log if self.cfg.accesslog else None - runner = web.AppRunner( - app, - logger=self.log, - keepalive_timeout=self.cfg.keepalive, - access_log=access_log, - access_log_format=self._get_valid_log_format( - self.cfg.access_log_format - ), - ) - await runner.setup() - - ctx = self._create_ssl_context(self.cfg) if self.cfg.is_ssl else None - - runner = runner - assert runner is not None - server = runner.server - assert server is not None - for sock in self.sockets: - site = web.SockSite( - runner, - sock, - ssl_context=ctx, - shutdown_timeout=self.cfg.graceful_timeout / 100 * 95, - ) - await site.start() - - # If our parent changed then we shut down. - pid = os.getpid() - try: - while self.alive: # type: ignore[has-type] - self.notify() - - cnt = server.requests_count - if self.cfg.max_requests and cnt > self.cfg.max_requests: - self.alive = False - self.log.info("Max requests, shutting down: %s", self) - - elif pid == os.getpid() and self.ppid != os.getppid(): - self.alive = False - self.log.info("Parent changed, shutting down: %s", self) - else: - await self._wait_next_notify() - except BaseException: - pass - - await runner.cleanup() - - def _wait_next_notify(self) -> "asyncio.Future[bool]": - self._notify_waiter_done() - - loop = self.loop - assert loop is not None - self._notify_waiter = waiter = loop.create_future() - self.loop.call_later(1.0, self._notify_waiter_done, waiter) - - return waiter - - def _notify_waiter_done( - self, waiter: Optional["asyncio.Future[bool]"] = None - ) -> None: - if waiter is None: - waiter = self._notify_waiter - if waiter is not None: - set_result(waiter, True) - - if waiter is self._notify_waiter: - self._notify_waiter = None - - def init_signals(self) -> None: - # Set up signals through the event loop API. - - self.loop.add_signal_handler( - signal.SIGQUIT, self.handle_quit, signal.SIGQUIT, None - ) - - self.loop.add_signal_handler( - signal.SIGTERM, self.handle_exit, signal.SIGTERM, None - ) - - self.loop.add_signal_handler( - signal.SIGINT, self.handle_quit, signal.SIGINT, None - ) - - self.loop.add_signal_handler( - signal.SIGWINCH, self.handle_winch, signal.SIGWINCH, None - ) - - self.loop.add_signal_handler( - signal.SIGUSR1, self.handle_usr1, signal.SIGUSR1, None - ) - - self.loop.add_signal_handler( - signal.SIGABRT, self.handle_abort, signal.SIGABRT, None - ) - - # Don't let SIGTERM and SIGUSR1 disturb active requests - # by interrupting system calls - signal.siginterrupt(signal.SIGTERM, False) - signal.siginterrupt(signal.SIGUSR1, False) - # Reset signals so Gunicorn doesn't swallow subprocess return codes - # See: https://github.com/aio-libs/aiohttp/issues/6130 - if sys.version_info < (3, 8): - # Starting from Python 3.8, - # the default child watcher is ThreadedChildWatcher. - # The watcher doesn't depend on SIGCHLD signal, - # there is no need to reset it. - signal.signal(signal.SIGCHLD, signal.SIG_DFL) - - def handle_quit(self, sig: int, frame: FrameType) -> None: - self.alive = False - - # worker_int callback - self.cfg.worker_int(self) - - # wakeup closing process - self._notify_waiter_done() - - def handle_abort(self, sig: int, frame: FrameType) -> None: - self.alive = False - self.exit_code = 1 - self.cfg.worker_abort(self) - sys.exit(1) - - @staticmethod - def _create_ssl_context(cfg: Any) -> "SSLContext": - """Creates SSLContext instance for usage in asyncio.create_server. - - See ssl.SSLSocket.__init__ for more details. - """ - if ssl is None: # pragma: no cover - raise RuntimeError("SSL is not supported.") - - ctx = ssl.SSLContext(cfg.ssl_version) - ctx.load_cert_chain(cfg.certfile, cfg.keyfile) - ctx.verify_mode = cfg.cert_reqs - if cfg.ca_certs: - ctx.load_verify_locations(cfg.ca_certs) - if cfg.ciphers: - ctx.set_ciphers(cfg.ciphers) - return ctx - - def _get_valid_log_format(self, source_format: str) -> str: - if source_format == self.DEFAULT_GUNICORN_LOG_FORMAT: - return self.DEFAULT_AIOHTTP_LOG_FORMAT - elif re.search(r"%\([^\)]+\)", source_format): - raise ValueError( - "Gunicorn's style options in form of `%(name)s` are not " - "supported for the log formatting. Please use aiohttp's " - "format specification to configure access log formatting: " - "http://docs.aiohttp.org/en/stable/logging.html" - "#format-specification" - ) - else: - return source_format - - -class GunicornUVLoopWebWorker(GunicornWebWorker): - def init_process(self) -> None: - import uvloop - - # Close any existing event loop before setting a - # new policy. - asyncio.get_event_loop().close() - - # Setup uvloop policy, so that every - # asyncio.get_event_loop() will create an instance - # of uvloop event loop. - asyncio.set_event_loop_policy(uvloop.EventLoopPolicy()) - - super().init_process() - - -class GunicornTokioWebWorker(GunicornWebWorker): - def init_process(self) -> None: # pragma: no cover - import tokio - - # Close any existing event loop before setting a - # new policy. - asyncio.get_event_loop().close() - - # Setup tokio policy, so that every - # asyncio.get_event_loop() will create an instance - # of tokio event loop. - asyncio.set_event_loop_policy(tokio.EventLoopPolicy()) - - super().init_process() diff --git a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/dns/_asyncio_backend.py b/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/dns/_asyncio_backend.py deleted file mode 100644 index 2631228ecdc95684f1d30980780f3300bf81de9b..0000000000000000000000000000000000000000 --- a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/dns/_asyncio_backend.py +++ /dev/null @@ -1,275 +0,0 @@ -# Copyright (C) Dnspython Contributors, see LICENSE for text of ISC license - -"""asyncio library query support""" - -import asyncio -import socket -import sys - -import dns._asyncbackend -import dns.exception - -_is_win32 = sys.platform == "win32" - - -def _get_running_loop(): - try: - return asyncio.get_running_loop() - except AttributeError: # pragma: no cover - return asyncio.get_event_loop() - - -class _DatagramProtocol: - def __init__(self): - self.transport = None - self.recvfrom = None - - def connection_made(self, transport): - self.transport = transport - - def datagram_received(self, data, addr): - if self.recvfrom and not self.recvfrom.done(): - self.recvfrom.set_result((data, addr)) - - def error_received(self, exc): # pragma: no cover - if self.recvfrom and not self.recvfrom.done(): - self.recvfrom.set_exception(exc) - - def connection_lost(self, exc): - if self.recvfrom and not self.recvfrom.done(): - if exc is None: - # EOF we triggered. Is there a better way to do this? - try: - raise EOFError - except EOFError as e: - self.recvfrom.set_exception(e) - else: - self.recvfrom.set_exception(exc) - - def close(self): - self.transport.close() - - -async def _maybe_wait_for(awaitable, timeout): - if timeout is not None: - try: - return await asyncio.wait_for(awaitable, timeout) - except asyncio.TimeoutError: - raise dns.exception.Timeout(timeout=timeout) - else: - return await awaitable - - -class DatagramSocket(dns._asyncbackend.DatagramSocket): - def __init__(self, family, transport, protocol): - super().__init__(family) - self.transport = transport - self.protocol = protocol - - async def sendto(self, what, destination, timeout): # pragma: no cover - # no timeout for asyncio sendto - self.transport.sendto(what, destination) - return len(what) - - async def recvfrom(self, size, timeout): - # ignore size as there's no way I know to tell protocol about it - done = _get_running_loop().create_future() - try: - assert self.protocol.recvfrom is None - self.protocol.recvfrom = done - await _maybe_wait_for(done, timeout) - return done.result() - finally: - self.protocol.recvfrom = None - - async def close(self): - self.protocol.close() - - async def getpeername(self): - return self.transport.get_extra_info("peername") - - async def getsockname(self): - return self.transport.get_extra_info("sockname") - - async def getpeercert(self, timeout): - raise NotImplementedError - - -class StreamSocket(dns._asyncbackend.StreamSocket): - def __init__(self, af, reader, writer): - self.family = af - self.reader = reader - self.writer = writer - - async def sendall(self, what, timeout): - self.writer.write(what) - return await _maybe_wait_for(self.writer.drain(), timeout) - - async def recv(self, size, timeout): - return await _maybe_wait_for(self.reader.read(size), timeout) - - async def close(self): - self.writer.close() - - async def getpeername(self): - return self.writer.get_extra_info("peername") - - async def getsockname(self): - return self.writer.get_extra_info("sockname") - - async def getpeercert(self, timeout): - return self.writer.get_extra_info("peercert") - - -try: - import anyio - import httpcore - import httpcore._backends.anyio - import httpx - - _CoreAsyncNetworkBackend = httpcore.AsyncNetworkBackend - _CoreAnyIOStream = httpcore._backends.anyio.AnyIOStream - - from dns.query import _compute_times, _expiration_for_this_attempt, _remaining - - class _NetworkBackend(_CoreAsyncNetworkBackend): - def __init__(self, resolver, local_port, bootstrap_address, family): - super().__init__() - self._local_port = local_port - self._resolver = resolver - self._bootstrap_address = bootstrap_address - self._family = family - if local_port != 0: - raise NotImplementedError( - "the asyncio transport for HTTPX cannot set the local port" - ) - - async def connect_tcp( - self, host, port, timeout, local_address, socket_options=None - ): # pylint: disable=signature-differs - addresses = [] - _, expiration = _compute_times(timeout) - if dns.inet.is_address(host): - addresses.append(host) - elif self._bootstrap_address is not None: - addresses.append(self._bootstrap_address) - else: - timeout = _remaining(expiration) - family = self._family - if local_address: - family = dns.inet.af_for_address(local_address) - answers = await self._resolver.resolve_name( - host, family=family, lifetime=timeout - ) - addresses = answers.addresses() - for address in addresses: - try: - attempt_expiration = _expiration_for_this_attempt(2.0, expiration) - timeout = _remaining(attempt_expiration) - with anyio.fail_after(timeout): - stream = await anyio.connect_tcp( - remote_host=address, - remote_port=port, - local_host=local_address, - ) - return _CoreAnyIOStream(stream) - except Exception: - pass - raise httpcore.ConnectError - - async def connect_unix_socket( - self, path, timeout, socket_options=None - ): # pylint: disable=signature-differs - raise NotImplementedError - - async def sleep(self, seconds): # pylint: disable=signature-differs - await anyio.sleep(seconds) - - class _HTTPTransport(httpx.AsyncHTTPTransport): - def __init__( - self, - *args, - local_port=0, - bootstrap_address=None, - resolver=None, - family=socket.AF_UNSPEC, - **kwargs, - ): - if resolver is None: - # pylint: disable=import-outside-toplevel,redefined-outer-name - import dns.asyncresolver - - resolver = dns.asyncresolver.Resolver() - super().__init__(*args, **kwargs) - self._pool._network_backend = _NetworkBackend( - resolver, local_port, bootstrap_address, family - ) - -except ImportError: - _HTTPTransport = dns._asyncbackend.NullTransport # type: ignore - - -class Backend(dns._asyncbackend.Backend): - def name(self): - return "asyncio" - - async def make_socket( - self, - af, - socktype, - proto=0, - source=None, - destination=None, - timeout=None, - ssl_context=None, - server_hostname=None, - ): - if destination is None and socktype == socket.SOCK_DGRAM and _is_win32: - raise NotImplementedError( - "destinationless datagram sockets " - "are not supported by asyncio " - "on Windows" - ) - loop = _get_running_loop() - if socktype == socket.SOCK_DGRAM: - transport, protocol = await loop.create_datagram_endpoint( - _DatagramProtocol, - source, - family=af, - proto=proto, - remote_addr=destination, - ) - return DatagramSocket(af, transport, protocol) - elif socktype == socket.SOCK_STREAM: - if destination is None: - # This shouldn't happen, but we check to make code analysis software - # happier. - raise ValueError("destination required for stream sockets") - (r, w) = await _maybe_wait_for( - asyncio.open_connection( - destination[0], - destination[1], - ssl=ssl_context, - family=af, - proto=proto, - local_addr=source, - server_hostname=server_hostname, - ), - timeout, - ) - return StreamSocket(af, r, w) - raise NotImplementedError( - "unsupported socket " + f"type {socktype}" - ) # pragma: no cover - - async def sleep(self, interval): - await asyncio.sleep(interval) - - def datagram_connection_required(self): - return _is_win32 - - def get_transport_class(self): - return _HTTPTransport - - async def wait_for(self, awaitable, timeout): - return await _maybe_wait_for(awaitable, timeout) diff --git a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/fontTools/ttLib/tables/C_F_F__2.py b/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/fontTools/ttLib/tables/C_F_F__2.py deleted file mode 100644 index edbb0b92f77e3198b55920879271f481082131ea..0000000000000000000000000000000000000000 --- a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/fontTools/ttLib/tables/C_F_F__2.py +++ /dev/null @@ -1,13 +0,0 @@ -from io import BytesIO -from fontTools.ttLib.tables.C_F_F_ import table_C_F_F_ - - -class table_C_F_F__2(table_C_F_F_): - def decompile(self, data, otFont): - self.cff.decompile(BytesIO(data), otFont, isCFF2=True) - assert len(self.cff) == 1, "can't deal with multi-font CFF tables." - - def compile(self, otFont): - f = BytesIO() - self.cff.compile(f, otFont, isCFF2=True) - return f.getvalue() diff --git a/spaces/jone/GFPGAN/gfpgan/archs/gfpganv1_clean_arch.py b/spaces/jone/GFPGAN/gfpgan/archs/gfpganv1_clean_arch.py deleted file mode 100644 index eb2e15d288bf0ad641034ed58d5dab37b0baabb3..0000000000000000000000000000000000000000 --- a/spaces/jone/GFPGAN/gfpgan/archs/gfpganv1_clean_arch.py +++ /dev/null @@ -1,324 +0,0 @@ -import math -import random -import torch -from basicsr.utils.registry import ARCH_REGISTRY -from torch import nn -from torch.nn import functional as F - -from .stylegan2_clean_arch import StyleGAN2GeneratorClean - - -class StyleGAN2GeneratorCSFT(StyleGAN2GeneratorClean): - """StyleGAN2 Generator with SFT modulation (Spatial Feature Transform). - - It is the clean version without custom compiled CUDA extensions used in StyleGAN2. - - Args: - out_size (int): The spatial size of outputs. - num_style_feat (int): Channel number of style features. Default: 512. - num_mlp (int): Layer number of MLP style layers. Default: 8. - channel_multiplier (int): Channel multiplier for large networks of StyleGAN2. Default: 2. - narrow (float): The narrow ratio for channels. Default: 1. - sft_half (bool): Whether to apply SFT on half of the input channels. Default: False. - """ - - def __init__(self, out_size, num_style_feat=512, num_mlp=8, channel_multiplier=2, narrow=1, sft_half=False): - super(StyleGAN2GeneratorCSFT, self).__init__( - out_size, - num_style_feat=num_style_feat, - num_mlp=num_mlp, - channel_multiplier=channel_multiplier, - narrow=narrow) - self.sft_half = sft_half - - def forward(self, - styles, - conditions, - input_is_latent=False, - noise=None, - randomize_noise=True, - truncation=1, - truncation_latent=None, - inject_index=None, - return_latents=False): - """Forward function for StyleGAN2GeneratorCSFT. - - Args: - styles (list[Tensor]): Sample codes of styles. - conditions (list[Tensor]): SFT conditions to generators. - input_is_latent (bool): Whether input is latent style. Default: False. - noise (Tensor | None): Input noise or None. Default: None. - randomize_noise (bool): Randomize noise, used when 'noise' is False. Default: True. - truncation (float): The truncation ratio. Default: 1. - truncation_latent (Tensor | None): The truncation latent tensor. Default: None. - inject_index (int | None): The injection index for mixing noise. Default: None. - return_latents (bool): Whether to return style latents. Default: False. - """ - # style codes -> latents with Style MLP layer - if not input_is_latent: - styles = [self.style_mlp(s) for s in styles] - # noises - if noise is None: - if randomize_noise: - noise = [None] * self.num_layers # for each style conv layer - else: # use the stored noise - noise = [getattr(self.noises, f'noise{i}') for i in range(self.num_layers)] - # style truncation - if truncation < 1: - style_truncation = [] - for style in styles: - style_truncation.append(truncation_latent + truncation * (style - truncation_latent)) - styles = style_truncation - # get style latents with injection - if len(styles) == 1: - inject_index = self.num_latent - - if styles[0].ndim < 3: - # repeat latent code for all the layers - latent = styles[0].unsqueeze(1).repeat(1, inject_index, 1) - else: # used for encoder with different latent code for each layer - latent = styles[0] - elif len(styles) == 2: # mixing noises - if inject_index is None: - inject_index = random.randint(1, self.num_latent - 1) - latent1 = styles[0].unsqueeze(1).repeat(1, inject_index, 1) - latent2 = styles[1].unsqueeze(1).repeat(1, self.num_latent - inject_index, 1) - latent = torch.cat([latent1, latent2], 1) - - # main generation - out = self.constant_input(latent.shape[0]) - out = self.style_conv1(out, latent[:, 0], noise=noise[0]) - skip = self.to_rgb1(out, latent[:, 1]) - - i = 1 - for conv1, conv2, noise1, noise2, to_rgb in zip(self.style_convs[::2], self.style_convs[1::2], noise[1::2], - noise[2::2], self.to_rgbs): - out = conv1(out, latent[:, i], noise=noise1) - - # the conditions may have fewer levels - if i < len(conditions): - # SFT part to combine the conditions - if self.sft_half: # only apply SFT to half of the channels - out_same, out_sft = torch.split(out, int(out.size(1) // 2), dim=1) - out_sft = out_sft * conditions[i - 1] + conditions[i] - out = torch.cat([out_same, out_sft], dim=1) - else: # apply SFT to all the channels - out = out * conditions[i - 1] + conditions[i] - - out = conv2(out, latent[:, i + 1], noise=noise2) - skip = to_rgb(out, latent[:, i + 2], skip) # feature back to the rgb space - i += 2 - - image = skip - - if return_latents: - return image, latent - else: - return image, None - - -class ResBlock(nn.Module): - """Residual block with bilinear upsampling/downsampling. - - Args: - in_channels (int): Channel number of the input. - out_channels (int): Channel number of the output. - mode (str): Upsampling/downsampling mode. Options: down | up. Default: down. - """ - - def __init__(self, in_channels, out_channels, mode='down'): - super(ResBlock, self).__init__() - - self.conv1 = nn.Conv2d(in_channels, in_channels, 3, 1, 1) - self.conv2 = nn.Conv2d(in_channels, out_channels, 3, 1, 1) - self.skip = nn.Conv2d(in_channels, out_channels, 1, bias=False) - if mode == 'down': - self.scale_factor = 0.5 - elif mode == 'up': - self.scale_factor = 2 - - def forward(self, x): - out = F.leaky_relu_(self.conv1(x), negative_slope=0.2) - # upsample/downsample - out = F.interpolate(out, scale_factor=self.scale_factor, mode='bilinear', align_corners=False) - out = F.leaky_relu_(self.conv2(out), negative_slope=0.2) - # skip - x = F.interpolate(x, scale_factor=self.scale_factor, mode='bilinear', align_corners=False) - skip = self.skip(x) - out = out + skip - return out - - -@ARCH_REGISTRY.register() -class GFPGANv1Clean(nn.Module): - """The GFPGAN architecture: Unet + StyleGAN2 decoder with SFT. - - It is the clean version without custom compiled CUDA extensions used in StyleGAN2. - - Ref: GFP-GAN: Towards Real-World Blind Face Restoration with Generative Facial Prior. - - Args: - out_size (int): The spatial size of outputs. - num_style_feat (int): Channel number of style features. Default: 512. - channel_multiplier (int): Channel multiplier for large networks of StyleGAN2. Default: 2. - decoder_load_path (str): The path to the pre-trained decoder model (usually, the StyleGAN2). Default: None. - fix_decoder (bool): Whether to fix the decoder. Default: True. - - num_mlp (int): Layer number of MLP style layers. Default: 8. - input_is_latent (bool): Whether input is latent style. Default: False. - different_w (bool): Whether to use different latent w for different layers. Default: False. - narrow (float): The narrow ratio for channels. Default: 1. - sft_half (bool): Whether to apply SFT on half of the input channels. Default: False. - """ - - def __init__( - self, - out_size, - num_style_feat=512, - channel_multiplier=1, - decoder_load_path=None, - fix_decoder=True, - # for stylegan decoder - num_mlp=8, - input_is_latent=False, - different_w=False, - narrow=1, - sft_half=False): - - super(GFPGANv1Clean, self).__init__() - self.input_is_latent = input_is_latent - self.different_w = different_w - self.num_style_feat = num_style_feat - - unet_narrow = narrow * 0.5 # by default, use a half of input channels - channels = { - '4': int(512 * unet_narrow), - '8': int(512 * unet_narrow), - '16': int(512 * unet_narrow), - '32': int(512 * unet_narrow), - '64': int(256 * channel_multiplier * unet_narrow), - '128': int(128 * channel_multiplier * unet_narrow), - '256': int(64 * channel_multiplier * unet_narrow), - '512': int(32 * channel_multiplier * unet_narrow), - '1024': int(16 * channel_multiplier * unet_narrow) - } - - self.log_size = int(math.log(out_size, 2)) - first_out_size = 2**(int(math.log(out_size, 2))) - - self.conv_body_first = nn.Conv2d(3, channels[f'{first_out_size}'], 1) - - # downsample - in_channels = channels[f'{first_out_size}'] - self.conv_body_down = nn.ModuleList() - for i in range(self.log_size, 2, -1): - out_channels = channels[f'{2**(i - 1)}'] - self.conv_body_down.append(ResBlock(in_channels, out_channels, mode='down')) - in_channels = out_channels - - self.final_conv = nn.Conv2d(in_channels, channels['4'], 3, 1, 1) - - # upsample - in_channels = channels['4'] - self.conv_body_up = nn.ModuleList() - for i in range(3, self.log_size + 1): - out_channels = channels[f'{2**i}'] - self.conv_body_up.append(ResBlock(in_channels, out_channels, mode='up')) - in_channels = out_channels - - # to RGB - self.toRGB = nn.ModuleList() - for i in range(3, self.log_size + 1): - self.toRGB.append(nn.Conv2d(channels[f'{2**i}'], 3, 1)) - - if different_w: - linear_out_channel = (int(math.log(out_size, 2)) * 2 - 2) * num_style_feat - else: - linear_out_channel = num_style_feat - - self.final_linear = nn.Linear(channels['4'] * 4 * 4, linear_out_channel) - - # the decoder: stylegan2 generator with SFT modulations - self.stylegan_decoder = StyleGAN2GeneratorCSFT( - out_size=out_size, - num_style_feat=num_style_feat, - num_mlp=num_mlp, - channel_multiplier=channel_multiplier, - narrow=narrow, - sft_half=sft_half) - - # load pre-trained stylegan2 model if necessary - if decoder_load_path: - self.stylegan_decoder.load_state_dict( - torch.load(decoder_load_path, map_location=lambda storage, loc: storage)['params_ema']) - # fix decoder without updating params - if fix_decoder: - for _, param in self.stylegan_decoder.named_parameters(): - param.requires_grad = False - - # for SFT modulations (scale and shift) - self.condition_scale = nn.ModuleList() - self.condition_shift = nn.ModuleList() - for i in range(3, self.log_size + 1): - out_channels = channels[f'{2**i}'] - if sft_half: - sft_out_channels = out_channels - else: - sft_out_channels = out_channels * 2 - self.condition_scale.append( - nn.Sequential( - nn.Conv2d(out_channels, out_channels, 3, 1, 1), nn.LeakyReLU(0.2, True), - nn.Conv2d(out_channels, sft_out_channels, 3, 1, 1))) - self.condition_shift.append( - nn.Sequential( - nn.Conv2d(out_channels, out_channels, 3, 1, 1), nn.LeakyReLU(0.2, True), - nn.Conv2d(out_channels, sft_out_channels, 3, 1, 1))) - - def forward(self, x, return_latents=False, return_rgb=True, randomize_noise=True): - """Forward function for GFPGANv1Clean. - - Args: - x (Tensor): Input images. - return_latents (bool): Whether to return style latents. Default: False. - return_rgb (bool): Whether return intermediate rgb images. Default: True. - randomize_noise (bool): Randomize noise, used when 'noise' is False. Default: True. - """ - conditions = [] - unet_skips = [] - out_rgbs = [] - - # encoder - feat = F.leaky_relu_(self.conv_body_first(x), negative_slope=0.2) - for i in range(self.log_size - 2): - feat = self.conv_body_down[i](feat) - unet_skips.insert(0, feat) - feat = F.leaky_relu_(self.final_conv(feat), negative_slope=0.2) - - # style code - style_code = self.final_linear(feat.view(feat.size(0), -1)) - if self.different_w: - style_code = style_code.view(style_code.size(0), -1, self.num_style_feat) - - # decode - for i in range(self.log_size - 2): - # add unet skip - feat = feat + unet_skips[i] - # ResUpLayer - feat = self.conv_body_up[i](feat) - # generate scale and shift for SFT layers - scale = self.condition_scale[i](feat) - conditions.append(scale.clone()) - shift = self.condition_shift[i](feat) - conditions.append(shift.clone()) - # generate rgb images - if return_rgb: - out_rgbs.append(self.toRGB[i](feat)) - - # decoder - image, _ = self.stylegan_decoder([style_code], - conditions, - return_latents=return_latents, - input_is_latent=self.input_is_latent, - randomize_noise=randomize_noise) - - return image, out_rgbs diff --git a/spaces/jpdiazpardo/jpdiazpardo-whisper-tiny-metal/README.md b/spaces/jpdiazpardo/jpdiazpardo-whisper-tiny-metal/README.md deleted file mode 100644 index 45a226811817ed2d5cc9801cf789367e0c67a098..0000000000000000000000000000000000000000 --- a/spaces/jpdiazpardo/jpdiazpardo-whisper-tiny-metal/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Jpdiazpardo Whisper Tiny Metal -emoji: 🤘🤘🤘 -colorFrom: gray -colorTo: red -sdk: gradio -sdk_version: 3.36.1 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/jsjuan/PlateNumberRecognition/transfer.py b/spaces/jsjuan/PlateNumberRecognition/transfer.py deleted file mode 100644 index 57435c36d5b0cfbcf26766a1b60b04f01418df7e..0000000000000000000000000000000000000000 --- a/spaces/jsjuan/PlateNumberRecognition/transfer.py +++ /dev/null @@ -1,25 +0,0 @@ -#import cv2 -import numpy as np -# import matplotlib.pyplot as plt -#from local_utils import detect_lp -from os.path import splitext,basename -from keras.models import model_from_json -# import glob - - -def load_model(path): - try: - path = splitext(path)[0] - with open('%s.json' % path, 'r') as json_file: - model_json = json_file.read() - model = model_from_json(model_json, custom_objects={}) - model.load_weights('%s.h5' % path) - #print("Loading model successfully...") - return model - except Exception as e: - print(e) - - - -# wpod_net_path = "wpod-net.json" -# wpod_net = load_model(wpod_net_path) diff --git a/spaces/juancopi81/youtube-music-transcribe/t5x/utils_test.py b/spaces/juancopi81/youtube-music-transcribe/t5x/utils_test.py deleted file mode 100644 index 6a33b819474665e723e4807f82275f19632fc603..0000000000000000000000000000000000000000 --- a/spaces/juancopi81/youtube-music-transcribe/t5x/utils_test.py +++ /dev/null @@ -1,604 +0,0 @@ -# Copyright 2022 The T5X Authors. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -"""Tests for t5x.utils.""" - -import dataclasses -import os -import re -from typing import Optional - -from absl import flags -from absl.testing import absltest -from absl.testing import parameterized -import flax.core -from flax.linen import partitioning as flax_partitioning -import jax -import numpy as np -import seqio -from t5x import checkpoints -from t5x import partitioning -from t5x import test_utils -from t5x import train_state as train_state_lib -from t5x import utils -import tensorflow as tf - -mock = absltest.mock -Evaluator = seqio.Evaluator -PartitionSpec = partitioning.PartitionSpec -AxisMetadata = flax_partitioning.AxisMetadata - -# Parse absl flags test_srcdir and test_tmpdir. -jax.config.parse_flags_with_absl() - -FLAGS = flags.FLAGS - - -def get_mock_train_state(params, param_states=None, step=0): - """Returns a mock TrainState.""" - step = np.array(step) if step is not None else None - state = mock.Mock(param_states=param_states, step=step) - state_dict = dict( - target=params, state=dict(param_states=param_states, step=step)) - return mock.Mock( - params=params, - param_states=param_states, - step=step, - state_dict=lambda: state_dict, - optimizer=mock.Mock( - target=params, state=state, state_dict=lambda: state_dict), - ) - - -class UtilsTest(parameterized.TestCase): - - def round_vocab_size_to_multiple(self): - self.assertEqual(utils.round_vocab_size_to_multiple(1), 128) - self.assertEqual(utils.round_vocab_size_to_multiple(128), 128) - self.assertEqual(utils.round_vocab_size_to_multiple(129), 256) - self.assertEqual(utils.round_vocab_size_to_multiple(129), 256) - self.assertEqual( - utils.round_vocab_size_to_multiple(25600, divisor=384), 256128) - - def test_get_zeros_batch_like_spec(self): - test_utils.assert_same( - utils.get_zeros_batch_like_spec({ - "i": jax.ShapeDtypeStruct((2, 5), dtype=np.int32), - "j": jax.ShapeDtypeStruct((1,), dtype=np.float32), - }), { - "i": np.zeros((2, 5), dtype=np.int32), - "j": np.zeros((1,), dtype=np.float32) - }) - - def test_get_zeros_batch_like_dataset(self): - ds = tf.data.Dataset.from_tensors({ - "i": np.arange(10, dtype=np.int32).reshape((2, 5)), - "j": np.ones((1,), dtype=np.float32) - }) - - test_utils.assert_same( - utils.get_zeros_batch_like_dataset(ds), { - "i": np.zeros((2, 5), dtype=np.int32), - "j": np.zeros((1,), dtype=np.float32) - }) - - test_utils.assert_same( - utils.get_zeros_batch_like_dataset(ds, batch_size=4), { - "i": np.zeros((4, 5), dtype=np.int32), - "j": np.zeros((4,), dtype=np.float32) - }) - - @parameterized.named_parameters( - dict(testcase_name="write_to_file", write_to_log_file=True), - dict(testcase_name="do_not_write_to_file", write_to_log_file=False), - ) - def test_log_model_info(self, write_to_log_file): - log_file = self.create_tempfile() if write_to_log_file else None - - mock_train_state = get_mock_train_state( - params={ - "a": { - "aa": jax.ShapeDtypeStruct(shape=(2, 3), dtype=np.int32) - }, - "c": jax.ShapeDtypeStruct(shape=(7, 8), dtype=np.int32) - }, - param_states={ - "a": { - "aa": { - "v_row": jax.ShapeDtypeStruct(shape=(2,), dtype=np.int32), - "v_col": jax.ShapeDtypeStruct(shape=(3,), dtype=np.int32) - } - }, - "c": { - "v_row": jax.ShapeDtypeStruct(shape=(2, 4), dtype=np.int32), - "v_col": None - } - }) - - mock_logical_axes = get_mock_train_state( - params={ - "a": { - "aa": partitioning.AxisNames("a1", None) - }, - "c": partitioning.AxisNames(None, "a1") - }, - param_states={ - "a": { - "aa": { - "v_row": partitioning.AxisNames(None,), - "v_col": partitioning.AxisNames(None,) - } - }, - "c": { - "v_row": partitioning.AxisNames("a1",), - "v_col": partitioning.AxisNames("a2",) - } - }, - step=None) - - mock_mesh_axes = get_mock_train_state( - params={ - "a": { - "aa": PartitionSpec("b1", None) - }, - "c": PartitionSpec(None, "b1") - }, - param_states={ - "a": { - "aa": { - "v_row": partitioning.AxisNames(None,), - "v_col": partitioning.AxisNames(None,) - } - }, - "c": { - "v_row": partitioning.AxisNames("b1",), - "v_col": partitioning.AxisNames("b2",) - } - }, - step=None) - - partitioner = mock.Mock( - get_logical_axes=lambda _: mock_logical_axes, - get_mesh_axes=lambda _: mock_mesh_axes) - - with self.assertLogs(level="INFO") as logs: - utils.log_model_info(log_file and log_file.full_path, mock_train_state, - partitioner) - - relevant_logs = [ - re.sub(r"\s+", " ", output) - for record, output in zip(logs.records, logs.output) - if "t5x/utils.py" in record.pathname - ] - self.assertLen(relevant_logs, 9) - self.assertIn( - "Variable a/aa size 6 shape (a1=2, None=3) partition spec ('b1', None)", - relevant_logs[0]) - self.assertIn( - "Variable c size 56 shape (None=7, a1=8) partition spec (None, 'b1')", - relevant_logs[1]) - - if write_to_log_file: - self.assertEqual( - re.sub(r"\s+", " ", log_file.read_text()), - "Variable a/aa size 6 shape (a1=2, None=3) partition spec ('b1', None) " - "Variable c size 56 shape (None=7, a1=8) partition spec (None, 'b1') " - "Total number of parameters: 62 " - "Variable param_states/a/aa/v_col size 3 shape (None=3) partition spec (None,) " - "Variable param_states/a/aa/v_row size 2 shape (None=2) partition spec (None,) " - "Variable param_states/c/v_col None " - "Variable param_states/c/v_row size 8 shape (2, 4) partition spec ('b1',) " - "Variable step size 1 shape () partition spec None ") - - - def test_get_training_eval_datasets_task(self): - task = mock.create_autospec(seqio.Task, instance=True) - task.name = "mock_task" - task.splits = set(["train", "test"]) - seqio.TaskRegistry.add_provider("mock_task", task) - - mock_get_dataset_fn = mock.Mock( - return_value=tf.data.Dataset.range(10).batch(1)) - mock_fc_cls = mock.Mock() - - cfg = utils.DatasetConfig( - mixture_or_task_name="mock_task", - task_feature_lengths={}, - split="test", - batch_size=4, - shuffle=False, - seed=None) - - # Single shard. - ds = utils.get_training_eval_datasets( - cfg, - shard_id=0, - num_shards=1, - eval_steps=3, - feature_converter_cls=mock_fc_cls, - get_dataset_fn=mock_get_dataset_fn) - - mock_get_dataset_fn.assert_called_once_with( - dataclasses.replace(cfg, batch_size=1), - shard_id=0, - num_shards=1, - feature_converter_cls=mock_fc_cls, - num_epochs=12, - continue_from_last_checkpoint=False) - - self.assertSameElements(ds.keys(), ["mock_task"]) - jax.tree_map(np.testing.assert_equal, list(ds["mock_task"]), [ - np.array([0, 1, 2, 3]), - np.array([4, 5, 6, 7]), - np.array([8, 9, 0, 1]), - ]) - - # 2 shards, shard 0 - mock_get_dataset_fn.reset_mock() - ds = utils.get_training_eval_datasets( - cfg, - shard_id=0, - num_shards=2, - eval_steps=3, - feature_converter_cls=mock_fc_cls, - get_dataset_fn=mock_get_dataset_fn) - - # Call the underlying function loading all shards since the fn shards at the - # example level. - mock_get_dataset_fn.assert_called_once_with( - dataclasses.replace(cfg, batch_size=1), - shard_id=0, - num_shards=1, - feature_converter_cls=mock_fc_cls, - num_epochs=12, - continue_from_last_checkpoint=False) - - self.assertSameElements(ds.keys(), ["mock_task"]) - jax.tree_map(np.testing.assert_equal, list(ds["mock_task"]), [ - np.array([0, 2]), - np.array([4, 6]), - np.array([8, 0]), - ]) - - # 2 shards, shard 1 - mock_get_dataset_fn.reset_mock() - ds = utils.get_training_eval_datasets( - cfg, - shard_id=1, - num_shards=2, - eval_steps=3, - feature_converter_cls=mock_fc_cls, - get_dataset_fn=mock_get_dataset_fn) - - # Call the underlying function loading all shards since the fn shards at the - # example level. - mock_get_dataset_fn.assert_called_once_with( - dataclasses.replace(cfg, batch_size=1), - shard_id=0, - num_shards=1, - feature_converter_cls=mock_fc_cls, - num_epochs=12, - continue_from_last_checkpoint=False) - - self.assertSameElements(ds.keys(), ["mock_task"]) - jax.tree_map(np.testing.assert_equal, list(ds["mock_task"]), [ - np.array([1, 3]), - np.array([5, 7]), - np.array([9, 1]), - ]) - - # 3 shards - with self.assertRaisesWithLiteralMatch( - ValueError, - "Batch size (4) must be divisible by number of shards (3)."): - _ = utils.get_training_eval_datasets( - cfg, - shard_id=0, - num_shards=3, - eval_steps=3, - feature_converter_cls=mock_fc_cls, - get_dataset_fn=mock_get_dataset_fn) - - def test_get_training_eval_datasets_mixture(self): - # Register a mock SeqIO mixture. - task1 = mock.create_autospec(seqio.Task, instance=True) - task1.name = "mock_task1" - task1.splits = set(["train", "test"]) - task2 = mock.create_autospec(seqio.Task, instance=True) - task2.name = "mock_task2" - task2.splits = set(["train", "test"]) - seqio.TaskRegistry.add_provider("mock_task1", task1) - seqio.TaskRegistry.add_provider("mock_task2", task2) - mixture = seqio.Mixture( - "mock_mix", ["mock_task1", "mock_task2"], default_rate=1.0) - seqio.MixtureRegistry.add_provider("mock_mix", mixture) - - mock_get_dataset = mock.Mock( - return_value=tf.data.Dataset.range(10).batch(1)) - - # Verify calls to utils.get_dataset - cfg = utils.DatasetConfig( - mixture_or_task_name="mock_mix", - task_feature_lengths={}, - split="test", - batch_size=4, - shuffle=False, - seed=23) - - res = utils.get_training_eval_datasets( - cfg, - shard_id=0, - num_shards=2, - eval_steps=3, - feature_converter_cls=seqio.FeatureConverter, - get_dataset_fn=mock_get_dataset) - - expected_calls = [ - mock.call( - dataclasses.replace( - cfg, mixture_or_task_name="mock_task1", batch_size=1), - shard_id=0, - num_shards=1, - feature_converter_cls=seqio.FeatureConverter, - continue_from_last_checkpoint=False, - num_epochs=12), - mock.call( - dataclasses.replace( - cfg, mixture_or_task_name="mock_task2", batch_size=1), - shard_id=0, - num_shards=1, - feature_converter_cls=seqio.FeatureConverter, - continue_from_last_checkpoint=False, - num_epochs=12), - mock.call( - dataclasses.replace( - cfg, mixture_or_task_name="mock_mix", batch_size=1), - shard_id=0, - num_shards=1, - feature_converter_cls=seqio.FeatureConverter, - continue_from_last_checkpoint=False, - num_epochs=12) - ] - mock_get_dataset.assert_has_calls(expected_calls) - - self.assertSameElements(res.keys(), - ["mock_task1", "mock_task2", "mock_mix"]) - for ds in res.values(): - jax.tree_map(np.testing.assert_equal, list(ds), [ - np.array([0, 2]), - np.array([4, 6]), - np.array([8, 0]), - ]) - - def test_override_params_axes_names(self): - model_variables = flax.core.freeze({ - "params": { - "logits_dense": np.zeros((2, 4)), - "mlp": { - "wo": { - "kernel": np.zeros((4, 6)), - "bias": np.zeros(6), - } - } - }, - "params_axes": { - "logits_dense_axes": AxisMetadata(names=("vocab", "embed")), - "mlp": { - "wo": { - "kernel_axes": AxisMetadata(names=("embed", "mlp")) - } - } - } - }) - - with self.assertRaisesWithLiteralMatch( - ValueError, - "Model variables do not contain a 'params_axes' collection to apply an " - "override to."): - utils.override_params_axes_names({"params": model_variables["params"]}, - [("mlp/wo/kernel", ("embed",))]) - - with self.assertRaisesWithLiteralMatch( - ValueError, - "Provided axis name override for mlp/wo/kernel does not match param " - "rank (2): ('embed',)"): - utils.override_params_axes_names(model_variables, - [("mlp/wo/kernel", ("embed",))]) - - overridden_variables = utils.override_params_axes_names( - model_variables, - [ - ("wo/kernel", ("batch",)), # unused since not a full match - (".*/wo/kernel", ("batch", "embed")), # this one is used - ("mlp/wo/kernel", ("embed",)), # unused since already matched - ("mlp/wo/bias", ("embed",)), # used - ]) - - jax.tree_multimap( - np.testing.assert_equal, overridden_variables, - flax.core.freeze({ - "params": { - "logits_dense": np.zeros((2, 4)), - "mlp": { - "wo": { - "kernel": np.zeros((4, 6)), - "bias": np.zeros(6), - } - } - }, - "params_axes": { - "logits_dense_axes": AxisMetadata(names=("vocab", "embed")), - "mlp": { - "wo": { - "kernel_axes": AxisMetadata(names=("batch", "embed")), - "bias_axes": AxisMetadata(names=("embed",)), - } - } - } - })) - - -@dataclasses.dataclass -class MockTrainState: - path: Optional[str] = None - from_scratch: Optional[bool] = None - - -class MockCheckpointer(checkpoints.Checkpointer): - - def __init__(self, *args, **kwargs): - pass - - # restore should return TrainState, but we force it to return Mock with path - # for simplicity. - def restore(self, path, *args, **kwargs): - return MockTrainState(path=path, from_scratch=False) - - -class TrainStateInitializerTest(parameterized.TestCase): - - def setUp(self): - super().setUp() - - def _partition(train_state, in_axis_resources, out_axis_resources): - del train_state, in_axis_resources, out_axis_resources - partitioned_fn = lambda _: MockTrainState(from_scratch=True) - return partitioned_fn - - partitioner = mock.Mock(get_mesh_axes=lambda _: None, partition=_partition) - mock_inference_state_create = self.enter_context( - mock.patch.object(train_state_lib.InferenceState, "create")) - mock_inference_state_create.return_value = None - - shapes = { - "ones": (1, 1), - "twos": (2, 2), - "threes": (3, 3), - } - types = { - "ones": int, - "twos": float, - "threes": int, - } - - def _init_fn(rng, input_shapes, input_types): - del rng - return { - "ones": - np.ones(input_shapes["ones"], dtype=input_types["ones"]), - "twos": - np.ones(input_shapes["twos"], dtype=input_types["twos"]) * 2, - "threes": - np.ones(input_shapes["threes"], dtype=input_types["threes"]) * 3 - } - - init_fn = mock.Mock() - init_fn.__call__ = _init_fn - init_fn.__self__ = None - - self.train_state_init = utils.TrainStateInitializer(None, init_fn, shapes, - partitioner, types) - - self.ckptdir = self.create_tempdir(name="primary_checkpoints") - steps = (2, 3) - self.paths = [] - for s in steps: - step_dir = self.ckptdir.mkdir(f"checkpoint_{s}") - step_dir.create_file("checkpoint") - self.paths += [step_dir.full_path] - - def test_from_checkpoints_specific(self): - # multiple paths - ckpt_cfg = utils.RestoreCheckpointConfig( - path=self.paths, mode="specific", checkpointer_cls=MockCheckpointer) - restored = self.train_state_init.from_checkpoints([ckpt_cfg]) - self.assertSequenceEqual(self.paths, [state.path for state in restored]) - with self.assertRaisesRegex(ValueError, r"^Expected at most 1 checkpoint"): - self.train_state_init.from_checkpoint([ckpt_cfg]) - - def test_from_checkpoints_latest(self): - # only restore single latest - ckpt_cfg = utils.RestoreCheckpointConfig( - path=self.ckptdir.full_path, - mode="latest", - checkpointer_cls=MockCheckpointer) - restored = list(self.train_state_init.from_checkpoints([ckpt_cfg])) - assert len(restored) == 1 - self.assertEqual(self.paths[-1], restored[0].path) - restored = self.train_state_init.from_checkpoint([ckpt_cfg]) - self.assertEqual(self.paths[-1], restored.path) - - def test_from_checkpoints_multiple_configs(self): - # uses first checkpoint with files present. - ckpt_cfg = utils.RestoreCheckpointConfig( - path=self.ckptdir.full_path, - mode="latest", - checkpointer_cls=MockCheckpointer) - secondary_ckptdir = self.create_tempdir(name="secondary_checkpoints") - for s in (4, 5): - step_dir = secondary_ckptdir.mkdir(f"checkpoint_{s}") - step_dir.create_file("checkpoint") - secondary_ckpt_cfg = utils.RestoreCheckpointConfig( - path=secondary_ckptdir.full_path, - mode="latest", - checkpointer_cls=MockCheckpointer) - restored = self.train_state_init.from_checkpoint( - [ckpt_cfg, secondary_ckpt_cfg]) - self.assertEqual(self.paths[-1], restored.path) - - def test_from_checkpoints_multiple_configs_one_empty(self): - # skips empty_checkpoints directory with no checkpoints present. - ckpt_cfg = utils.RestoreCheckpointConfig( - path=self.ckptdir.full_path, - mode="latest", - checkpointer_cls=MockCheckpointer) - empty_ckptdir = self.create_tempdir(name="empty_checkpoints") - empty_ckpt_cfg = utils.RestoreCheckpointConfig( - path=empty_ckptdir.full_path, - mode="latest", - checkpointer_cls=MockCheckpointer) - restored = self.train_state_init.from_checkpoint([empty_ckpt_cfg, ckpt_cfg]) - self.assertEqual(self.paths[-1], restored.path) - - def test_from_scratch(self): - self.assertTrue( - self.train_state_init.from_scratch(jax.random.PRNGKey(13)).from_scratch) - - def test_from_checkpoint_or_scratch(self): - ckpt_cfg = utils.RestoreCheckpointConfig( - path=self.ckptdir.full_path, - mode="latest", - checkpointer_cls=MockCheckpointer) - empty_ckptdir = self.create_tempdir(name="empty_checkpoints") - empty_ckpt_cfg = utils.RestoreCheckpointConfig( - path=empty_ckptdir.full_path, - mode="latest", - checkpointer_cls=MockCheckpointer) - - init_rng = jax.random.PRNGKey(13) - - # ckpt_cfg has checkpoints, restore from there - restored = self.train_state_init.from_checkpoint_or_scratch( - [empty_ckpt_cfg, ckpt_cfg], init_rng=init_rng) - self.assertEqual(self.paths[-1], restored.path) - self.assertFalse(restored.from_scratch) - - # no checkpoints available, init from scratch - initialized = self.train_state_init.from_checkpoint_or_scratch( - [empty_ckpt_cfg], init_rng=init_rng) - self.assertTrue(initialized.from_scratch) - - -if __name__ == "__main__": - absltest.main() diff --git a/spaces/julien-c/sveltekit-demo/src/lib/types.d.ts b/spaces/julien-c/sveltekit-demo/src/lib/types.d.ts deleted file mode 100644 index 6edddd1d6fb6c3fbcd13bf1ba1f9c2200e52fb46..0000000000000000000000000000000000000000 --- a/spaces/julien-c/sveltekit-demo/src/lib/types.d.ts +++ /dev/null @@ -1,7 +0,0 @@ -/** - * Can be made globally available by placing this - * inside `global.d.ts` and removing `export` keyword - */ -export interface Locals { - userid: string; -} diff --git a/spaces/kangvcar/RealChar/realtime_ai_character/audio/text_to_speech/elevenlabs.py b/spaces/kangvcar/RealChar/realtime_ai_character/audio/text_to_speech/elevenlabs.py deleted file mode 100644 index c2433832586165ec23213edba1f1b6e543a9397c..0000000000000000000000000000000000000000 --- a/spaces/kangvcar/RealChar/realtime_ai_character/audio/text_to_speech/elevenlabs.py +++ /dev/null @@ -1,70 +0,0 @@ -import asyncio -import os -import types -import httpx - -from realtime_ai_character.logger import get_logger -from realtime_ai_character.utils import Singleton -from realtime_ai_character.audio.text_to_speech.base import TextToSpeech - -logger = get_logger(__name__) -DEBUG = False - -config = types.SimpleNamespace(**{ - 'default_voice': '21m00Tcm4TlvDq8ikWAM', - 'default_female_voice': 'EXAVITQu4vr4xnSDxMaL', - 'default_male_voice': 'ErXwobaYiN019PkySvjV', - 'chunk_size': 1024, - 'url': 'https://api.elevenlabs.io/v1/text-to-speech/{voice_id}/stream', - 'headers': { - 'Accept': 'audio/mpeg', - 'Content-Type': 'application/json', - 'xi-api-key': os.environ['ELEVEN_LABS_API_KEY'] - }, - 'data': { - 'model_id': 'eleven_monolingual_v1', - 'voice_settings': { - 'stability': 0.5, - 'similarity_boost': 0.75 - } - } -}) - - -class ElevenLabs(Singleton, TextToSpeech): - def __init__(self): - super().__init__() - logger.info("Initializing [ElevenLabs Text To Speech] voices...") - self.voice_ids = { - "Raiden Shogun And Ei": os.environ.get('RAIDEN_VOICE') or config.default_female_voice, - "Loki": os.environ.get('LOKI_VOICE') or config.default_male_voice, - "Reflection Pi": os.environ.get('PI_VOICE') or config.default_female_voice, - "Elon Musk": os.environ.get('ELON_VOICE') or config.default_male_voice, - "Bruce Wayne": os.environ.get('BRUCE_VOICE') or config.default_male_voice, - "Steve Jobs": os.environ.get('JOBS_VOICE') or config.default_male_voice, - "Sam Altman": os.environ.get('SAM_VOICE') or config.default_male_voice, - } - - def get_voice_id(self, name): - return self.voice_ids.get(name, config.default_voice) - - async def stream(self, text, websocket, tts_event: asyncio.Event, characater_name="", first_sentence=False) -> None: - if DEBUG: - return - headers = config.headers - data = { - "text": text, - **config.data, - } - voice_id = self.get_voice_id(characater_name) - url = config.url.format(voice_id=voice_id) - if first_sentence: - url = url + '?optimize_streaming_latency=4' - async with httpx.AsyncClient() as client: - response = await client.post(url, json=data, headers=headers) - async for chunk in response.aiter_bytes(): - await asyncio.sleep(0.1) - if tts_event.is_set(): - # stop streaming audio - break - await websocket.send_bytes(chunk) diff --git a/spaces/kazumak/webui/README.md b/spaces/kazumak/webui/README.md deleted file mode 100644 index 79ece156ada00f0f60c85b58a4faa7c9fae17915..0000000000000000000000000000000000000000 --- a/spaces/kazumak/webui/README.md +++ /dev/null @@ -1,20 +0,0 @@ ---- -title: Stable Diffusion Web UI -emoji: 🚧 -colorFrom: yellow -colorTo: yellow -sdk: gradio -sdk_version: 3.9 -app_file: app.py -pinned: false -duplicated_from: ConceptArtHouse/webui-gameasset ---- - -## Stable Diffusion Web UI -[https://github.com/AUTOMATIC1111/stable-diffusion-webui](https://github.com/AUTOMATIC1111/stable-diffusion-webui) - -## Documentation -[https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki) - -## Models License -https://huggingface.co/spaces/CompVis/stable-diffusion-license \ No newline at end of file diff --git a/spaces/kcagle/AutoGPT/ui/app.py b/spaces/kcagle/AutoGPT/ui/app.py deleted file mode 100644 index d7dbd31e901969d090292215935bdbc3d9d75e37..0000000000000000000000000000000000000000 --- a/spaces/kcagle/AutoGPT/ui/app.py +++ /dev/null @@ -1,145 +0,0 @@ -import gradio as gr -import utils -from api import AutoAPI, get_openai_api_key -import os, shutil -import json - -FILE_DIR = os.path.dirname(os.path.abspath(__file__)) -OUTPUT_DIR = os.path.join(os.path.dirname(FILE_DIR), "auto_gpt_workspace") -if not os.path.exists(OUTPUT_DIR): - os.mkdir(OUTPUT_DIR) - -CSS = """ -#chatbot {font-family: monospace;} -#files .generating {display: none;} -#files .min {min-height: 0px;} -""" - -with gr.Blocks(css=CSS) as app: - with gr.Column() as setup_pane: - gr.Markdown(f"""# Auto-GPT - 1. Duplicate this Space: Duplicate Space This will **NOT** work without duplication! - 2. Enter your OpenAI API Key below. - """) - with gr.Row(): - open_ai_key = gr.Textbox( - value=get_openai_api_key(), - label="OpenAI API Key", - type="password", - ) - gr.Markdown( - "3. Fill the values below, then click 'Start'. There are example values you can load at the bottom of this page." - ) - with gr.Row(): - ai_name = gr.Textbox(label="AI Name", placeholder="e.g. Entrepreneur-GPT") - ai_role = gr.Textbox( - label="AI Role", - placeholder="e.g. an AI designed to autonomously develop and run businesses with the sole goal of increasing your net worth.", - ) - top_5_goals = gr.Dataframe( - row_count=(5, "fixed"), - col_count=(1, "fixed"), - headers=["AI Goals - Enter up to 5"], - type="array" - ) - start_btn = gr.Button("Start", variant="primary") - with open(os.path.join(FILE_DIR, "examples.json"), "r") as f: - example_values = json.load(f) - gr.Examples( - example_values, - [ai_name, ai_role, top_5_goals], - ) - with gr.Column(visible=False) as main_pane: - with gr.Row(): - with gr.Column(scale=2): - chatbot = gr.Chatbot(elem_id="chatbot") - with gr.Row(): - yes_btn = gr.Button("Yes", variant="primary", interactive=False) - consecutive_yes = gr.Slider( - 1, 10, 1, step=1, label="Consecutive Yes", interactive=False - ) - custom_response = gr.Textbox( - label="Custom Response", - placeholder="Press 'Enter' to Submit.", - interactive=False, - ) - with gr.Column(scale=1): - gr.HTML( - lambda: f""" - Generated Files -
          {utils.format_directory(OUTPUT_DIR)}
          - """, every=3, elem_id="files" - ) - download_btn = gr.Button("Download All Files") - - chat_history = gr.State([[None, None]]) - api = gr.State(None) - - def start(open_ai_key, ai_name, ai_role, top_5_goals): - auto_api = AutoAPI(open_ai_key, ai_name, ai_role, top_5_goals) - return gr.Column.update(visible=False), gr.Column.update(visible=True), auto_api - - def bot_response(chat, api): - messages = [] - for message in api.get_chatbot_response(): - messages.append(message) - chat[-1][1] = "\n".join(messages) + "..." - yield chat - chat[-1][1] = "\n".join(messages) - yield chat - - def send_message(count, chat, api, message="Y"): - if message != "Y": - count = 1 - for i in range(count): - chat.append([message, None]) - yield chat, count - i - api.send_message(message) - for updated_chat in bot_response(chat, api): - yield updated_chat, count - i - - def activate_inputs(): - return { - yes_btn: gr.Button.update(interactive=True), - consecutive_yes: gr.Slider.update(interactive=True), - custom_response: gr.Textbox.update(interactive=True), - } - - def deactivate_inputs(): - return { - yes_btn: gr.Button.update(interactive=False), - consecutive_yes: gr.Slider.update(interactive=False), - custom_response: gr.Textbox.update(interactive=False), - } - - start_btn.click( - start, - [open_ai_key, ai_name, ai_role, top_5_goals], - [setup_pane, main_pane, api], - ).then(bot_response, [chat_history, api], chatbot).then( - activate_inputs, None, [yes_btn, consecutive_yes, custom_response] - ) - - yes_btn.click( - deactivate_inputs, None, [yes_btn, consecutive_yes, custom_response] - ).then( - send_message, [consecutive_yes, chat_history, api], [chatbot, consecutive_yes] - ).then( - activate_inputs, None, [yes_btn, consecutive_yes, custom_response] - ) - custom_response.submit( - deactivate_inputs, None, [yes_btn, consecutive_yes, custom_response] - ).then( - send_message, - [consecutive_yes, chat_history, api, custom_response], - [chatbot, consecutive_yes], - ).then( - activate_inputs, None, [yes_btn, consecutive_yes, custom_response] - ) - - def download_all_files(): - shutil.make_archive("outputs", "zip", OUTPUT_DIR) - - download_btn.click(download_all_files).then(None, _js=utils.DOWNLOAD_OUTPUTS_JS) - -app.queue(concurrency_count=20).launch(file_directories=[OUTPUT_DIR]) diff --git a/spaces/kevinwang676/Bark-Voice-Cloning/bark/api.py b/spaces/kevinwang676/Bark-Voice-Cloning/bark/api.py deleted file mode 100644 index 7a4319ceaa13798912637290f8e9e88c50d5420a..0000000000000000000000000000000000000000 --- a/spaces/kevinwang676/Bark-Voice-Cloning/bark/api.py +++ /dev/null @@ -1,158 +0,0 @@ -from typing import Dict, Optional, Union - -import numpy as np - -from .generation import codec_decode, generate_coarse, generate_fine, generate_text_semantic - - -def generate_with_settings(text_prompt, semantic_temp=0.6, eos_p=0.2, coarse_temp=0.7, fine_temp=0.5, voice_name=None, output_full=False): - - # generation with more control - x_semantic = generate_text_semantic( - text_prompt, - history_prompt=voice_name, - temp=semantic_temp, - min_eos_p = eos_p, - use_kv_caching=True - ) - - x_coarse_gen = generate_coarse( - x_semantic, - history_prompt=voice_name, - temp=coarse_temp, - use_kv_caching=True - ) - x_fine_gen = generate_fine( - x_coarse_gen, - history_prompt=voice_name, - temp=fine_temp, - ) - - if output_full: - full_generation = { - 'semantic_prompt': x_semantic, - 'coarse_prompt': x_coarse_gen, - 'fine_prompt': x_fine_gen - } - return full_generation, codec_decode(x_fine_gen) - return codec_decode(x_fine_gen) - - -def text_to_semantic( - text: str, - history_prompt: Optional[Union[Dict, str]] = None, - temp: float = 0.7, - silent: bool = False, -): - """Generate semantic array from text. - - Args: - text: text to be turned into audio - history_prompt: history choice for audio cloning - temp: generation temperature (1.0 more diverse, 0.0 more conservative) - silent: disable progress bar - - Returns: - numpy semantic array to be fed into `semantic_to_waveform` - """ - x_semantic = generate_text_semantic( - text, - history_prompt=history_prompt, - temp=temp, - silent=silent, - use_kv_caching=True - ) - return x_semantic - - -def semantic_to_waveform( - semantic_tokens: np.ndarray, - history_prompt: Optional[Union[Dict, str]] = None, - temp: float = 0.7, - silent: bool = False, - output_full: bool = False, -): - """Generate audio array from semantic input. - - Args: - semantic_tokens: semantic token output from `text_to_semantic` - history_prompt: history choice for audio cloning - temp: generation temperature (1.0 more diverse, 0.0 more conservative) - silent: disable progress bar - output_full: return full generation to be used as a history prompt - - Returns: - numpy audio array at sample frequency 24khz - """ - coarse_tokens = generate_coarse( - semantic_tokens, - history_prompt=history_prompt, - temp=temp, - silent=silent, - use_kv_caching=True - ) - fine_tokens = generate_fine( - coarse_tokens, - history_prompt=history_prompt, - temp=0.5, - ) - audio_arr = codec_decode(fine_tokens) - if output_full: - full_generation = { - "semantic_prompt": semantic_tokens, - "coarse_prompt": coarse_tokens, - "fine_prompt": fine_tokens, - } - return full_generation, audio_arr - return audio_arr - - -def save_as_prompt(filepath, full_generation): - assert(filepath.endswith(".npz")) - assert(isinstance(full_generation, dict)) - assert("semantic_prompt" in full_generation) - assert("coarse_prompt" in full_generation) - assert("fine_prompt" in full_generation) - np.savez(filepath, **full_generation) - - -def generate_audio( - text: str, - history_prompt: Optional[Union[Dict, str]] = None, - text_temp: float = 0.7, - waveform_temp: float = 0.7, - silent: bool = False, - output_full: bool = False, -): - """Generate audio array from input text. - - Args: - text: text to be turned into audio - history_prompt: history choice for audio cloning - text_temp: generation temperature (1.0 more diverse, 0.0 more conservative) - waveform_temp: generation temperature (1.0 more diverse, 0.0 more conservative) - silent: disable progress bar - output_full: return full generation to be used as a history prompt - - Returns: - numpy audio array at sample frequency 24khz - """ - semantic_tokens = text_to_semantic( - text, - history_prompt=history_prompt, - temp=text_temp, - silent=silent, - ) - out = semantic_to_waveform( - semantic_tokens, - history_prompt=history_prompt, - temp=waveform_temp, - silent=silent, - output_full=output_full, - ) - if output_full: - full_generation, audio_arr = out - return full_generation, audio_arr - else: - audio_arr = out - return audio_arr diff --git a/spaces/kevinwang676/ChatGLM2-SadTalker-VC/src/face3d/models/arcface_torch/torch2onnx.py b/spaces/kevinwang676/ChatGLM2-SadTalker-VC/src/face3d/models/arcface_torch/torch2onnx.py deleted file mode 100644 index fc26ab82e552331bc8d75b34e81000418f4d38ec..0000000000000000000000000000000000000000 --- a/spaces/kevinwang676/ChatGLM2-SadTalker-VC/src/face3d/models/arcface_torch/torch2onnx.py +++ /dev/null @@ -1,59 +0,0 @@ -import numpy as np -import onnx -import torch - - -def convert_onnx(net, path_module, output, opset=11, simplify=False): - assert isinstance(net, torch.nn.Module) - img = np.random.randint(0, 255, size=(112, 112, 3), dtype=np.int32) - img = img.astype(np.float) - img = (img / 255. - 0.5) / 0.5 # torch style norm - img = img.transpose((2, 0, 1)) - img = torch.from_numpy(img).unsqueeze(0).float() - - weight = torch.load(path_module) - net.load_state_dict(weight) - net.eval() - torch.onnx.export(net, img, output, keep_initializers_as_inputs=False, verbose=False, opset_version=opset) - model = onnx.load(output) - graph = model.graph - graph.input[0].type.tensor_type.shape.dim[0].dim_param = 'None' - if simplify: - from onnxsim import simplify - model, check = simplify(model) - assert check, "Simplified ONNX model could not be validated" - onnx.save(model, output) - - -if __name__ == '__main__': - import os - import argparse - from backbones import get_model - - parser = argparse.ArgumentParser(description='ArcFace PyTorch to onnx') - parser.add_argument('input', type=str, help='input backbone.pth file or path') - parser.add_argument('--output', type=str, default=None, help='output onnx path') - parser.add_argument('--network', type=str, default=None, help='backbone network') - parser.add_argument('--simplify', type=bool, default=False, help='onnx simplify') - args = parser.parse_args() - input_file = args.input - if os.path.isdir(input_file): - input_file = os.path.join(input_file, "backbone.pth") - assert os.path.exists(input_file) - model_name = os.path.basename(os.path.dirname(input_file)).lower() - params = model_name.split("_") - if len(params) >= 3 and params[1] in ('arcface', 'cosface'): - if args.network is None: - args.network = params[2] - assert args.network is not None - print(args) - backbone_onnx = get_model(args.network, dropout=0) - - output_path = args.output - if output_path is None: - output_path = os.path.join(os.path.dirname(__file__), 'onnx') - if not os.path.exists(output_path): - os.makedirs(output_path) - assert os.path.isdir(output_path) - output_file = os.path.join(output_path, "%s.onnx" % model_name) - convert_onnx(backbone_onnx, input_file, output_file, simplify=args.simplify) diff --git a/spaces/kevinwang676/ChatGLM2-SadTalker/src/face3d/options/base_options.py b/spaces/kevinwang676/ChatGLM2-SadTalker/src/face3d/options/base_options.py deleted file mode 100644 index d8f921d5a43434ae802a55a0fa3889c4b7ab9f6d..0000000000000000000000000000000000000000 --- a/spaces/kevinwang676/ChatGLM2-SadTalker/src/face3d/options/base_options.py +++ /dev/null @@ -1,169 +0,0 @@ -"""This script contains base options for Deep3DFaceRecon_pytorch -""" - -import argparse -import os -from util import util -import numpy as np -import torch -import face3d.models as models -import face3d.data as data - - -class BaseOptions(): - """This class defines options used during both training and test time. - - It also implements several helper functions such as parsing, printing, and saving the options. - It also gathers additional options defined in functions in both dataset class and model class. - """ - - def __init__(self, cmd_line=None): - """Reset the class; indicates the class hasn't been initailized""" - self.initialized = False - self.cmd_line = None - if cmd_line is not None: - self.cmd_line = cmd_line.split() - - def initialize(self, parser): - """Define the common options that are used in both training and test.""" - # basic parameters - parser.add_argument('--name', type=str, default='face_recon', help='name of the experiment. It decides where to store samples and models') - parser.add_argument('--gpu_ids', type=str, default='0', help='gpu ids: e.g. 0 0,1,2, 0,2. use -1 for CPU') - parser.add_argument('--checkpoints_dir', type=str, default='./checkpoints', help='models are saved here') - parser.add_argument('--vis_batch_nums', type=float, default=1, help='batch nums of images for visulization') - parser.add_argument('--eval_batch_nums', type=float, default=float('inf'), help='batch nums of images for evaluation') - parser.add_argument('--use_ddp', type=util.str2bool, nargs='?', const=True, default=True, help='whether use distributed data parallel') - parser.add_argument('--ddp_port', type=str, default='12355', help='ddp port') - parser.add_argument('--display_per_batch', type=util.str2bool, nargs='?', const=True, default=True, help='whether use batch to show losses') - parser.add_argument('--add_image', type=util.str2bool, nargs='?', const=True, default=True, help='whether add image to tensorboard') - parser.add_argument('--world_size', type=int, default=1, help='batch nums of images for evaluation') - - # model parameters - parser.add_argument('--model', type=str, default='facerecon', help='chooses which model to use.') - - # additional parameters - parser.add_argument('--epoch', type=str, default='latest', help='which epoch to load? set to latest to use latest cached model') - parser.add_argument('--verbose', action='store_true', help='if specified, print more debugging information') - parser.add_argument('--suffix', default='', type=str, help='customized suffix: opt.name = opt.name + suffix: e.g., {model}_{netG}_size{load_size}') - - self.initialized = True - return parser - - def gather_options(self): - """Initialize our parser with basic options(only once). - Add additional model-specific and dataset-specific options. - These options are defined in the function - in model and dataset classes. - """ - if not self.initialized: # check if it has been initialized - parser = argparse.ArgumentParser(formatter_class=argparse.ArgumentDefaultsHelpFormatter) - parser = self.initialize(parser) - - # get the basic options - if self.cmd_line is None: - opt, _ = parser.parse_known_args() - else: - opt, _ = parser.parse_known_args(self.cmd_line) - - # set cuda visible devices - os.environ['CUDA_VISIBLE_DEVICES'] = opt.gpu_ids - - # modify model-related parser options - model_name = opt.model - model_option_setter = models.get_option_setter(model_name) - parser = model_option_setter(parser, self.isTrain) - if self.cmd_line is None: - opt, _ = parser.parse_known_args() # parse again with new defaults - else: - opt, _ = parser.parse_known_args(self.cmd_line) # parse again with new defaults - - # modify dataset-related parser options - if opt.dataset_mode: - dataset_name = opt.dataset_mode - dataset_option_setter = data.get_option_setter(dataset_name) - parser = dataset_option_setter(parser, self.isTrain) - - # save and return the parser - self.parser = parser - if self.cmd_line is None: - return parser.parse_args() - else: - return parser.parse_args(self.cmd_line) - - def print_options(self, opt): - """Print and save options - - It will print both current options and default values(if different). - It will save options into a text file / [checkpoints_dir] / opt.txt - """ - message = '' - message += '----------------- Options ---------------\n' - for k, v in sorted(vars(opt).items()): - comment = '' - default = self.parser.get_default(k) - if v != default: - comment = '\t[default: %s]' % str(default) - message += '{:>25}: {:<30}{}\n'.format(str(k), str(v), comment) - message += '----------------- End -------------------' - print(message) - - # save to the disk - expr_dir = os.path.join(opt.checkpoints_dir, opt.name) - util.mkdirs(expr_dir) - file_name = os.path.join(expr_dir, '{}_opt.txt'.format(opt.phase)) - try: - with open(file_name, 'wt') as opt_file: - opt_file.write(message) - opt_file.write('\n') - except PermissionError as error: - print("permission error {}".format(error)) - pass - - def parse(self): - """Parse our options, create checkpoints directory suffix, and set up gpu device.""" - opt = self.gather_options() - opt.isTrain = self.isTrain # train or test - - # process opt.suffix - if opt.suffix: - suffix = ('_' + opt.suffix.format(**vars(opt))) if opt.suffix != '' else '' - opt.name = opt.name + suffix - - - # set gpu ids - str_ids = opt.gpu_ids.split(',') - gpu_ids = [] - for str_id in str_ids: - id = int(str_id) - if id >= 0: - gpu_ids.append(id) - opt.world_size = len(gpu_ids) - # if len(opt.gpu_ids) > 0: - # torch.cuda.set_device(gpu_ids[0]) - if opt.world_size == 1: - opt.use_ddp = False - - if opt.phase != 'test': - # set continue_train automatically - if opt.pretrained_name is None: - model_dir = os.path.join(opt.checkpoints_dir, opt.name) - else: - model_dir = os.path.join(opt.checkpoints_dir, opt.pretrained_name) - if os.path.isdir(model_dir): - model_pths = [i for i in os.listdir(model_dir) if i.endswith('pth')] - if os.path.isdir(model_dir) and len(model_pths) != 0: - opt.continue_train= True - - # update the latest epoch count - if opt.continue_train: - if opt.epoch == 'latest': - epoch_counts = [int(i.split('.')[0].split('_')[-1]) for i in model_pths if 'latest' not in i] - if len(epoch_counts) != 0: - opt.epoch_count = max(epoch_counts) + 1 - else: - opt.epoch_count = int(opt.epoch) + 1 - - - self.print_options(opt) - self.opt = opt - return self.opt diff --git a/spaces/kevinwang676/ChatGLM2-VC-SadTalker/src/utils/paste_pic.py b/spaces/kevinwang676/ChatGLM2-VC-SadTalker/src/utils/paste_pic.py deleted file mode 100644 index f9989e21e48e64f620f9b148e65fdfe806c53b14..0000000000000000000000000000000000000000 --- a/spaces/kevinwang676/ChatGLM2-VC-SadTalker/src/utils/paste_pic.py +++ /dev/null @@ -1,69 +0,0 @@ -import cv2, os -import numpy as np -from tqdm import tqdm -import uuid - -from src.utils.videoio import save_video_with_watermark - -def paste_pic(video_path, pic_path, crop_info, new_audio_path, full_video_path, extended_crop=False): - - if not os.path.isfile(pic_path): - raise ValueError('pic_path must be a valid path to video/image file') - elif pic_path.split('.')[-1] in ['jpg', 'png', 'jpeg']: - # loader for first frame - full_img = cv2.imread(pic_path) - else: - # loader for videos - video_stream = cv2.VideoCapture(pic_path) - fps = video_stream.get(cv2.CAP_PROP_FPS) - full_frames = [] - while 1: - still_reading, frame = video_stream.read() - if not still_reading: - video_stream.release() - break - break - full_img = frame - frame_h = full_img.shape[0] - frame_w = full_img.shape[1] - - video_stream = cv2.VideoCapture(video_path) - fps = video_stream.get(cv2.CAP_PROP_FPS) - crop_frames = [] - while 1: - still_reading, frame = video_stream.read() - if not still_reading: - video_stream.release() - break - crop_frames.append(frame) - - if len(crop_info) != 3: - print("you didn't crop the image") - return - else: - r_w, r_h = crop_info[0] - clx, cly, crx, cry = crop_info[1] - lx, ly, rx, ry = crop_info[2] - lx, ly, rx, ry = int(lx), int(ly), int(rx), int(ry) - # oy1, oy2, ox1, ox2 = cly+ly, cly+ry, clx+lx, clx+rx - # oy1, oy2, ox1, ox2 = cly+ly, cly+ry, clx+lx, clx+rx - - if extended_crop: - oy1, oy2, ox1, ox2 = cly, cry, clx, crx - else: - oy1, oy2, ox1, ox2 = cly+ly, cly+ry, clx+lx, clx+rx - - tmp_path = str(uuid.uuid4())+'.mp4' - out_tmp = cv2.VideoWriter(tmp_path, cv2.VideoWriter_fourcc(*'MP4V'), fps, (frame_w, frame_h)) - for crop_frame in tqdm(crop_frames, 'seamlessClone:'): - p = cv2.resize(crop_frame.astype(np.uint8), (ox2-ox1, oy2 - oy1)) - - mask = 255*np.ones(p.shape, p.dtype) - location = ((ox1+ox2) // 2, (oy1+oy2) // 2) - gen_img = cv2.seamlessClone(p, full_img, mask, location, cv2.NORMAL_CLONE) - out_tmp.write(gen_img) - - out_tmp.release() - - save_video_with_watermark(tmp_path, new_audio_path, full_video_path, watermark=False) - os.remove(tmp_path) diff --git a/spaces/kevinwang676/M4Singer/modules/parallel_wavegan/models/melgan.py b/spaces/kevinwang676/M4Singer/modules/parallel_wavegan/models/melgan.py deleted file mode 100644 index e021ae4817a8c1c97338e61b00b230c881836fd8..0000000000000000000000000000000000000000 --- a/spaces/kevinwang676/M4Singer/modules/parallel_wavegan/models/melgan.py +++ /dev/null @@ -1,427 +0,0 @@ -# -*- coding: utf-8 -*- - -# Copyright 2020 Tomoki Hayashi -# MIT License (https://opensource.org/licenses/MIT) - -"""MelGAN Modules.""" - -import logging - -import numpy as np -import torch - -from modules.parallel_wavegan.layers import CausalConv1d -from modules.parallel_wavegan.layers import CausalConvTranspose1d -from modules.parallel_wavegan.layers import ResidualStack - - -class MelGANGenerator(torch.nn.Module): - """MelGAN generator module.""" - - def __init__(self, - in_channels=80, - out_channels=1, - kernel_size=7, - channels=512, - bias=True, - upsample_scales=[8, 8, 2, 2], - stack_kernel_size=3, - stacks=3, - nonlinear_activation="LeakyReLU", - nonlinear_activation_params={"negative_slope": 0.2}, - pad="ReflectionPad1d", - pad_params={}, - use_final_nonlinear_activation=True, - use_weight_norm=True, - use_causal_conv=False, - ): - """Initialize MelGANGenerator module. - - Args: - in_channels (int): Number of input channels. - out_channels (int): Number of output channels. - kernel_size (int): Kernel size of initial and final conv layer. - channels (int): Initial number of channels for conv layer. - bias (bool): Whether to add bias parameter in convolution layers. - upsample_scales (list): List of upsampling scales. - stack_kernel_size (int): Kernel size of dilated conv layers in residual stack. - stacks (int): Number of stacks in a single residual stack. - nonlinear_activation (str): Activation function module name. - nonlinear_activation_params (dict): Hyperparameters for activation function. - pad (str): Padding function module name before dilated convolution layer. - pad_params (dict): Hyperparameters for padding function. - use_final_nonlinear_activation (torch.nn.Module): Activation function for the final layer. - use_weight_norm (bool): Whether to use weight norm. - If set to true, it will be applied to all of the conv layers. - use_causal_conv (bool): Whether to use causal convolution. - - """ - super(MelGANGenerator, self).__init__() - - # check hyper parameters is valid - assert channels >= np.prod(upsample_scales) - assert channels % (2 ** len(upsample_scales)) == 0 - if not use_causal_conv: - assert (kernel_size - 1) % 2 == 0, "Not support even number kernel size." - - # add initial layer - layers = [] - if not use_causal_conv: - layers += [ - getattr(torch.nn, pad)((kernel_size - 1) // 2, **pad_params), - torch.nn.Conv1d(in_channels, channels, kernel_size, bias=bias), - ] - else: - layers += [ - CausalConv1d(in_channels, channels, kernel_size, - bias=bias, pad=pad, pad_params=pad_params), - ] - - for i, upsample_scale in enumerate(upsample_scales): - # add upsampling layer - layers += [getattr(torch.nn, nonlinear_activation)(**nonlinear_activation_params)] - if not use_causal_conv: - layers += [ - torch.nn.ConvTranspose1d( - channels // (2 ** i), - channels // (2 ** (i + 1)), - upsample_scale * 2, - stride=upsample_scale, - padding=upsample_scale // 2 + upsample_scale % 2, - output_padding=upsample_scale % 2, - bias=bias, - ) - ] - else: - layers += [ - CausalConvTranspose1d( - channels // (2 ** i), - channels // (2 ** (i + 1)), - upsample_scale * 2, - stride=upsample_scale, - bias=bias, - ) - ] - - # add residual stack - for j in range(stacks): - layers += [ - ResidualStack( - kernel_size=stack_kernel_size, - channels=channels // (2 ** (i + 1)), - dilation=stack_kernel_size ** j, - bias=bias, - nonlinear_activation=nonlinear_activation, - nonlinear_activation_params=nonlinear_activation_params, - pad=pad, - pad_params=pad_params, - use_causal_conv=use_causal_conv, - ) - ] - - # add final layer - layers += [getattr(torch.nn, nonlinear_activation)(**nonlinear_activation_params)] - if not use_causal_conv: - layers += [ - getattr(torch.nn, pad)((kernel_size - 1) // 2, **pad_params), - torch.nn.Conv1d(channels // (2 ** (i + 1)), out_channels, kernel_size, bias=bias), - ] - else: - layers += [ - CausalConv1d(channels // (2 ** (i + 1)), out_channels, kernel_size, - bias=bias, pad=pad, pad_params=pad_params), - ] - if use_final_nonlinear_activation: - layers += [torch.nn.Tanh()] - - # define the model as a single function - self.melgan = torch.nn.Sequential(*layers) - - # apply weight norm - if use_weight_norm: - self.apply_weight_norm() - - # reset parameters - self.reset_parameters() - - def forward(self, c): - """Calculate forward propagation. - - Args: - c (Tensor): Input tensor (B, channels, T). - - Returns: - Tensor: Output tensor (B, 1, T ** prod(upsample_scales)). - - """ - return self.melgan(c) - - def remove_weight_norm(self): - """Remove weight normalization module from all of the layers.""" - def _remove_weight_norm(m): - try: - logging.debug(f"Weight norm is removed from {m}.") - torch.nn.utils.remove_weight_norm(m) - except ValueError: # this module didn't have weight norm - return - - self.apply(_remove_weight_norm) - - def apply_weight_norm(self): - """Apply weight normalization module from all of the layers.""" - def _apply_weight_norm(m): - if isinstance(m, torch.nn.Conv1d) or isinstance(m, torch.nn.ConvTranspose1d): - torch.nn.utils.weight_norm(m) - logging.debug(f"Weight norm is applied to {m}.") - - self.apply(_apply_weight_norm) - - def reset_parameters(self): - """Reset parameters. - - This initialization follows official implementation manner. - https://github.com/descriptinc/melgan-neurips/blob/master/spec2wav/modules.py - - """ - def _reset_parameters(m): - if isinstance(m, torch.nn.Conv1d) or isinstance(m, torch.nn.ConvTranspose1d): - m.weight.data.normal_(0.0, 0.02) - logging.debug(f"Reset parameters in {m}.") - - self.apply(_reset_parameters) - - -class MelGANDiscriminator(torch.nn.Module): - """MelGAN discriminator module.""" - - def __init__(self, - in_channels=1, - out_channels=1, - kernel_sizes=[5, 3], - channels=16, - max_downsample_channels=1024, - bias=True, - downsample_scales=[4, 4, 4, 4], - nonlinear_activation="LeakyReLU", - nonlinear_activation_params={"negative_slope": 0.2}, - pad="ReflectionPad1d", - pad_params={}, - ): - """Initilize MelGAN discriminator module. - - Args: - in_channels (int): Number of input channels. - out_channels (int): Number of output channels. - kernel_sizes (list): List of two kernel sizes. The prod will be used for the first conv layer, - and the first and the second kernel sizes will be used for the last two layers. - For example if kernel_sizes = [5, 3], the first layer kernel size will be 5 * 3 = 15, - the last two layers' kernel size will be 5 and 3, respectively. - channels (int): Initial number of channels for conv layer. - max_downsample_channels (int): Maximum number of channels for downsampling layers. - bias (bool): Whether to add bias parameter in convolution layers. - downsample_scales (list): List of downsampling scales. - nonlinear_activation (str): Activation function module name. - nonlinear_activation_params (dict): Hyperparameters for activation function. - pad (str): Padding function module name before dilated convolution layer. - pad_params (dict): Hyperparameters for padding function. - - """ - super(MelGANDiscriminator, self).__init__() - self.layers = torch.nn.ModuleList() - - # check kernel size is valid - assert len(kernel_sizes) == 2 - assert kernel_sizes[0] % 2 == 1 - assert kernel_sizes[1] % 2 == 1 - - # add first layer - self.layers += [ - torch.nn.Sequential( - getattr(torch.nn, pad)((np.prod(kernel_sizes) - 1) // 2, **pad_params), - torch.nn.Conv1d(in_channels, channels, np.prod(kernel_sizes), bias=bias), - getattr(torch.nn, nonlinear_activation)(**nonlinear_activation_params), - ) - ] - - # add downsample layers - in_chs = channels - for downsample_scale in downsample_scales: - out_chs = min(in_chs * downsample_scale, max_downsample_channels) - self.layers += [ - torch.nn.Sequential( - torch.nn.Conv1d( - in_chs, out_chs, - kernel_size=downsample_scale * 10 + 1, - stride=downsample_scale, - padding=downsample_scale * 5, - groups=in_chs // 4, - bias=bias, - ), - getattr(torch.nn, nonlinear_activation)(**nonlinear_activation_params), - ) - ] - in_chs = out_chs - - # add final layers - out_chs = min(in_chs * 2, max_downsample_channels) - self.layers += [ - torch.nn.Sequential( - torch.nn.Conv1d( - in_chs, out_chs, kernel_sizes[0], - padding=(kernel_sizes[0] - 1) // 2, - bias=bias, - ), - getattr(torch.nn, nonlinear_activation)(**nonlinear_activation_params), - ) - ] - self.layers += [ - torch.nn.Conv1d( - out_chs, out_channels, kernel_sizes[1], - padding=(kernel_sizes[1] - 1) // 2, - bias=bias, - ), - ] - - def forward(self, x): - """Calculate forward propagation. - - Args: - x (Tensor): Input noise signal (B, 1, T). - - Returns: - List: List of output tensors of each layer. - - """ - outs = [] - for f in self.layers: - x = f(x) - outs += [x] - - return outs - - -class MelGANMultiScaleDiscriminator(torch.nn.Module): - """MelGAN multi-scale discriminator module.""" - - def __init__(self, - in_channels=1, - out_channels=1, - scales=3, - downsample_pooling="AvgPool1d", - # follow the official implementation setting - downsample_pooling_params={ - "kernel_size": 4, - "stride": 2, - "padding": 1, - "count_include_pad": False, - }, - kernel_sizes=[5, 3], - channels=16, - max_downsample_channels=1024, - bias=True, - downsample_scales=[4, 4, 4, 4], - nonlinear_activation="LeakyReLU", - nonlinear_activation_params={"negative_slope": 0.2}, - pad="ReflectionPad1d", - pad_params={}, - use_weight_norm=True, - ): - """Initilize MelGAN multi-scale discriminator module. - - Args: - in_channels (int): Number of input channels. - out_channels (int): Number of output channels. - downsample_pooling (str): Pooling module name for downsampling of the inputs. - downsample_pooling_params (dict): Parameters for the above pooling module. - kernel_sizes (list): List of two kernel sizes. The sum will be used for the first conv layer, - and the first and the second kernel sizes will be used for the last two layers. - channels (int): Initial number of channels for conv layer. - max_downsample_channels (int): Maximum number of channels for downsampling layers. - bias (bool): Whether to add bias parameter in convolution layers. - downsample_scales (list): List of downsampling scales. - nonlinear_activation (str): Activation function module name. - nonlinear_activation_params (dict): Hyperparameters for activation function. - pad (str): Padding function module name before dilated convolution layer. - pad_params (dict): Hyperparameters for padding function. - use_causal_conv (bool): Whether to use causal convolution. - - """ - super(MelGANMultiScaleDiscriminator, self).__init__() - self.discriminators = torch.nn.ModuleList() - - # add discriminators - for _ in range(scales): - self.discriminators += [ - MelGANDiscriminator( - in_channels=in_channels, - out_channels=out_channels, - kernel_sizes=kernel_sizes, - channels=channels, - max_downsample_channels=max_downsample_channels, - bias=bias, - downsample_scales=downsample_scales, - nonlinear_activation=nonlinear_activation, - nonlinear_activation_params=nonlinear_activation_params, - pad=pad, - pad_params=pad_params, - ) - ] - self.pooling = getattr(torch.nn, downsample_pooling)(**downsample_pooling_params) - - # apply weight norm - if use_weight_norm: - self.apply_weight_norm() - - # reset parameters - self.reset_parameters() - - def forward(self, x): - """Calculate forward propagation. - - Args: - x (Tensor): Input noise signal (B, 1, T). - - Returns: - List: List of list of each discriminator outputs, which consists of each layer output tensors. - - """ - outs = [] - for f in self.discriminators: - outs += [f(x)] - x = self.pooling(x) - - return outs - - def remove_weight_norm(self): - """Remove weight normalization module from all of the layers.""" - def _remove_weight_norm(m): - try: - logging.debug(f"Weight norm is removed from {m}.") - torch.nn.utils.remove_weight_norm(m) - except ValueError: # this module didn't have weight norm - return - - self.apply(_remove_weight_norm) - - def apply_weight_norm(self): - """Apply weight normalization module from all of the layers.""" - def _apply_weight_norm(m): - if isinstance(m, torch.nn.Conv1d) or isinstance(m, torch.nn.ConvTranspose1d): - torch.nn.utils.weight_norm(m) - logging.debug(f"Weight norm is applied to {m}.") - - self.apply(_apply_weight_norm) - - def reset_parameters(self): - """Reset parameters. - - This initialization follows official implementation manner. - https://github.com/descriptinc/melgan-neurips/blob/master/spec2wav/modules.py - - """ - def _reset_parameters(m): - if isinstance(m, torch.nn.Conv1d) or isinstance(m, torch.nn.ConvTranspose1d): - m.weight.data.normal_(0.0, 0.02) - logging.debug(f"Reset parameters in {m}.") - - self.apply(_reset_parameters) diff --git a/spaces/kevinwang676/VALLE/utils/prompt_making.py b/spaces/kevinwang676/VALLE/utils/prompt_making.py deleted file mode 100644 index 93e4a3d647052df4899253fea41be22f09e006b8..0000000000000000000000000000000000000000 --- a/spaces/kevinwang676/VALLE/utils/prompt_making.py +++ /dev/null @@ -1,115 +0,0 @@ -import os -import torch -import torchaudio -import logging -import langid -import whisper -langid.set_languages(['en', 'zh', 'ja']) - -import numpy as np -from data.tokenizer import ( - AudioTokenizer, - tokenize_audio, -) -from data.collation import get_text_token_collater -from utils.g2p import PhonemeBpeTokenizer - -from macros import * - -text_tokenizer = PhonemeBpeTokenizer(tokenizer_path="./utils/g2p/bpe_69.json") -text_collater = get_text_token_collater() - -device = torch.device("cpu") -if torch.cuda.is_available(): - device = torch.device("cuda", 0) - -codec = AudioTokenizer(device) - -whisper_model = None - -@torch.no_grad() -def transcribe_one(model, audio_path): - # load audio and pad/trim it to fit 30 seconds - audio = whisper.load_audio(audio_path) - audio = whisper.pad_or_trim(audio) - - # make log-Mel spectrogram and move to the same device as the model - mel = whisper.log_mel_spectrogram(audio).to(model.device) - - # detect the spoken language - _, probs = model.detect_language(mel) - print(f"Detected language: {max(probs, key=probs.get)}") - lang = max(probs, key=probs.get) - # decode the audio - options = whisper.DecodingOptions(temperature=1.0, best_of=5, fp16=False if device == torch.device("cpu") else True, sample_len=150) - result = whisper.decode(model, mel, options) - - # print the recognized text - print(result.text) - - text_pr = result.text - if text_pr.strip(" ")[-1] not in "?!.,。,?!。、": - text_pr += "." - return lang, text_pr - -def make_prompt(name, audio_prompt_path, transcript=None): - global model, text_collater, text_tokenizer, codec - wav_pr, sr = torchaudio.load(audio_prompt_path) - # check length - if wav_pr.size(-1) / sr > 15: - raise ValueError(f"Prompt too long, expect length below 15 seconds, got {wav_pr / sr} seconds.") - if wav_pr.size(0) == 2: - wav_pr = wav_pr.mean(0, keepdim=True) - text_pr, lang_pr = make_transcript(name, wav_pr, sr, transcript) - - # tokenize audio - encoded_frames = tokenize_audio(codec, (wav_pr, sr)) - audio_tokens = encoded_frames[0][0].transpose(2, 1).cpu().numpy() - - # tokenize text - phonemes, langs = text_tokenizer.tokenize(text=f"{text_pr}".strip()) - text_tokens, enroll_x_lens = text_collater( - [ - phonemes - ] - ) - - message = f"Detected language: {lang_pr}\n Detected text {text_pr}\n" - - # save as npz file - save_path = os.path.join("./customs/", f"{name}.npz") - np.savez(save_path, audio_tokens=audio_tokens, text_tokens=text_tokens, lang_code=lang2code[lang_pr]) - logging.info(f"Successful. Prompt saved to {save_path}") - - -def make_transcript(name, wav, sr, transcript=None): - - if not isinstance(wav, torch.FloatTensor): - wav = torch.tensor(wav) - if wav.abs().max() > 1: - wav /= wav.abs().max() - if wav.size(-1) == 2: - wav = wav.mean(-1, keepdim=False) - if wav.ndim == 1: - wav = wav.unsqueeze(0) - assert wav.ndim and wav.size(0) == 1 - if transcript is None or transcript == "": - logging.info("Transcript not given, using Whisper...") - global whisper_model - if whisper_model is None: - whisper_model = whisper.load_model("medium") - whisper_model.to(device) - torchaudio.save(f"./prompts/{name}.wav", wav, sr) - lang, text = transcribe_one(whisper_model, f"./prompts/{name}.wav") - lang_token = lang2token[lang] - text = lang_token + text + lang_token - os.remove(f"./prompts/{name}.wav") - whisper_model.cpu() - else: - text = transcript - lang, _ = langid.classify(text) - lang_token = lang2token[lang] - text = lang_token + text + lang_token - - torch.cuda.empty_cache() - return text, lang \ No newline at end of file diff --git a/spaces/kevinwang676/VoiceChanger/src/facerender/modules/make_animation.py b/spaces/kevinwang676/VoiceChanger/src/facerender/modules/make_animation.py deleted file mode 100644 index 3360c53501a064f35d7db21a5361f89aa9658b42..0000000000000000000000000000000000000000 --- a/spaces/kevinwang676/VoiceChanger/src/facerender/modules/make_animation.py +++ /dev/null @@ -1,170 +0,0 @@ -from scipy.spatial import ConvexHull -import torch -import torch.nn.functional as F -import numpy as np -from tqdm import tqdm - -def normalize_kp(kp_source, kp_driving, kp_driving_initial, adapt_movement_scale=False, - use_relative_movement=False, use_relative_jacobian=False): - if adapt_movement_scale: - source_area = ConvexHull(kp_source['value'][0].data.cpu().numpy()).volume - driving_area = ConvexHull(kp_driving_initial['value'][0].data.cpu().numpy()).volume - adapt_movement_scale = np.sqrt(source_area) / np.sqrt(driving_area) - else: - adapt_movement_scale = 1 - - kp_new = {k: v for k, v in kp_driving.items()} - - if use_relative_movement: - kp_value_diff = (kp_driving['value'] - kp_driving_initial['value']) - kp_value_diff *= adapt_movement_scale - kp_new['value'] = kp_value_diff + kp_source['value'] - - if use_relative_jacobian: - jacobian_diff = torch.matmul(kp_driving['jacobian'], torch.inverse(kp_driving_initial['jacobian'])) - kp_new['jacobian'] = torch.matmul(jacobian_diff, kp_source['jacobian']) - - return kp_new - -def headpose_pred_to_degree(pred): - device = pred.device - idx_tensor = [idx for idx in range(66)] - idx_tensor = torch.FloatTensor(idx_tensor).type_as(pred).to(device) - pred = F.softmax(pred) - degree = torch.sum(pred*idx_tensor, 1) * 3 - 99 - return degree - -def get_rotation_matrix(yaw, pitch, roll): - yaw = yaw / 180 * 3.14 - pitch = pitch / 180 * 3.14 - roll = roll / 180 * 3.14 - - roll = roll.unsqueeze(1) - pitch = pitch.unsqueeze(1) - yaw = yaw.unsqueeze(1) - - pitch_mat = torch.cat([torch.ones_like(pitch), torch.zeros_like(pitch), torch.zeros_like(pitch), - torch.zeros_like(pitch), torch.cos(pitch), -torch.sin(pitch), - torch.zeros_like(pitch), torch.sin(pitch), torch.cos(pitch)], dim=1) - pitch_mat = pitch_mat.view(pitch_mat.shape[0], 3, 3) - - yaw_mat = torch.cat([torch.cos(yaw), torch.zeros_like(yaw), torch.sin(yaw), - torch.zeros_like(yaw), torch.ones_like(yaw), torch.zeros_like(yaw), - -torch.sin(yaw), torch.zeros_like(yaw), torch.cos(yaw)], dim=1) - yaw_mat = yaw_mat.view(yaw_mat.shape[0], 3, 3) - - roll_mat = torch.cat([torch.cos(roll), -torch.sin(roll), torch.zeros_like(roll), - torch.sin(roll), torch.cos(roll), torch.zeros_like(roll), - torch.zeros_like(roll), torch.zeros_like(roll), torch.ones_like(roll)], dim=1) - roll_mat = roll_mat.view(roll_mat.shape[0], 3, 3) - - rot_mat = torch.einsum('bij,bjk,bkm->bim', pitch_mat, yaw_mat, roll_mat) - - return rot_mat - -def keypoint_transformation(kp_canonical, he, wo_exp=False): - kp = kp_canonical['value'] # (bs, k, 3) - yaw, pitch, roll= he['yaw'], he['pitch'], he['roll'] - yaw = headpose_pred_to_degree(yaw) - pitch = headpose_pred_to_degree(pitch) - roll = headpose_pred_to_degree(roll) - - if 'yaw_in' in he: - yaw = he['yaw_in'] - if 'pitch_in' in he: - pitch = he['pitch_in'] - if 'roll_in' in he: - roll = he['roll_in'] - - rot_mat = get_rotation_matrix(yaw, pitch, roll) # (bs, 3, 3) - - t, exp = he['t'], he['exp'] - if wo_exp: - exp = exp*0 - - # keypoint rotation - kp_rotated = torch.einsum('bmp,bkp->bkm', rot_mat, kp) - - # keypoint translation - t[:, 0] = t[:, 0]*0 - t[:, 2] = t[:, 2]*0 - t = t.unsqueeze(1).repeat(1, kp.shape[1], 1) - kp_t = kp_rotated + t - - # add expression deviation - exp = exp.view(exp.shape[0], -1, 3) - kp_transformed = kp_t + exp - - return {'value': kp_transformed} - - - -def make_animation(source_image, source_semantics, target_semantics, - generator, kp_detector, he_estimator, mapping, - yaw_c_seq=None, pitch_c_seq=None, roll_c_seq=None, - use_exp=True, use_half=False): - with torch.no_grad(): - predictions = [] - - kp_canonical = kp_detector(source_image) - he_source = mapping(source_semantics) - kp_source = keypoint_transformation(kp_canonical, he_source) - - for frame_idx in tqdm(range(target_semantics.shape[1]), 'Face Renderer:'): - # still check the dimension - # print(target_semantics.shape, source_semantics.shape) - target_semantics_frame = target_semantics[:, frame_idx] - he_driving = mapping(target_semantics_frame) - if yaw_c_seq is not None: - he_driving['yaw_in'] = yaw_c_seq[:, frame_idx] - if pitch_c_seq is not None: - he_driving['pitch_in'] = pitch_c_seq[:, frame_idx] - if roll_c_seq is not None: - he_driving['roll_in'] = roll_c_seq[:, frame_idx] - - kp_driving = keypoint_transformation(kp_canonical, he_driving) - - kp_norm = kp_driving - out = generator(source_image, kp_source=kp_source, kp_driving=kp_norm) - ''' - source_image_new = out['prediction'].squeeze(1) - kp_canonical_new = kp_detector(source_image_new) - he_source_new = he_estimator(source_image_new) - kp_source_new = keypoint_transformation(kp_canonical_new, he_source_new, wo_exp=True) - kp_driving_new = keypoint_transformation(kp_canonical_new, he_driving, wo_exp=True) - out = generator(source_image_new, kp_source=kp_source_new, kp_driving=kp_driving_new) - ''' - predictions.append(out['prediction']) - predictions_ts = torch.stack(predictions, dim=1) - return predictions_ts - -class AnimateModel(torch.nn.Module): - """ - Merge all generator related updates into single model for better multi-gpu usage - """ - - def __init__(self, generator, kp_extractor, mapping): - super(AnimateModel, self).__init__() - self.kp_extractor = kp_extractor - self.generator = generator - self.mapping = mapping - - self.kp_extractor.eval() - self.generator.eval() - self.mapping.eval() - - def forward(self, x): - - source_image = x['source_image'] - source_semantics = x['source_semantics'] - target_semantics = x['target_semantics'] - yaw_c_seq = x['yaw_c_seq'] - pitch_c_seq = x['pitch_c_seq'] - roll_c_seq = x['roll_c_seq'] - - predictions_video = make_animation(source_image, source_semantics, target_semantics, - self.generator, self.kp_extractor, - self.mapping, use_exp = True, - yaw_c_seq=yaw_c_seq, pitch_c_seq=pitch_c_seq, roll_c_seq=roll_c_seq) - - return predictions_video \ No newline at end of file diff --git a/spaces/kevinwang676/VoiceChangers/src/face3d/models/arcface_torch/utils/__init__.py b/spaces/kevinwang676/VoiceChangers/src/face3d/models/arcface_torch/utils/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/kiroiineko/rvc-models-tragamundos/config.py b/spaces/kiroiineko/rvc-models-tragamundos/config.py deleted file mode 100644 index c0c16e0017efbcaf250cb539a1d0edb4e83575e4..0000000000000000000000000000000000000000 --- a/spaces/kiroiineko/rvc-models-tragamundos/config.py +++ /dev/null @@ -1,88 +0,0 @@ -########################硬件参数######################## - -# 填写cuda:x, cpu 或 mps, x指代第几张卡,只支持 N卡 / Apple Silicon 加速 -device = "cuda:0" - -# 9-10-20-30-40系显卡无脑True,不影响质量,>=20显卡开启有加速 -is_half = True - -# 默认0用上所有线程,写数字限制CPU资源使用 -n_cpu = 0 - -########################硬件参数######################## - - -##################下为参数处理逻辑,勿动################## - -########################命令行参数######################## -import argparse - -parser = argparse.ArgumentParser() -parser.add_argument("--port", type=int, default=7865, help="Listen port") -parser.add_argument("--pycmd", type=str, default="python", help="Python command") -parser.add_argument("--colab", action="store_true", help="Launch in colab") -parser.add_argument( - "--noparallel", action="store_true", help="Disable parallel processing" -) -parser.add_argument( - "--noautoopen", action="store_true", help="Do not open in browser automatically" -) -cmd_opts, unknown = parser.parse_known_args() - -python_cmd = cmd_opts.pycmd -listen_port = cmd_opts.port -iscolab = cmd_opts.colab -noparallel = cmd_opts.noparallel -noautoopen = cmd_opts.noautoopen -########################命令行参数######################## - -import sys -import torch - - -# has_mps is only available in nightly pytorch (for now) and MasOS 12.3+. -# check `getattr` and try it for compatibility -def has_mps() -> bool: - if sys.platform != "darwin": - return False - else: - if not getattr(torch, "has_mps", False): - return False - try: - torch.zeros(1).to(torch.device("mps")) - return True - except Exception: - return False - - -if not torch.cuda.is_available(): - if has_mps(): - print("没有发现支持的N卡, 使用MPS进行推理") - device = "mps" - else: - print("没有发现支持的N卡, 使用CPU进行推理") - device = "cpu" - is_half = False - -if device not in ["cpu", "mps"]: - gpu_name = torch.cuda.get_device_name(int(device.split(":")[-1])) - if "16" in gpu_name or "MX" in gpu_name: - print("16系显卡/MX系显卡强制单精度") - is_half = False - -from multiprocessing import cpu_count - -if n_cpu == 0: - n_cpu = cpu_count() -if is_half: - # 6G显存配置 - x_pad = 3 - x_query = 10 - x_center = 60 - x_max = 65 -else: - # 5G显存配置 - x_pad = 1 - x_query = 6 - x_center = 38 - x_max = 41 diff --git a/spaces/kitrak-rev/AI-Clone/README.md b/spaces/kitrak-rev/AI-Clone/README.md deleted file mode 100644 index 37ae85568a2edb3e730ca57c9e84b5771523d342..0000000000000000000000000000000000000000 --- a/spaces/kitrak-rev/AI-Clone/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: AI Clone -emoji: 🌍 -colorFrom: green -colorTo: green -sdk: gradio -sdk_version: 3.34.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/koajoel/PolyFormer/fairseq/examples/simultaneous_translation/__init__.py b/spaces/koajoel/PolyFormer/fairseq/examples/simultaneous_translation/__init__.py deleted file mode 100644 index 5835316ba9b23c0d99d1a8f109ee047682211546..0000000000000000000000000000000000000000 --- a/spaces/koajoel/PolyFormer/fairseq/examples/simultaneous_translation/__init__.py +++ /dev/null @@ -1,6 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -from . import models # noqa diff --git a/spaces/kobakhit/speech-to-chat/app.py b/spaces/kobakhit/speech-to-chat/app.py deleted file mode 100644 index 1d6c7091c68b832753913aecbe57d7f31a4e7b16..0000000000000000000000000000000000000000 --- a/spaces/kobakhit/speech-to-chat/app.py +++ /dev/null @@ -1,531 +0,0 @@ -import streamlit as st -import streamlit_ext as ste -import openai -from pydub import AudioSegment -# from pytube import YouTube -# import pytube -import yt_dlp -import io -from pyannote.audio import Pipeline -from pyannote.audio.pipelines.utils.hook import ProgressHook -from pyannote.database.util import load_rttm -from pyannote.core import Annotation, Segment, notebook -import time -import json -import torch -import urllib.parse as urlparse -from urllib.parse import urlencode -import os - -import unicodedata -import re - -import matplotlib -matplotlib.use('Agg') -from matplotlib import pyplot as plt - -st.set_page_config( - page_title="Speech-to-chat", - page_icon = '🌊', - layout='wide' -) - -# Set your OpenAI, Hugging Face API keys -openai.api_key = st.secrets['openai'] -hf_api_key = st.secrets['hf'] - -TRANSCRIPTION_REQUEST_LIMIT = 550 -PROMPT_REQUEST_LIMIT = 20 -DURATION_LIMIT = 3600 # seconds - -def create_audio_stream(audio): - return io.BytesIO(audio.export(format="wav").read()) - -def add_query_parameter(link, params): - url_parts = list(urlparse.urlparse(link)) - query = dict(urlparse.parse_qsl(url_parts[4])) - query.update(params) - - url_parts[4] = urlencode(query) - - return urlparse.urlunparse(url_parts) - -def slugify(value, allow_unicode=False): - """ - Taken from https://github.com/django/django/blob/master/django/utils/text.py - Convert to ASCII if 'allow_unicode' is False. Convert spaces or repeated - dashes to single dashes. Remove characters that aren't alphanumerics, - underscores, or hyphens. Convert to lowercase. Also strip leading and - trailing whitespace, dashes, and underscores. - """ - value = str(value) - if allow_unicode: - value = unicodedata.normalize('NFKC', value) - else: - value = unicodedata.normalize('NFKD', value).encode('ascii', 'ignore').decode('ascii') - value = re.sub(r'[^\w\s-]', '', value.lower()) - return re.sub(r'[-\s]+', '-', value).strip('-_') - -def youtube_video_id(value): - """ - Examples: - - http://youtu.be/SA2iWivDJiE - - http://www.youtube.com/watch?v=_oPAwA_Udwc&feature=feedu - - http://www.youtube.com/embed/SA2iWivDJiE - - http://www.youtube.com/v/SA2iWivDJiE?version=3&hl=en_US - """ - query = urlparse.urlparse(value) - if query.hostname == 'youtu.be': - return query.path[1:] - if query.hostname in ('www.youtube.com', 'youtube.com'): - if query.path == '/watch': - p = urlparse.parse_qs(query.query) - return p['v'][0] - if query.path[:7] == '/embed/': - return query.path.split('/')[2] - if query.path[:3] == '/v/': - return query.path.split('/')[2] - # fail? - return None - -@st.cache_data -def process_youtube_link2(youtube_link): - ''' - uses pytube https://github.com/pytube/pytube - issue with https://github.com/pytube/pytube/issues/84 - ''' - try: - yt = YouTube(youtube_link) - audio_stream = yt.streams.filter(only_audio=True).first() - audio_name = audio_stream.default_filename - st.write(f"Downloaded {audio_name}") - except pytube.exceptions.AgeRestrictedError: - st.warning('Age restricted videos cannot be processed.') - st.stop() - - try: - os.remove('sample.mp4') - except OSError: - pass - audio_file = audio_stream.download(filename='sample.mp4') - time.sleep(2) - audio = load_audio('sample.mp4') - st.audio(create_audio_stream(audio), format="audio/mp4", start_time=0) - return audio, audio_name - - -@st.cache_data -def process_youtube_link(youtube_link): - 'uses yt-dlp https://github.com/yt-dlp/yt-dlp' - - try: - os.remove('sample.m4a') - except OSError: - pass - - ydl_opts = { - 'format': 'm4a/bestaudio/best', - # ℹ️ See help(yt_dlp.postprocessor) for a list of available Postprocessors and their arguments - 'outtmpl': './sample.%(ext)s' - # 'postprocessors': [{ # Extract audio using ffmpeg - # 'key': 'FFmpegExtractAudio', - # 'preferredcodec': 'm4a', - # }] - } - - try: - with yt_dlp.YoutubeDL(ydl_opts) as ydl: - info = ydl.extract_info(youtube_link, download=True) - audio_name = slugify( info['title'] ) - st.write(f"Downloaded {info['title']}") - except Exception as e: - st.warning(e) - st.stop() - - - audio = load_audio(f'sample.m4a') - st.audio(create_audio_stream(audio), format="audio/m4a", start_time=0) - return audio, audio_name - -@st.cache_data -def load_rttm_file(rttm_path): - return load_rttm(rttm_path)['stream'] - - -def load_audio(uploaded_audio): - return AudioSegment.from_file(uploaded_audio) - - -if "openai_model" not in st.session_state: - st.session_state["openai_model"] = "gpt-3.5-turbo-16k" - -if "prompt_request_counter" not in st.session_state: - st.session_state["prompt_request_counter"] = 0 - -initial_prompt = [{"role": "system", "content": "You are helping to analyze and summarize a transcript of a conversation."}, - {"role": 'user', "content": 'Please summarize briefly below transcript and inlcude a list of tags with a hash for SEO. \n{}'}] -if "messages" not in st.session_state: - st.session_state.messages = initial_prompt - - -st.title("Speech-to-Chat") -reddit_thread = 'https://www.reddit.com/r/dataisbeautiful/comments/17413bq/oc_speech_diarization_app_that_transcribes_audio' - -with st.sidebar: - st.markdown(''' - # How to Use - - 1. Enter a youtube link. - 2. "Chat" with the video. - - Example prompts: - - Which speaker spoke the most? - - Give me a list of tags with a hash for SEO based on this transcript. - ''') - - api_key_input = st.text_input( - "OpenAI API Key to lift request limits (Coming soon)", - disabled=True, - type="password", - placeholder="Paste your OpenAI API key here (sk-...)", - help="You can get your API key from https://platform.openai.com/account/api-keys.", # noqa: E501 - value=os.environ.get("OPENAI_API_KEY", None) - or st.session_state.get("OPENAI_API_KEY", ""), - ) - - st.divider() - - st.markdown(f''' - # About - - Given an audio file or a youtube link this app will - - [x] 1. Partition the audio according to the identity of each speaker (diarization) using `pyannote` [HuggingFace Speaker Diarization api](https://huggingface.co/pyannote/speaker-diarization-3.0) - - [x] 2. Transcribe each audio segment using [OpenAi Whisper API](https://platform.openai.com/docs/guides/speech-to-text/quickstart) - - [x] 3. Set up an LLM chat with the transcript loaded into its knowledge database, so that a user can "talk" to the transcript of the audio file. - - This version will only process up to first 6 minutes of an audio file due to limited resources of free tier Streamlit.io/HuggingFace Spaces. - A local version with access to a GPU can process 1 hour of audio in 1 to 5 minutes. - If you would like to use this app at scale reach out directly by creating an issue on [github🤖](https://github.com/KobaKhit/speech-to-text-app/issues)! - - Rule of thumb, for this free tier hosted app it takes half the duration of the audio to complete processing, ex. g. 6 minute youtube video will take 3 minutes to diarize. - - Made by [kobakhit](https://github.com/KobaKhit/speech-to-text-app) - ''') - - -# Chat container -container_transcript_chat = st.container() - -# Source Selection -option = st.radio("Select source:", [ "Use YouTube link","See Example"], index=0) - -# Upload audio file -if option == "Upload an audio file": - with st.form('uploaded-file', clear_on_submit=True): - uploaded_audio = st.file_uploader("Upload an audio file (MP3 or WAV)", type=["mp3", "wav","mp4"]) - st.form_submit_button() - if st.form_submit_button(): st.session_state.messages = initial_prompt - with st.expander('Optional Parameters'): - # st.session_state.rttm = st.file_uploader("Upload .rttm if you already have one", type=["rttm"]) - # st.session_state.transcript_file = st.file_uploader("Upload transcipt json", type=["json"]) - youtube_link = st.text_input('Youtube link of the audio sample') - - if uploaded_audio is not None: - st.audio(uploaded_audio, format="audio/wav", start_time=0) - audio_name = uploaded_audio.name - audio = load_audio(uploaded_audio) - - # sample_rate = st.number_input("Enter the sample rate of the audio", min_value=8000, max_value=48000) - # audio = audio.set_frame_rate(sample_rate) - -# use youtube link -elif option == "Use YouTube link": - - with st.form('youtube-link'): - youtube_link_raw = st.text_input("Enter the YouTube video URL:") - youtube_link = f'https://youtu.be/{youtube_video_id(youtube_link_raw)}' - - if st.form_submit_button(): # reset variables on new link submit - process_youtube_link.clear() - st.session_state.messages = initial_prompt - st.session_state.rttm = None - st.session_state.transcript_file = None - st.session_state.prompt_request_counter = 0 - - with container_transcript_chat: - st.empty() - - # with st.expander('Optional Parameters'): - # st.session_state.rttm = st.file_uploader("Upload .rttm if you already have one", type=["rttm"]) - # st.session_state.transcript_file = st.file_uploader("Upload transcipt json", type=["json"]) - if youtube_link_raw: - audio, audio_name = process_youtube_link(youtube_link) - # sample_rate = st.number_input("Enter the sample rate of the audio", min_value=8000, max_value=48000) - # audio = audio.set_frame_rate(sample_rate) - # except Exception as e: - # st.write(f"Error: {str(e)}") -elif option == 'See Example': - youtube_link = 'https://www.youtube.com/watch?v=TamrOZX9bu8' - audio_name = 'Stephen A. Smith has JOKES with Shannon Sharpe' - st.write(f'Loaded audio file from {youtube_link} - {audio_name} 👏😂') - if os.path.isfile('example/steve a smith jokes.mp4'): - audio = load_audio('example/steve a smith jokes.mp4') - else: - yt = YouTube(youtube_link) - audio_stream = yt.streams.filter(only_audio=True).first() - audio_file = audio_stream.download(filename='sample.mp4') - time.sleep(2) - audio = load_audio('sample.mp4') - - if os.path.isfile("example/steve a smith jokes.rttm"): - st.session_state.rttm = "example/steve a smith jokes.rttm" - if os.path.isfile('example/steve a smith jokes.json'): - st.session_state.transcript_file = 'example/steve a smith jokes.json' - - st.audio(create_audio_stream(audio), format="audio/mp4", start_time=0) - -# Diarize -if "audio" in locals(): - # create stream - duration = audio.duration_seconds - if duration > DURATION_LIMIT: - st.info(f'Only processing the first {int(DURATION_LIMIT/6/6)} minutes of the audio due to Streamlit.io resource limits.') - audio = audio[:DURATION_LIMIT*1000] - duration = audio.duration_seconds - - - # Perform diarization with PyAnnote - pipeline = Pipeline.from_pretrained( - "pyannote/speaker-diarization-3.0", use_auth_token=hf_api_key) - if torch.cuda.device_count() > 0: # use gpu if available - st.write('Using cuda - GPU') - pipeline.to(torch.device('cuda')) - - # run the pipeline on an audio file - with st.spinner('Performing Diarization...'): - if 'rttm' in st.session_state and st.session_state.rttm != None: - st.write(f'Loading {st.session_state.rttm}') - diarization = load_rttm_file(st.session_state.rttm ) - else: - # with ProgressHook() as hook: - audio_ = create_audio_stream(audio) - # diarization = pipeline(audio_, hook=hook) - diarization = pipeline(audio_) - # dump the diarization output to disk using RTTM format - with open(f'{audio_name.split(".")[0]}.rttm', "w") as f: - diarization.write_rttm(f) - st.session_state.rttm = f'{audio_name.split(".")[0]}.rttm' - - # Display the diarization results - st.write("Diarization Results:") - - annotation = Annotation() - sp_chunks = [] - progress_text = f"Processing 1/{len(sp_chunks)}..." - my_bar = st.progress(0, text=progress_text) - counter = 0 - n_tracks = len([a for a in diarization.itertracks(yield_label=True)]) - for turn, _, speaker in diarization.itertracks(yield_label=True): - annotation[turn] = speaker - progress_text = f"Processing {counter}/{len(sp_chunks)}..." - my_bar.progress((counter+1)/n_tracks, text=progress_text) - counter +=1 - temp = {'speaker': speaker, - 'start': turn.start, 'end': turn.end, 'duration': turn.end-turn.start, - 'audio': audio[turn.start*1000:turn.end*1000]} - if 'transcript_file' in st.session_state and st.session_state.transcript_file == None: - temp['audio_stream'] = create_audio_stream(audio[turn.start*1000:turn.end*1000]) - sp_chunks.append(temp) - - # plot - notebook.crop = Segment(-1, duration + 1) - figure, ax = plt.subplots(figsize=(10,3)) - notebook.plot_annotation(annotation, ax=ax, time=True, legend=True) - figure.tight_layout() - # save to file - st.pyplot(figure) - - st.write('Speakers and Audio Samples') - with st.expander('Samples', expanded=True): - for speaker in set(s['speaker'] for s in sp_chunks): - temp = max(filter(lambda d: d['speaker'] == speaker, sp_chunks), key=lambda x: x['duration']) - speak_time = sum(c['duration'] for c in filter(lambda d: d['speaker'] == speaker, sp_chunks)) - rate = 100*min((speak_time, duration))/duration - speaker_summary = f"{temp['speaker']} ({round(rate)}% of video duration): start={temp['start']:.1f}s stop={temp['end']:.1f}s" - if youtube_link != None: - speaker_summary += f" {add_query_parameter(youtube_link, {'t':str(int(temp['start']))})}" - st.write(speaker_summary) - st.audio(create_audio_stream(temp['audio'])) - - st.divider() - # # Perform transcription with Whisper ASR - - - # Transcript containers - st.write(f'Transcribing using Whisper API ({TRANSCRIPTION_REQUEST_LIMIT} requests limit)...') - container_transcript_completed = st.container() - - progress_text = f"Processing 1/{len(sp_chunks[:TRANSCRIPTION_REQUEST_LIMIT])}..." - my_bar = st.progress(0, text=progress_text) - # rework the loop. Simplify if Else - with st.expander('Transcript', expanded=True): - if 'transcript_file' in st.session_state and st.session_state.transcript_file != None: - with open(st.session_state.transcript_file,'r') as f: - sp_chunks_loaded = json.load(f) - for i,s in enumerate(sp_chunks_loaded): - if s['transcript'] != None: - transcript_summary = f"**{s['speaker']}** start={float(s['start']):.1f}s end={float(s['end']):.1f}s: {s['transcript']}" - if youtube_link != None and youtube_link != '': - transcript_summary += f" {add_query_parameter(youtube_link, {'t':str(int(s['start']))})}" - - st.markdown(transcript_summary) - progress_text = f"Processing {i+1}/{len(sp_chunks_loaded)}..." - my_bar.progress((i+1)/len(sp_chunks_loaded), text=progress_text) - - transcript_json = sp_chunks_loaded - transcript_path = f'{audio_name.split(".")[0]}-transcript.json' - - else: - sp_chunks_updated = [] - for i,s in enumerate(sp_chunks[:TRANSCRIPTION_REQUEST_LIMIT]): - if s['duration'] > 0.1: - audio_path = s['audio'].export('temp.wav',format='wav') - try: - transcript = openai.Audio.transcribe("whisper-1", audio_path)['text'] - except Exception: - transcript = '' - pass - - if transcript !='' and transcript != None: - s['transcript'] = transcript - transcript_summary = f"**{s['speaker']}** start={s['start']:.1f}s end={s['end']:.1f}s : {s['transcript']}" - if youtube_link != None: - transcript_summary += f" {add_query_parameter(youtube_link, {'t':str(int(s['start']))})}" - - sp_chunks_updated.append({'speaker':s['speaker'], - 'start':s['start'], 'end':s['end'], - 'duration': s['duration'],'transcript': transcript}) - st.markdown(transcript_summary) - - progress_text = f"Processing {i+1}/{len(sp_chunks[:TRANSCRIPTION_REQUEST_LIMIT])}..." - my_bar.progress((i+1)/len(sp_chunks[:TRANSCRIPTION_REQUEST_LIMIT]), text=progress_text) - - - transcript_json = [dict((k, d[k]) for k in ['speaker','start','end','duration','transcript'] if k in d) for d in sp_chunks_updated] - transcript_path = f'{audio_name.split(".")[0]}-transcript.json' - st.session_state.transcript_file = transcript_path - - # save the trancript file - with open(transcript_path,'w') as f: - json.dump(transcript_json, f) - - # generate transcript string - transcript_string = '\n'.join([f"{s['speaker']} start={s['start']:.1f}s end={s['end']:.1f}s : {s['transcript']}" for s in transcript_json]) - - @st.cache_data - def get_initial_response(transcript_string): - st.session_state.messages[1]['content'] = st.session_state.messages[1]['content'].format(transcript_string) - initial_response = openai.ChatCompletion.create( - model=st.session_state["openai_model"], - messages=st.session_state.messages - ) - return initial_response['choices'][0]['message']['content'] - - # Chat container - st.session_state.messages[1]['content'] = st.session_state.messages[1]['content'].format(transcript_string) - with container_transcript_chat: - # get a summary of transcript from ChatGpt - try: - init = get_initial_response(transcript_string) - except openai.error.APIError: - # st.stop('It is not you. It is not this app. It is OpenAI API thats having issues.') - init = '' - st.warning('OpenAI API is having issues. Hope they resolve it soon. Refer to https://status.openai.com/') - # pass transcript to initial prompt - - - # LLM Chat - with st.expander('Summary of the Transcribed Audio File Generated by ChatGPT', expanded = True): - # display the AI generated summary. - with st.chat_message("assistant", avatar='https://upload.wikimedia.org/wikipedia/commons/0/04/ChatGPT_logo.svg'): - st.write(init) - - # chat field - with st.form("Chat",clear_on_submit=True): - prompt = st.text_input(f'Chat with the Transcript ({int(PROMPT_REQUEST_LIMIT)} prompts limit)') - st.form_submit_button() - - # message list - # for message in st.session_state.messages[2:]: - # with st.chat_message(message["role"]): - # st.markdown(message["content"]) - - # make request if prompt was entered - if prompt: - st.session_state.prompt_request_counter += 1 - if st.session_state.prompt_request_counter > PROMPT_REQUEST_LIMIT: - st.warning('Exceeded prompt limit.'); - st.stop() - # append user prompt to messages - st.session_state.messages.append({"role": "user", "content": prompt}) - - # dislay user prompt - with st.chat_message("user"): - st.markdown(prompt) - - # stream LLM Assisstant response - with st.chat_message("assistant"): - message_placeholder = st.empty() - full_response = "" - - # stream response - for response in openai.ChatCompletion.create( - model=st.session_state["openai_model"], - messages=[ - {"role": m["role"], "content": m["content"]} - for m in st.session_state.messages - ], - stream=True, - ): - full_response += response.choices[0].delta.get("content", "") - message_placeholder.markdown(full_response + "▌") - message_placeholder.markdown(full_response) - - # append ai response to messages - st.session_state.messages.append({"role": "assistant", "content": full_response}) - - # Trancription Completed Section - with container_transcript_completed: - st.info(f'Completed transcribing') - - @st.cache_data - def convert_df(string): - # IMPORTANT: Cache the conversion to prevent computation on every rerun - return string.encode('utf-8') - # encode transcript string - transcript_json_download = convert_df(json.dumps(transcript_json)) - # transcript download buttons - c1_b,c2_b = st.columns((1,2)) - - # json button - with c1_b: - ste.download_button( - "Download transcript as json", - transcript_json_download, - transcript_path, - ) - - # create csv string - header = ','.join(transcript_json[0].keys()) + '\n' - for s in transcript_json: - header += ','.join([str(e) if ',' not in str(e) else '"' + str(e) + '"' for e in s.values()]) + '\n' - - # csv button - transcript_csv_download = convert_df(header) - with c2_b: - ste.download_button( - "Download transcript as csv", - transcript_csv_download, - f'{audio_name.split(".")[0]}-transcript.csv' - ) - \ No newline at end of file diff --git a/spaces/kokofixcomputers/chat-ui/src/routes/logout/+page.server.ts b/spaces/kokofixcomputers/chat-ui/src/routes/logout/+page.server.ts deleted file mode 100644 index 1d60b6c5d8df28981da4d06d5ea58eeeaf838b47..0000000000000000000000000000000000000000 --- a/spaces/kokofixcomputers/chat-ui/src/routes/logout/+page.server.ts +++ /dev/null @@ -1,17 +0,0 @@ -import { dev } from "$app/environment"; -import { base } from "$app/paths"; -import { COOKIE_NAME } from "$env/static/private"; -import { redirect } from "@sveltejs/kit"; - -export const actions = { - default: async function ({ cookies }) { - cookies.delete(COOKIE_NAME, { - path: "/", - // So that it works inside the space's iframe - sameSite: dev ? "lax" : "none", - secure: !dev, - httpOnly: true, - }); - throw redirect(303, `${base}/`); - }, -}; diff --git a/spaces/konverner/deep-voice-cloning/src/deep_voice_cloning/transcriber/__init__.py b/spaces/konverner/deep-voice-cloning/src/deep_voice_cloning/transcriber/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/kquote03/lama-video-watermark-remover/bin/analyze_errors.py b/spaces/kquote03/lama-video-watermark-remover/bin/analyze_errors.py deleted file mode 100644 index a11f9478de76ede162f5511449ac98e549ff4b6e..0000000000000000000000000000000000000000 --- a/spaces/kquote03/lama-video-watermark-remover/bin/analyze_errors.py +++ /dev/null @@ -1,316 +0,0 @@ -#!/usr/bin/env python3 -import cv2 -import numpy as np -import sklearn -import torch -import os -import pickle -import pandas as pd -import matplotlib.pyplot as plt -from joblib import Parallel, delayed - -from saicinpainting.evaluation.data import PrecomputedInpaintingResultsDataset, load_image -from saicinpainting.evaluation.losses.fid.inception import InceptionV3 -from saicinpainting.evaluation.utils import load_yaml -from saicinpainting.training.visualizers.base import visualize_mask_and_images - - -def draw_score(img, score): - img = np.transpose(img, (1, 2, 0)) - cv2.putText(img, f'{score:.2f}', - (40, 40), - cv2.FONT_HERSHEY_SIMPLEX, - 1, - (0, 1, 0), - thickness=3) - img = np.transpose(img, (2, 0, 1)) - return img - - -def save_global_samples(global_mask_fnames, mask2real_fname, mask2fake_fname, out_dir, real_scores_by_fname, fake_scores_by_fname): - for cur_mask_fname in global_mask_fnames: - cur_real_fname = mask2real_fname[cur_mask_fname] - orig_img = load_image(cur_real_fname, mode='RGB') - fake_img = load_image(mask2fake_fname[cur_mask_fname], mode='RGB')[:, :orig_img.shape[1], :orig_img.shape[2]] - mask = load_image(cur_mask_fname, mode='L')[None, ...] - - draw_score(orig_img, real_scores_by_fname.loc[cur_real_fname, 'real_score']) - draw_score(fake_img, fake_scores_by_fname.loc[cur_mask_fname, 'fake_score']) - - cur_grid = visualize_mask_and_images(dict(image=orig_img, mask=mask, fake=fake_img), - keys=['image', 'fake'], - last_without_mask=True) - cur_grid = np.clip(cur_grid * 255, 0, 255).astype('uint8') - cur_grid = cv2.cvtColor(cur_grid, cv2.COLOR_RGB2BGR) - cv2.imwrite(os.path.join(out_dir, os.path.splitext(os.path.basename(cur_mask_fname))[0] + '.jpg'), - cur_grid) - - -def save_samples_by_real(worst_best_by_real, mask2fake_fname, fake_info, out_dir): - for real_fname in worst_best_by_real.index: - worst_mask_path = worst_best_by_real.loc[real_fname, 'worst'] - best_mask_path = worst_best_by_real.loc[real_fname, 'best'] - orig_img = load_image(real_fname, mode='RGB') - worst_mask_img = load_image(worst_mask_path, mode='L')[None, ...] - worst_fake_img = load_image(mask2fake_fname[worst_mask_path], mode='RGB')[:, :orig_img.shape[1], :orig_img.shape[2]] - best_mask_img = load_image(best_mask_path, mode='L')[None, ...] - best_fake_img = load_image(mask2fake_fname[best_mask_path], mode='RGB')[:, :orig_img.shape[1], :orig_img.shape[2]] - - draw_score(orig_img, worst_best_by_real.loc[real_fname, 'real_score']) - draw_score(worst_fake_img, worst_best_by_real.loc[real_fname, 'worst_score']) - draw_score(best_fake_img, worst_best_by_real.loc[real_fname, 'best_score']) - - cur_grid = visualize_mask_and_images(dict(image=orig_img, mask=np.zeros_like(worst_mask_img), - worst_mask=worst_mask_img, worst_img=worst_fake_img, - best_mask=best_mask_img, best_img=best_fake_img), - keys=['image', 'worst_mask', 'worst_img', 'best_mask', 'best_img'], - rescale_keys=['worst_mask', 'best_mask'], - last_without_mask=True) - cur_grid = np.clip(cur_grid * 255, 0, 255).astype('uint8') - cur_grid = cv2.cvtColor(cur_grid, cv2.COLOR_RGB2BGR) - cv2.imwrite(os.path.join(out_dir, - os.path.splitext(os.path.basename(real_fname))[0] + '.jpg'), - cur_grid) - - fig, (ax1, ax2) = plt.subplots(1, 2) - cur_stat = fake_info[fake_info['real_fname'] == real_fname] - cur_stat['fake_score'].hist(ax=ax1) - cur_stat['real_score'].hist(ax=ax2) - fig.tight_layout() - fig.savefig(os.path.join(out_dir, - os.path.splitext(os.path.basename(real_fname))[0] + '_scores.png')) - plt.close(fig) - - -def extract_overlapping_masks(mask_fnames, cur_i, fake_scores_table, max_overlaps_n=2): - result_pairs = [] - result_scores = [] - mask_fname_a = mask_fnames[cur_i] - mask_a = load_image(mask_fname_a, mode='L')[None, ...] > 0.5 - cur_score_a = fake_scores_table.loc[mask_fname_a, 'fake_score'] - for mask_fname_b in mask_fnames[cur_i + 1:]: - mask_b = load_image(mask_fname_b, mode='L')[None, ...] > 0.5 - if not np.any(mask_a & mask_b): - continue - cur_score_b = fake_scores_table.loc[mask_fname_b, 'fake_score'] - result_pairs.append((mask_fname_a, mask_fname_b)) - result_scores.append(cur_score_b - cur_score_a) - if len(result_pairs) >= max_overlaps_n: - break - return result_pairs, result_scores - - -def main(args): - config = load_yaml(args.config) - - latents_dir = os.path.join(args.outpath, 'latents') - os.makedirs(latents_dir, exist_ok=True) - global_worst_dir = os.path.join(args.outpath, 'global_worst') - os.makedirs(global_worst_dir, exist_ok=True) - global_best_dir = os.path.join(args.outpath, 'global_best') - os.makedirs(global_best_dir, exist_ok=True) - worst_best_by_best_worst_score_diff_max_dir = os.path.join(args.outpath, 'worst_best_by_real', 'best_worst_score_diff_max') - os.makedirs(worst_best_by_best_worst_score_diff_max_dir, exist_ok=True) - worst_best_by_best_worst_score_diff_min_dir = os.path.join(args.outpath, 'worst_best_by_real', 'best_worst_score_diff_min') - os.makedirs(worst_best_by_best_worst_score_diff_min_dir, exist_ok=True) - worst_best_by_real_best_score_diff_max_dir = os.path.join(args.outpath, 'worst_best_by_real', 'real_best_score_diff_max') - os.makedirs(worst_best_by_real_best_score_diff_max_dir, exist_ok=True) - worst_best_by_real_best_score_diff_min_dir = os.path.join(args.outpath, 'worst_best_by_real', 'real_best_score_diff_min') - os.makedirs(worst_best_by_real_best_score_diff_min_dir, exist_ok=True) - worst_best_by_real_worst_score_diff_max_dir = os.path.join(args.outpath, 'worst_best_by_real', 'real_worst_score_diff_max') - os.makedirs(worst_best_by_real_worst_score_diff_max_dir, exist_ok=True) - worst_best_by_real_worst_score_diff_min_dir = os.path.join(args.outpath, 'worst_best_by_real', 'real_worst_score_diff_min') - os.makedirs(worst_best_by_real_worst_score_diff_min_dir, exist_ok=True) - - if not args.only_report: - block_idx = InceptionV3.BLOCK_INDEX_BY_DIM[2048] - inception_model = InceptionV3([block_idx]).eval().cuda() - - dataset = PrecomputedInpaintingResultsDataset(args.datadir, args.predictdir, **config.dataset_kwargs) - - real2vector_cache = {} - - real_features = [] - fake_features = [] - - orig_fnames = [] - mask_fnames = [] - mask2real_fname = {} - mask2fake_fname = {} - - for batch_i, batch in enumerate(dataset): - orig_img_fname = dataset.img_filenames[batch_i] - mask_fname = dataset.mask_filenames[batch_i] - fake_fname = dataset.pred_filenames[batch_i] - mask2real_fname[mask_fname] = orig_img_fname - mask2fake_fname[mask_fname] = fake_fname - - cur_real_vector = real2vector_cache.get(orig_img_fname, None) - if cur_real_vector is None: - with torch.no_grad(): - in_img = torch.from_numpy(batch['image'][None, ...]).cuda() - cur_real_vector = inception_model(in_img)[0].squeeze(-1).squeeze(-1).cpu().numpy() - real2vector_cache[orig_img_fname] = cur_real_vector - - pred_img = torch.from_numpy(batch['inpainted'][None, ...]).cuda() - cur_fake_vector = inception_model(pred_img)[0].squeeze(-1).squeeze(-1).cpu().numpy() - - real_features.append(cur_real_vector) - fake_features.append(cur_fake_vector) - - orig_fnames.append(orig_img_fname) - mask_fnames.append(mask_fname) - - ids_features = np.concatenate(real_features + fake_features, axis=0) - ids_labels = np.array(([1] * len(real_features)) + ([0] * len(fake_features))) - - with open(os.path.join(latents_dir, 'featues.pkl'), 'wb') as f: - pickle.dump(ids_features, f, protocol=3) - with open(os.path.join(latents_dir, 'labels.pkl'), 'wb') as f: - pickle.dump(ids_labels, f, protocol=3) - with open(os.path.join(latents_dir, 'orig_fnames.pkl'), 'wb') as f: - pickle.dump(orig_fnames, f, protocol=3) - with open(os.path.join(latents_dir, 'mask_fnames.pkl'), 'wb') as f: - pickle.dump(mask_fnames, f, protocol=3) - with open(os.path.join(latents_dir, 'mask2real_fname.pkl'), 'wb') as f: - pickle.dump(mask2real_fname, f, protocol=3) - with open(os.path.join(latents_dir, 'mask2fake_fname.pkl'), 'wb') as f: - pickle.dump(mask2fake_fname, f, protocol=3) - - svm = sklearn.svm.LinearSVC(dual=False) - svm.fit(ids_features, ids_labels) - - pred_scores = svm.decision_function(ids_features) - real_scores = pred_scores[:len(real_features)] - fake_scores = pred_scores[len(real_features):] - - with open(os.path.join(latents_dir, 'pred_scores.pkl'), 'wb') as f: - pickle.dump(pred_scores, f, protocol=3) - with open(os.path.join(latents_dir, 'real_scores.pkl'), 'wb') as f: - pickle.dump(real_scores, f, protocol=3) - with open(os.path.join(latents_dir, 'fake_scores.pkl'), 'wb') as f: - pickle.dump(fake_scores, f, protocol=3) - else: - with open(os.path.join(latents_dir, 'orig_fnames.pkl'), 'rb') as f: - orig_fnames = pickle.load(f) - with open(os.path.join(latents_dir, 'mask_fnames.pkl'), 'rb') as f: - mask_fnames = pickle.load(f) - with open(os.path.join(latents_dir, 'mask2real_fname.pkl'), 'rb') as f: - mask2real_fname = pickle.load(f) - with open(os.path.join(latents_dir, 'mask2fake_fname.pkl'), 'rb') as f: - mask2fake_fname = pickle.load(f) - with open(os.path.join(latents_dir, 'real_scores.pkl'), 'rb') as f: - real_scores = pickle.load(f) - with open(os.path.join(latents_dir, 'fake_scores.pkl'), 'rb') as f: - fake_scores = pickle.load(f) - - real_info = pd.DataFrame(data=[dict(real_fname=fname, - real_score=score) - for fname, score - in zip(orig_fnames, real_scores)]) - real_info.set_index('real_fname', drop=True, inplace=True) - - fake_info = pd.DataFrame(data=[dict(mask_fname=fname, - fake_fname=mask2fake_fname[fname], - real_fname=mask2real_fname[fname], - fake_score=score) - for fname, score - in zip(mask_fnames, fake_scores)]) - fake_info = fake_info.join(real_info, on='real_fname', how='left') - fake_info.drop_duplicates(['fake_fname', 'real_fname'], inplace=True) - - fake_stats_by_real = fake_info.groupby('real_fname')['fake_score'].describe()[['mean', 'std']].rename( - {'mean': 'mean_fake_by_real', 'std': 'std_fake_by_real'}, axis=1) - fake_info = fake_info.join(fake_stats_by_real, on='real_fname', rsuffix='stat_by_real') - fake_info.drop_duplicates(['fake_fname', 'real_fname'], inplace=True) - fake_info.to_csv(os.path.join(latents_dir, 'join_scores_table.csv'), sep='\t', index=False) - - fake_scores_table = fake_info.set_index('mask_fname')['fake_score'].to_frame() - real_scores_table = fake_info.set_index('real_fname')['real_score'].drop_duplicates().to_frame() - - fig, (ax1, ax2) = plt.subplots(1, 2) - ax1.hist(fake_scores) - ax2.hist(real_scores) - fig.tight_layout() - fig.savefig(os.path.join(args.outpath, 'global_scores_hist.png')) - plt.close(fig) - - global_worst_masks = fake_info.sort_values('fake_score', ascending=True)['mask_fname'].iloc[:config.take_global_top].to_list() - global_best_masks = fake_info.sort_values('fake_score', ascending=False)['mask_fname'].iloc[:config.take_global_top].to_list() - save_global_samples(global_worst_masks, mask2real_fname, mask2fake_fname, global_worst_dir, real_scores_table, fake_scores_table) - save_global_samples(global_best_masks, mask2real_fname, mask2fake_fname, global_best_dir, real_scores_table, fake_scores_table) - - # grouped by real - worst_samples_by_real = fake_info.groupby('real_fname').apply( - lambda d: d.set_index('mask_fname')['fake_score'].idxmin()).to_frame().rename({0: 'worst'}, axis=1) - best_samples_by_real = fake_info.groupby('real_fname').apply( - lambda d: d.set_index('mask_fname')['fake_score'].idxmax()).to_frame().rename({0: 'best'}, axis=1) - worst_best_by_real = pd.concat([worst_samples_by_real, best_samples_by_real], axis=1) - - worst_best_by_real = worst_best_by_real.join(fake_scores_table.rename({'fake_score': 'worst_score'}, axis=1), - on='worst') - worst_best_by_real = worst_best_by_real.join(fake_scores_table.rename({'fake_score': 'best_score'}, axis=1), - on='best') - worst_best_by_real = worst_best_by_real.join(real_scores_table) - - worst_best_by_real['best_worst_score_diff'] = worst_best_by_real['best_score'] - worst_best_by_real['worst_score'] - worst_best_by_real['real_best_score_diff'] = worst_best_by_real['real_score'] - worst_best_by_real['best_score'] - worst_best_by_real['real_worst_score_diff'] = worst_best_by_real['real_score'] - worst_best_by_real['worst_score'] - - worst_best_by_best_worst_score_diff_min = worst_best_by_real.sort_values('best_worst_score_diff', ascending=True).iloc[:config.take_worst_best_top] - worst_best_by_best_worst_score_diff_max = worst_best_by_real.sort_values('best_worst_score_diff', ascending=False).iloc[:config.take_worst_best_top] - save_samples_by_real(worst_best_by_best_worst_score_diff_min, mask2fake_fname, fake_info, worst_best_by_best_worst_score_diff_min_dir) - save_samples_by_real(worst_best_by_best_worst_score_diff_max, mask2fake_fname, fake_info, worst_best_by_best_worst_score_diff_max_dir) - - worst_best_by_real_best_score_diff_min = worst_best_by_real.sort_values('real_best_score_diff', ascending=True).iloc[:config.take_worst_best_top] - worst_best_by_real_best_score_diff_max = worst_best_by_real.sort_values('real_best_score_diff', ascending=False).iloc[:config.take_worst_best_top] - save_samples_by_real(worst_best_by_real_best_score_diff_min, mask2fake_fname, fake_info, worst_best_by_real_best_score_diff_min_dir) - save_samples_by_real(worst_best_by_real_best_score_diff_max, mask2fake_fname, fake_info, worst_best_by_real_best_score_diff_max_dir) - - worst_best_by_real_worst_score_diff_min = worst_best_by_real.sort_values('real_worst_score_diff', ascending=True).iloc[:config.take_worst_best_top] - worst_best_by_real_worst_score_diff_max = worst_best_by_real.sort_values('real_worst_score_diff', ascending=False).iloc[:config.take_worst_best_top] - save_samples_by_real(worst_best_by_real_worst_score_diff_min, mask2fake_fname, fake_info, worst_best_by_real_worst_score_diff_min_dir) - save_samples_by_real(worst_best_by_real_worst_score_diff_max, mask2fake_fname, fake_info, worst_best_by_real_worst_score_diff_max_dir) - - # analyze what change of mask causes bigger change of score - overlapping_mask_fname_pairs = [] - overlapping_mask_fname_score_diffs = [] - for cur_real_fname in orig_fnames: - cur_fakes_info = fake_info[fake_info['real_fname'] == cur_real_fname] - cur_mask_fnames = sorted(cur_fakes_info['mask_fname'].unique()) - - cur_mask_pairs_and_scores = Parallel(args.n_jobs)( - delayed(extract_overlapping_masks)(cur_mask_fnames, i, fake_scores_table) - for i in range(len(cur_mask_fnames) - 1) - ) - for cur_pairs, cur_scores in cur_mask_pairs_and_scores: - overlapping_mask_fname_pairs.extend(cur_pairs) - overlapping_mask_fname_score_diffs.extend(cur_scores) - - overlapping_mask_fname_pairs = np.asarray(overlapping_mask_fname_pairs) - overlapping_mask_fname_score_diffs = np.asarray(overlapping_mask_fname_score_diffs) - overlapping_sort_idx = np.argsort(overlapping_mask_fname_score_diffs) - overlapping_mask_fname_pairs = overlapping_mask_fname_pairs[overlapping_sort_idx] - overlapping_mask_fname_score_diffs = overlapping_mask_fname_score_diffs[overlapping_sort_idx] - - - - - - -if __name__ == '__main__': - import argparse - - aparser = argparse.ArgumentParser() - aparser.add_argument('config', type=str, help='Path to config for dataset generation') - aparser.add_argument('datadir', type=str, - help='Path to folder with images and masks (output of gen_mask_dataset.py)') - aparser.add_argument('predictdir', type=str, - help='Path to folder with predicts (e.g. predict_hifill_baseline.py)') - aparser.add_argument('outpath', type=str, help='Where to put results') - aparser.add_argument('--only-report', action='store_true', - help='Whether to skip prediction and feature extraction, ' - 'load all the possible latents and proceed with report only') - aparser.add_argument('--n-jobs', type=int, default=8, help='how many processes to use for pair mask mining') - - main(aparser.parse_args()) diff --git a/spaces/kukuhtw/AutoGPT/autogpt/memory/local.py b/spaces/kukuhtw/AutoGPT/autogpt/memory/local.py deleted file mode 100644 index 803b6dc6ebb430285f423cda592fa3e902e9a4a6..0000000000000000000000000000000000000000 --- a/spaces/kukuhtw/AutoGPT/autogpt/memory/local.py +++ /dev/null @@ -1,136 +0,0 @@ -from __future__ import annotations - -import dataclasses -import os -from typing import Any, List - -import numpy as np -import orjson - -from autogpt.llm_utils import create_embedding_with_ada -from autogpt.memory.base import MemoryProviderSingleton - -EMBED_DIM = 1536 -SAVE_OPTIONS = orjson.OPT_SERIALIZE_NUMPY | orjson.OPT_SERIALIZE_DATACLASS - - -def create_default_embeddings(): - return np.zeros((0, EMBED_DIM)).astype(np.float32) - - -@dataclasses.dataclass -class CacheContent: - texts: List[str] = dataclasses.field(default_factory=list) - embeddings: np.ndarray = dataclasses.field( - default_factory=create_default_embeddings - ) - - -class LocalCache(MemoryProviderSingleton): - """A class that stores the memory in a local file""" - - def __init__(self, cfg) -> None: - """Initialize a class instance - - Args: - cfg: Config object - - Returns: - None - """ - self.filename = f"{cfg.memory_index}.json" - if os.path.exists(self.filename): - try: - with open(self.filename, "w+b") as f: - file_content = f.read() - if not file_content.strip(): - file_content = b"{}" - f.write(file_content) - - loaded = orjson.loads(file_content) - self.data = CacheContent(**loaded) - except orjson.JSONDecodeError: - print(f"Error: The file '{self.filename}' is not in JSON format.") - self.data = CacheContent() - else: - print( - f"Warning: The file '{self.filename}' does not exist. " - "Local memory would not be saved to a file." - ) - self.data = CacheContent() - - def add(self, text: str): - """ - Add text to our list of texts, add embedding as row to our - embeddings-matrix - - Args: - text: str - - Returns: None - """ - if "Command Error:" in text: - return "" - self.data.texts.append(text) - - embedding = create_embedding_with_ada(text) - - vector = np.array(embedding).astype(np.float32) - vector = vector[np.newaxis, :] - self.data.embeddings = np.concatenate( - [ - self.data.embeddings, - vector, - ], - axis=0, - ) - - with open(self.filename, "wb") as f: - out = orjson.dumps(self.data, option=SAVE_OPTIONS) - f.write(out) - return text - - def clear(self) -> str: - """ - Clears the redis server. - - Returns: A message indicating that the memory has been cleared. - """ - self.data = CacheContent() - return "Obliviated" - - def get(self, data: str) -> list[Any] | None: - """ - Gets the data from the memory that is most relevant to the given data. - - Args: - data: The data to compare to. - - Returns: The most relevant data. - """ - return self.get_relevant(data, 1) - - def get_relevant(self, text: str, k: int) -> list[Any]: - """ " - matrix-vector mult to find score-for-each-row-of-matrix - get indices for top-k winning scores - return texts for those indices - Args: - text: str - k: int - - Returns: List[str] - """ - embedding = create_embedding_with_ada(text) - - scores = np.dot(self.data.embeddings, embedding) - - top_k_indices = np.argsort(scores)[-k:][::-1] - - return [self.data.texts[i] for i in top_k_indices] - - def get_stats(self) -> tuple[int, tuple[int, ...]]: - """ - Returns: The stats of the local cache. - """ - return len(self.data.texts), self.data.embeddings.shape diff --git a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/fontTools/misc/plistlib/__init__.py b/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/fontTools/misc/plistlib/__init__.py deleted file mode 100644 index 066eef38fc720265366afee9a8cd415fc560459e..0000000000000000000000000000000000000000 --- a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/fontTools/misc/plistlib/__init__.py +++ /dev/null @@ -1,681 +0,0 @@ -import collections.abc -import re -from typing import ( - Any, - Callable, - Dict, - List, - Mapping, - MutableMapping, - Optional, - Sequence, - Type, - Union, - IO, -) -import warnings -from io import BytesIO -from datetime import datetime -from base64 import b64encode, b64decode -from numbers import Integral -from types import SimpleNamespace -from functools import singledispatch - -from fontTools.misc import etree - -from fontTools.misc.textTools import tostr - - -# By default, we -# - deserialize elements as bytes and -# - serialize bytes as elements. -# Before, on Python 2, we -# - deserialized elements as plistlib.Data objects, in order to -# distinguish them from the built-in str type (which is bytes on python2) -# - serialized bytes as elements (they must have only contained -# ASCII characters in this case) -# You can pass use_builtin_types=[True|False] to the load/dump etc. functions -# to enforce a specific treatment. -# NOTE that unicode type always maps to element, and plistlib.Data -# always maps to element, regardless of use_builtin_types. -USE_BUILTIN_TYPES = True - -XML_DECLARATION = b"""""" - -PLIST_DOCTYPE = ( - b'' -) - - -# Date should conform to a subset of ISO 8601: -# YYYY '-' MM '-' DD 'T' HH ':' MM ':' SS 'Z' -_date_parser = re.compile( - r"(?P\d\d\d\d)" - r"(?:-(?P\d\d)" - r"(?:-(?P\d\d)" - r"(?:T(?P\d\d)" - r"(?::(?P\d\d)" - r"(?::(?P\d\d))" - r"?)?)?)?)?Z", - re.ASCII, -) - - -def _date_from_string(s: str) -> datetime: - order = ("year", "month", "day", "hour", "minute", "second") - m = _date_parser.match(s) - if m is None: - raise ValueError(f"Expected ISO 8601 date string, but got '{s:r}'.") - gd = m.groupdict() - lst = [] - for key in order: - val = gd[key] - if val is None: - break - lst.append(int(val)) - # NOTE: mypy doesn't know that lst is 6 elements long. - return datetime(*lst) # type:ignore - - -def _date_to_string(d: datetime) -> str: - return "%04d-%02d-%02dT%02d:%02d:%02dZ" % ( - d.year, - d.month, - d.day, - d.hour, - d.minute, - d.second, - ) - - -class Data: - """Represents binary data when ``use_builtin_types=False.`` - - This class wraps binary data loaded from a plist file when the - ``use_builtin_types`` argument to the loading function (:py:func:`fromtree`, - :py:func:`load`, :py:func:`loads`) is false. - - The actual binary data is retrieved using the ``data`` attribute. - """ - - def __init__(self, data: bytes) -> None: - if not isinstance(data, bytes): - raise TypeError("Expected bytes, found %s" % type(data).__name__) - self.data = data - - @classmethod - def fromBase64(cls, data: Union[bytes, str]) -> "Data": - return cls(b64decode(data)) - - def asBase64(self, maxlinelength: int = 76, indent_level: int = 1) -> bytes: - return _encode_base64( - self.data, maxlinelength=maxlinelength, indent_level=indent_level - ) - - def __eq__(self, other: Any) -> bool: - if isinstance(other, self.__class__): - return self.data == other.data - elif isinstance(other, bytes): - return self.data == other - else: - return NotImplemented - - def __repr__(self) -> str: - return "%s(%s)" % (self.__class__.__name__, repr(self.data)) - - -def _encode_base64( - data: bytes, maxlinelength: Optional[int] = 76, indent_level: int = 1 -) -> bytes: - data = b64encode(data) - if data and maxlinelength: - # split into multiple lines right-justified to 'maxlinelength' chars - indent = b"\n" + b" " * indent_level - max_length = max(16, maxlinelength - len(indent)) - chunks = [] - for i in range(0, len(data), max_length): - chunks.append(indent) - chunks.append(data[i : i + max_length]) - chunks.append(indent) - data = b"".join(chunks) - return data - - -# Mypy does not support recursive type aliases as of 0.782, Pylance does. -# https://github.com/python/mypy/issues/731 -# https://devblogs.microsoft.com/python/pylance-introduces-five-new-features-that-enable-type-magic-for-python-developers/#1-support-for-recursive-type-aliases -PlistEncodable = Union[ - bool, - bytes, - Data, - datetime, - float, - Integral, - Mapping[str, Any], - Sequence[Any], - str, -] - - -class PlistTarget: - """Event handler using the ElementTree Target API that can be - passed to a XMLParser to produce property list objects from XML. - It is based on the CPython plistlib module's _PlistParser class, - but does not use the expat parser. - - >>> from fontTools.misc import etree - >>> parser = etree.XMLParser(target=PlistTarget()) - >>> result = etree.XML( - ... "" - ... " something" - ... " blah" - ... "", - ... parser=parser) - >>> result == {"something": "blah"} - True - - Links: - https://github.com/python/cpython/blob/main/Lib/plistlib.py - http://lxml.de/parsing.html#the-target-parser-interface - """ - - def __init__( - self, - use_builtin_types: Optional[bool] = None, - dict_type: Type[MutableMapping[str, Any]] = dict, - ) -> None: - self.stack: List[PlistEncodable] = [] - self.current_key: Optional[str] = None - self.root: Optional[PlistEncodable] = None - if use_builtin_types is None: - self._use_builtin_types = USE_BUILTIN_TYPES - else: - if use_builtin_types is False: - warnings.warn( - "Setting use_builtin_types to False is deprecated and will be " - "removed soon.", - DeprecationWarning, - ) - self._use_builtin_types = use_builtin_types - self._dict_type = dict_type - - def start(self, tag: str, attrib: Mapping[str, str]) -> None: - self._data: List[str] = [] - handler = _TARGET_START_HANDLERS.get(tag) - if handler is not None: - handler(self) - - def end(self, tag: str) -> None: - handler = _TARGET_END_HANDLERS.get(tag) - if handler is not None: - handler(self) - - def data(self, data: str) -> None: - self._data.append(data) - - def close(self) -> PlistEncodable: - if self.root is None: - raise ValueError("No root set.") - return self.root - - # helpers - - def add_object(self, value: PlistEncodable) -> None: - if self.current_key is not None: - stack_top = self.stack[-1] - if not isinstance(stack_top, collections.abc.MutableMapping): - raise ValueError("unexpected element: %r" % stack_top) - stack_top[self.current_key] = value - self.current_key = None - elif not self.stack: - # this is the root object - self.root = value - else: - stack_top = self.stack[-1] - if not isinstance(stack_top, list): - raise ValueError("unexpected element: %r" % stack_top) - stack_top.append(value) - - def get_data(self) -> str: - data = "".join(self._data) - self._data = [] - return data - - -# event handlers - - -def start_dict(self: PlistTarget) -> None: - d = self._dict_type() - self.add_object(d) - self.stack.append(d) - - -def end_dict(self: PlistTarget) -> None: - if self.current_key: - raise ValueError("missing value for key '%s'" % self.current_key) - self.stack.pop() - - -def end_key(self: PlistTarget) -> None: - if self.current_key or not isinstance(self.stack[-1], collections.abc.Mapping): - raise ValueError("unexpected key") - self.current_key = self.get_data() - - -def start_array(self: PlistTarget) -> None: - a: List[PlistEncodable] = [] - self.add_object(a) - self.stack.append(a) - - -def end_array(self: PlistTarget) -> None: - self.stack.pop() - - -def end_true(self: PlistTarget) -> None: - self.add_object(True) - - -def end_false(self: PlistTarget) -> None: - self.add_object(False) - - -def end_integer(self: PlistTarget) -> None: - self.add_object(int(self.get_data())) - - -def end_real(self: PlistTarget) -> None: - self.add_object(float(self.get_data())) - - -def end_string(self: PlistTarget) -> None: - self.add_object(self.get_data()) - - -def end_data(self: PlistTarget) -> None: - if self._use_builtin_types: - self.add_object(b64decode(self.get_data())) - else: - self.add_object(Data.fromBase64(self.get_data())) - - -def end_date(self: PlistTarget) -> None: - self.add_object(_date_from_string(self.get_data())) - - -_TARGET_START_HANDLERS: Dict[str, Callable[[PlistTarget], None]] = { - "dict": start_dict, - "array": start_array, -} - -_TARGET_END_HANDLERS: Dict[str, Callable[[PlistTarget], None]] = { - "dict": end_dict, - "array": end_array, - "key": end_key, - "true": end_true, - "false": end_false, - "integer": end_integer, - "real": end_real, - "string": end_string, - "data": end_data, - "date": end_date, -} - - -# functions to build element tree from plist data - - -def _string_element(value: str, ctx: SimpleNamespace) -> etree.Element: - el = etree.Element("string") - el.text = value - return el - - -def _bool_element(value: bool, ctx: SimpleNamespace) -> etree.Element: - if value: - return etree.Element("true") - return etree.Element("false") - - -def _integer_element(value: int, ctx: SimpleNamespace) -> etree.Element: - if -1 << 63 <= value < 1 << 64: - el = etree.Element("integer") - el.text = "%d" % value - return el - raise OverflowError(value) - - -def _real_element(value: float, ctx: SimpleNamespace) -> etree.Element: - el = etree.Element("real") - el.text = repr(value) - return el - - -def _dict_element( - d: Mapping[str, PlistEncodable], ctx: SimpleNamespace -) -> etree.Element: - el = etree.Element("dict") - items = d.items() - if ctx.sort_keys: - items = sorted(items) # type: ignore - ctx.indent_level += 1 - for key, value in items: - if not isinstance(key, str): - if ctx.skipkeys: - continue - raise TypeError("keys must be strings") - k = etree.SubElement(el, "key") - k.text = tostr(key, "utf-8") - el.append(_make_element(value, ctx)) - ctx.indent_level -= 1 - return el - - -def _array_element( - array: Sequence[PlistEncodable], ctx: SimpleNamespace -) -> etree.Element: - el = etree.Element("array") - if len(array) == 0: - return el - ctx.indent_level += 1 - for value in array: - el.append(_make_element(value, ctx)) - ctx.indent_level -= 1 - return el - - -def _date_element(date: datetime, ctx: SimpleNamespace) -> etree.Element: - el = etree.Element("date") - el.text = _date_to_string(date) - return el - - -def _data_element(data: bytes, ctx: SimpleNamespace) -> etree.Element: - el = etree.Element("data") - # NOTE: mypy is confused about whether el.text should be str or bytes. - el.text = _encode_base64( # type: ignore - data, - maxlinelength=(76 if ctx.pretty_print else None), - indent_level=ctx.indent_level, - ) - return el - - -def _string_or_data_element(raw_bytes: bytes, ctx: SimpleNamespace) -> etree.Element: - if ctx.use_builtin_types: - return _data_element(raw_bytes, ctx) - else: - try: - string = raw_bytes.decode(encoding="ascii", errors="strict") - except UnicodeDecodeError: - raise ValueError( - "invalid non-ASCII bytes; use unicode string instead: %r" % raw_bytes - ) - return _string_element(string, ctx) - - -# The following is probably not entirely correct. The signature should take `Any` -# and return `NoReturn`. At the time of this writing, neither mypy nor Pyright -# can deal with singledispatch properly and will apply the signature of the base -# function to all others. Being slightly dishonest makes it type-check and return -# usable typing information for the optimistic case. -@singledispatch -def _make_element(value: PlistEncodable, ctx: SimpleNamespace) -> etree.Element: - raise TypeError("unsupported type: %s" % type(value)) - - -_make_element.register(str)(_string_element) -_make_element.register(bool)(_bool_element) -_make_element.register(Integral)(_integer_element) -_make_element.register(float)(_real_element) -_make_element.register(collections.abc.Mapping)(_dict_element) -_make_element.register(list)(_array_element) -_make_element.register(tuple)(_array_element) -_make_element.register(datetime)(_date_element) -_make_element.register(bytes)(_string_or_data_element) -_make_element.register(bytearray)(_data_element) -_make_element.register(Data)(lambda v, ctx: _data_element(v.data, ctx)) - - -# Public functions to create element tree from plist-compatible python -# data structures and viceversa, for use when (de)serializing GLIF xml. - - -def totree( - value: PlistEncodable, - sort_keys: bool = True, - skipkeys: bool = False, - use_builtin_types: Optional[bool] = None, - pretty_print: bool = True, - indent_level: int = 1, -) -> etree.Element: - """Convert a value derived from a plist into an XML tree. - - Args: - value: Any kind of value to be serialized to XML. - sort_keys: Whether keys of dictionaries should be sorted. - skipkeys (bool): Whether to silently skip non-string dictionary - keys. - use_builtin_types (bool): If true, byte strings will be - encoded in Base-64 and wrapped in a ``data`` tag; if - false, they will be either stored as ASCII strings or an - exception raised if they cannot be decoded as such. Defaults - to ``True`` if not present. Deprecated. - pretty_print (bool): Whether to indent the output. - indent_level (int): Level of indentation when serializing. - - Returns: an ``etree`` ``Element`` object. - - Raises: - ``TypeError`` - if non-string dictionary keys are serialized - and ``skipkeys`` is false. - ``ValueError`` - if non-ASCII binary data is present - and `use_builtin_types` is false. - """ - if use_builtin_types is None: - use_builtin_types = USE_BUILTIN_TYPES - else: - use_builtin_types = use_builtin_types - context = SimpleNamespace( - sort_keys=sort_keys, - skipkeys=skipkeys, - use_builtin_types=use_builtin_types, - pretty_print=pretty_print, - indent_level=indent_level, - ) - return _make_element(value, context) - - -def fromtree( - tree: etree.Element, - use_builtin_types: Optional[bool] = None, - dict_type: Type[MutableMapping[str, Any]] = dict, -) -> Any: - """Convert an XML tree to a plist structure. - - Args: - tree: An ``etree`` ``Element``. - use_builtin_types: If True, binary data is deserialized to - bytes strings. If False, it is wrapped in :py:class:`Data` - objects. Defaults to True if not provided. Deprecated. - dict_type: What type to use for dictionaries. - - Returns: An object (usually a dictionary). - """ - target = PlistTarget(use_builtin_types=use_builtin_types, dict_type=dict_type) - for action, element in etree.iterwalk(tree, events=("start", "end")): - if action == "start": - target.start(element.tag, element.attrib) - elif action == "end": - # if there are no children, parse the leaf's data - if not len(element): - # always pass str, not None - target.data(element.text or "") - target.end(element.tag) - return target.close() - - -# python3 plistlib API - - -def load( - fp: IO[bytes], - use_builtin_types: Optional[bool] = None, - dict_type: Type[MutableMapping[str, Any]] = dict, -) -> Any: - """Load a plist file into an object. - - Args: - fp: An opened file. - use_builtin_types: If True, binary data is deserialized to - bytes strings. If False, it is wrapped in :py:class:`Data` - objects. Defaults to True if not provided. Deprecated. - dict_type: What type to use for dictionaries. - - Returns: - An object (usually a dictionary) representing the top level of - the plist file. - """ - - if not hasattr(fp, "read"): - raise AttributeError("'%s' object has no attribute 'read'" % type(fp).__name__) - target = PlistTarget(use_builtin_types=use_builtin_types, dict_type=dict_type) - parser = etree.XMLParser(target=target) - result = etree.parse(fp, parser=parser) - # lxml returns the target object directly, while ElementTree wraps - # it as the root of an ElementTree object - try: - return result.getroot() - except AttributeError: - return result - - -def loads( - value: bytes, - use_builtin_types: Optional[bool] = None, - dict_type: Type[MutableMapping[str, Any]] = dict, -) -> Any: - """Load a plist file from a string into an object. - - Args: - value: A bytes string containing a plist. - use_builtin_types: If True, binary data is deserialized to - bytes strings. If False, it is wrapped in :py:class:`Data` - objects. Defaults to True if not provided. Deprecated. - dict_type: What type to use for dictionaries. - - Returns: - An object (usually a dictionary) representing the top level of - the plist file. - """ - - fp = BytesIO(value) - return load(fp, use_builtin_types=use_builtin_types, dict_type=dict_type) - - -def dump( - value: PlistEncodable, - fp: IO[bytes], - sort_keys: bool = True, - skipkeys: bool = False, - use_builtin_types: Optional[bool] = None, - pretty_print: bool = True, -) -> None: - """Write a Python object to a plist file. - - Args: - value: An object to write. - fp: A file opened for writing. - sort_keys (bool): Whether keys of dictionaries should be sorted. - skipkeys (bool): Whether to silently skip non-string dictionary - keys. - use_builtin_types (bool): If true, byte strings will be - encoded in Base-64 and wrapped in a ``data`` tag; if - false, they will be either stored as ASCII strings or an - exception raised if they cannot be represented. Defaults - pretty_print (bool): Whether to indent the output. - indent_level (int): Level of indentation when serializing. - - Raises: - ``TypeError`` - if non-string dictionary keys are serialized - and ``skipkeys`` is false. - ``ValueError`` - if non-representable binary data is present - and `use_builtin_types` is false. - """ - - if not hasattr(fp, "write"): - raise AttributeError("'%s' object has no attribute 'write'" % type(fp).__name__) - root = etree.Element("plist", version="1.0") - el = totree( - value, - sort_keys=sort_keys, - skipkeys=skipkeys, - use_builtin_types=use_builtin_types, - pretty_print=pretty_print, - ) - root.append(el) - tree = etree.ElementTree(root) - # we write the doctype ourselves instead of using the 'doctype' argument - # of 'write' method, becuse lxml will force adding a '\n' even when - # pretty_print is False. - if pretty_print: - header = b"\n".join((XML_DECLARATION, PLIST_DOCTYPE, b"")) - else: - header = XML_DECLARATION + PLIST_DOCTYPE - fp.write(header) - tree.write( # type: ignore - fp, - encoding="utf-8", - pretty_print=pretty_print, - xml_declaration=False, - ) - - -def dumps( - value: PlistEncodable, - sort_keys: bool = True, - skipkeys: bool = False, - use_builtin_types: Optional[bool] = None, - pretty_print: bool = True, -) -> bytes: - """Write a Python object to a string in plist format. - - Args: - value: An object to write. - sort_keys (bool): Whether keys of dictionaries should be sorted. - skipkeys (bool): Whether to silently skip non-string dictionary - keys. - use_builtin_types (bool): If true, byte strings will be - encoded in Base-64 and wrapped in a ``data`` tag; if - false, they will be either stored as strings or an - exception raised if they cannot be represented. Defaults - pretty_print (bool): Whether to indent the output. - indent_level (int): Level of indentation when serializing. - - Returns: - string: A plist representation of the Python object. - - Raises: - ``TypeError`` - if non-string dictionary keys are serialized - and ``skipkeys`` is false. - ``ValueError`` - if non-representable binary data is present - and `use_builtin_types` is false. - """ - fp = BytesIO() - dump( - value, - fp, - sort_keys=sort_keys, - skipkeys=skipkeys, - use_builtin_types=use_builtin_types, - pretty_print=pretty_print, - ) - return fp.getvalue() diff --git a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/gradio/templates/cdn/assets/prism-54e1f6ba.css b/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/gradio/templates/cdn/assets/prism-54e1f6ba.css deleted file mode 100644 index 586c94d187a710161f82861518f94ac5aa08a419..0000000000000000000000000000000000000000 --- a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/gradio/templates/cdn/assets/prism-54e1f6ba.css +++ /dev/null @@ -1 +0,0 @@ -.gradio-container-3-33-1 code[class*=language-],.gradio-container-3-33-1 pre[class*=language-]{color:#000;background:none;text-shadow:0 1px white;font-family:Consolas,Monaco,Andale Mono,Ubuntu Mono,monospace;font-size:1em;text-align:left;white-space:pre;word-spacing:normal;word-break:normal;word-wrap:normal;line-height:1.5;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-hyphens:none;-moz-hyphens:none;-ms-hyphens:none;hyphens:none}.gradio-container-3-33-1 pre[class*=language-]::-moz-selection,.gradio-container-3-33-1 pre[class*=language-] ::-moz-selection,.gradio-container-3-33-1 code[class*=language-]::-moz-selection,.gradio-container-3-33-1 code[class*=language-] ::-moz-selection{text-shadow:none;background:#b3d4fc}.gradio-container-3-33-1 pre[class*=language-]::selection,.gradio-container-3-33-1 pre[class*=language-] ::selection,.gradio-container-3-33-1 code[class*=language-]::selection,.gradio-container-3-33-1 code[class*=language-] ::selection{text-shadow:none;background:#b3d4fc}@media print{.gradio-container-3-33-1 code[class*=language-],.gradio-container-3-33-1 pre[class*=language-]{text-shadow:none}}.gradio-container-3-33-1 pre[class*=language-]{padding:1em;margin:.5em 0;overflow:auto}.gradio-container-3-33-1 :not(pre)>code[class*=language-],.gradio-container-3-33-1 pre[class*=language-]{background:#f5f2f0}.gradio-container-3-33-1 :not(pre)>code[class*=language-]{padding:.1em;border-radius:.3em;white-space:normal}.gradio-container-3-33-1 .token.comment,.gradio-container-3-33-1 .token.prolog,.gradio-container-3-33-1 .token.doctype,.gradio-container-3-33-1 .token.cdata{color:#708090}.gradio-container-3-33-1 .token.punctuation{color:#999}.gradio-container-3-33-1 .token.namespace{opacity:.7}.gradio-container-3-33-1 .token.property,.gradio-container-3-33-1 .token.tag,.gradio-container-3-33-1 .token.boolean,.gradio-container-3-33-1 .token.number,.gradio-container-3-33-1 .token.constant,.gradio-container-3-33-1 .token.symbol,.gradio-container-3-33-1 .token.deleted{color:#905}.gradio-container-3-33-1 .token.selector,.gradio-container-3-33-1 .token.attr-name,.gradio-container-3-33-1 .token.string,.gradio-container-3-33-1 .token.char,.gradio-container-3-33-1 .token.builtin,.gradio-container-3-33-1 .token.inserted{color:#690}.gradio-container-3-33-1 .token.operator,.gradio-container-3-33-1 .token.entity,.gradio-container-3-33-1 .token.url,.gradio-container-3-33-1 .language-css .token.string,.gradio-container-3-33-1 .style .token.string{color:#9a6e3a;background:hsla(0,0%,100%,.5)}.gradio-container-3-33-1 .token.atrule,.gradio-container-3-33-1 .token.attr-value,.gradio-container-3-33-1 .token.keyword{color:#07a}.gradio-container-3-33-1 .token.function,.gradio-container-3-33-1 .token.class-name{color:#dd4a68}.gradio-container-3-33-1 .token.regex,.gradio-container-3-33-1 .token.important,.gradio-container-3-33-1 .token.variable{color:#e90}.gradio-container-3-33-1 .token.important,.gradio-container-3-33-1 .token.bold{font-weight:700}.gradio-container-3-33-1 .token.italic{font-style:italic}.gradio-container-3-33-1 .token.entity{cursor:help} diff --git a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/huggingface_hub/utils/_pagination.py b/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/huggingface_hub/utils/_pagination.py deleted file mode 100644 index ad9048ac55b518a1a54fe0431ac375d203bd1554..0000000000000000000000000000000000000000 --- a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/huggingface_hub/utils/_pagination.py +++ /dev/null @@ -1,51 +0,0 @@ -# coding=utf-8 -# Copyright 2022-present, the HuggingFace Inc. team. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -"""Contains utilities to handle pagination on Huggingface Hub.""" -from typing import Dict, Iterable, Optional - -import requests - -from . import get_session, hf_raise_for_status, logging - - -logger = logging.get_logger(__name__) - - -def paginate(path: str, params: Dict, headers: Dict) -> Iterable: - """Fetch a list of models/datasets/spaces and paginate through results. - - This is using the same "Link" header format as GitHub. - See: - - https://requests.readthedocs.io/en/latest/api/#requests.Response.links - - https://docs.github.com/en/rest/guides/traversing-with-pagination#link-header - """ - session = get_session() - r = session.get(path, params=params, headers=headers) - hf_raise_for_status(r) - yield from r.json() - - # Follow pages - # Next link already contains query params - next_page = _get_next_page(r) - while next_page is not None: - logger.debug(f"Pagination detected. Requesting next page: {next_page}") - r = session.get(next_page, headers=headers) - hf_raise_for_status(r) - yield from r.json() - next_page = _get_next_page(r) - - -def _get_next_page(response: requests.Response) -> Optional[str]: - return response.links.get("next", {}).get("url") diff --git a/spaces/lalithakash2346/CortanaAI/README.md b/spaces/lalithakash2346/CortanaAI/README.md deleted file mode 100644 index e05a8e8932bea4ff8cf38a153d7946f45ab18e4c..0000000000000000000000000000000000000000 --- a/spaces/lalithakash2346/CortanaAI/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: CortanaAI -emoji: 👁 -colorFrom: purple -colorTo: blue -sdk: gradio -sdk_version: 3.39.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/lambdalabs/LambdaSuperRes/KAIR/model_zoo/README.md b/spaces/lambdalabs/LambdaSuperRes/KAIR/model_zoo/README.md deleted file mode 100644 index eb78af49b4f0c6d53b5bbe3d7bb9a608602d94d2..0000000000000000000000000000000000000000 --- a/spaces/lambdalabs/LambdaSuperRes/KAIR/model_zoo/README.md +++ /dev/null @@ -1 +0,0 @@ -# Insert new models here diff --git a/spaces/latent-consistency/Real-Time-LCM-Text-to-Image-Lora-SD1.5/static/txt2img.html b/spaces/latent-consistency/Real-Time-LCM-Text-to-Image-Lora-SD1.5/static/txt2img.html deleted file mode 100644 index 9dda69ed5abe85b34c130011dbce3f955ab89de4..0000000000000000000000000000000000000000 --- a/spaces/latent-consistency/Real-Time-LCM-Text-to-Image-Lora-SD1.5/static/txt2img.html +++ /dev/null @@ -1,304 +0,0 @@ - - - - - - Real-Time Latent Consistency Model - - - - - - - - - -
          -
             -
          -
          -

          Real-Time Latent Consistency Model

          -

          Text to Image

          -

          - This demo showcases - LCM Text to Image model - using - Diffusers with a MJPEG - stream server. -

          -

          - There are 0 user(s) sharing the same GPU, affecting - real-time performance. Maximum queue size is 10. Duplicate and run it on your - own GPU. -

          -
          -
          -

          Prompt

          -

          - Start your session and type your prompt here, accepts - Compel syntax. -

          -
          - -
          - -
          -
          -
          - Advanced Options -
          - -
          -
          - - -
          -
          - - -
          -
          - - - - - 4 - - - - - 50 - - - - - 8.0 - - - - - -
          -
          -
          -
          - - - -
          -
          - -
          -
          - - - \ No newline at end of file diff --git a/spaces/leurez/moss/src/utils/format/index.ts b/spaces/leurez/moss/src/utils/format/index.ts deleted file mode 100644 index dbd5a08fb725614ff9fed25851eacb06ca00cd98..0000000000000000000000000000000000000000 --- a/spaces/leurez/moss/src/utils/format/index.ts +++ /dev/null @@ -1,44 +0,0 @@ -/** - * 转义 HTML 字符 - * @param source - */ -export function encodeHTML(source: string) { - return source - .replace(/&/g, '&') - .replace(//g, '>') - .replace(/"/g, '"') - .replace(/'/g, ''') -} - -/** - * 判断是否为代码块 - * @param text - */ -export function includeCode(text: string | null | undefined) { - const regexp = /^(?:\s{4}|\t).+/gm - return !!(text?.includes(' = ') || text?.match(regexp)) -} - -/** - * 复制文本 - * @param options - */ -export function copyText(options: { text: string; origin?: boolean }) { - const props = { origin: true, ...options } - - let input: HTMLInputElement | HTMLTextAreaElement - - if (props.origin) - input = document.createElement('textarea') - else - input = document.createElement('input') - - input.setAttribute('readonly', 'readonly') - input.value = props.text - document.body.appendChild(input) - input.select() - if (document.execCommand('copy')) - document.execCommand('copy') - document.body.removeChild(input) -} diff --git a/spaces/lewisliuX123/wechatglm_demo/scripts/tout.sh b/spaces/lewisliuX123/wechatglm_demo/scripts/tout.sh deleted file mode 100644 index 5b71491ad30812170f89583bd34ab25b47879274..0000000000000000000000000000000000000000 --- a/spaces/lewisliuX123/wechatglm_demo/scripts/tout.sh +++ /dev/null @@ -1,14 +0,0 @@ -#!/bin/bash -#打开日志 - -cd `dirname $0`/.. -export BASE_DIR=`pwd` -echo $BASE_DIR - -# check the nohup.out log output file -if [ ! -f "${BASE_DIR}/nohup.out" ]; then - echo "No file ${BASE_DIR}/nohup.out" - exit -1; -fi - -tail -f "${BASE_DIR}/nohup.out" diff --git a/spaces/librarian-bots/Dataset-Cards-Nomic-Atlas-Map/index.html b/spaces/librarian-bots/Dataset-Cards-Nomic-Atlas-Map/index.html deleted file mode 100644 index c7561aea80f4b32b5cef5d07f10840a03f0d2b0e..0000000000000000000000000000000000000000 --- a/spaces/librarian-bots/Dataset-Cards-Nomic-Atlas-Map/index.html +++ /dev/null @@ -1,42 +0,0 @@ - - - - Dataset Cards Nomic Atlas Map - - - - -
          - -
          - - - \ No newline at end of file diff --git a/spaces/lincquiQcaudo/Top-20-Diffusion/CRACK HWID Changer V1.3 [PC].md b/spaces/lincquiQcaudo/Top-20-Diffusion/CRACK HWID Changer V1.3 [PC].md deleted file mode 100644 index 635f398700b403b99cb24811fc85e0a3dcc6a8c4..0000000000000000000000000000000000000000 --- a/spaces/lincquiQcaudo/Top-20-Diffusion/CRACK HWID Changer V1.3 [PC].md +++ /dev/null @@ -1,6 +0,0 @@ - -

          Conclusion

          -

          HWID Changer v1.3 [PC] is a tool that allows you to change your HWID in C#. It can help you bypass bans or restrictions, protect your privacy, test different configurations, and more. It is easy to use, versatile, effective, and free. However, you should use it at your own risk and with the consent of the owner of the PC. You should also respect the terms of service and rules of the software and games that you use.

          -

          CRACK HWID Changer v1.3 [PC]


          DOWNLOADhttps://bytlly.com/2uGwn6



          3cee63e6c2
          -
          -
          \ No newline at end of file diff --git a/spaces/lithiumice/SadTalker/src/face3d/models/arcface_torch/eval/verification.py b/spaces/lithiumice/SadTalker/src/face3d/models/arcface_torch/eval/verification.py deleted file mode 100644 index 253343b83dbf9d1bd154d14ec068e098bf0968db..0000000000000000000000000000000000000000 --- a/spaces/lithiumice/SadTalker/src/face3d/models/arcface_torch/eval/verification.py +++ /dev/null @@ -1,407 +0,0 @@ -"""Helper for evaluation on the Labeled Faces in the Wild dataset -""" - -# MIT License -# -# Copyright (c) 2016 David Sandberg -# -# Permission is hereby granted, free of charge, to any person obtaining a copy -# of this software and associated documentation files (the "Software"), to deal -# in the Software without restriction, including without limitation the rights -# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell -# copies of the Software, and to permit persons to whom the Software is -# furnished to do so, subject to the following conditions: -# -# The above copyright notice and this permission notice shall be included in all -# copies or substantial portions of the Software. -# -# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR -# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, -# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE -# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER -# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, -# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE -# SOFTWARE. - - -import datetime -import os -import pickle - -import mxnet as mx -import numpy as np -import sklearn -import torch -from mxnet import ndarray as nd -from scipy import interpolate -from sklearn.decomposition import PCA -from sklearn.model_selection import KFold - - -class LFold: - def __init__(self, n_splits=2, shuffle=False): - self.n_splits = n_splits - if self.n_splits > 1: - self.k_fold = KFold(n_splits=n_splits, shuffle=shuffle) - - def split(self, indices): - if self.n_splits > 1: - return self.k_fold.split(indices) - else: - return [(indices, indices)] - - -def calculate_roc(thresholds, - embeddings1, - embeddings2, - actual_issame, - nrof_folds=10, - pca=0): - assert (embeddings1.shape[0] == embeddings2.shape[0]) - assert (embeddings1.shape[1] == embeddings2.shape[1]) - nrof_pairs = min(len(actual_issame), embeddings1.shape[0]) - nrof_thresholds = len(thresholds) - k_fold = LFold(n_splits=nrof_folds, shuffle=False) - - tprs = np.zeros((nrof_folds, nrof_thresholds)) - fprs = np.zeros((nrof_folds, nrof_thresholds)) - accuracy = np.zeros((nrof_folds)) - indices = np.arange(nrof_pairs) - - if pca == 0: - diff = np.subtract(embeddings1, embeddings2) - dist = np.sum(np.square(diff), 1) - - for fold_idx, (train_set, test_set) in enumerate(k_fold.split(indices)): - if pca > 0: - print('doing pca on', fold_idx) - embed1_train = embeddings1[train_set] - embed2_train = embeddings2[train_set] - _embed_train = np.concatenate((embed1_train, embed2_train), axis=0) - pca_model = PCA(n_components=pca) - pca_model.fit(_embed_train) - embed1 = pca_model.transform(embeddings1) - embed2 = pca_model.transform(embeddings2) - embed1 = sklearn.preprocessing.normalize(embed1) - embed2 = sklearn.preprocessing.normalize(embed2) - diff = np.subtract(embed1, embed2) - dist = np.sum(np.square(diff), 1) - - # Find the best threshold for the fold - acc_train = np.zeros((nrof_thresholds)) - for threshold_idx, threshold in enumerate(thresholds): - _, _, acc_train[threshold_idx] = calculate_accuracy( - threshold, dist[train_set], actual_issame[train_set]) - best_threshold_index = np.argmax(acc_train) - for threshold_idx, threshold in enumerate(thresholds): - tprs[fold_idx, threshold_idx], fprs[fold_idx, threshold_idx], _ = calculate_accuracy( - threshold, dist[test_set], - actual_issame[test_set]) - _, _, accuracy[fold_idx] = calculate_accuracy( - thresholds[best_threshold_index], dist[test_set], - actual_issame[test_set]) - - tpr = np.mean(tprs, 0) - fpr = np.mean(fprs, 0) - return tpr, fpr, accuracy - - -def calculate_accuracy(threshold, dist, actual_issame): - predict_issame = np.less(dist, threshold) - tp = np.sum(np.logical_and(predict_issame, actual_issame)) - fp = np.sum(np.logical_and(predict_issame, np.logical_not(actual_issame))) - tn = np.sum( - np.logical_and(np.logical_not(predict_issame), - np.logical_not(actual_issame))) - fn = np.sum(np.logical_and(np.logical_not(predict_issame), actual_issame)) - - tpr = 0 if (tp + fn == 0) else float(tp) / float(tp + fn) - fpr = 0 if (fp + tn == 0) else float(fp) / float(fp + tn) - acc = float(tp + tn) / dist.size - return tpr, fpr, acc - - -def calculate_val(thresholds, - embeddings1, - embeddings2, - actual_issame, - far_target, - nrof_folds=10): - assert (embeddings1.shape[0] == embeddings2.shape[0]) - assert (embeddings1.shape[1] == embeddings2.shape[1]) - nrof_pairs = min(len(actual_issame), embeddings1.shape[0]) - nrof_thresholds = len(thresholds) - k_fold = LFold(n_splits=nrof_folds, shuffle=False) - - val = np.zeros(nrof_folds) - far = np.zeros(nrof_folds) - - diff = np.subtract(embeddings1, embeddings2) - dist = np.sum(np.square(diff), 1) - indices = np.arange(nrof_pairs) - - for fold_idx, (train_set, test_set) in enumerate(k_fold.split(indices)): - - # Find the threshold that gives FAR = far_target - far_train = np.zeros(nrof_thresholds) - for threshold_idx, threshold in enumerate(thresholds): - _, far_train[threshold_idx] = calculate_val_far( - threshold, dist[train_set], actual_issame[train_set]) - if np.max(far_train) >= far_target: - f = interpolate.interp1d(far_train, thresholds, kind='slinear') - threshold = f(far_target) - else: - threshold = 0.0 - - val[fold_idx], far[fold_idx] = calculate_val_far( - threshold, dist[test_set], actual_issame[test_set]) - - val_mean = np.mean(val) - far_mean = np.mean(far) - val_std = np.std(val) - return val_mean, val_std, far_mean - - -def calculate_val_far(threshold, dist, actual_issame): - predict_issame = np.less(dist, threshold) - true_accept = np.sum(np.logical_and(predict_issame, actual_issame)) - false_accept = np.sum( - np.logical_and(predict_issame, np.logical_not(actual_issame))) - n_same = np.sum(actual_issame) - n_diff = np.sum(np.logical_not(actual_issame)) - # print(true_accept, false_accept) - # print(n_same, n_diff) - val = float(true_accept) / float(n_same) - far = float(false_accept) / float(n_diff) - return val, far - - -def evaluate(embeddings, actual_issame, nrof_folds=10, pca=0): - # Calculate evaluation metrics - thresholds = np.arange(0, 4, 0.01) - embeddings1 = embeddings[0::2] - embeddings2 = embeddings[1::2] - tpr, fpr, accuracy = calculate_roc(thresholds, - embeddings1, - embeddings2, - np.asarray(actual_issame), - nrof_folds=nrof_folds, - pca=pca) - thresholds = np.arange(0, 4, 0.001) - val, val_std, far = calculate_val(thresholds, - embeddings1, - embeddings2, - np.asarray(actual_issame), - 1e-3, - nrof_folds=nrof_folds) - return tpr, fpr, accuracy, val, val_std, far - -@torch.no_grad() -def load_bin(path, image_size): - try: - with open(path, 'rb') as f: - bins, issame_list = pickle.load(f) # py2 - except UnicodeDecodeError as e: - with open(path, 'rb') as f: - bins, issame_list = pickle.load(f, encoding='bytes') # py3 - data_list = [] - for flip in [0, 1]: - data = torch.empty((len(issame_list) * 2, 3, image_size[0], image_size[1])) - data_list.append(data) - for idx in range(len(issame_list) * 2): - _bin = bins[idx] - img = mx.image.imdecode(_bin) - if img.shape[1] != image_size[0]: - img = mx.image.resize_short(img, image_size[0]) - img = nd.transpose(img, axes=(2, 0, 1)) - for flip in [0, 1]: - if flip == 1: - img = mx.ndarray.flip(data=img, axis=2) - data_list[flip][idx][:] = torch.from_numpy(img.asnumpy()) - if idx % 1000 == 0: - print('loading bin', idx) - print(data_list[0].shape) - return data_list, issame_list - -@torch.no_grad() -def test(data_set, backbone, batch_size, nfolds=10): - print('testing verification..') - data_list = data_set[0] - issame_list = data_set[1] - embeddings_list = [] - time_consumed = 0.0 - for i in range(len(data_list)): - data = data_list[i] - embeddings = None - ba = 0 - while ba < data.shape[0]: - bb = min(ba + batch_size, data.shape[0]) - count = bb - ba - _data = data[bb - batch_size: bb] - time0 = datetime.datetime.now() - img = ((_data / 255) - 0.5) / 0.5 - net_out: torch.Tensor = backbone(img) - _embeddings = net_out.detach().cpu().numpy() - time_now = datetime.datetime.now() - diff = time_now - time0 - time_consumed += diff.total_seconds() - if embeddings is None: - embeddings = np.zeros((data.shape[0], _embeddings.shape[1])) - embeddings[ba:bb, :] = _embeddings[(batch_size - count):, :] - ba = bb - embeddings_list.append(embeddings) - - _xnorm = 0.0 - _xnorm_cnt = 0 - for embed in embeddings_list: - for i in range(embed.shape[0]): - _em = embed[i] - _norm = np.linalg.norm(_em) - _xnorm += _norm - _xnorm_cnt += 1 - _xnorm /= _xnorm_cnt - - acc1 = 0.0 - std1 = 0.0 - embeddings = embeddings_list[0] + embeddings_list[1] - embeddings = sklearn.preprocessing.normalize(embeddings) - print(embeddings.shape) - print('infer time', time_consumed) - _, _, accuracy, val, val_std, far = evaluate(embeddings, issame_list, nrof_folds=nfolds) - acc2, std2 = np.mean(accuracy), np.std(accuracy) - return acc1, std1, acc2, std2, _xnorm, embeddings_list - - -def dumpR(data_set, - backbone, - batch_size, - name='', - data_extra=None, - label_shape=None): - print('dump verification embedding..') - data_list = data_set[0] - issame_list = data_set[1] - embeddings_list = [] - time_consumed = 0.0 - for i in range(len(data_list)): - data = data_list[i] - embeddings = None - ba = 0 - while ba < data.shape[0]: - bb = min(ba + batch_size, data.shape[0]) - count = bb - ba - - _data = nd.slice_axis(data, axis=0, begin=bb - batch_size, end=bb) - time0 = datetime.datetime.now() - if data_extra is None: - db = mx.io.DataBatch(data=(_data,), label=(_label,)) - else: - db = mx.io.DataBatch(data=(_data, _data_extra), - label=(_label,)) - model.forward(db, is_train=False) - net_out = model.get_outputs() - _embeddings = net_out[0].asnumpy() - time_now = datetime.datetime.now() - diff = time_now - time0 - time_consumed += diff.total_seconds() - if embeddings is None: - embeddings = np.zeros((data.shape[0], _embeddings.shape[1])) - embeddings[ba:bb, :] = _embeddings[(batch_size - count):, :] - ba = bb - embeddings_list.append(embeddings) - embeddings = embeddings_list[0] + embeddings_list[1] - embeddings = sklearn.preprocessing.normalize(embeddings) - actual_issame = np.asarray(issame_list) - outname = os.path.join('temp.bin') - with open(outname, 'wb') as f: - pickle.dump((embeddings, issame_list), - f, - protocol=pickle.HIGHEST_PROTOCOL) - - -# if __name__ == '__main__': -# -# parser = argparse.ArgumentParser(description='do verification') -# # general -# parser.add_argument('--data-dir', default='', help='') -# parser.add_argument('--model', -# default='../model/softmax,50', -# help='path to load model.') -# parser.add_argument('--target', -# default='lfw,cfp_ff,cfp_fp,agedb_30', -# help='test targets.') -# parser.add_argument('--gpu', default=0, type=int, help='gpu id') -# parser.add_argument('--batch-size', default=32, type=int, help='') -# parser.add_argument('--max', default='', type=str, help='') -# parser.add_argument('--mode', default=0, type=int, help='') -# parser.add_argument('--nfolds', default=10, type=int, help='') -# args = parser.parse_args() -# image_size = [112, 112] -# print('image_size', image_size) -# ctx = mx.gpu(args.gpu) -# nets = [] -# vec = args.model.split(',') -# prefix = args.model.split(',')[0] -# epochs = [] -# if len(vec) == 1: -# pdir = os.path.dirname(prefix) -# for fname in os.listdir(pdir): -# if not fname.endswith('.params'): -# continue -# _file = os.path.join(pdir, fname) -# if _file.startswith(prefix): -# epoch = int(fname.split('.')[0].split('-')[1]) -# epochs.append(epoch) -# epochs = sorted(epochs, reverse=True) -# if len(args.max) > 0: -# _max = [int(x) for x in args.max.split(',')] -# assert len(_max) == 2 -# if len(epochs) > _max[1]: -# epochs = epochs[_max[0]:_max[1]] -# -# else: -# epochs = [int(x) for x in vec[1].split('|')] -# print('model number', len(epochs)) -# time0 = datetime.datetime.now() -# for epoch in epochs: -# print('loading', prefix, epoch) -# sym, arg_params, aux_params = mx.model.load_checkpoint(prefix, epoch) -# # arg_params, aux_params = ch_dev(arg_params, aux_params, ctx) -# all_layers = sym.get_internals() -# sym = all_layers['fc1_output'] -# model = mx.mod.Module(symbol=sym, context=ctx, label_names=None) -# # model.bind(data_shapes=[('data', (args.batch_size, 3, image_size[0], image_size[1]))], label_shapes=[('softmax_label', (args.batch_size,))]) -# model.bind(data_shapes=[('data', (args.batch_size, 3, image_size[0], -# image_size[1]))]) -# model.set_params(arg_params, aux_params) -# nets.append(model) -# time_now = datetime.datetime.now() -# diff = time_now - time0 -# print('model loading time', diff.total_seconds()) -# -# ver_list = [] -# ver_name_list = [] -# for name in args.target.split(','): -# path = os.path.join(args.data_dir, name + ".bin") -# if os.path.exists(path): -# print('loading.. ', name) -# data_set = load_bin(path, image_size) -# ver_list.append(data_set) -# ver_name_list.append(name) -# -# if args.mode == 0: -# for i in range(len(ver_list)): -# results = [] -# for model in nets: -# acc1, std1, acc2, std2, xnorm, embeddings_list = test( -# ver_list[i], model, args.batch_size, args.nfolds) -# print('[%s]XNorm: %f' % (ver_name_list[i], xnorm)) -# print('[%s]Accuracy: %1.5f+-%1.5f' % (ver_name_list[i], acc1, std1)) -# print('[%s]Accuracy-Flip: %1.5f+-%1.5f' % (ver_name_list[i], acc2, std2)) -# results.append(acc2) -# print('Max of [%s] is %1.5f' % (ver_name_list[i], np.max(results))) -# elif args.mode == 1: -# raise ValueError -# else: -# model = nets[0] -# dumpR(ver_list[0], model, args.batch_size, args.target) diff --git a/spaces/lwchen/CodeFormer/CodeFormer/facelib/detection/retinaface/retinaface_net.py b/spaces/lwchen/CodeFormer/CodeFormer/facelib/detection/retinaface/retinaface_net.py deleted file mode 100644 index ab6aa82d3e9055a838f1f9076b12f05fdfc154d0..0000000000000000000000000000000000000000 --- a/spaces/lwchen/CodeFormer/CodeFormer/facelib/detection/retinaface/retinaface_net.py +++ /dev/null @@ -1,196 +0,0 @@ -import torch -import torch.nn as nn -import torch.nn.functional as F - - -def conv_bn(inp, oup, stride=1, leaky=0): - return nn.Sequential( - nn.Conv2d(inp, oup, 3, stride, 1, bias=False), nn.BatchNorm2d(oup), - nn.LeakyReLU(negative_slope=leaky, inplace=True)) - - -def conv_bn_no_relu(inp, oup, stride): - return nn.Sequential( - nn.Conv2d(inp, oup, 3, stride, 1, bias=False), - nn.BatchNorm2d(oup), - ) - - -def conv_bn1X1(inp, oup, stride, leaky=0): - return nn.Sequential( - nn.Conv2d(inp, oup, 1, stride, padding=0, bias=False), nn.BatchNorm2d(oup), - nn.LeakyReLU(negative_slope=leaky, inplace=True)) - - -def conv_dw(inp, oup, stride, leaky=0.1): - return nn.Sequential( - nn.Conv2d(inp, inp, 3, stride, 1, groups=inp, bias=False), - nn.BatchNorm2d(inp), - nn.LeakyReLU(negative_slope=leaky, inplace=True), - nn.Conv2d(inp, oup, 1, 1, 0, bias=False), - nn.BatchNorm2d(oup), - nn.LeakyReLU(negative_slope=leaky, inplace=True), - ) - - -class SSH(nn.Module): - - def __init__(self, in_channel, out_channel): - super(SSH, self).__init__() - assert out_channel % 4 == 0 - leaky = 0 - if (out_channel <= 64): - leaky = 0.1 - self.conv3X3 = conv_bn_no_relu(in_channel, out_channel // 2, stride=1) - - self.conv5X5_1 = conv_bn(in_channel, out_channel // 4, stride=1, leaky=leaky) - self.conv5X5_2 = conv_bn_no_relu(out_channel // 4, out_channel // 4, stride=1) - - self.conv7X7_2 = conv_bn(out_channel // 4, out_channel // 4, stride=1, leaky=leaky) - self.conv7x7_3 = conv_bn_no_relu(out_channel // 4, out_channel // 4, stride=1) - - def forward(self, input): - conv3X3 = self.conv3X3(input) - - conv5X5_1 = self.conv5X5_1(input) - conv5X5 = self.conv5X5_2(conv5X5_1) - - conv7X7_2 = self.conv7X7_2(conv5X5_1) - conv7X7 = self.conv7x7_3(conv7X7_2) - - out = torch.cat([conv3X3, conv5X5, conv7X7], dim=1) - out = F.relu(out) - return out - - -class FPN(nn.Module): - - def __init__(self, in_channels_list, out_channels): - super(FPN, self).__init__() - leaky = 0 - if (out_channels <= 64): - leaky = 0.1 - self.output1 = conv_bn1X1(in_channels_list[0], out_channels, stride=1, leaky=leaky) - self.output2 = conv_bn1X1(in_channels_list[1], out_channels, stride=1, leaky=leaky) - self.output3 = conv_bn1X1(in_channels_list[2], out_channels, stride=1, leaky=leaky) - - self.merge1 = conv_bn(out_channels, out_channels, leaky=leaky) - self.merge2 = conv_bn(out_channels, out_channels, leaky=leaky) - - def forward(self, input): - # names = list(input.keys()) - # input = list(input.values()) - - output1 = self.output1(input[0]) - output2 = self.output2(input[1]) - output3 = self.output3(input[2]) - - up3 = F.interpolate(output3, size=[output2.size(2), output2.size(3)], mode='nearest') - output2 = output2 + up3 - output2 = self.merge2(output2) - - up2 = F.interpolate(output2, size=[output1.size(2), output1.size(3)], mode='nearest') - output1 = output1 + up2 - output1 = self.merge1(output1) - - out = [output1, output2, output3] - return out - - -class MobileNetV1(nn.Module): - - def __init__(self): - super(MobileNetV1, self).__init__() - self.stage1 = nn.Sequential( - conv_bn(3, 8, 2, leaky=0.1), # 3 - conv_dw(8, 16, 1), # 7 - conv_dw(16, 32, 2), # 11 - conv_dw(32, 32, 1), # 19 - conv_dw(32, 64, 2), # 27 - conv_dw(64, 64, 1), # 43 - ) - self.stage2 = nn.Sequential( - conv_dw(64, 128, 2), # 43 + 16 = 59 - conv_dw(128, 128, 1), # 59 + 32 = 91 - conv_dw(128, 128, 1), # 91 + 32 = 123 - conv_dw(128, 128, 1), # 123 + 32 = 155 - conv_dw(128, 128, 1), # 155 + 32 = 187 - conv_dw(128, 128, 1), # 187 + 32 = 219 - ) - self.stage3 = nn.Sequential( - conv_dw(128, 256, 2), # 219 +3 2 = 241 - conv_dw(256, 256, 1), # 241 + 64 = 301 - ) - self.avg = nn.AdaptiveAvgPool2d((1, 1)) - self.fc = nn.Linear(256, 1000) - - def forward(self, x): - x = self.stage1(x) - x = self.stage2(x) - x = self.stage3(x) - x = self.avg(x) - # x = self.model(x) - x = x.view(-1, 256) - x = self.fc(x) - return x - - -class ClassHead(nn.Module): - - def __init__(self, inchannels=512, num_anchors=3): - super(ClassHead, self).__init__() - self.num_anchors = num_anchors - self.conv1x1 = nn.Conv2d(inchannels, self.num_anchors * 2, kernel_size=(1, 1), stride=1, padding=0) - - def forward(self, x): - out = self.conv1x1(x) - out = out.permute(0, 2, 3, 1).contiguous() - - return out.view(out.shape[0], -1, 2) - - -class BboxHead(nn.Module): - - def __init__(self, inchannels=512, num_anchors=3): - super(BboxHead, self).__init__() - self.conv1x1 = nn.Conv2d(inchannels, num_anchors * 4, kernel_size=(1, 1), stride=1, padding=0) - - def forward(self, x): - out = self.conv1x1(x) - out = out.permute(0, 2, 3, 1).contiguous() - - return out.view(out.shape[0], -1, 4) - - -class LandmarkHead(nn.Module): - - def __init__(self, inchannels=512, num_anchors=3): - super(LandmarkHead, self).__init__() - self.conv1x1 = nn.Conv2d(inchannels, num_anchors * 10, kernel_size=(1, 1), stride=1, padding=0) - - def forward(self, x): - out = self.conv1x1(x) - out = out.permute(0, 2, 3, 1).contiguous() - - return out.view(out.shape[0], -1, 10) - - -def make_class_head(fpn_num=3, inchannels=64, anchor_num=2): - classhead = nn.ModuleList() - for i in range(fpn_num): - classhead.append(ClassHead(inchannels, anchor_num)) - return classhead - - -def make_bbox_head(fpn_num=3, inchannels=64, anchor_num=2): - bboxhead = nn.ModuleList() - for i in range(fpn_num): - bboxhead.append(BboxHead(inchannels, anchor_num)) - return bboxhead - - -def make_landmark_head(fpn_num=3, inchannels=64, anchor_num=2): - landmarkhead = nn.ModuleList() - for i in range(fpn_num): - landmarkhead.append(LandmarkHead(inchannels, anchor_num)) - return landmarkhead diff --git a/spaces/malteos/gpt-german/app.py b/spaces/malteos/gpt-german/app.py deleted file mode 100644 index 96fefe1d24a93e516e75d2d886660ff745d9c998..0000000000000000000000000000000000000000 --- a/spaces/malteos/gpt-german/app.py +++ /dev/null @@ -1,14 +0,0 @@ -# mostly copied from https://huggingface.co/spaces/gradio/gpt-neo/ -import gradio as gr - -title = "GPT-German Demo" -description = "A demo for GPT-typed models in various sizes trained on German text. To use it, simply add your text, or click one of the examples to load them. Read more at the links below." -article = "

          #TODO

          " -examples = [ - ['In einer schockierenden Entdeckung fanden Wissenschaftler eine Herde Einhörner, die in einem abgelegenen, zuvor unerforschten Tal in den Anden lebten.'], - ["Vergangene Woche war über ein Zerwürfnis zwischen Kanzlerin Merkel und Frankreichs Präsident Sarkozy spekuliert worden. Nun zeigten sie "], - ["Bereits vor dem Beginn der Feierlichkeiten sollten Hundebesitzer ihre Tiere in Wohngebieten"], - ["Die Mängel seien von der Qualitätssicherung während der "], -] - -gr.Interface.load("huggingface/malteos/gpt2-wechsel-german-ds-meg", inputs=gr.inputs.Textbox(lines=5, label="Input Text"),title=title,description=description,article=article, examples=examples).launch() \ No newline at end of file diff --git a/spaces/maminghui/ChatGPT/chatgpt - macOS.command b/spaces/maminghui/ChatGPT/chatgpt - macOS.command deleted file mode 100644 index fa015edca9e6916f24394813ce8ba77d2072e296..0000000000000000000000000000000000000000 --- a/spaces/maminghui/ChatGPT/chatgpt - macOS.command +++ /dev/null @@ -1,7 +0,0 @@ -#!/bin/bash -echo Opening ChuanhuChatGPT... -cd "$(dirname "${BASH_SOURCE[0]}")" -nohup python3 ChuanhuChatbot.py >/dev/null 2>&1 & -sleep 5 -open http://127.0.0.1:7860 -echo Finished opening ChuanhuChatGPT (http://127.0.0.1:7860/). If you kill ChuanhuChatbot, Use "pkill -f 'ChuanhuChatbot'" command in terminal. \ No newline at end of file diff --git a/spaces/manavisrani07/gradio-lipsync-wav2lip/face_detection/detection/sfd/bbox.py b/spaces/manavisrani07/gradio-lipsync-wav2lip/face_detection/detection/sfd/bbox.py deleted file mode 100644 index 4bd7222e5e5f78a51944cbeed3cccbacddc46bed..0000000000000000000000000000000000000000 --- a/spaces/manavisrani07/gradio-lipsync-wav2lip/face_detection/detection/sfd/bbox.py +++ /dev/null @@ -1,129 +0,0 @@ -from __future__ import print_function -import os -import sys -import cv2 -import random -import datetime -import time -import math -import argparse -import numpy as np -import torch - -try: - from iou import IOU -except BaseException: - # IOU cython speedup 10x - def IOU(ax1, ay1, ax2, ay2, bx1, by1, bx2, by2): - sa = abs((ax2 - ax1) * (ay2 - ay1)) - sb = abs((bx2 - bx1) * (by2 - by1)) - x1, y1 = max(ax1, bx1), max(ay1, by1) - x2, y2 = min(ax2, bx2), min(ay2, by2) - w = x2 - x1 - h = y2 - y1 - if w < 0 or h < 0: - return 0.0 - else: - return 1.0 * w * h / (sa + sb - w * h) - - -def bboxlog(x1, y1, x2, y2, axc, ayc, aww, ahh): - xc, yc, ww, hh = (x2 + x1) / 2, (y2 + y1) / 2, x2 - x1, y2 - y1 - dx, dy = (xc - axc) / aww, (yc - ayc) / ahh - dw, dh = math.log(ww / aww), math.log(hh / ahh) - return dx, dy, dw, dh - - -def bboxloginv(dx, dy, dw, dh, axc, ayc, aww, ahh): - xc, yc = dx * aww + axc, dy * ahh + ayc - ww, hh = math.exp(dw) * aww, math.exp(dh) * ahh - x1, x2, y1, y2 = xc - ww / 2, xc + ww / 2, yc - hh / 2, yc + hh / 2 - return x1, y1, x2, y2 - - -def nms(dets, thresh): - if 0 == len(dets): - return [] - x1, y1, x2, y2, scores = dets[:, 0], dets[:, 1], dets[:, 2], dets[:, 3], dets[:, 4] - areas = (x2 - x1 + 1) * (y2 - y1 + 1) - order = scores.argsort()[::-1] - - keep = [] - while order.size > 0: - i = order[0] - keep.append(i) - xx1, yy1 = np.maximum(x1[i], x1[order[1:]]), np.maximum(y1[i], y1[order[1:]]) - xx2, yy2 = np.minimum(x2[i], x2[order[1:]]), np.minimum(y2[i], y2[order[1:]]) - - w, h = np.maximum(0.0, xx2 - xx1 + 1), np.maximum(0.0, yy2 - yy1 + 1) - ovr = w * h / (areas[i] + areas[order[1:]] - w * h) - - inds = np.where(ovr <= thresh)[0] - order = order[inds + 1] - - return keep - - -def encode(matched, priors, variances): - """Encode the variances from the priorbox layers into the ground truth boxes - we have matched (based on jaccard overlap) with the prior boxes. - Args: - matched: (tensor) Coords of ground truth for each prior in point-form - Shape: [num_priors, 4]. - priors: (tensor) Prior boxes in center-offset form - Shape: [num_priors,4]. - variances: (list[float]) Variances of priorboxes - Return: - encoded boxes (tensor), Shape: [num_priors, 4] - """ - - # dist b/t match center and prior's center - g_cxcy = (matched[:, :2] + matched[:, 2:]) / 2 - priors[:, :2] - # encode variance - g_cxcy /= (variances[0] * priors[:, 2:]) - # match wh / prior wh - g_wh = (matched[:, 2:] - matched[:, :2]) / priors[:, 2:] - g_wh = torch.log(g_wh) / variances[1] - # return target for smooth_l1_loss - return torch.cat([g_cxcy, g_wh], 1) # [num_priors,4] - - -def decode(loc, priors, variances): - """Decode locations from predictions using priors to undo - the encoding we did for offset regression at train time. - Args: - loc (tensor): location predictions for loc layers, - Shape: [num_priors,4] - priors (tensor): Prior boxes in center-offset form. - Shape: [num_priors,4]. - variances: (list[float]) Variances of priorboxes - Return: - decoded bounding box predictions - """ - - boxes = torch.cat(( - priors[:, :2] + loc[:, :2] * variances[0] * priors[:, 2:], - priors[:, 2:] * torch.exp(loc[:, 2:] * variances[1])), 1) - boxes[:, :2] -= boxes[:, 2:] / 2 - boxes[:, 2:] += boxes[:, :2] - return boxes - -def batch_decode(loc, priors, variances): - """Decode locations from predictions using priors to undo - the encoding we did for offset regression at train time. - Args: - loc (tensor): location predictions for loc layers, - Shape: [num_priors,4] - priors (tensor): Prior boxes in center-offset form. - Shape: [num_priors,4]. - variances: (list[float]) Variances of priorboxes - Return: - decoded bounding box predictions - """ - - boxes = torch.cat(( - priors[:, :, :2] + loc[:, :, :2] * variances[0] * priors[:, :, 2:], - priors[:, :, 2:] * torch.exp(loc[:, :, 2:] * variances[1])), 2) - boxes[:, :, :2] -= boxes[:, :, 2:] / 2 - boxes[:, :, 2:] += boxes[:, :, :2] - return boxes diff --git a/spaces/manivannan7gp/Words2Image/README.md b/spaces/manivannan7gp/Words2Image/README.md deleted file mode 100644 index 588f7fcc84860e5d4dad9a8948c205bee26c6ca1..0000000000000000000000000000000000000000 --- a/spaces/manivannan7gp/Words2Image/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: 490 Models Fast Diffusion -emoji: 🪅🌐 -colorFrom: gray -colorTo: green -sdk: gradio -sdk_version: 3.15.0 -app_file: app.py -pinned: true -duplicated_from: Omnibus/maximum_multiplier_places ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/matthoffner/AudioCraft_Plus/audiocraft/grids/diffusion/__init__.py b/spaces/matthoffner/AudioCraft_Plus/audiocraft/grids/diffusion/__init__.py deleted file mode 100644 index e5737294ae16c0de52085b8dcf6825c348f617e4..0000000000000000000000000000000000000000 --- a/spaces/matthoffner/AudioCraft_Plus/audiocraft/grids/diffusion/__init__.py +++ /dev/null @@ -1,6 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. -"""Diffusion grids.""" diff --git a/spaces/matthoffner/chatbot-mini/pages/api/home/home.tsx b/spaces/matthoffner/chatbot-mini/pages/api/home/home.tsx deleted file mode 100644 index 4cd19f4bf117b341b072828d65fcd2c2cb311637..0000000000000000000000000000000000000000 --- a/spaces/matthoffner/chatbot-mini/pages/api/home/home.tsx +++ /dev/null @@ -1,302 +0,0 @@ -import { useEffect, useRef } from 'react'; - -import { GetServerSideProps } from 'next'; -import { useTranslation } from 'next-i18next'; -import { serverSideTranslations } from 'next-i18next/serverSideTranslations'; -import Head from 'next/head'; - -import { useCreateReducer } from '@/hooks/useCreateReducer'; - -import { - cleanConversationHistory, - cleanSelectedConversation, -} from '@/utils/app/clean'; -import { DEFAULT_SYSTEM_PROMPT, DEFAULT_TEMPERATURE } from '@/utils/app/const'; -import { - saveConversation, - saveConversations, - updateConversation, -} from '@/utils/app/conversation'; -import { getSettings } from '@/utils/app/settings'; - -import { Conversation } from '@/types/chat'; -import { KeyValuePair } from '@/types/data'; -import { OpenAIModelID, OpenAIModels, fallbackModelID } from '@/types/openai'; - -import { Chat } from '@/components/Chat/Chat'; - -import HomeContext from './home.context'; -import { HomeInitialState, initialState } from './home.state'; - -import { v4 as uuidv4 } from 'uuid'; - -interface Props { - serverSideApiKeyIsSet: boolean; - serverSidePluginKeysSet: boolean; - defaultModelId: OpenAIModelID; -} - -const Home = ({ - serverSideApiKeyIsSet, - serverSidePluginKeysSet, - defaultModelId, -}: Props) => { - const { t } = useTranslation('chat'); - - const contextValue = useCreateReducer({ - initialState, - }); - - const { - state: { - apiKey, - lightMode, - folders, - conversations, - selectedConversation, - prompts, - temperature, - }, - dispatch, - } = contextValue; - - const stopConversationRef = useRef(false); - - const handleSelectConversation = (conversation: Conversation) => { - dispatch({ - field: 'selectedConversation', - value: conversation, - }); - - saveConversation(conversation); - }; - - // CONVERSATION OPERATIONS -------------------------------------------- - - const handleNewConversation = () => { - const lastConversation = conversations[conversations.length - 1]; - - const newConversation: Conversation = { - id: uuidv4(), - name: t('New Conversation'), - messages: [], - model: lastConversation?.model || { - id: OpenAIModels[defaultModelId].id, - name: OpenAIModels[defaultModelId].name, - maxLength: OpenAIModels[defaultModelId].maxLength, - tokenLimit: OpenAIModels[defaultModelId].tokenLimit, - }, - prompt: DEFAULT_SYSTEM_PROMPT, - temperature: lastConversation?.temperature ?? DEFAULT_TEMPERATURE, - folderId: null, - }; - - const updatedConversations = [...conversations, newConversation]; - - dispatch({ field: 'selectedConversation', value: newConversation }); - dispatch({ field: 'conversations', value: updatedConversations }); - - saveConversation(newConversation); - saveConversations(updatedConversations); - - dispatch({ field: 'loading', value: false }); - }; - - const handleUpdateConversation = ( - conversation: Conversation, - data: KeyValuePair, - ) => { - const updatedConversation = { - ...conversation, - [data.key]: data.value, - }; - - const { single, all } = updateConversation( - updatedConversation, - conversations, - ); - - dispatch({ field: 'selectedConversation', value: single }); - dispatch({ field: 'conversations', value: all }); - }; - - // EFFECTS -------------------------------------------- - - useEffect(() => { - if (window.innerWidth < 640) { - dispatch({ field: 'showChatbar', value: false }); - } - }, [selectedConversation]); - - useEffect(() => { - defaultModelId && - dispatch({ field: 'defaultModelId', value: defaultModelId }); - serverSideApiKeyIsSet && - dispatch({ - field: 'serverSideApiKeyIsSet', - value: serverSideApiKeyIsSet, - }); - serverSidePluginKeysSet && - dispatch({ - field: 'serverSidePluginKeysSet', - value: serverSidePluginKeysSet, - }); - }, [defaultModelId, serverSideApiKeyIsSet, serverSidePluginKeysSet]); - - // ON LOAD -------------------------------------------- - - useEffect(() => { - const settings = getSettings(); - if (settings?.theme) { - dispatch({ - field: 'lightMode', - value: settings.theme, - }); - } - - dispatch({ field: 'apiKey', value: "test" }); - - const pluginKeys = localStorage.getItem('pluginKeys'); - if (serverSidePluginKeysSet) { - dispatch({ field: 'pluginKeys', value: [] }); - localStorage.removeItem('pluginKeys'); - } else if (pluginKeys) { - dispatch({ field: 'pluginKeys', value: pluginKeys }); - } - - if (window.innerWidth < 640) { - dispatch({ field: 'showChatbar', value: false }); - dispatch({ field: 'showPromptbar', value: false }); - } - - const showChatbar = localStorage.getItem('showChatbar'); - if (showChatbar) { - dispatch({ field: 'showChatbar', value: showChatbar === 'true' }); - } - - const showPromptbar = localStorage.getItem('showPromptbar'); - if (showPromptbar) { - dispatch({ field: 'showPromptbar', value: showPromptbar === 'true' }); - } - - const folders = localStorage.getItem('folders'); - if (folders) { - dispatch({ field: 'folders', value: JSON.parse(folders) }); - } - - const prompts = localStorage.getItem('prompts'); - if (prompts) { - dispatch({ field: 'prompts', value: JSON.parse(prompts) }); - } - - const conversationHistory = localStorage.getItem('conversationHistory'); - if (conversationHistory) { - const parsedConversationHistory: Conversation[] = - JSON.parse(conversationHistory); - const cleanedConversationHistory = cleanConversationHistory( - parsedConversationHistory, - ); - - dispatch({ field: 'conversations', value: cleanedConversationHistory }); - } - - const selectedConversation = localStorage.getItem('selectedConversation'); - if (selectedConversation) { - const parsedSelectedConversation: Conversation = - JSON.parse(selectedConversation); - const cleanedSelectedConversation = cleanSelectedConversation( - parsedSelectedConversation, - ); - - dispatch({ - field: 'selectedConversation', - value: cleanedSelectedConversation, - }); - } else { - const lastConversation = conversations[conversations.length - 1]; - dispatch({ - field: 'selectedConversation', - value: { - id: uuidv4(), - name: t('New Conversation'), - messages: [], - model: OpenAIModels[defaultModelId], - prompt: DEFAULT_SYSTEM_PROMPT, - temperature: lastConversation?.temperature ?? DEFAULT_TEMPERATURE, - folderId: null, - }, - }); - } - }, [ - defaultModelId, - dispatch, - serverSideApiKeyIsSet, - serverSidePluginKeysSet, - ]); - - return ( - - - Chatbot Mini - - - - -
          -
          -
          - -
          -
          -
          -
          - ); -}; -export default Home; - -export const getServerSideProps: GetServerSideProps = async ({ locale }) => { - const defaultModelId = - (process.env.DEFAULT_MODEL && - Object.values(OpenAIModelID).includes( - process.env.DEFAULT_MODEL as OpenAIModelID, - ) && - process.env.DEFAULT_MODEL) || - fallbackModelID; - - let serverSidePluginKeysSet = false; - - const googleApiKey = process.env.GOOGLE_API_KEY; - const googleCSEId = process.env.GOOGLE_CSE_ID; - - if (googleApiKey && googleCSEId) { - serverSidePluginKeysSet = true; - } - - return { - props: { - serverSideApiKeyIsSet: !!process.env.OPENAI_API_KEY, - defaultModelId, - serverSidePluginKeysSet, - ...(await serverSideTranslations(locale ?? 'en', [ - 'common', - 'chat', - 'sidebar', - 'markdown', - 'promptbar', - 'settings', - ])), - }, - }; -}; diff --git a/spaces/maze/FastStyleTransfer/app.py b/spaces/maze/FastStyleTransfer/app.py deleted file mode 100644 index 790a02807e4caaaa4de9e2fffb85d7f0eff77282..0000000000000000000000000000000000000000 --- a/spaces/maze/FastStyleTransfer/app.py +++ /dev/null @@ -1,177 +0,0 @@ -from huggingface_hub import hf_hub_download - - -Rain_Princess = hf_hub_download(repo_id="maze/FastStyleTransfer", filename="Rain_Princess_512.pth") -The_Scream = hf_hub_download(repo_id="maze/FastStyleTransfer", filename="Scream_512.pth") -The_Mosaic = hf_hub_download(repo_id="maze/FastStyleTransfer", filename="Mosaic_512.pth") -Starry_Night = hf_hub_download(repo_id="maze/FastStyleTransfer", filename="Starry_Night_512.pth") - - -import numpy as np -from PIL import Image -import gradio as gr - -import torch -import torch.nn as nn - -import torchvision.transforms as transforms -device = torch.device("cuda" if torch.cuda.is_available() else "cpu") - - -class TransformerNetwork(nn.Module): - def __init__(self, tanh_multiplier=None): - super(TransformerNetwork, self).__init__() - self.ConvBlock = nn.Sequential( - ConvLayer(3, 32, 9, 1), - nn.ReLU(), - ConvLayer(32, 64, 3, 2), - nn.ReLU(), - ConvLayer(64, 128, 3, 2), - nn.ReLU() - ) - self.ResidualBlock = nn.Sequential( - ResidualLayer(128, 3), - ResidualLayer(128, 3), - ResidualLayer(128, 3), - ResidualLayer(128, 3), - ResidualLayer(128, 3) - ) - self.DeconvBlock = nn.Sequential( - DeconvLayer(128, 64, 3, 2, 1), - nn.ReLU(), - DeconvLayer(64, 32, 3, 2, 1), - nn.ReLU(), - ConvLayer(32, 3, 9, 1, norm="None") - ) - self.tanh_multiplier = tanh_multiplier - - def forward(self, x): - x = self.ConvBlock(x) - x = self.ResidualBlock(x) - x = self.DeconvBlock(x) - if isinstance(self.tanh_multiplier, int): - x = self.tanh_multiplier * F.tanh(x) - return x - -class ConvLayer(nn.Module): - def __init__(self, in_channels, out_channels, kernel_size, stride, norm="instance"): - super(ConvLayer, self).__init__() - padding_size = kernel_size // 2 - self.pad = nn.ReflectionPad2d(padding_size) - self.conv = nn.Conv2d(in_channels, out_channels, kernel_size, stride) - if norm == "instance": - self.norm = nn.InstanceNorm2d(out_channels, affine=True) - elif norm == "batch": - self.norm = nn.BatchNorm2d(out_channels, affine=True) - else: - self.norm = nn.Identity() - - def forward(self, x): - x = self.pad(x) - x = self.conv(x) - x = self.norm(x) - return x - -class ResidualLayer(nn.Module): - def __init__(self, channels=128, kernel_size=3): - super(ResidualLayer, self).__init__() - self.conv1 = ConvLayer(channels, channels, kernel_size, stride=1) - self.relu = nn.ReLU() - self.conv2 = ConvLayer(channels, channels, kernel_size, stride=1) - - def forward(self, x): - identity = x - out = self.relu(self.conv1(x)) - out = self.conv2(out) - out = out + identity - return out - -class DeconvLayer(nn.Module): - def __init__(self, in_channels, out_channels, kernel_size, stride, output_padding, norm="instance"): - super(DeconvLayer, self).__init__() - - padding_size = kernel_size // 2 - self.conv_transpose = nn.ConvTranspose2d(in_channels, out_channels, kernel_size, stride, padding_size, output_padding) - if norm == "instance": - self.norm = nn.InstanceNorm2d(out_channels, affine=True) - elif norm == "batch": - self.norm = nn.BatchNorm2d(out_channels, affine=True) - else: - self.norm = nn.Identity() - - def forward(self, x): - x = self.conv_transpose(x) - out = self.norm(x) - return out - - -mean = np.array([0.485, 0.456, 0.406]) -std = np.array([0.229, 0.224, 0.225]) - -transformer = TransformerNetwork().to(device) - -transformer.eval() - -transform = transforms.Compose([ - transforms.Resize(512), - transforms.ToTensor(), - transforms.Normalize(mean, std), -]) - -denormalize = transforms.Normalize( - mean= [-m/s for m, s in zip(mean, std)], - std= [1/s for s in std] -) -tensor2Image = transforms.ToPILImage() - -@torch.no_grad() -def process(image, model): - image = transform(image).to(device) - image = image.unsqueeze(dim=0) - - image = denormalize(model(image)).cpu() - image = torch.clamp(image.squeeze(dim=0), 0, 1) - image = tensor2Image(image) - - return image - - -def main(image, backbone, style): - if style == "The Scream": - transformer.load_state_dict(torch.load(The_Scream, map_location=torch.device('cpu'))) - elif style == "Rain Princess": - transformer.load_state_dict(torch.load(Rain_Princess, map_location=torch.device('cpu'))) - elif style == "The Mosaic": - transformer.load_state_dict(torch.load(The_Mosaic, map_location=torch.device('cpu'))) - elif style == "Starry Night": - transformer.load_state_dict(torch.load(Starry_Night, map_location=torch.device('cpu'))) - else: - transformer.load_state_dict(torch.load(Rain_Princess, map_location=torch.device('cpu'))) - image = Image.fromarray(image) - isize = image.size - image = process(image, transformer) - s = f"The output image {str(image.size)} is processed by {backbone} based on input image {str(isize)}.
          Please rate the generated image through the Flag button below!" - print(s) - return image, s - -# "Standard ResNet50", "VGG19" -gr.Interface( - title = "Stylize", - description = "Image generated based on Fast Style Transfer", - fn = main, - inputs = [ - gr.inputs.Image(), - gr.inputs.Radio(["Robust ResNet50"], label="Backbone"), - gr.inputs.Dropdown(["The Scream", "Rain Princess", "Starry Night", "The Mosaic"], type="value", default="Rain Princess", label="style") - ], - outputs = [gr.outputs.Image(label="Stylized"), gr.outputs.HTML(label="Comment")], - # examples = [ - # [] - # ], - # live = True, # the interface will recalculate as soon as the user input changes. - allow_flagging = "manual", - flagging_options = ["Excellect", "Moderate", "Bad"], - flagging_dir = "flagged", - allow_screenshot = False, -).launch() -# iface.launch(enable_queue=True, cache_examples=True, debug=True) \ No newline at end of file diff --git a/spaces/merve/uncertainty-calibration/public/fill-in-the-blank/init-diff.js b/spaces/merve/uncertainty-calibration/public/fill-in-the-blank/init-diff.js deleted file mode 100644 index e0bb76f70a4d3ff6689b493236b5da93150746da..0000000000000000000000000000000000000000 --- a/spaces/merve/uncertainty-calibration/public/fill-in-the-blank/init-diff.js +++ /dev/null @@ -1,525 +0,0 @@ -/* Copyright 2021 Google LLC. All Rights Reserved. - -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -==============================================================================*/ - - -window.initDiff = function(pair){ - var sel = d3.select('.' + pair.class).html('') - .at({role: 'graphics-document', 'aria-label': pair.ariaLabel}) - .on('keydown', function(){ - sel.classed('changed', 1) - if (d3.event.keyCode != 13) return - d3.event.preventDefault() - - pair.str0 = '' - - updateChart() - }) - - if (!sel.node()) return - - var isMobile = innerWidth <= 1100 - - var optionSel = sel.append('div.options') - .classed('wide', !isMobile) - .st({marginBottom: isMobile ? 20 : ''}) - - var input0Sel = optionSel.append('div.flex-row').append('textarea.input-0') - .st({marginBottom: 10}) - if (isMobile){ - input0Sel.on('change', updateChart) - } - - input0Sel.node().value = pair.s0.replace('[MASK]', '_') - - var countSel = optionSel.append('div.option-tokens') - .append('b').text('Number of Tokens') - .parent() - .append('div.flex-row') - .appendMany('div.button', [30, 200, 1000, 5000, 99999]) - .text(d => d > 5000 ? 'All' : d) - .st({width: 34, textAlign: 'center'}) - .on('click', d => { - pair.count = d - updateChart() - }) - - var typeSel = optionSel.append('div.option-type') - .append('b').text('Chart Type') - .parent() - .append('div.flex-row') - .appendMany('div.button', ['Likelihoods', 'Differences']) - .text(d => d) - .st({width: 116, textAlign: 'center'}) - .on('click', d => { - pair.type = d - updateChart() - }) - - var modelSel = optionSel.append('div.option-model') - .st({display: 'none'}) - .append('b').text('Model') - .parent() - .append('div.flex-row') - .appendMany('div.button', ['BERT', 'Zari']) - .text(d => d) - .st({width: 116, textAlign: 'center'}) - .on('click', d => { - pair.model = d - updateChart() - }) - - var updateSel = optionSel.append('div.button.update').on('click', updateChart) - .text('Update') - .st({display: isMobile ? 'none' : ''}) - - var resetSel = optionSel.append('div.reset') - .html(' Reset') - .on('click', () => { - pair = JSON.parse(pair.pairStr) - pair.pairStr = JSON.stringify(pair) - input0Sel.node().value = pair.s0 - updateChart(true) - }) - .st({display: 'none'}) - - if (pair.alts){ - d3.select('.' + pair.class + '-alts').html('') - .classed('alt-block', 1).st({display: 'block'}) - .appendMany('span.p-button-link', pair.alts) - .html(d => d.str) - .on('click', d => { - input0Sel.node().value = d.rawStr - - updateChart() - }) - } - - var scatters = [] - var scatterSel = sel.append('div.pair-container-overflow').append('div.pair-container') - .st({width: 940}) - .appendMany('div', 'p0 p1 c0 p2 p3 c1'.split(' ')) - .each(function(id){ - var c = d3.conventions({ - sel: d3.select(this).append('div.graph.diff').st({marginTop: -5}), - height: 250, - width: 250, - margin: {bottom: 40, right: 60, top: 5, left: 0}, - layers: 'sdds', - }) - - var [type, i] = id.split('') - - if (type == 'p'){ - c.sel - .st({pointer: 'cursor'}) - .on('click', () => { - pair.colorByIndex = +i - updateChart() - }) - } - - var nTicks = 4 - var tickScale = d3.scaleLinear().range([0, c.width]) - c.svg.appendMany('path.bg-tick', d3.range(nTicks + 1)) - .at({d: d => `M ${.5 + Math.round(tickScale(d/nTicks))} 0 V ${c.height}`}) - c.svg.appendMany('path.bg-tick', d3.range(nTicks + 1)) - .at({d: d => `M 0 ${.5 + Math.round(tickScale(d/nTicks))} H ${c.width}`}) - - - c.type = type - c.scatters = scatters - c.scatter = window.initScatter(c) - c.scatters.push(c.scatter) - - - d3.select(this).datum({c, type, i}) - }) - - - updateChart(true) - - - async function updateChart(isFirst){ - // warningSel.st({opacity: isFirst ? 0 : 1}) - // resetSel.st({opacity: isFirst ? 0 : 1}) - sel.classed('changed', 0) - - countSel.classed('active', d => d == pair.count) - typeSel.classed('active', d => d == pair.type) - modelSel.classed('active', d => d == pair.model) - - function getStr(sel){ - return sel.node().value.replace('_', '[MASK]') - } - - - pair.s0 = input0Sel.node().value.replace('_', '[MASK]') - var str = pair.s0.replace('[MASK]', '{MASK}') - var sentences = str.split('|').length == 2 ? getZariSenteces() : getTwoPairSentences() - - function getTwoPairSentences(){ - var start = str.split('[')[0] - var mid = str.split(']')[1].split('[')[0] - var last = str.split(']')[2] - - var pairA = str.split('[')[1].split(']')[0].split('|') - var pairB = str.split('[')[2].split(']')[0].split('|') - - return [ - {i: 0, j: 0}, - {i: 0, j: 1}, - {i: 1, j: 0}, - {i: 1, j: 1}, - ].map(word => { - var strA = pairA[word.i] - var strB = pairB[word.j] - - var sentence = [start, strA, mid, strB, last] - .join('') - .replace('{MASK}', '[MASK]') - - var modelPath = pair.model == 'Zari' ? 'embed_zari_cda' : 'embed' - - return {word, strA, strB, sentence, modelPath} - }) - } - - function getZariSenteces(){ - var start = str.split('[')[0] - var last = str.split(']')[1] - var pairB = str.split('[')[1].split(']')[0].split('|') - - return [ - {i: 0, j: 0}, - {i: 0, j: 1}, - {i: 1, j: 0}, - {i: 1, j: 1}, - ].map(word => { - var strA = word.i ? 'Zari' : 'BERT' - var strB = pairB[word.j] - - var sentence = [start, strB, last] - .join('') - .replace('{MASK}', '[MASK]') - - var modelPath = strA == 'Zari' ? 'embed_zari_cda' : 'embed' - - return {word, strA, strB, sentence, modelPath} - }) - } - - - updateSel.classed('loading', 1) - // TODO parallel? - for (var d of sentences){ - d.maskVals = await post(d.modelPath, {sentence: d.sentence}) - } - updateSel.classed('loading', 0) - - - var allTokens = sentences[0].maskVals.map((v0, i) => { - var word = tokenizer.vocab[i] - var v = sentences.map(d => d.maskVals[i]) - - return {word, i, v, isVisible: false} - }) - - _.sortBy(allTokens, d => -d.v[0]).forEach((d, i) => d.v0i = i) - _.sortBy(allTokens, d => -d.v[1]).forEach((d, i) => d.v1i = i) - _.sortBy(allTokens, d => -d.v[2]).forEach((d, i) => d.v2i = i) - _.sortBy(allTokens, d => -d.v[3]).forEach((d, i) => d.v3i = i) - - allTokens - .filter(d => - d.v0i <= pair.count || - d.v1i <= pair.count || - d.v2i <= pair.count || - d.v3i <= pair.count - ) - .forEach(d => { - d.isTop = true - d.isVisible = true - }) - - var pairs = [ - [0, 1], - [2, 3], - - // [1, 2], - // [3, 0], - - [0, 2], - [1, 3], - - ].map((d, i) => { - var sentA = sentences[d[0]] - var sentB = sentences[d[1]] - - var allPairTokens = allTokens.map((t, i) => { - return {word: t.word, v0: t.v[d[0]], i, v1: t.v[d[1]], t} - }) - - allPairTokens.forEach(d => { - d.dif = d.v0 - d.v1 - d.meanV = (d.v0 + d.v1) / 2 - }) - var i0key = 'v' + d[0] + 'i' - var i1key = 'v' + d[1] + 'i' - - // TODO should this be done per chart or globally? - var topTokens = allPairTokens.filter(d => d.t.isTop) - // var topTokens = allPairTokens.filter(d => d.t[i0key] <= pair.count || d.t[i1key] <= pair.count) - var logitExtent = d3.extent(topTokens.map(d => d.v0).concat(topTokens.map(d => d.v1))) - - var tokens = allPairTokens - .filter(d => logitExtent[0] <= d.v0 && logitExtent[0] <= d.v1) - - var mag = logitExtent[1] - logitExtent[0] - logitExtent = [logitExtent[0] - mag*.002, logitExtent[1] + mag*.002] - - if (pair.type == 'Differences') tokens = _.sortBy(allPairTokens, d => -d.meanV).slice(0, pair.count) - - tokens.forEach(d => { - d.isVisible = true - }) - - var maxDif = d3.max(d3.extent(tokens, d => d.dif).map(Math.abs)) - var color = palette(-maxDif*.5, maxDif*.5) - - label0 = sentA.strA + ' / ' + sentA.strB - label1 = sentB.strA + ' / ' + sentB.strB - - - return {i, sentA, sentB, allPairTokens, logitExtent, tokens, maxDif, color, label0, label1} - }) - - var compares = [[0, 1], [2, 3]].map((d, i) => { - var pairA = pairs[d[0]] - var pairB = pairs[d[1]] - - var allTokensA = pairA.allPairTokens - var allTokensB = pairB.allPairTokens - - var allPairTokens = allTokens.map((t, i) => { - return {word: t.word, t, difA: allTokensA[i].dif, meanA: allTokensA[i].meanV, difB: allTokensB[i].dif, meanB: allTokensB[i].meanV} - }) - - _.sortBy(allPairTokens, d => -d.meanA) - .slice(0, pair.count) - .forEach(d => d.isVisible = true) - - _.sortBy(allPairTokens, d => -d.meanB) - .slice(0, pair.count) - .forEach(d => d.isVisible = true) - - var tokens = allPairTokens.filter(d => d.isVisible) - - return {pairA, pairB, tokens, allPairTokens} - }) - - if (!pair.colorByIndex) pair.colorByIndex = 1 - var color = pairs[pair.colorByIndex].color - pairs[pair.colorByIndex].allPairTokens.forEach(d => { - d.t.color = color(d.dif) - }) - - scatterSel.each(function({c, i, type}){ - updatePairChart(c, type == 'p' ? pairs[i] : compares[i]) - }) - } - - function updatePairChart(c, p){ - var {logitExtent, tokens, maxDif, color} = p - var allTokens = p.allPairTokens - - if (c.type == 'c'){ - drawDifDif() - } else { - if (pair.type == 'Likelihoods'){ - drawXY() - } else{ - drawRotated() - } - - sel.classed('is-xy', pair.type == 'Likelihoods') - sel.classed('is-rotate', pair.type != 'Likelihoods') - c.sel.classed('is-color-by', p.i == pair.colorByIndex) - c.sel.classed('not-is-color-by', p.i != pair.colorByIndex) - } - - function drawXY(){ - c.x.domain(logitExtent) - c.y.domain(logitExtent) - - d3.drawAxis(c) - - var s = {30: 4, 200: 3, 1000: 3}[pair.count] || 2 - var scatterData = allTokens.map(d => { - var x = c.x(d.v0) - var y = c.y(d.v1) - var fill = d.t.color - var dif = d.dif - var word = d.word - var show = '' - var isVisible = d.isVisible - - return {x, y, s, dif, fill, word, show, isVisible} - }) - - - var textCandidates = _.sortBy(scatterData.filter(d => d.isVisible), d => d.dif) - d3.nestBy(textCandidates.slice(0, 1000), d => Math.round(d.y/10)) - .forEach(d => d[0].show = 'uf') - d3.nestBy(textCandidates.reverse().slice(0, 1000), d => Math.round(d.y/10)) - .forEach(d => d[0].show = 'lr') - - logitExtent.pair = pair - c.scatter.draw(c, scatterData, true) - c.svg.selectAppend('text.x-axis-label.xy-only') - .translate([c.width/2, c.height + 24]) - .text(p.label0 + ' →') - .at({fill: util.colors[0], textAnchor: 'middle'}) - - c.svg.selectAppend('g.y-axis-label.xy-only') - .translate([c.width + 20, c.height/2]) - .selectAppend('text') - .text(p.label1 + ' →') - .at({fill: util.colors[1], textAnchor: 'middle', transform: 'rotate(-90)'}) - } - - function drawRotated(){ - c.x.domain(d3.extent(tokens, d => d.meanV)) - c.y.domain([maxDif, -maxDif]) - - d3.drawAxis(c) - - var scatterData = allTokens.map(d => { - var x = c.x(d.meanV) - var y = c.y(d.dif) - var fill = d.t.color - var word = d.word - var show = '' - var isVisible = d.isVisible - - return {x, y, s: 2, fill, word, show, isVisible} - }) - - scatterData.forEach(d => { - d.dx = d.x - c.width/2 - d.dy = d.y - c.height/2 - }) - - var textCandidates = _.sortBy(scatterData, d => -d.dx*d.dx - d.dy*d.dy) - .filter(d => d.isVisible) - .slice(0, 5000) - d3.nestBy(textCandidates, d => Math.round(12*Math.atan2(d.dx, d.dy))) - .map(d => d[0]) - .forEach(d => d.show = (d.dy < 0 ? 'u' : 'l') + (d.dx < 0 ? 'l' : 'r')) - - c.scatter.draw(c, scatterData, false) - c.svg.selectAppend('text.rotate-only.x-axis-label') - .translate([c.width/2, c.height + 24]) - .text(p.label0 + ' + ' + p.label1 + ' →') - .at({textAnchor: 'middle'}) - .st({fill: '#000', fontWeight: 300}) - - c.svg.select('g.rotate-only.sent-1').html('') - - c.svg.selectAppend('g.rotate-only.sent-1') - .translate([c.width + 20, c.height/2]) - .append('text') - .text(p.label1 + ' →') - .at({textAnchor: 'start', transform: 'rotate(-90)', x: 10}) - .st({fill: util.colors[1]}) - - c.svg.selectAppend('g.rotate-only.sent-1') - .translate([c.width + 20, c.height/2 + 0]) - .append('text') - .text('← ' + p.label0) - .at({textAnchor: 'end', transform: 'rotate(-90)', x: -10}) - .st({fill: util.colors[0]}) - } - - function drawDifDif(){ - var maxDifA = d3.max(d3.extent(tokens, d => d.difA).map(Math.abs)) - var maxDifB = d3.max(d3.extent(tokens, d => d.difB).map(Math.abs)) - var maxDif = d3.max([maxDifA, maxDifB]) - - c.x.domain([maxDif, -maxDif]) - c.y.domain([maxDif, -maxDif]) - - d3.drawAxis(c) - - var scatterData = allTokens.map(d => { - var x = c.x(d.difA) - var y = c.y(d.difB) - var fill = d.t.color - var word = d.word - var show = '' - var isVisible = d.isVisible - return {x, y, s: 2, fill, word, show, isVisible} - }) - - scatterData.forEach(d => { - d.dx = d.x - c.width/2 - d.dy = d.y - c.height/2 - }) - - var textCandidates = _.sortBy(scatterData.filter(d => d.isVisible), d => d.x - d.y) - d3.nestBy(textCandidates, d => Math.round(d.y/10)) - .forEach(d => d[0].show = 'uf') - d3.nestBy(textCandidates.reverse(), d => Math.round(d.y/10)) - .forEach(d => d[0].show = 'lr') - - c.scatter.draw(c, scatterData, true) - - var isColor = pair.colorByIndex == p.pairA.i - - var labelSel = c.svg.selectAppend('g.sent-0') - .html('') - .translate([c.width/2, c.height + 24]) - - labelSel.append('text') - .text(p.pairA.label1 + ' →') - .at({textAnchor: 'start', x: 10}) - .st({fill: isColor ? util.colors[1] : '#444', fontWeight: isColor ? 400 : ''}) - - labelSel.append('text') - .text('← ' + p.pairA.label0) - .at({textAnchor: 'end', x: -10}) - .st({fill: isColor ? util.colors[0] : '#444', fontWeight: isColor ? 400 : ''}) - - - var isColor = pair.colorByIndex == p.pairB.i - - var labelSel = c.svg.selectAppend('g.sent-1') - .html('') - .translate([c.width + 20, c.height/2]) - - labelSel.append('text') - .text(p.pairB.label1 + ' →') - .at({textAnchor: 'start', transform: 'rotate(-90)', x: 10}) - .st({fill: isColor ? util.colors[1] : '#444', fontWeight: isColor ? 400 : ''}) - - labelSel.append('text') - .text('← ' + p.pairB.label0) - .at({textAnchor: 'end', transform: 'rotate(-90)', x: -10}) - .st({fill: isColor ? util.colors[0] : '#444', fontWeight: isColor ? 400 : ''}) - } - - } -} - -if (window.init) init() diff --git a/spaces/merve/uncertainty-calibration/source/dataset-worldviews/shapes.js b/spaces/merve/uncertainty-calibration/source/dataset-worldviews/shapes.js deleted file mode 100644 index 87af55b4829a78b48dc41f6674c12cd58cfc3741..0000000000000000000000000000000000000000 --- a/spaces/merve/uncertainty-calibration/source/dataset-worldviews/shapes.js +++ /dev/null @@ -1,248 +0,0 @@ - -// Space out the shapes a bit -shapeParams.forEach((d) => (d.startX = d.startX * 1.1)); - -// How to draw the background boxes, which will be styled later -const classifierBgPathTop = "M 420 150 H 0 V 0 H 420 V 150"; -const classifierBgPathBottom = "M 420 300 H 0 V 0 H 420 V 300"; - -const toDropdownValueStringDict = { - shape_name: "circles, triangles, or rectangles", - pointiness: "pointy shapes or round shapes", - size: "small shapes or big shapes", -}; - -const toShortValueStringDict = { - shape_name: "circles, triangles, or rectangles", - pointiness: "pointy or round", - size: "small or big", -}; - -const toDropdownValueRoundingStringDict = { - true: "with our best guess", - false: 'as "other"', -}; - -const toPropertyStringDict = { - pointy: "pointy shapes", - round: "round shapes", - small: "small shapes", - large: "big shapes", - circle: "circles", - triangle: "triangles", - rect: "rectangles", -}; - -function toOriginalString(inputString) { - for (const [key, value] of Object.entries(toPropertyStringDict)) { - if (inputString == value) { - return key; - } - } -} - -function toPropertyString(inputProperty, isRounding = true) { - if (!isRounding && inputProperty.startsWith("rt_")) { - return "others"; - } - return toPropertyStringDict[inputProperty.replace("rt_", "")]; -} - -// Dictionary mapping div name to classifier results and summary sentences -var allResults = {}; -var summaries = {}; - -function toBool(inputString) { - if (inputString == "true") { - return true; - } - return false; -} -function updateResults() { - allResults["default-classifier"] = calculateResults(); - allResults["second-classifier"] = calculateResults( - "shape_name", - toBool( - document.getElementById("second-classifier-select-rounding").value - ) - ); - - allResults["final-classifier"] = calculateResults( - document.getElementById("final-classifier-select-category").value, - toBool( - document.getElementById("final-classifier-select-rounding").value - ) - ); - - allResults["conclusion"] = calculateResults( - document.getElementById("conclusion-select-category").value, - true - ); - - updateSummaries(); - updateSecondInterfaceImages(); -} - -// Text summaries are written by hand for simplicity, and keyed simply by -// a string of the form "[category]:[useGuess]" (or simply "none"). -// These are hashed in the same way as the results, by div name. -function updateSummaries() { - summaries["default-classifier"] = getPerformanceSummary("none"); - summaries["second-classifier"] = getPerformanceSummary( - "shape_name:" + - document.getElementById("second-classifier-select-rounding").value - ); - - summaries["final-classifier"] = getPerformanceSummary( - document.getElementById("final-classifier-select-category").value + - ":" + - document.getElementById("final-classifier-select-rounding").value - ); - - summaries["conclusion"] = getPerformanceSummary( - document.getElementById("conclusion-select-category").value + ":" + true - ); -} - -// Yes, these background colors are hardcoded in, -// no, this is not good design, this is just how it happened. -function getPerformanceSummary(key) { - allSummaries = { - "shape_name:true": - 'well on circles, terribly on triangles, and best on rectangles', - "shape_name:false": - 'poorly on circles, best on triangles and rectangles, and fine on other shapes', - "pointiness:true": - 'better on pointy shapes and worse on round shapes', - "pointiness:false": - 'best on pointy shapes, fine on round shapes, and poorly on other shapes', - "size:true": - 'better on small shapes, worse on big shapes', - "size:false": - 'poorly on small shapes, terribly on big shapes, and best on other shapes', - "none:true": - 'fine on all shapes', - "none:false": - 'fine on all shapes', - none: 'fine on all shapes', - }; - - return "The Is-Shaded Classifier performs " + allSummaries[key] + "."; -} - -// On the second-classifier dropdown, update the "task interface" image. -function updateSecondInterfaceImages() { - d3.select(".second-interface").html(function () { - if ( - !document.getElementById("second-classifier-select-rounding").value - ) { - return; - } - var imgPath = - "img/interface_shape_name_" + - document.getElementById("second-classifier-select-rounding").value; - return ( - '' - ); - }); -} - -// Calculate results given input parameters -function calculateResults(property = "none", useGuess = false) { - switch (property) { - case "none": - var nAccurate = shapeParams.filter( - (shape) => shape.correctness == "correct" - ).length; - var totalShapes = shapeParams.length; - - var results = [ - { - object: "shape", - n: totalShapes, - "n correct": nAccurate, - accuracy: (nAccurate / totalShapes).toFixed(3), - rawCategoryName: "none", - }, - ]; - - return results; - case "pointiness": - categories = ["pointy", "round"]; - break; - case "size": - categories = ["small", "large"]; - break; - case "shape_name": - categories = ["circle", "triangle", "rect"]; - break; - } - - var results = []; - if (useGuess == true) { - // Rounding shapes to categories - - for (const category of categories) { - // Get shapes that are either in this category (e.g. rectangle) or "rounds to" this category (e.g. rt_rectangle) - var theseShapes = shapeParams.filter( - (shape) => - shape[property] == category || - shape[property] == "rt_" + category - ); - var nAccurate = theseShapes.filter( - (shape) => shape.correctness == "correct" - ).length; - var totalShapes = theseShapes.length; - - results.push({ - object: toPropertyString(category), - n: totalShapes, - "n correct": nAccurate, - accuracy: (nAccurate / totalShapes).toFixed(3), - rawCategoryName: category, - }); - } - } else { - // Not rounding, treat everything else as "other" - - // First go through existing categories - for (const category of categories) { - var theseShapes = shapeParams.filter( - (shape) => shape[property] == category - ); - var nAccurate = theseShapes.filter( - (shape) => shape.correctness == "correct" - ).length; - var totalShapes = theseShapes.length; - results.push({ - object: toPropertyString(category), - n: totalShapes, - "n correct": nAccurate, - accuracy: (nAccurate / totalShapes).toFixed(3), - rawCategoryName: category, - }); - } - - // Now get "other" shapes - var theseShapes = shapeParams.filter( - (shape) => !categories.includes(shape[property]) - ); - var nAccurate = theseShapes.filter( - (shape) => shape.correctness == "correct" - ).length; - var totalShapes = theseShapes.length; - results.push({ - object: "other shapes", - n: totalShapes, - "n correct": nAccurate, - accuracy: (nAccurate / totalShapes).toFixed(3), - rawCategoryName: "other", - }); - } - - return results; -} diff --git a/spaces/mmlab-ntu/relate-anything-model/segment_anything/modeling/mask_decoder.py b/spaces/mmlab-ntu/relate-anything-model/segment_anything/modeling/mask_decoder.py deleted file mode 100644 index 8635b671d24329d7764404ca0479cb9af4260daa..0000000000000000000000000000000000000000 --- a/spaces/mmlab-ntu/relate-anything-model/segment_anything/modeling/mask_decoder.py +++ /dev/null @@ -1,177 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. - -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -import torch -from torch import nn -from torch.nn import functional as F - -from typing import List, Tuple, Type - -from .common import LayerNorm2d - - -class MaskDecoder(nn.Module): - def __init__( - self, - *, - transformer_dim: int, - transformer: nn.Module, - num_multimask_outputs: int = 3, - activation: Type[nn.Module] = nn.GELU, - iou_head_depth: int = 3, - iou_head_hidden_dim: int = 256, - ) -> None: - """ - Predicts masks given an image and prompt embeddings, using a - tranformer architecture. - - Arguments: - transformer_dim (int): the channel dimension of the transformer - transformer (nn.Module): the transformer used to predict masks - num_multimask_outputs (int): the number of masks to predict - when disambiguating masks - activation (nn.Module): the type of activation to use when - upscaling masks - iou_head_depth (int): the depth of the MLP used to predict - mask quality - iou_head_hidden_dim (int): the hidden dimension of the MLP - used to predict mask quality - """ - super().__init__() - self.transformer_dim = transformer_dim - self.transformer = transformer - - self.num_multimask_outputs = num_multimask_outputs - - self.iou_token = nn.Embedding(1, transformer_dim) - self.num_mask_tokens = num_multimask_outputs + 1 - self.mask_tokens = nn.Embedding(self.num_mask_tokens, transformer_dim) - - self.output_upscaling = nn.Sequential( - nn.ConvTranspose2d(transformer_dim, transformer_dim // 4, kernel_size=2, stride=2), - LayerNorm2d(transformer_dim // 4), - activation(), - nn.ConvTranspose2d(transformer_dim // 4, transformer_dim // 8, kernel_size=2, stride=2), - activation(), - ) - self.output_hypernetworks_mlps = nn.ModuleList( - [ - MLP(transformer_dim, transformer_dim, transformer_dim // 8, 3) - for i in range(self.num_mask_tokens) - ] - ) - - self.iou_prediction_head = MLP( - transformer_dim, iou_head_hidden_dim, self.num_mask_tokens, iou_head_depth - ) - - def forward( - self, - image_embeddings: torch.Tensor, - image_pe: torch.Tensor, - sparse_prompt_embeddings: torch.Tensor, - dense_prompt_embeddings: torch.Tensor, - multimask_output: bool, - ) -> Tuple[torch.Tensor, torch.Tensor, torch.Tensor]: - """ - Predict masks given image and prompt embeddings. - - Arguments: - image_embeddings (torch.Tensor): the embeddings from the image encoder - image_pe (torch.Tensor): positional encoding with the shape of image_embeddings - sparse_prompt_embeddings (torch.Tensor): the embeddings of the points and boxes - dense_prompt_embeddings (torch.Tensor): the embeddings of the mask inputs - multimask_output (bool): Whether to return multiple masks or a single - mask. - - Returns: - torch.Tensor: batched predicted masks - torch.Tensor: batched predictions of mask quality - """ - masks, iou_pred, mask_tokens_out = self.predict_masks( - image_embeddings=image_embeddings, - image_pe=image_pe, - sparse_prompt_embeddings=sparse_prompt_embeddings, - dense_prompt_embeddings=dense_prompt_embeddings, - ) - - # Select the correct mask or masks for outptu - if multimask_output: - mask_slice = slice(1, None) - else: - mask_slice = slice(0, 1) - masks = masks[:, mask_slice, :, :] - mask_tokens_out = mask_tokens_out[:, mask_slice, :] - iou_pred = iou_pred[:, mask_slice] - - # Prepare output - return masks, iou_pred, mask_tokens_out - - def predict_masks( - self, - image_embeddings: torch.Tensor, - image_pe: torch.Tensor, - sparse_prompt_embeddings: torch.Tensor, - dense_prompt_embeddings: torch.Tensor, - ) -> Tuple[torch.Tensor, torch.Tensor, torch.Tensor]: - """Predicts masks. See 'forward' for more details.""" - # Concatenate output tokens - output_tokens = torch.cat([self.iou_token.weight, self.mask_tokens.weight], dim=0) - output_tokens = output_tokens.unsqueeze(0).expand(sparse_prompt_embeddings.size(0), -1, -1) - tokens = torch.cat((output_tokens, sparse_prompt_embeddings), dim=1) - - # Expand per-image data in batch direction to be per-mask - src = torch.repeat_interleave(image_embeddings, tokens.shape[0], dim=0) - src = src + dense_prompt_embeddings - pos_src = torch.repeat_interleave(image_pe, tokens.shape[0], dim=0) - b, c, h, w = src.shape - - # Run the transformer - hs, src = self.transformer(src, pos_src, tokens) - iou_token_out = hs[:, 0, :] - mask_tokens_out = hs[:, 1 : (1 + self.num_mask_tokens), :] - - # Upscale mask embeddings and predict masks using the mask tokens - src = src.transpose(1, 2).view(b, c, h, w) - upscaled_embedding = self.output_upscaling(src) - hyper_in_list: List[torch.Tensor] = [] - for i in range(self.num_mask_tokens): - hyper_in_list.append(self.output_hypernetworks_mlps[i](mask_tokens_out[:, i, :])) - hyper_in = torch.stack(hyper_in_list, dim=1) - b, c, h, w = upscaled_embedding.shape - masks = (hyper_in @ upscaled_embedding.view(b, c, h * w)).view(b, -1, h, w) - - # Generate mask quality predictions - iou_pred = self.iou_prediction_head(iou_token_out) - - return masks, iou_pred, mask_tokens_out - - -# Lightly adapted from -# https://github.com/facebookresearch/MaskFormer/blob/main/mask_former/modeling/transformer/transformer_predictor.py # noqa -class MLP(nn.Module): - def __init__( - self, - input_dim: int, - hidden_dim: int, - output_dim: int, - num_layers: int, - sigmoid_output: bool = False, - ) -> None: - super().__init__() - self.num_layers = num_layers - h = [hidden_dim] * (num_layers - 1) - self.layers = nn.ModuleList( - nn.Linear(n, k) for n, k in zip([input_dim] + h, h + [output_dim]) - ) - self.sigmoid_output = sigmoid_output - - def forward(self, x): - for i, layer in enumerate(self.layers): - x = F.relu(layer(x)) if i < self.num_layers - 1 else layer(x) - if self.sigmoid_output: - x = F.sigmoid(x) - return x diff --git a/spaces/mpatel57/WOUAF-Text-to-Image/torch_utils/ops/conv2d_gradfix.py b/spaces/mpatel57/WOUAF-Text-to-Image/torch_utils/ops/conv2d_gradfix.py deleted file mode 100644 index e95e10d0b1d0315a63a76446fd4c5c293c8bbc6d..0000000000000000000000000000000000000000 --- a/spaces/mpatel57/WOUAF-Text-to-Image/torch_utils/ops/conv2d_gradfix.py +++ /dev/null @@ -1,170 +0,0 @@ -# Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved. -# -# NVIDIA CORPORATION and its licensors retain all intellectual property -# and proprietary rights in and to this software, related documentation -# and any modifications thereto. Any use, reproduction, disclosure or -# distribution of this software and related documentation without an express -# license agreement from NVIDIA CORPORATION is strictly prohibited. - -"""Custom replacement for `torch.nn.functional.conv2d` that supports -arbitrarily high order gradients with zero performance penalty.""" - -import warnings -import contextlib -import torch - -# pylint: disable=redefined-builtin -# pylint: disable=arguments-differ -# pylint: disable=protected-access - -#---------------------------------------------------------------------------- - -enabled = False # Enable the custom op by setting this to true. -weight_gradients_disabled = False # Forcefully disable computation of gradients with respect to the weights. - -@contextlib.contextmanager -def no_weight_gradients(): - global weight_gradients_disabled - old = weight_gradients_disabled - weight_gradients_disabled = True - yield - weight_gradients_disabled = old - -#---------------------------------------------------------------------------- - -def conv2d(input, weight, bias=None, stride=1, padding=0, dilation=1, groups=1): - if _should_use_custom_op(input): - return _conv2d_gradfix(transpose=False, weight_shape=weight.shape, stride=stride, padding=padding, output_padding=0, dilation=dilation, groups=groups).apply(input, weight, bias) - return torch.nn.functional.conv2d(input=input, weight=weight, bias=bias, stride=stride, padding=padding, dilation=dilation, groups=groups) - -def conv_transpose2d(input, weight, bias=None, stride=1, padding=0, output_padding=0, groups=1, dilation=1): - if _should_use_custom_op(input): - return _conv2d_gradfix(transpose=True, weight_shape=weight.shape, stride=stride, padding=padding, output_padding=output_padding, groups=groups, dilation=dilation).apply(input, weight, bias) - return torch.nn.functional.conv_transpose2d(input=input, weight=weight, bias=bias, stride=stride, padding=padding, output_padding=output_padding, groups=groups, dilation=dilation) - -#---------------------------------------------------------------------------- - -def _should_use_custom_op(input): - assert isinstance(input, torch.Tensor) - if (not enabled) or (not torch.backends.cudnn.enabled): - return False - if input.device.type != 'cuda': - return False - if any(torch.__version__.startswith(x) for x in ['1.7.', '1.8.', '1.9']): - return True - warnings.warn(f'conv2d_gradfix not supported on PyTorch {torch.__version__}. Falling back to torch.nn.functional.conv2d().') - return False - -def _tuple_of_ints(xs, ndim): - xs = tuple(xs) if isinstance(xs, (tuple, list)) else (xs,) * ndim - assert len(xs) == ndim - assert all(isinstance(x, int) for x in xs) - return xs - -#---------------------------------------------------------------------------- - -_conv2d_gradfix_cache = dict() - -def _conv2d_gradfix(transpose, weight_shape, stride, padding, output_padding, dilation, groups): - # Parse arguments. - ndim = 2 - weight_shape = tuple(weight_shape) - stride = _tuple_of_ints(stride, ndim) - padding = _tuple_of_ints(padding, ndim) - output_padding = _tuple_of_ints(output_padding, ndim) - dilation = _tuple_of_ints(dilation, ndim) - - # Lookup from cache. - key = (transpose, weight_shape, stride, padding, output_padding, dilation, groups) - if key in _conv2d_gradfix_cache: - return _conv2d_gradfix_cache[key] - - # Validate arguments. - assert groups >= 1 - assert len(weight_shape) == ndim + 2 - assert all(stride[i] >= 1 for i in range(ndim)) - assert all(padding[i] >= 0 for i in range(ndim)) - assert all(dilation[i] >= 0 for i in range(ndim)) - if not transpose: - assert all(output_padding[i] == 0 for i in range(ndim)) - else: # transpose - assert all(0 <= output_padding[i] < max(stride[i], dilation[i]) for i in range(ndim)) - - # Helpers. - common_kwargs = dict(stride=stride, padding=padding, dilation=dilation, groups=groups) - def calc_output_padding(input_shape, output_shape): - if transpose: - return [0, 0] - return [ - input_shape[i + 2] - - (output_shape[i + 2] - 1) * stride[i] - - (1 - 2 * padding[i]) - - dilation[i] * (weight_shape[i + 2] - 1) - for i in range(ndim) - ] - - # Forward & backward. - class Conv2d(torch.autograd.Function): - @staticmethod - def forward(ctx, input, weight, bias): - assert weight.shape == weight_shape - if not transpose: - output = torch.nn.functional.conv2d(input=input, weight=weight, bias=bias, **common_kwargs) - else: # transpose - output = torch.nn.functional.conv_transpose2d(input=input, weight=weight, bias=bias, output_padding=output_padding, **common_kwargs) - ctx.save_for_backward(input, weight) - return output - - @staticmethod - def backward(ctx, grad_output): - input, weight = ctx.saved_tensors - grad_input = None - grad_weight = None - grad_bias = None - - if ctx.needs_input_grad[0]: - p = calc_output_padding(input_shape=input.shape, output_shape=grad_output.shape) - grad_input = _conv2d_gradfix(transpose=(not transpose), weight_shape=weight_shape, output_padding=p, **common_kwargs).apply(grad_output, weight, None) - assert grad_input.shape == input.shape - - if ctx.needs_input_grad[1] and not weight_gradients_disabled: - grad_weight = Conv2dGradWeight.apply(grad_output, input) - assert grad_weight.shape == weight_shape - - if ctx.needs_input_grad[2]: - grad_bias = grad_output.sum([0, 2, 3]) - - return grad_input, grad_weight, grad_bias - - # Gradient with respect to the weights. - class Conv2dGradWeight(torch.autograd.Function): - @staticmethod - def forward(ctx, grad_output, input): - op = torch._C._jit_get_operation('aten::cudnn_convolution_backward_weight' if not transpose else 'aten::cudnn_convolution_transpose_backward_weight') - flags = [torch.backends.cudnn.benchmark, torch.backends.cudnn.deterministic, torch.backends.cudnn.allow_tf32] - grad_weight = op(weight_shape, grad_output, input, padding, stride, dilation, groups, *flags) - assert grad_weight.shape == weight_shape - ctx.save_for_backward(grad_output, input) - return grad_weight - - @staticmethod - def backward(ctx, grad2_grad_weight): - grad_output, input = ctx.saved_tensors - grad2_grad_output = None - grad2_input = None - - if ctx.needs_input_grad[0]: - grad2_grad_output = Conv2d.apply(input, grad2_grad_weight, None) - assert grad2_grad_output.shape == grad_output.shape - - if ctx.needs_input_grad[1]: - p = calc_output_padding(input_shape=input.shape, output_shape=grad_output.shape) - grad2_input = _conv2d_gradfix(transpose=(not transpose), weight_shape=weight_shape, output_padding=p, **common_kwargs).apply(grad_output, grad2_grad_weight, None) - assert grad2_input.shape == input.shape - - return grad2_grad_output, grad2_input - - _conv2d_gradfix_cache[key] = Conv2d - return Conv2d - -#---------------------------------------------------------------------------- diff --git a/spaces/mrneuralnet/P-PD/README.md b/spaces/mrneuralnet/P-PD/README.md deleted file mode 100644 index 64e2c77d238a46b35a30a67c3339d3a8d25e49ea..0000000000000000000000000000000000000000 --- a/spaces/mrneuralnet/P-PD/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: P DFD -emoji: ⚡ -colorFrom: yellow -colorTo: purple -sdk: streamlit -sdk_version: 1.25.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference \ No newline at end of file diff --git a/spaces/mueller-franzes/medfusion-app/medical_diffusion/data/augmentation/augmentations_3d.py b/spaces/mueller-franzes/medfusion-app/medical_diffusion/data/augmentation/augmentations_3d.py deleted file mode 100644 index d6b6012d5f50a7d26017daf641eb5eed1c2be639..0000000000000000000000000000000000000000 --- a/spaces/mueller-franzes/medfusion-app/medical_diffusion/data/augmentation/augmentations_3d.py +++ /dev/null @@ -1,38 +0,0 @@ -import torchio as tio -from typing import Union, Optional, Sequence -from torchio.typing import TypeTripletInt -from torchio import Subject, Image -from torchio.utils import to_tuple - -class CropOrPad_None(tio.CropOrPad): - def __init__( - self, - target_shape: Union[int, TypeTripletInt, None] = None, - padding_mode: Union[str, float] = 0, - mask_name: Optional[str] = None, - labels: Optional[Sequence[int]] = None, - **kwargs - ): - - # WARNING: Ugly workaround to allow None values - if target_shape is not None: - self.original_target_shape = to_tuple(target_shape, length=3) - target_shape = [1 if t_s is None else t_s for t_s in target_shape] - super().__init__(target_shape, padding_mode, mask_name, labels, **kwargs) - - def apply_transform(self, subject: Subject): - # WARNING: This makes the transformation subject dependent - reverse transformation must be adapted - if self.target_shape is not None: - self.target_shape = [s_s if t_s is None else t_s for t_s, s_s in zip(self.original_target_shape, subject.spatial_shape)] - return super().apply_transform(subject=subject) - - -class SubjectToTensor(object): - """Transforms TorchIO Subjects into a Python dict and changes axes order from TorchIO to Torch""" - def __call__(self, subject: Subject): - return {key: val.data.swapaxes(1,-1) if isinstance(val, Image) else val for key,val in subject.items()} - -class ImageToTensor(object): - """Transforms TorchIO Image into a Numpy/Torch Tensor and changes axes order from TorchIO [B, C, W, H, D] to Torch [B, C, D, H, W]""" - def __call__(self, image: Image): - return image.data.swapaxes(1,-1) \ No newline at end of file diff --git a/spaces/nakamura196/yolov5-ndl-layout/init.sh b/spaces/nakamura196/yolov5-ndl-layout/init.sh deleted file mode 100644 index 257d65cf4445ad3cbcf031495bc53e4c3283fd3d..0000000000000000000000000000000000000000 --- a/spaces/nakamura196/yolov5-ndl-layout/init.sh +++ /dev/null @@ -1,2 +0,0 @@ -rm best.pt -gdown https://drive.google.com/uc?id=1DduqMfElGLPYWZTbrEO8F3qn6VPOZDPM \ No newline at end of file diff --git a/spaces/naqibhakimi/sk/app.py b/spaces/naqibhakimi/sk/app.py deleted file mode 100644 index 554fc4eda6a7c035758c2cc96de0f6f0a425144a..0000000000000000000000000000000000000000 --- a/spaces/naqibhakimi/sk/app.py +++ /dev/null @@ -1,171 +0,0 @@ -import contextlib -import streamlit as st -import streamlit.components.v1 as components -from transformers import AutoModelForSeq2SeqLM, AutoTokenizer -import utils -from kb import KB - - -import wikipedia -MAX_TOPICS= 5 -BUTTON_COLUMS = 4 - -st.header("Extracting a Knowledge Graph from text") - -# Loading the model - - - -def load_model(): - tokenizer = AutoTokenizer.from_pretrained("Babelscape/rebel-large") - model = AutoModelForSeq2SeqLM.from_pretrained("Babelscape/rebel-large") - return tokenizer, model - - - - -def generate_kb(): - st_model_load = st.text('Loading NER model... It may take a while.') - tokenizer, model = load_model() - st.success('Model loaded!') - st_model_load.text("") - - kb = utils.from_text_to_kb(' '.join(st.session_state['wiki_text']), model, tokenizer, "", verbose=True) - utils.save_network_html(kb, filename="networks/network.html") - st.session_state.kb_chart = "networks/network.html" - st.session_state.kb_text = kb.get_textual_representation() - st.session_state.error_url = None - - - -def show_textbox(): - if len(st.session_state['wiki_text']) != 0: - for i, t in enumerate(st.session_state['wiki_text']): - new_expander = st.expander(label=f"{t[:30]}...", expanded=(i==0)) - with new_expander: - st.markdown(t) - - -def wiki_show_text(page_title): - with st.spinner(text="Fetching wiki page..."): - # print(st.session_state['wiki_suggestions']) - try: - page = wikipedia.page(title=page_title, auto_suggest=False) - st.session_state['wiki_text'].append(page.summary) - st.session_state['topics'].append(page_title.lower()) - st.session_state['wiki_suggestions'].remove(page_title) - show_textbox() - - except wikipedia.DisambiguationError as e: - with st.spinner(text="Woops, ambigious term, recalculating options..."): - st.session_state['wiki_suggestions'].remove(page_title) - temp = st.session_state['wiki_suggestions'] + e.options[:3] - st.session_state['wiki_suggestions'] = list(set(temp)) - show_textbox() - except wikipedia.WikipediaException: - st.session_state['wiki_suggestions'].remove(page_title) - - - -def wiki_add_text(term): - - if len(st.session_state['wiki_text']) > MAX_TOPICS: - return - try: - page = wikipedia.page(title=term, auto_suggest=False) - extra_text = page.summary - - st.session_state['wiki_text'].append(extra_text) - st.session_state['topics'].append(term.lower()) - st.session_state['nodes'].remove(term) - - except wikipedia.DisambiguationError as e: - with st.spinner(text="Woops, ambigious term, recalculating options..."): - st.session_state['nodes'].remove(term) - temp = st.session_state['nodes'] + e.options[:3] - st.session_state['nodes'] = list(set(temp)) - except wikipedia.WikipediaException as e: - st.session_state['nodes'].remove(term) - -def reset_thread(): - st.session_state['wiki_text'] = [] - st.session_state['topics'] = [] - st.session_state['nodes'] = [] - st.session_state['has_run_wiki'] = False - st.session_state['wiki_suggestions'] = [] - st.session_state['html_wiki'] = '' - - -def show_wiki_hub_page(): - cols = st.columns([7, 1]) - b_cols = st.columns([2, 1.2, 8]) - - with cols[0]: - st.text_input("Search", on_change=wiki_show_suggestion, key="text", value="graphs, are, awesome") - with cols[1]: - st.text('') - st.text('') - st.button("Search", on_click=wiki_show_suggestion, key="show_suggestion_key") - with b_cols[0]: - st.button("Generate KB", on_click=generate_kb) - with b_cols[1]: - st.button("Reset", on_click=reset_thread) - - - -def wiki_show_suggestion(): - with st.spinner(text="Fetching wiki topics..."): - text = st.session_state.text - if (text is not None) and (text != ""): - subjects = text.split(",")[:MAX_TOPICS] - for subj in subjects: - st.session_state['wiki_suggestions'] += wikipedia.search(subj, results = 3) - show_wiki_suggestions_buttons() - - - -def show_wiki_suggestions_buttons(): - if len(st.session_state['wiki_suggestions']) == 0: - return - num_buttons = len(st.session_state['wiki_suggestions']) - # st.session_state['wiki_suggestions'] = list(set(st.session_state['wiki_suggestions'])) - num_cols = num_buttons if 0 < num_buttons < BUTTON_COLUMS else BUTTON_COLUMS - columns = st.columns([1] * num_cols ) - for q in range(1 + num_buttons//num_cols): - for i, (c, s) in enumerate(zip(columns, st.session_state['wiki_suggestions'][q*num_cols: (q+1)*num_cols])): - with c: - with contextlib.suppress(Exception): - st.button(s, on_click=wiki_show_text, args=(s,), key=str(i)+s+"wiki_suggestion") - - - -def init_variables(): - if 'wiki_suggestions' not in st.session_state: - st.session_state['wiki_text'] = [] - st.session_state['topics'] = [] - st.session_state['nodes'] = [] - st.session_state['has_run_wiki'] = True - st.session_state['wiki_suggestions'] = [] - st.session_state['html_wiki'] = '' - - -init_variables() -show_wiki_hub_page() -# kb chart session state -if 'kb_chart' not in st.session_state: - st.session_state.kb_chart = None -if 'kb_text' not in st.session_state: - st.session_state.kb_text = None -if 'error_url' not in st.session_state: - st.session_state.error_url = None - -# show graph -if st.session_state.error_url: - st.markdown(st.session_state.error_url) -elif st.session_state.kb_chart: - with st.container(): - st.subheader("Generated KB") - st.markdown("*You can interact with the graph and zoom.*") - html_source_code = open(st.session_state.kb_chart, 'r', encoding='utf-8').read() - components.html(html_source_code, width=700, height=700) - st.markdown(st.session_state.kb_text) \ No newline at end of file diff --git a/spaces/nateevo/docu-searcher/app.py b/spaces/nateevo/docu-searcher/app.py deleted file mode 100644 index 29526127b1b1d65f2acbec416b35157d72caf920..0000000000000000000000000000000000000000 --- a/spaces/nateevo/docu-searcher/app.py +++ /dev/null @@ -1,65 +0,0 @@ -import numpy as np -import pandas as pd -import openai -from openai.embeddings_utils import get_embedding, cosine_similarity -import gradio as gr -import os - -openai.api_key = "sk-"+os.environ['OPENAI_API_KEY'] - - -def get_documentation(query, platform): - embedding = get_embedding( - query, - engine="text-embedding-ada-002") - - if platform == "Salesforce Marketing Cloud Intelligence": - df = pd.read_csv("(sfmci)doc_embeddings.csv") - - elif platform == "Salesforce Marketing Cloud CDP": - df = pd.read_csv("(sfmcdp)doc_embeddings.csv") - - elif platform == "Salesforce Marketing Cloud Personalization": - df = pd.read_csv("(sfmcp)doc_embeddings.csv") - - elif platform == "Salesforce Marketing Cloud Engagement": - df = pd.read_csv("(sfmce)doc_embeddings.csv") - - df.ada_search = df.ada_search.apply( - lambda x: np.array(x[1:-1].split(','), dtype=np.float32)) - df["similarities"] = df.ada_search.apply( - lambda x: cosine_similarity(x, embedding)) - df = df.sort_values("similarities", ascending=False).reset_index() - titles = df['title'] - contents = df['body'] - links = df['link'] - res = [] - for i in range(3): - res.append("Title: " + titles[i] + "\n\nContent: " + - contents[i] + "\n\nURL: " + links[i]) - return res[0], res[1], res[2] - - -demo = gr.Interface( - fn=get_documentation, - inputs=[ - gr.Textbox(label="Question: ", lines=3,), - gr.Radio(["Salesforce Marketing Cloud Intelligence", - "Salesforce Marketing Cloud CDP", - "Salesforce Marketing Cloud Personalization", "Salesforce Marketing Cloud Engagement"], value="Salesforce Marketing Cloud CDP", label="Platform") - ], - outputs=[gr.Textbox(label="Results: "), - gr.Textbox( - label="Resultado 2", show_label=False), - gr.Textbox(label="Resultado 3", show_label=False)], - title="Salesforce Documentation Search", - examples=[ - ["conector de instagram", "Salesforce Marketing Cloud Intelligence"], - # [4, "dog", "zoo", ["ate", "swam"], False], - # [10, "bird", "road", ["ran"], False], - # [8, "cat", "zoo", ["ate"], True], - ], -) - -if __name__ == "__main__": - demo.launch() diff --git a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Gopro Cineform Studio Mac Download PATCHED.md b/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Gopro Cineform Studio Mac Download PATCHED.md deleted file mode 100644 index 5cacee493bb06ea26ce47c6292753dccdfea060e..0000000000000000000000000000000000000000 --- a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Gopro Cineform Studio Mac Download PATCHED.md +++ /dev/null @@ -1,37 +0,0 @@ - -

          How to Download and Use GoPro CineForm Studio for Mac

          -

          GoPro CineForm Studio is a software application that allows you to edit and convert your GoPro videos into high-quality formats for professional and personal use. With GoPro CineForm Studio, you can trim, crop, rotate, color correct, add effects, and export your videos in various resolutions and codecs. You can also transform your 360-degree footage from GoPro MAX into stunning traditional videos with reframing and keyframing tools. Moreover, you can upgrade to ReelSteady within GoPro CineForm Studio to get pro-level stabilization for your GoPro footage.

          -

          Gopro Cineform Studio Mac Download


          Download ••• https://urlcod.com/2uI9M4



          -

          In this article, we will show you how to download and use GoPro CineForm Studio for Mac in 2023.

          -

          How to Download GoPro CineForm Studio for Mac

          -

          GoPro CineForm Studio is available as a free download from the official GoPro website. Here are the steps to download it:

          -
            -
          1. Go to https://gopro.com/en/us/info/gopro-player and click on the "Download" button for Mac.
          2. -
          3. Wait for the file to download and then double-click on it to launch the installer.
          4. -
          5. Follow the instructions on the screen to complete the installation process.
          6. -
          7. Launch GoPro CineForm Studio from your Applications folder or Dock.
          8. -
          -

          How to Use GoPro CineForm Studio for Mac

          -

          GoPro CineForm Studio has a simple and intuitive interface that consists of three main tabs: Import, Edit, and Export. Here is how to use each tab:

          -

          Import Tab

          -

          The Import tab allows you to transfer your media from your GoPro camera or SD card to your computer and organize them into folders. You can also preview and trim your clips before importing them. Here is how to use the Import tab:

          -
            -
          1. Connect your GoPro camera or SD card to your Mac using a USB cable or a card reader.
          2. -
          3. GoPro CineForm Studio will automatically detect your device and display its contents in the left panel.
          4. -
          5. Select the clips you want to import and click on the "Import" button at the bottom right corner.
          6. -
          7. You can also click on the "Advanced Settings" button to change the destination folder, file name format, and conversion quality.
          8. -
          9. If you want to preview or trim your clips before importing them, click on the "Play" button or drag the sliders below the preview window.
          10. -
          11. Once you have imported your clips, they will appear in the right panel under "Imported Files". You can rename, delete, or move them as you wish.
          12. -
          -

          Edit Tab

          -

          The Edit tab allows you to edit and enhance your videos using various tools and effects. You can also reframe your 360-degree footage from GoPro MAX into traditional videos with keyframes. Here is how to use the Edit tab:

          -
            -
          1. Select a clip from the right panel and drag it to the timeline at the bottom of the screen.
          2. -
          3. You can add more clips to the timeline by dragging them from the right panel or by clicking on the "+" button at the end of the timeline.
          4. -
          5. You can rearrange, trim, split, or delete clips on the timeline by using the buttons above it or by right-clicking on them.
          6. -
          7. To edit a clip, select it on the timeline and use the tools on the left panel. You can adjust the exposure, contrast, saturation, white balance, sharpness, zoom, rotation, and more.
          8. -
          9. To add effects to a clip, select it on the timeline and click on the "FX" button on the left panel. You can choose from various presets or create your own custom effects.
          10. -
          11. To reframe your 360-degree footage from GoPro MAX, select it on the timeline and click on the "Reframe" button on the left panel. You can change the perspective, field of view, horizon level, and more by dragging on the preview window or by using the sliders below it.
          12. -
          13. To add keyframes to your 360-degree footage, click on the "Keyframe" button at the bottom of the

            7b8c122e87
            -
            -
            \ No newline at end of file diff --git a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Kano Home Sweet Home Full Album ((TOP)).md b/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Kano Home Sweet Home Full Album ((TOP)).md deleted file mode 100644 index bf30e5fb398cf576b380aec0db36f1ae88a30444..0000000000000000000000000000000000000000 --- a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Kano Home Sweet Home Full Album ((TOP)).md +++ /dev/null @@ -1,12 +0,0 @@ - -

            Kano: Home Sweet Home - A Classic Debut Album

            -

            Kano is one of the most influential and respected MCs in the UK grime scene. His debut album, Home Sweet Home, released in 2005, is widely regarded as a classic of the genre. The album showcases Kano's lyrical skills, versatility, and charisma, as he raps over diverse beats ranging from garage, hip-hop, dancehall, and R&B.

            -

            Kano, Home Sweet Home Full Album


            DOWNLOAD ✒ ✒ ✒ https://urlcod.com/2uIb4C



            -

            The album features guest appearances from fellow grime artists such as D Double E, Demon, Ghetts, and Wiley, as well as singers such as Leo the Lion and Mike Skinner of The Streets. Some of the standout tracks include "P's and Q's", a fast-paced anthem that showcases Kano's rapid-fire flow and witty wordplay; "Nite Nite", a smooth and soulful collaboration with Leo the Lion and Mike Skinner; "Reload It", a dancehall-inspired banger that features Kano's mentor D Double E; and "Typical Me", a catchy and humorous track that reflects on Kano's personality and lifestyle.

            -

            Home Sweet Home is a landmark album that cemented Kano's reputation as one of the best MCs in the UK. The album received critical acclaim from various publications such as Pitchfork[^2^], NME, The Guardian, and The Independent. The album also sold over 100,000 copies in the UK and was nominated for the Mercury Prize in 2005.

            -

            Kano has since released six more albums, each showcasing his growth and evolution as an artist. His latest album, Hoodies All Summer, was released in 2019 and won him two MOBO Awards for Best Album and Best Grime Act. Kano is also an accomplished actor, having starred in the TV series Top Boy and the film Yardie. Kano remains one of the most influential figures in British music and culture.

            If you want to listen to Home Sweet Home, you can find it on various streaming platforms such as Spotify, Apple Music, YouTube Music, and Tidal. You can also watch some of Kano's music videos on his official YouTube channel. You can also follow him on his social media accounts such as Instagram, Twitter, and Facebook to stay updated on his latest news and projects.

            -

            Home Sweet Home is a must-listen for any fan of grime music or British rap in general. It is a testament to Kano's talent and legacy as one of the pioneers and leaders of the grime scene. It is an album that will make you nod your head, laugh, think, and feel. It is an album that deserves to be called a classic.

            One of the reasons why Home Sweet Home is such a great album is because it showcases Kano's versatility as an MC. He can rap over any type of beat, from grime to hip-hop to dancehall to R&B. He can switch his flow and delivery to suit the mood and tone of the song. He can rap about serious topics such as racism, violence, and poverty, as well as lighter topics such as love, partying, and fashion. He can also inject humor and personality into his lyrics, making them relatable and memorable.

            -

            Another reason why Home Sweet Home is such a great album is because it reflects Kano's life and experiences as a young black man growing up in East London. He raps about his struggles and aspirations, his family and friends, his culture and identity. He raps about the realities and challenges of living in a city that is full of opportunities but also full of dangers. He raps about the joys and pains of being part of the grime scene, a musical movement that was born out of the streets and gave voice to a generation.

            -

            A final reason why Home Sweet Home is such a great album is because it influenced and inspired many other artists and listeners. Kano paved the way for other grime MCs to break into the mainstream and gain recognition and respect. He also opened the doors for other genres of British rap such as drill, afroswing, and UK hip-hop to emerge and flourish. He also motivated many young people to pursue their dreams and express themselves through music. He showed them that they can be proud of who they are and where they come from.

            81aa517590
            -
            -
            \ No newline at end of file diff --git a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Ncomputing Vspace License [PATCHED] !!LINK!! Crack 265.md b/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Ncomputing Vspace License [PATCHED] !!LINK!! Crack 265.md deleted file mode 100644 index 79d7815f4e5982e1c09f437605939c615371e134..0000000000000000000000000000000000000000 --- a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Ncomputing Vspace License [PATCHED] !!LINK!! Crack 265.md +++ /dev/null @@ -1,20 +0,0 @@ -
            -

            Ncomputing Vspace License [PATCHED] Crack 265: A Review

            -

            Ncomputing Vspace is a desktop virtualization software that allows multiple users to share one computer. It is designed to reduce hardware costs and power consumption, while providing a high-performance and secure computing experience. Ncomputing Vspace 3.5 is the latest version of the software, which supports Windows and Linux operating systems.

            -

            Ncomputing Vspace License [PATCHED] Crack 265


            Downloadhttps://urlcod.com/2uI9vc



            -

            However, some users may want to use Ncomputing Vspace without paying for the license fee, which can range from $50 to $150 per user. This is where Ncomputing Vspace License [PATCHED] Crack 265 comes in. This is a license-key generator tool that claims to provide a valid license key for Ncomputing Vspace 3.5. The tool is available for sale on the internet, and some users have reported that it works well.

            -

            But is Ncomputing Vspace License [PATCHED] Crack 265 legal and safe to use? The answer is no. First of all, using a cracked license key is a violation of the Ncomputing Vspace terms of service, which can result in legal action from the company. Secondly, using a cracked license key can compromise the security and stability of the system, as it may contain malware or viruses. Thirdly, using a cracked license key can affect the performance and quality of the software, as it may not be compatible with the latest updates and features.

            -

            Therefore, it is not recommended to use Ncomputing Vspace License [PATCHED] Crack 265 or any other similar tools. Instead, users should purchase a legitimate license key from Ncomputing or its authorized resellers. This way, they can enjoy the benefits of Ncomputing Vspace without risking any legal or technical issues.

            Here are some more paragraphs for the article:

            -

            -

            Ncomputing Vspace is a popular choice for many organizations and individuals who want to save money and energy while providing a high-quality computing experience. Some of the benefits of Ncomputing Vspace include:

            -
              -
            • Reduced hardware costs: Ncomputing Vspace allows up to 100 users to share one computer, which means less hardware to buy and maintain.
            • -
            • Reduced power consumption: Ncomputing Vspace uses only 1 watt of electricity per user, which means lower energy bills and carbon footprint.
            • -
            • Increased security: Ncomputing Vspace encrypts all data and communications between the host computer and the user devices, which means no data loss or theft.
            • -
            • Increased performance: Ncomputing Vspace optimizes the use of CPU, memory, and network resources, which means faster and smoother operation.
            • -
            • Increased flexibility: Ncomputing Vspace supports a wide range of user devices, such as laptops, tablets, thin clients, and monitors, which means more options and convenience.
            • -
            -

            However, to enjoy these benefits, users need to purchase a valid license key from Ncomputing or its authorized resellers. A license key is a unique code that activates the software and allows it to run on a specific number of user devices. The license key also enables the software to receive updates and support from Ncomputing.

            -

            Purchasing a license key is easy and affordable. Users can choose from different license types and durations, depending on their needs and budget. Users can also purchase additional licenses if they want to add more user devices. Users can purchase license keys online from the Ncomputing website or offline from local resellers.

            7196e7f11a
            -
            -
            \ No newline at end of file diff --git a/spaces/nmitchko/AI-in-Healthcare/Developer Meetup in Boston Generative AI Use Cases in Healthcare _files/editor_gecko.css b/spaces/nmitchko/AI-in-Healthcare/Developer Meetup in Boston Generative AI Use Cases in Healthcare _files/editor_gecko.css deleted file mode 100644 index 64524f818db26c0c02c3e07690344affda05c76f..0000000000000000000000000000000000000000 --- a/spaces/nmitchko/AI-in-Healthcare/Developer Meetup in Boston Generative AI Use Cases in Healthcare _files/editor_gecko.css +++ /dev/null @@ -1 +0,0 @@ -.cke_reset{margin:0;padding:0;border:0;background:0;text-decoration:none;width:auto;height:auto;vertical-align:baseline;box-sizing:content-box;-moz-box-sizing:content-box;-webkit-box-sizing:content-box;position:static;-webkit-transition:none;-moz-transition:none;-ms-transition:none;transition:none}.cke_reset_all,.cke_reset_all *{margin:0;padding:0;border:0;background:0;text-decoration:none;width:auto;height:auto;vertical-align:baseline;box-sizing:content-box;-moz-box-sizing:content-box;-webkit-box-sizing:content-box;position:static;-webkit-transition:none;-moz-transition:none;-ms-transition:none;transition:none;border-collapse:collapse;font:normal normal normal 12px Arial,Helvetica,Tahoma,Verdana,Sans-Serif;color:#333;text-align:left;white-space:nowrap;cursor:auto;float:none}.cke_reset_all .cke_rtl *{text-align:right}.cke_reset_all iframe{vertical-align:inherit}.cke_reset_all textarea{white-space:pre}.cke_reset_all input[type=password],.cke_reset_all input[type=text],.cke_reset_all textarea{cursor:text}.cke_reset_all input[type=password][disabled],.cke_reset_all input[type=text][disabled],.cke_reset_all textarea[disabled]{cursor:default}.cke_reset_all fieldset{padding:10px;margin-top:10px;border:1px solid #ddd}.cke_reset_all fieldset legend{padding:0 5px}.cke_reset_all select{box-sizing:border-box;-moz-box-sizing:border-box;-webkit-box-sizing:border-box}.cke_chrome{display:block;border:1px solid #ddd;border-radius:4px;padding:0 3px;background:#eee}.cke_inner{display:block;-webkit-touch-callout:none;background:0;padding:0}.cke_float{border:0}.cke_float .cke_inner{padding-bottom:0}.cke_float .cke_top{border:1px solid #ddd}.cke_bottom,.cke_contents,.cke_top{display:block;overflow:hidden}.cke_bottom,.cke_top{padding:3px 0 0;background:#eee}.cke_top{white-space:normal}.cke_contents{background-color:#fff;border:1px solid #ddd;border-radius:4px}.cke_bottom{position:relative}.cke_browser_ios .cke_contents{overflow-y:auto;-webkit-overflow-scrolling:touch}.cke_resizer{width:0;height:0;overflow:hidden;border-width:10px 10px 0 0;border-color:transparent #555 transparent transparent;border-style:dashed solid dashed dashed;font-size:0;vertical-align:bottom;margin-top:6px;margin-bottom:2px}.cke_hc .cke_resizer{font-size:15px;width:auto;height:auto;border-width:0}.cke_resizer_ltr{cursor:se-resize;float:right;margin-right:-4px}.cke_resizer_rtl{border-width:10px 0 0 10px;border-color:transparent transparent transparent #aaa;border-style:dashed dashed dashed solid;cursor:sw-resize;float:left;margin-left:-4px;right:auto}.cke_wysiwyg_div{display:block;height:100%;overflow:auto;padding:0 8px;outline-style:none;-moz-box-sizing:border-box;-webkit-box-sizing:border-box;box-sizing:border-box}.cke_panel{visibility:visible;width:120px;height:100px;overflow:hidden;margin-top:5px;background-color:#fff;border:1px solid #aaa;border-radius:4px}.cke_menu_panel{padding:0;margin:0}.cke_combopanel{width:150px;height:178px}.cke_panel_frame{width:100%;height:100%;font-size:12px;overflow:auto;overflow-x:hidden}.cke_panel_container{overflow-y:auto;overflow-x:hidden}.cke_panel_list{list-style-type:none;margin:3px;padding:0;white-space:nowrap}.cke_panel_listItem{margin:0;padding-bottom:1px}.cke_panel_listItem a{padding:3px 4px;display:block;border:1px solid #fff;color:inherit!important;text-decoration:none;overflow:hidden;text-overflow:ellipsis;border-radius:2px}.cke_panel_listItem a:active,.cke_panel_listItem a:focus,.cke_panel_listItem a:hover{background-color:#e1edf7}* html .cke_panel_listItem a{width:100%;color:#000}:first-child+html .cke_panel_listItem a{color:#000}.cke_panel_listItem.cke_selected a{background-color:#92bce0;outline:0}.cke_hc .cke_panel_listItem a{border-style:none}.cke_hc .cke_panel_listItem a:active,.cke_hc .cke_panel_listItem a:focus,.cke_hc .cke_panel_listItem a:hover{border:2px solid;padding:1px 2px}.cke_panel_grouptitle{font-size:11px;font-weight:700;white-space:nowrap;margin:0;padding:6px;color:#474747;border-bottom:1px solid #aaa;background:#eee}.cke_panel_grouptitle:first-child{border-radius:4px 4px 0 0}.cke_panel_listItem h1,.cke_panel_listItem h2,.cke_panel_listItem h3,.cke_panel_listItem h4,.cke_panel_listItem h5,.cke_panel_listItem h6,.cke_panel_listItem p,.cke_panel_listItem pre{margin-top:0;margin-bottom:0}.cke_colorblock{padding:3px;font-size:11px;font-family:'Microsoft Sans Serif',Tahoma,Arial,Verdana,Sans-Serif}.cke_colorblock,.cke_colorblock a{text-decoration:none;color:#000}span.cke_colorbox{width:10px;height:10px;border:1px solid #aaa;float:left}.cke_rtl span.cke_colorbox{float:right}a.cke_colorbox{border:1px solid #fff;padding:2px;float:left;width:12px;height:12px;border-radius:2px}.cke_rtl a.cke_colorbox{float:right}a:active.cke_colorbox,a:focus.cke_colorbox,a:hover.cke_colorbox{border:1px solid #ddd;background-color:#eee}a.cke_colorauto,a.cke_colormore{border:1px solid #fff;padding:2px;display:block;cursor:pointer}a:active.cke_colorauto,a:active.cke_colormore,a:focus.cke_colorauto,a:focus.cke_colormore,a:hover.cke_colorauto,a:hover.cke_colormore{border:1px solid #ddd;background-color:#eee}.cke_toolbar{float:left}.cke_rtl .cke_toolbar{float:right}.cke_toolgroup{float:left;margin:0 6px 3px 0;padding:2px;border:1px solid #ddd;border-radius:4px;background:#fff}.cke_hc .cke_toolgroup{border:0;margin-right:10px;margin-bottom:10px}.cke_rtl .cke_toolgroup :first-child{border-radius:0 4px 4px 0}.cke_rtl .cke_toolgroup :last-child{border-radius:4px 0 0 4px}.cke_rtl .cke_toolgroup{float:right;margin-left:6px;margin-right:0}a.cke_button{display:inline-block;height:18px;padding:2px 4px;outline:0;cursor:default;float:left;border:0;border-radius:2px}.cke_rtl .cke_button{float:right}.cke_hc .cke_button{border:1px solid #000;padding:3px 5px;margin:-2px 4px 0 -2px}.cke_button_on{background:#92bce0}.cke_hc .cke_button_on,.cke_hc a.cke_button_disabled:active,.cke_hc a.cke_button_disabled:focus,.cke_hc a.cke_button_disabled:hover,.cke_hc a.cke_button_off:active,.cke_hc a.cke_button_off:focus,.cke_hc a.cke_button_off:hover{border-width:3px;padding:1px 3px}.cke_button_disabled .cke_button_icon{opacity:.3}.cke_hc .cke_button_disabled{opacity:.5}a.cke_button_disabled:active,a.cke_button_disabled:focus,a.cke_button_disabled:hover,a.cke_button_off:active,a.cke_button_off:focus,a.cke_button_off:hover{background:#e1edf7}.cke_button_icon{cursor:inherit;background-repeat:no-repeat;margin-top:1px;width:16px;height:16px;float:left;display:inline-block}.cke_rtl .cke_button_icon{float:right}.cke_hc .cke_button_icon{display:none}.cke_button_label{display:none;padding-left:3px;margin-top:1px;line-height:18px;vertical-align:middle;float:left;cursor:default;color:#555}.cke_rtl .cke_button_label{padding-right:3px;padding-left:0;float:right}.cke_hc .cke_button_label{padding:0;display:inline-block;font-size:12px}.cke_button_arrow{display:inline-block;margin:8px 0 0 1px;width:0;height:0;cursor:default;vertical-align:top;border-left:3px solid transparent;border-right:3px solid transparent;border-top:3px solid #474747}.cke_rtl .cke_button_arrow{margin-right:5px;margin-left:0}.cke_hc .cke_button_arrow{font-size:10px;margin:3px -2px 0 3px;width:auto;border:0}.cke_toolbar_separator{float:left;background-color:#ddd;margin:4px 2px 0;height:16px;width:1px}.cke_rtl .cke_toolbar_separator{float:right}.cke_hc .cke_toolbar_separator{width:0;border-left:1px solid;margin:1px 5px 0 0}.cke_toolbar_break{display:block;clear:left}.cke_rtl .cke_toolbar_break{clear:right}.cke_toolbox_collapser{width:12px;height:11px;float:right;margin:11px 0 0;font-size:0;cursor:default;text-align:center;border:1px solid #a6a6a6;border-bottom-color:#979797;border-radius:4px;background:#e4e4e4}.cke_toolbox_collapser:hover{background:#ccc}.cke_toolbox_collapser.cke_toolbox_collapser_min{margin:0 2px 4px}.cke_toolbox_collapser.cke_toolbox_collapser_min .cke_arrow{margin-top:4px;border-bottom-color:transparent;border-top-color:#474747}.cke_toolbox_collapser .cke_arrow{display:inline-block;height:0;width:0;font-size:0;margin-top:1px;border-left:3px solid transparent;border-right:3px solid transparent;border-bottom:3px solid #474747;border-top:3px solid transparent}.cke_rtl .cke_toolbox_collapser{float:left}.cke_hc .cke_toolbox_collapser .cke_arrow{font-size:8px;width:auto;border:0;margin-top:0;margin-right:2px}.cke_menubutton{display:block}.cke_button_icon{opacity:.8}.cke_menuitem span{cursor:default}.cke_menubutton:active,.cke_menubutton:focus,.cke_menubutton:hover{display:block}.cke_hc .cke_menubutton{padding:2px}.cke_hc .cke_menubutton:active,.cke_hc .cke_menubutton:focus,.cke_hc .cke_menubutton:hover{border:2px solid;padding:0}.cke_menubutton_inner{display:table-row}.cke_menuarrow,.cke_menubutton_icon,.cke_menubutton_label{display:table-cell}.cke_menubutton_icon{background-color:#d7d8d7;opacity:.7;filter:alpha(opacity=70);padding:4px}.cke_hc .cke_menubutton_icon{height:16px;width:0;padding:4px 0}.cke_menubutton:active .cke_menubutton_icon,.cke_menubutton:focus .cke_menubutton_icon,.cke_menubutton:hover .cke_menubutton_icon{background-color:#d0d2d0}.cke_menubutton_disabled:active .cke_menubutton_icon,.cke_menubutton_disabled:focus .cke_menubutton_icon,.cke_menubutton_disabled:hover .cke_menubutton_icon{opacity:.3;filter:alpha(opacity=30)}.cke_menubutton_label{padding:0 5px;background-color:transparent;width:100%;vertical-align:middle}.cke_menubutton_disabled .cke_menubutton_label{opacity:.3;filter:alpha(opacity=30)}.cke_menubutton_on{border:1px solid #dedede;background-color:#f2f2f2}.cke_menubutton_on .cke_menubutton_icon{padding-right:3px}.cke_menubutton:active,.cke_menubutton:focus,.cke_menubutton:hover{background-color:#eff0ef}.cke_panel_frame .cke_menubutton_label{display:none}.cke_menuseparator{background-color:#d3d3d3;height:1px;filter:alpha(opacity=70);opacity:.7}.cke_menuarrow{background-image:url(images/arrow.png);background-position:0 10px;background-repeat:no-repeat;padding:0 5px}.cke_menuarrow span{display:none}.cke_rtl .cke_menuarrow{background-position:5px -13px;background-repeat:no-repeat}.cke_hc .cke_menuarrow span{vertical-align:middle;display:inline}.cke_combo{display:inline-block;float:left}.cke_rtl .cke_combo{float:right}.cke_hc .cke_combo{margin-top:-2px}.cke_combo_label{display:none;float:left;line-height:26px;vertical-align:top;margin-right:5px}.cke_rtl .cke_combo_label{float:right;margin-left:5px;margin-right:0}.cke_combo_button{display:inline-block;float:left;margin:0 6px 5px 0;border:1px solid #ddd;border-radius:4px;background:#fff}.cke_combo_off a.cke_combo_button:focus,.cke_combo_off a.cke_combo_button:hover{outline:0}.cke_combo_off a.cke_combo_button:active,.cke_combo_on a.cke_combo_button{border-color:#333}.cke_rtl .cke_combo_button{float:right;margin-left:5px;margin-right:0}.cke_hc a.cke_combo_button{padding:3px}.cke_hc .cke_combo_off a.cke_combo_button:active,.cke_hc .cke_combo_off a.cke_combo_button:focus,.cke_hc .cke_combo_off a.cke_combo_button:hover,.cke_hc .cke_combo_on a.cke_combo_button{border-width:3px;padding:1px}.cke_combo_text{line-height:26px;padding-left:10px;text-overflow:ellipsis;overflow:hidden;float:left;cursor:default;color:#474747;width:60px}.cke_rtl .cke_combo_text{float:right;text-align:right;padding-left:0;padding-right:10px}.cke_hc .cke_combo_text{line-height:18px;font-size:12px}.cke_combo_open{cursor:default;display:inline-block;font-size:0;height:19px;line-height:17px;margin:1px 7px;width:5px}.cke_hc .cke_combo_open{height:12px}.cke_combo_arrow{margin:11px 0 0;float:left;height:0;width:0;font-size:0;border-left:3px solid transparent;border-right:3px solid transparent;border-top:3px solid #333}.cke_hc .cke_combo_arrow{font-size:10px;width:auto;border:0;margin-top:3px}.cke_combo_disabled .cke_combo_inlinelabel,.cke_combo_disabled .cke_combo_open{opacity:.3}.cke_path{float:left;margin:-2px 0 2px}.cke_path_empty,.cke_path_item{display:inline-block;float:left;padding:3px 4px;margin-right:2px;cursor:default;text-decoration:none;outline:0;border:0;color:#4c4c4c;font-weight:700;font-size:11px}.cke_rtl .cke_path,.cke_rtl .cke_path_empty,.cke_rtl .cke_path_item{float:right}a.cke_path_item:active,a.cke_path_item:focus,a.cke_path_item:hover{background-color:#bfbfbf;color:#333;border-radius:2px}.cke_hc a.cke_path_item:active,.cke_hc a.cke_path_item:focus,.cke_hc a.cke_path_item:hover{border:2px solid;padding:1px 2px}.cke_button__source_label,.cke_button__sourcedialog_label{display:inline}.cke_combo__fontsize .cke_combo_text{width:30px}.cke_combopanel__fontsize{width:120px}.cke_source{font-family:'Courier New',Monospace;font-size:small;background-color:#fff;white-space:pre}.cke_wysiwyg_div,.cke_wysiwyg_frame{background-color:#fff}.cke_chrome{visibility:inherit}.cke_voice_label,legend.cke_voice_label{display:none}.cke_bottom{padding-bottom:3px}.cke_combo_text{margin-bottom:-1px;margin-top:1px} \ No newline at end of file diff --git a/spaces/nsarrazin/serge/README.md b/spaces/nsarrazin/serge/README.md deleted file mode 100644 index 0783f8666544c86e496e25dbab75ec4a33822933..0000000000000000000000000000000000000000 --- a/spaces/nsarrazin/serge/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Serge -emoji: 🦙 -colorFrom: green -colorTo: indigo -sdk: docker -app_port: 8008 -pinned: true -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/ntt123/Vietnam-male-voice-TTS/app.py b/spaces/ntt123/Vietnam-male-voice-TTS/app.py deleted file mode 100644 index 163b1df882488767127e6f8bca89f540097497d2..0000000000000000000000000000000000000000 --- a/spaces/ntt123/Vietnam-male-voice-TTS/app.py +++ /dev/null @@ -1,230 +0,0 @@ -import torch # isort:skip - -torch.manual_seed(42) -import json -import re -import unicodedata -from types import SimpleNamespace - -import gradio as gr -import numpy as np -import regex - -from models import DurationNet, SynthesizerTrn - -title = "LightSpeed: Vietnamese Male Voice TTS" -description = "Vietnam Male Voice TTS." -config_file = "config.json" -duration_model_path = "vbx_duration_model.pth" -lightspeed_model_path = "gen_619k.pth" -phone_set_file = "vbx_phone_set.json" -device = "cuda" if torch.cuda.is_available() else "cpu" -with open(config_file, "rb") as f: - hps = json.load(f, object_hook=lambda x: SimpleNamespace(**x)) - -# load phone set json file -with open(phone_set_file, "r") as f: - phone_set = json.load(f) - -assert phone_set[0][1:-1] == "SEP" -assert "sil" in phone_set -sil_idx = phone_set.index("sil") - -space_re = regex.compile(r"\s+") -number_re = regex.compile("([0-9]+)") -digits = ["không", "một", "hai", "ba", "bốn", "năm", "sáu", "bảy", "tám", "chín"] -num_re = regex.compile(r"([0-9.,]*[0-9])") -alphabet = "aàáảãạăằắẳẵặâầấẩẫậeèéẻẽẹêềếểễệiìíỉĩịoòóỏõọôồốổỗộơờớởỡợuùúủũụưừứửữựyỳýỷỹỵbcdđghklmnpqrstvx" -keep_text_and_num_re = regex.compile(rf"[^\s{alphabet}.,0-9]") -keep_text_re = regex.compile(rf"[^\s{alphabet}]") - - -def read_number(num: str) -> str: - if len(num) == 1: - return digits[int(num)] - elif len(num) == 2 and num.isdigit(): - n = int(num) - end = digits[n % 10] - if n == 10: - return "mười" - if n % 10 == 5: - end = "lăm" - if n % 10 == 0: - return digits[n // 10] + " mươi" - elif n < 20: - return "mười " + end - else: - if n % 10 == 1: - end = "mốt" - return digits[n // 10] + " mươi " + end - elif len(num) == 3 and num.isdigit(): - n = int(num) - if n % 100 == 0: - return digits[n // 100] + " trăm" - elif num[1] == "0": - return digits[n // 100] + " trăm lẻ " + digits[n % 100] - else: - return digits[n // 100] + " trăm " + read_number(num[1:]) - elif len(num) >= 4 and len(num) <= 6 and num.isdigit(): - n = int(num) - n1 = n // 1000 - return read_number(str(n1)) + " ngàn " + read_number(num[-3:]) - elif "," in num: - n1, n2 = num.split(",") - return read_number(n1) + " phẩy " + read_number(n2) - elif "." in num: - parts = num.split(".") - if len(parts) == 2: - if parts[1] == "000": - return read_number(parts[0]) + " ngàn" - elif parts[1].startswith("00"): - end = digits[int(parts[1][2:])] - return read_number(parts[0]) + " ngàn lẻ " + end - else: - return read_number(parts[0]) + " ngàn " + read_number(parts[1]) - elif len(parts) == 3: - return ( - read_number(parts[0]) - + " triệu " - + read_number(parts[1]) - + " ngàn " - + read_number(parts[2]) - ) - return num - - -def text_to_phone_idx(text): - # lowercase - text = text.lower() - # unicode normalize - text = unicodedata.normalize("NFKC", text) - text = text.replace(".", " . ") - text = text.replace(",", " , ") - text = text.replace(";", " ; ") - text = text.replace(":", " : ") - text = text.replace("!", " ! ") - text = text.replace("?", " ? ") - text = text.replace("(", " ( ") - - text = num_re.sub(r" \1 ", text) - words = text.split() - words = [read_number(w) if num_re.fullmatch(w) else w for w in words] - text = " ".join(words) - - # remove redundant spaces - text = re.sub(r"\s+", " ", text) - # remove leading and trailing spaces - text = text.strip() - # convert words to phone indices - tokens = [] - for c in text: - # if c is "," or ".", add phone - if c in ":,.!?;(": - tokens.append(sil_idx) - elif c in phone_set: - tokens.append(phone_set.index(c)) - elif c == " ": - # add phone - tokens.append(0) - if tokens[0] != sil_idx: - # insert phone at the beginning - tokens = [sil_idx, 0] + tokens - if tokens[-1] != sil_idx: - tokens = tokens + [0, sil_idx] - return tokens - - -def text_to_speech(duration_net, generator, text): - # prevent too long text - if len(text) > 500: - text = text[:500] - - phone_idx = text_to_phone_idx(text) - batch = { - "phone_idx": np.array([phone_idx]), - "phone_length": np.array([len(phone_idx)]), - } - - # predict phoneme duration - phone_length = torch.from_numpy(batch["phone_length"].copy()).long().to(device) - phone_idx = torch.from_numpy(batch["phone_idx"].copy()).long().to(device) - with torch.inference_mode(): - phone_duration = duration_net(phone_idx, phone_length)[:, :, 0] * 1000 - phone_duration = torch.where( - phone_idx == sil_idx, torch.clamp_min(phone_duration, 200), phone_duration - ) - phone_duration = torch.where(phone_idx == 0, 0, phone_duration) - - # generate waveform - end_time = torch.cumsum(phone_duration, dim=-1) - start_time = end_time - phone_duration - start_frame = start_time / 1000 * hps.data.sampling_rate / hps.data.hop_length - end_frame = end_time / 1000 * hps.data.sampling_rate / hps.data.hop_length - spec_length = end_frame.max(dim=-1).values - pos = torch.arange(0, spec_length.item(), device=device) - attn = torch.logical_and( - pos[None, :, None] >= start_frame[:, None, :], - pos[None, :, None] < end_frame[:, None, :], - ).float() - with torch.inference_mode(): - y_hat = generator.infer( - phone_idx, phone_length, spec_length, attn, max_len=None, noise_scale=0.667 - )[0] - wave = y_hat[0, 0].data.cpu().numpy() - return (wave * (2**15)).astype(np.int16) - - -def load_models(): - duration_net = DurationNet(hps.data.vocab_size, 64, 4).to(device) - duration_net.load_state_dict(torch.load(duration_model_path, map_location=device)) - duration_net = duration_net.eval() - generator = SynthesizerTrn( - hps.data.vocab_size, - hps.data.filter_length // 2 + 1, - hps.train.segment_size // hps.data.hop_length, - **vars(hps.model), - ).to(device) - del generator.enc_q - ckpt = torch.load(lightspeed_model_path, map_location=device) - params = {} - for k, v in ckpt["net_g"].items(): - k = k[7:] if k.startswith("module.") else k - params[k] = v - generator.load_state_dict(params, strict=False) - del ckpt, params - generator = generator.eval() - return duration_net, generator - - -def speak(text): - duration_net, generator = load_models() - paragraphs = text.split("\n") - clips = [] # list of audio clips - # silence = np.zeros(hps.data.sampling_rate // 4) - for paragraph in paragraphs: - paragraph = paragraph.strip() - if paragraph == "": - continue - clips.append(text_to_speech(duration_net, generator, paragraph)) - # clips.append(silence) - y = np.concatenate(clips) - return hps.data.sampling_rate, y - - -gr.Interface( - fn=speak, - inputs="text", - outputs="audio", - title=title, - examples=[ - "Trăm năm trong cõi người ta, chữ tài chữ mệnh khéo là ghét nhau.", - "Đoạn trường tân thanh, thường được biết đến với cái tên đơn giản là Truyện Kiều, là một truyện thơ của đại thi hào Nguyễn Du", - "Lục Vân Tiên quê ở huyện Đông Thành, khôi ngô tuấn tú, tài kiêm văn võ. Nghe tin triều đình mở khoa thi, Vân Tiên từ giã thầy xuống núi đua tài.", - "Lê Quý Đôn, tên thuở nhỏ là Lê Danh Phương, là vị quan thời Lê trung hưng, cũng là nhà thơ và được mệnh danh là nhà bác học lớn của Việt Nam trong thời phong kiến", - "Tất cả mọi người đều sinh ra có quyền bình đẳng. Tạo hóa cho họ những quyền không ai có thể xâm phạm được; trong những quyền ấy, có quyền được sống, quyền tự do và quyền mưu cầu hạnh phúc.", - ], - description=description, - theme="default", - allow_screenshot=False, - allow_flagging="never", -).launch(debug=False) diff --git a/spaces/omdenalagos/job_skill_cat/src/gauge_components.py b/spaces/omdenalagos/job_skill_cat/src/gauge_components.py deleted file mode 100644 index 93e1e16d9c67a0120c2caef4ba6c4ca58708e47a..0000000000000000000000000000000000000000 --- a/spaces/omdenalagos/job_skill_cat/src/gauge_components.py +++ /dev/null @@ -1,64 +0,0 @@ -def gauge(value): - gaugeData = [{ - "value": 0, - "name": 'Match %', - "detail": { - "valueAnimation": True, - "offsetCenter": ['0%', '0%'] - } - }] - option = { - "series": [ - { - "type": "gauge", - "startAngle": 90, - "endAngle": -270, - "pointer": { - "show": False, - }, - "progress": { - "show": True, - "overlap": False, - "roundCap":False, - "clip": False, - "backgroundColor": '#11D1F9', - "itemStyle": { - "color": '#E96605', - "borderWidth": 0, - "borderColor": "light blue" - } - }, - "axisLine": { - "lineStyle": { - "width": 40 - } - }, - "splitLine": { - "show": False, - "distance": 0, - "length": 20 - }, - "axisTick": { - "show": False - }, - "axisLabel": { - "show": False, - "distance": 50 - }, - "data": gaugeData, - "detail": { - "valueAnimation": True, - "offsetCenter": ['0%', '0%'], - "width": 40, - "height": 14, - "fontSize": 24, - "color": 'inherit', - "borderColor": 'inherit', - "borderRadius": 0, - "borderWidth": 0, - "formatter": '{value}%' - }, - } - ] - } - return gaugeData ,option \ No newline at end of file diff --git a/spaces/omlab/vlchecklist_demo/models/albef/models/model_ve.py b/spaces/omlab/vlchecklist_demo/models/albef/models/model_ve.py deleted file mode 100644 index d659842adb5536ac8f6e91f6968b130c55fdb422..0000000000000000000000000000000000000000 --- a/spaces/omlab/vlchecklist_demo/models/albef/models/model_ve.py +++ /dev/null @@ -1,110 +0,0 @@ -from functools import partial -from models.vit import VisionTransformer -from models.xbert import BertConfig, BertModel - -import torch -from torch import nn -import torch.nn.functional as F - -class ALBEF(nn.Module): - def __init__(self, - text_encoder = None, - tokenizer = None, - config = None, - ): - super().__init__() - - self.tokenizer = tokenizer - self.distill = config['distill'] - - self.visual_encoder = VisionTransformer( - img_size=config['image_res'], patch_size=16, embed_dim=768, depth=12, num_heads=12, - mlp_ratio=4, qkv_bias=True, norm_layer=partial(nn.LayerNorm, eps=1e-6)) - - bert_config = BertConfig.from_json_file(config['bert_config']) - - self.text_encoder = BertModel.from_pretrained(text_encoder, config=bert_config, add_pooling_layer=False) - - self.cls_head = nn.Sequential( - nn.Linear(self.text_encoder.config.hidden_size, self.text_encoder.config.hidden_size), - nn.ReLU(), - nn.Linear(self.text_encoder.config.hidden_size, 3) - ) - - if self.distill: - self.visual_encoder_m = VisionTransformer( - img_size=config['image_res'], patch_size=16, embed_dim=768, depth=12, num_heads=12, - mlp_ratio=4, qkv_bias=True, norm_layer=partial(nn.LayerNorm, eps=1e-6)) - self.text_encoder_m = BertModel.from_pretrained(text_encoder, config=bert_config, add_pooling_layer=False) - self.cls_head_m = nn.Sequential( - nn.Linear(self.text_encoder.config.hidden_size, self.text_encoder.config.hidden_size), - nn.ReLU(), - nn.Linear(self.text_encoder.config.hidden_size, 3) - ) - - self.model_pairs = [[self.visual_encoder,self.visual_encoder_m], - [self.text_encoder,self.text_encoder_m], - [self.cls_head,self.cls_head_m], - ] - self.copy_params() - self.momentum = 0.995 - - - def forward(self, image, text, targets, alpha=0, train=True): - - image_embeds = self.visual_encoder(image) - image_atts = torch.ones(image_embeds.size()[:-1],dtype=torch.long).to(image.device) - - if train: - output = self.text_encoder(text.input_ids, - attention_mask = text.attention_mask, - encoder_hidden_states = image_embeds, - encoder_attention_mask = image_atts, - return_dict = True - ) - prediction = self.cls_head(output.last_hidden_state[:,0,:]) - if self.distill: - with torch.no_grad(): - self._momentum_update() - image_embeds_m = self.visual_encoder_m(image) - output_m = self.text_encoder_m(text.input_ids, - attention_mask = text.attention_mask, - encoder_hidden_states = image_embeds_m, - encoder_attention_mask = image_atts, - return_dict = True - ) - prediction_m = self.cls_head_m(output_m.last_hidden_state[:,0,:]) - - loss = (1-alpha)*F.cross_entropy(prediction, targets) - alpha*torch.sum( - F.log_softmax(prediction, dim=1)*F.softmax(prediction_m, dim=1),dim=1).mean() - else: - loss = F.cross_entropy(prediction, targets) - return loss - - else: - output = self.text_encoder(text.input_ids, - attention_mask = text.attention_mask, - encoder_hidden_states = image_embeds, - encoder_attention_mask = image_atts, - return_dict = True - ) - prediction = self.cls_head(output.last_hidden_state[:,0,:]) - return prediction - - - - @torch.no_grad() - def copy_params(self): - for model_pair in self.model_pairs: - for param, param_m in zip(model_pair[0].parameters(), model_pair[1].parameters()): - param_m.data.copy_(param.data) # initialize - param_m.requires_grad = False # not update by gradient - - - @torch.no_grad() - def _momentum_update(self): - for model_pair in self.model_pairs: - for param, param_m in zip(model_pair[0].parameters(), model_pair[1].parameters()): - param_m.data = param_m.data * self.momentum + param.data * (1. - self.momentum) - - diff --git a/spaces/oms12/dfgan/models/DAMSM.py b/spaces/oms12/dfgan/models/DAMSM.py deleted file mode 100644 index 0acb185769a959f6a8e60871b990fd7f03c92904..0000000000000000000000000000000000000000 --- a/spaces/oms12/dfgan/models/DAMSM.py +++ /dev/null @@ -1,206 +0,0 @@ -import torch -import torch.nn as nn -import torch.nn.parallel -from torch.autograd import Variable -from torchvision import models -import torch.utils.model_zoo as model_zoo -import torch.nn.functional as F -from torch.nn.utils.rnn import pack_padded_sequence, pad_packed_sequence - -# ############## Text2Image Encoder-Decoder ####### -class RNN_ENCODER(nn.Module): - def __init__(self, ntoken, ninput=300, drop_prob=0.5, - nhidden=128, nlayers=1, bidirectional=True): - super(RNN_ENCODER, self).__init__() - self.n_steps = 18 - self.ntoken = ntoken # size of the dictionary - self.ninput = ninput # size of each embedding vector - self.drop_prob = drop_prob # probability of an element to be zeroed - self.nlayers = nlayers # Number of recurrent layers - self.bidirectional = bidirectional - self.rnn_type = 'LSTM' - if bidirectional: - self.num_directions = 2 - else: - self.num_directions = 1 - # number of features in the hidden state - self.nhidden = nhidden // self.num_directions - - self.define_module() - self.init_weights() - - def define_module(self): - self.encoder = nn.Embedding(self.ntoken, self.ninput) - self.drop = nn.Dropout(self.drop_prob) - if self.rnn_type == 'LSTM': - # dropout: If non-zero, introduces a dropout layer on - # the outputs of each RNN layer except the last layer - self.rnn = nn.LSTM(self.ninput, self.nhidden, - self.nlayers, batch_first=True, - dropout=self.drop_prob, - bidirectional=self.bidirectional) - elif self.rnn_type == 'GRU': - self.rnn = nn.GRU(self.ninput, self.nhidden, - self.nlayers, batch_first=True, - dropout=self.drop_prob, - bidirectional=self.bidirectional) - else: - raise NotImplementedError - - def init_weights(self): - initrange = 0.1 - self.encoder.weight.data.uniform_(-initrange, initrange) - # Do not need to initialize RNN parameters, which have been initialized - # http://pytorch.org/docs/master/_modules/torch/nn/modules/rnn.html#LSTM - # self.decoder.weight.data.uniform_(-initrange, initrange) - # self.decoder.bias.data.fill_(0) - - def init_hidden(self, bsz): - weight = next(self.parameters()).data - if self.rnn_type == 'LSTM': - return (Variable(weight.new(self.nlayers * self.num_directions, - bsz, self.nhidden).zero_()), - Variable(weight.new(self.nlayers * self.num_directions, - bsz, self.nhidden).zero_())) - else: - return Variable(weight.new(self.nlayers * self.num_directions, - bsz, self.nhidden).zero_()) - - def forward(self, captions, cap_lens, hidden, mask=None): - # input: torch.LongTensor of size batch x n_steps - # --> emb: batch x n_steps x ninput - emb = self.drop(self.encoder(captions)) - # - # Returns: a PackedSequence object - cap_lens = cap_lens.data.tolist() - emb = pack_padded_sequence(emb, cap_lens, batch_first=True) - # #hidden and memory (num_layers * num_directions, batch, hidden_size): - # tensor containing the initial hidden state for each element in batch. - # #output (batch, seq_len, hidden_size * num_directions) - # #or a PackedSequence object: - # tensor containing output features (h_t) from the last layer of RNN - output, hidden = self.rnn(emb, hidden) - # PackedSequence object - # --> (batch, seq_len, hidden_size * num_directions) - output = pad_packed_sequence(output, batch_first=True)[0] - # output = self.drop(output) - # --> batch x hidden_size*num_directions x seq_len - words_emb = output.transpose(1, 2) - # --> batch x num_directions*hidden_size - if self.rnn_type == 'LSTM': - sent_emb = hidden[0].transpose(0, 1).contiguous() - else: - sent_emb = hidden.transpose(0, 1).contiguous() - sent_emb = sent_emb.view(-1, self.nhidden * self.num_directions) - return words_emb, sent_emb - - -def conv1x1(in_planes, out_planes, bias=False): - "1x1 convolution with padding" - return nn.Conv2d(in_planes, out_planes, kernel_size=1, stride=1, - padding=0, bias=bias) - - -class CNN_ENCODER(nn.Module): - def __init__(self, nef): - super(CNN_ENCODER, self).__init__() - self.nef = 256 # define a uniform ranker - - model = models.inception_v3() - # url = 'https://download.pytorch.org/models/inception_v3_google-1a9a5a14.pth' - # model.load_state_dict(model_zoo.load_url(url)) - # print('Load pretrained model from ', url) - for param in model.parameters(): - param.requires_grad = False - # print(model) - - self.define_module(model) - self.init_trainable_weights() - - def define_module(self, model): - self.Conv2d_1a_3x3 = model.Conv2d_1a_3x3 - self.Conv2d_2a_3x3 = model.Conv2d_2a_3x3 - self.Conv2d_2b_3x3 = model.Conv2d_2b_3x3 - self.Conv2d_3b_1x1 = model.Conv2d_3b_1x1 - self.Conv2d_4a_3x3 = model.Conv2d_4a_3x3 - self.Mixed_5b = model.Mixed_5b - self.Mixed_5c = model.Mixed_5c - self.Mixed_5d = model.Mixed_5d - self.Mixed_6a = model.Mixed_6a - self.Mixed_6b = model.Mixed_6b - self.Mixed_6c = model.Mixed_6c - self.Mixed_6d = model.Mixed_6d - self.Mixed_6e = model.Mixed_6e - self.Mixed_7a = model.Mixed_7a - self.Mixed_7b = model.Mixed_7b - self.Mixed_7c = model.Mixed_7c - - self.emb_features = conv1x1(768, self.nef) - self.emb_cnn_code = nn.Linear(2048, self.nef) - - def init_trainable_weights(self): - initrange = 0.1 - self.emb_features.weight.data.uniform_(-initrange, initrange) - self.emb_cnn_code.weight.data.uniform_(-initrange, initrange) - - def forward(self, x): - features = None - # --> fixed-size input: batch x 3 x 299 x 299 - x = nn.functional.interpolate(x,size=(299, 299), mode='bilinear', align_corners=False) - # 299 x 299 x 3 - x = self.Conv2d_1a_3x3(x) - # 149 x 149 x 32 - x = self.Conv2d_2a_3x3(x) - # 147 x 147 x 32 - x = self.Conv2d_2b_3x3(x) - # 147 x 147 x 64 - x = F.max_pool2d(x, kernel_size=3, stride=2) - # 73 x 73 x 64 - x = self.Conv2d_3b_1x1(x) - # 73 x 73 x 80 - x = self.Conv2d_4a_3x3(x) - # 71 x 71 x 192 - - x = F.max_pool2d(x, kernel_size=3, stride=2) - # 35 x 35 x 192 - x = self.Mixed_5b(x) - # 35 x 35 x 256 - x = self.Mixed_5c(x) - # 35 x 35 x 288 - x = self.Mixed_5d(x) - # 35 x 35 x 288 - - x = self.Mixed_6a(x) - # 17 x 17 x 768 - x = self.Mixed_6b(x) - # 17 x 17 x 768 - x = self.Mixed_6c(x) - # 17 x 17 x 768 - x = self.Mixed_6d(x) - # 17 x 17 x 768 - x = self.Mixed_6e(x) - # 17 x 17 x 768 - - # image region features - features = x - # 17 x 17 x 768 - - x = self.Mixed_7a(x) - # 8 x 8 x 1280 - x = self.Mixed_7b(x) - # 8 x 8 x 2048 - x = self.Mixed_7c(x) - # 8 x 8 x 2048 - x = F.avg_pool2d(x, kernel_size=8) - # 1 x 1 x 2048 - # x = F.dropout(x, training=self.training) - # 1 x 1 x 2048 - x = x.view(x.size(0), -1) - # 2048 - - # global image features - cnn_code = self.emb_cnn_code(x) - # 512 - if features is not None: - features = self.emb_features(features) - return features, cnn_code \ No newline at end of file diff --git a/spaces/ondrejbiza/isa/invariant_slot_attention/configs/multishapenet_easy/baseline.py b/spaces/ondrejbiza/isa/invariant_slot_attention/configs/multishapenet_easy/baseline.py deleted file mode 100644 index 66ae09df2b0296f00ec6e578fd7e6f916e083632..0000000000000000000000000000000000000000 --- a/spaces/ondrejbiza/isa/invariant_slot_attention/configs/multishapenet_easy/baseline.py +++ /dev/null @@ -1,195 +0,0 @@ -# coding=utf-8 -# Copyright 2023 The Google Research Authors. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -r"""Config for unsupervised training on MultiShapeNet-Easy.""" - -import ml_collections - - -def get_config(): - """Get the default hyperparameter configuration.""" - config = ml_collections.ConfigDict() - - config.seed = 42 - config.seed_data = True - - config.batch_size = 64 - config.num_train_steps = 500000 # from the original Slot Attention - config.init_checkpoint = ml_collections.ConfigDict() - config.init_checkpoint.xid = 0 # Disabled by default. - config.init_checkpoint.wid = 1 - - config.optimizer_configs = ml_collections.ConfigDict() - config.optimizer_configs.optimizer = "adam" - - config.optimizer_configs.grad_clip = ml_collections.ConfigDict() - config.optimizer_configs.grad_clip.clip_method = "clip_by_global_norm" - config.optimizer_configs.grad_clip.clip_value = 0.05 - - config.lr_configs = ml_collections.ConfigDict() - config.lr_configs.learning_rate_schedule = "compound" - config.lr_configs.factors = "constant * cosine_decay * linear_warmup" - config.lr_configs.warmup_steps = 10000 # from the original Slot Attention - config.lr_configs.steps_per_cycle = config.get_ref("num_train_steps") - # from the original Slot Attention - config.lr_configs.base_learning_rate = 4e-4 - - config.eval_pad_last_batch = False # True - config.log_loss_every_steps = 50 - config.eval_every_steps = 5000 - config.checkpoint_every_steps = 5000 - - config.train_metrics_spec = { - "loss": "loss", - "ari": "ari", - "ari_nobg": "ari_nobg", - } - config.eval_metrics_spec = { - "eval_loss": "loss", - "eval_ari": "ari", - "eval_ari_nobg": "ari_nobg", - } - - config.data = ml_collections.ConfigDict({ - "dataset_name": "multishapenet_easy", - "shuffle_buffer_size": config.batch_size * 8, - "resolution": (128, 128) - }) - - config.max_instances = 11 - config.num_slots = config.max_instances # Only used for metrics. - config.logging_min_n_colors = config.max_instances - - config.preproc_train = [ - "sunds_to_tfds_video", - "video_from_tfds", - "subtract_one_from_segmentations", - "central_crop(height=240, width=240)", - "resize_small({size})".format(size=min(*config.data.resolution)) - ] - - config.preproc_eval = [ - "sunds_to_tfds_video", - "video_from_tfds", - "subtract_one_from_segmentations", - "central_crop(height=240, width=240)", - "resize_small({size})".format(size=min(*config.data.resolution)) - ] - - config.eval_slice_size = 1 - config.eval_slice_keys = ["video", "segmentations_video"] - - # Dictionary of targets and corresponding channels. Losses need to match. - targets = {"video": 3} - config.losses = {"recon": {"targets": list(targets)}} - config.losses = ml_collections.ConfigDict({ - f"recon_{target}": {"loss_type": "recon", "key": target} - for target in targets}) - - config.model = ml_collections.ConfigDict({ - "module": "invariant_slot_attention.modules.SAVi", - - # Encoder. - "encoder": ml_collections.ConfigDict({ - "module": "invariant_slot_attention.modules.FrameEncoder", - "reduction": "spatial_flatten", - "backbone": ml_collections.ConfigDict({ - "module": "invariant_slot_attention.modules.SimpleCNN", - "features": [64, 64, 64, 64], - "kernel_size": [(5, 5), (5, 5), (5, 5), (5, 5)], - "strides": [(2, 2), (2, 2), (2, 2), (1, 1)] - }), - "pos_emb": ml_collections.ConfigDict({ - "module": "invariant_slot_attention.modules.PositionEmbedding", - "embedding_type": "linear", - "update_type": "project_add", - "output_transform": ml_collections.ConfigDict({ - "module": "invariant_slot_attention.modules.MLP", - "hidden_size": 128, - "layernorm": "pre" - }), - }), - }), - - # Corrector. - "corrector": ml_collections.ConfigDict({ - "module": "invariant_slot_attention.modules.SlotAttention", - "num_iterations": 3, - "qkv_size": 64, - "mlp_size": 128, - }), - - # Predictor. - # Removed since we are running a single frame. - "predictor": ml_collections.ConfigDict({ - "module": "invariant_slot_attention.modules.Identity" - }), - - # Initializer. - "initializer": ml_collections.ConfigDict({ - "module": "invariant_slot_attention.modules.ParamStateInit", - "shape": (11, 64), # (num_slots, slot_size) - }), - - # Decoder. - "decoder": ml_collections.ConfigDict({ - "module": - "invariant_slot_attention.modules.SiameseSpatialBroadcastDecoder", - "resolution": (16, 16), # Update if data resolution or strides change - "backbone": ml_collections.ConfigDict({ - "module": "invariant_slot_attention.modules.CNN", - "features": [64, 64, 64, 64, 64], - "kernel_size": [(5, 5), (5, 5), (5, 5), (5, 5), (5, 5)], - "strides": [(2, 2), (2, 2), (2, 2), (1, 1), (1, 1)], - "max_pool_strides": [(1, 1), (1, 1), (1, 1), (1, 1), (1, 1)], - "layer_transpose": [True, True, True, False, False] - }), - "target_readout": ml_collections.ConfigDict({ - "module": "invariant_slot_attention.modules.Readout", - "keys": list(targets), - "readout_modules": [ml_collections.ConfigDict({ # pylint: disable=g-complex-comprehension - "module": "invariant_slot_attention.modules.MLP", - "num_hidden_layers": 0, - "hidden_size": 0, - "output_size": targets[k]}) for k in targets], - }), - "pos_emb": ml_collections.ConfigDict({ - "module": "invariant_slot_attention.modules.PositionEmbedding", - "embedding_type": "linear", - "update_type": "project_add" - }), - }), - "decode_corrected": True, - "decode_predicted": False, - }) - - # Which video-shaped variables to visualize. - config.debug_var_video_paths = { - "recon_masks": "decoder/alphas_softmaxed/__call__/0", # pylint: disable=line-too-long - } - - # Define which attention matrices to log/visualize. - config.debug_var_attn_paths = { - "corrector_attn": "corrector/InvertedDotProductAttention_0/GeneralizedDotProductAttention_0/attn" # pylint: disable=line-too-long - } - - # Widths of attention matrices (for reshaping to image grid). - config.debug_var_attn_widths = { - "corrector_attn": 16, - } - - return config - - diff --git a/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/examples/textual_inversion/textual_inversion.py b/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/examples/textual_inversion/textual_inversion.py deleted file mode 100644 index 2e6f9a7d95228e98462dbdecd7fce665a946d427..0000000000000000000000000000000000000000 --- a/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/examples/textual_inversion/textual_inversion.py +++ /dev/null @@ -1,989 +0,0 @@ -#!/usr/bin/env python -# coding=utf-8 -# Copyright 2023 The HuggingFace Inc. team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and - -import argparse -import logging -import math -import os -import random -import shutil -import warnings -from pathlib import Path - -import numpy as np -import PIL -import safetensors -import torch -import torch.nn.functional as F -import torch.utils.checkpoint -import transformers -from accelerate import Accelerator -from accelerate.logging import get_logger -from accelerate.utils import ProjectConfiguration, set_seed -from huggingface_hub import create_repo, upload_folder - -# TODO: remove and import from diffusers.utils when the new version of diffusers is released -from packaging import version -from PIL import Image -from torch.utils.data import Dataset -from torchvision import transforms -from tqdm.auto import tqdm -from transformers import CLIPTextModel, CLIPTokenizer - -import diffusers -from diffusers import ( - AutoencoderKL, - DDPMScheduler, - DiffusionPipeline, - DPMSolverMultistepScheduler, - StableDiffusionPipeline, - UNet2DConditionModel, -) -from diffusers.optimization import get_scheduler -from diffusers.utils import check_min_version, is_wandb_available -from diffusers.utils.import_utils import is_xformers_available - - -if is_wandb_available(): - import wandb - -if version.parse(version.parse(PIL.__version__).base_version) >= version.parse("9.1.0"): - PIL_INTERPOLATION = { - "linear": PIL.Image.Resampling.BILINEAR, - "bilinear": PIL.Image.Resampling.BILINEAR, - "bicubic": PIL.Image.Resampling.BICUBIC, - "lanczos": PIL.Image.Resampling.LANCZOS, - "nearest": PIL.Image.Resampling.NEAREST, - } -else: - PIL_INTERPOLATION = { - "linear": PIL.Image.LINEAR, - "bilinear": PIL.Image.BILINEAR, - "bicubic": PIL.Image.BICUBIC, - "lanczos": PIL.Image.LANCZOS, - "nearest": PIL.Image.NEAREST, - } -# ------------------------------------------------------------------------------ - - -# Will error if the minimal version of diffusers is not installed. Remove at your own risks. -check_min_version("0.22.0.dev0") - -logger = get_logger(__name__) - - -def save_model_card(repo_id: str, images=None, base_model=str, repo_folder=None): - img_str = "" - for i, image in enumerate(images): - image.save(os.path.join(repo_folder, f"image_{i}.png")) - img_str += f"![img_{i}](./image_{i}.png)\n" - - yaml = f""" ---- -license: creativeml-openrail-m -base_model: {base_model} -tags: -- stable-diffusion -- stable-diffusion-diffusers -- text-to-image -- diffusers -- textual_inversion -inference: true ---- - """ - model_card = f""" -# Textual inversion text2image fine-tuning - {repo_id} -These are textual inversion adaption weights for {base_model}. You can find some example images in the following. \n -{img_str} -""" - with open(os.path.join(repo_folder, "README.md"), "w") as f: - f.write(yaml + model_card) - - -def log_validation(text_encoder, tokenizer, unet, vae, args, accelerator, weight_dtype, epoch): - logger.info( - f"Running validation... \n Generating {args.num_validation_images} images with prompt:" - f" {args.validation_prompt}." - ) - # create pipeline (note: unet and vae are loaded again in float32) - pipeline = DiffusionPipeline.from_pretrained( - args.pretrained_model_name_or_path, - text_encoder=accelerator.unwrap_model(text_encoder), - tokenizer=tokenizer, - unet=unet, - vae=vae, - safety_checker=None, - revision=args.revision, - torch_dtype=weight_dtype, - ) - pipeline.scheduler = DPMSolverMultistepScheduler.from_config(pipeline.scheduler.config) - pipeline = pipeline.to(accelerator.device) - pipeline.set_progress_bar_config(disable=True) - - # run inference - generator = None if args.seed is None else torch.Generator(device=accelerator.device).manual_seed(args.seed) - images = [] - for _ in range(args.num_validation_images): - with torch.autocast("cuda"): - image = pipeline(args.validation_prompt, num_inference_steps=25, generator=generator).images[0] - images.append(image) - - for tracker in accelerator.trackers: - if tracker.name == "tensorboard": - np_images = np.stack([np.asarray(img) for img in images]) - tracker.writer.add_images("validation", np_images, epoch, dataformats="NHWC") - if tracker.name == "wandb": - tracker.log( - { - "validation": [ - wandb.Image(image, caption=f"{i}: {args.validation_prompt}") for i, image in enumerate(images) - ] - } - ) - - del pipeline - torch.cuda.empty_cache() - return images - - -def save_progress(text_encoder, placeholder_token_ids, accelerator, args, save_path, safe_serialization=True): - logger.info("Saving embeddings") - learned_embeds = ( - accelerator.unwrap_model(text_encoder) - .get_input_embeddings() - .weight[min(placeholder_token_ids) : max(placeholder_token_ids) + 1] - ) - learned_embeds_dict = {args.placeholder_token: learned_embeds.detach().cpu()} - - if safe_serialization: - safetensors.torch.save_file(learned_embeds_dict, save_path, metadata={"format": "pt"}) - else: - torch.save(learned_embeds_dict, save_path) - - -def parse_args(): - parser = argparse.ArgumentParser(description="Simple example of a training script.") - parser.add_argument( - "--save_steps", - type=int, - default=500, - help="Save learned_embeds.bin every X updates steps.", - ) - parser.add_argument( - "--save_as_full_pipeline", - action="store_true", - help="Save the complete stable diffusion pipeline.", - ) - parser.add_argument( - "--num_vectors", - type=int, - default=1, - help="How many textual inversion vectors shall be used to learn the concept.", - ) - parser.add_argument( - "--pretrained_model_name_or_path", - type=str, - default=None, - required=True, - help="Path to pretrained model or model identifier from huggingface.co/models.", - ) - parser.add_argument( - "--revision", - type=str, - default=None, - required=False, - help="Revision of pretrained model identifier from huggingface.co/models.", - ) - parser.add_argument( - "--tokenizer_name", - type=str, - default=None, - help="Pretrained tokenizer name or path if not the same as model_name", - ) - parser.add_argument( - "--train_data_dir", type=str, default=None, required=True, help="A folder containing the training data." - ) - parser.add_argument( - "--placeholder_token", - type=str, - default=None, - required=True, - help="A token to use as a placeholder for the concept.", - ) - parser.add_argument( - "--initializer_token", type=str, default=None, required=True, help="A token to use as initializer word." - ) - parser.add_argument("--learnable_property", type=str, default="object", help="Choose between 'object' and 'style'") - parser.add_argument("--repeats", type=int, default=100, help="How many times to repeat the training data.") - parser.add_argument( - "--output_dir", - type=str, - default="text-inversion-model", - help="The output directory where the model predictions and checkpoints will be written.", - ) - parser.add_argument("--seed", type=int, default=None, help="A seed for reproducible training.") - parser.add_argument( - "--resolution", - type=int, - default=512, - help=( - "The resolution for input images, all the images in the train/validation dataset will be resized to this" - " resolution" - ), - ) - parser.add_argument( - "--center_crop", action="store_true", help="Whether to center crop images before resizing to resolution." - ) - parser.add_argument( - "--train_batch_size", type=int, default=16, help="Batch size (per device) for the training dataloader." - ) - parser.add_argument("--num_train_epochs", type=int, default=100) - parser.add_argument( - "--max_train_steps", - type=int, - default=5000, - help="Total number of training steps to perform. If provided, overrides num_train_epochs.", - ) - parser.add_argument( - "--gradient_accumulation_steps", - type=int, - default=1, - help="Number of updates steps to accumulate before performing a backward/update pass.", - ) - parser.add_argument( - "--gradient_checkpointing", - action="store_true", - help="Whether or not to use gradient checkpointing to save memory at the expense of slower backward pass.", - ) - parser.add_argument( - "--learning_rate", - type=float, - default=1e-4, - help="Initial learning rate (after the potential warmup period) to use.", - ) - parser.add_argument( - "--scale_lr", - action="store_true", - default=False, - help="Scale the learning rate by the number of GPUs, gradient accumulation steps, and batch size.", - ) - parser.add_argument( - "--lr_scheduler", - type=str, - default="constant", - help=( - 'The scheduler type to use. Choose between ["linear", "cosine", "cosine_with_restarts", "polynomial",' - ' "constant", "constant_with_warmup"]' - ), - ) - parser.add_argument( - "--lr_warmup_steps", type=int, default=500, help="Number of steps for the warmup in the lr scheduler." - ) - parser.add_argument( - "--lr_num_cycles", - type=int, - default=1, - help="Number of hard resets of the lr in cosine_with_restarts scheduler.", - ) - parser.add_argument( - "--dataloader_num_workers", - type=int, - default=0, - help=( - "Number of subprocesses to use for data loading. 0 means that the data will be loaded in the main process." - ), - ) - parser.add_argument("--adam_beta1", type=float, default=0.9, help="The beta1 parameter for the Adam optimizer.") - parser.add_argument("--adam_beta2", type=float, default=0.999, help="The beta2 parameter for the Adam optimizer.") - parser.add_argument("--adam_weight_decay", type=float, default=1e-2, help="Weight decay to use.") - parser.add_argument("--adam_epsilon", type=float, default=1e-08, help="Epsilon value for the Adam optimizer") - parser.add_argument("--push_to_hub", action="store_true", help="Whether or not to push the model to the Hub.") - parser.add_argument("--hub_token", type=str, default=None, help="The token to use to push to the Model Hub.") - parser.add_argument( - "--hub_model_id", - type=str, - default=None, - help="The name of the repository to keep in sync with the local `output_dir`.", - ) - parser.add_argument( - "--logging_dir", - type=str, - default="logs", - help=( - "[TensorBoard](https://www.tensorflow.org/tensorboard) log directory. Will default to" - " *output_dir/runs/**CURRENT_DATETIME_HOSTNAME***." - ), - ) - parser.add_argument( - "--mixed_precision", - type=str, - default="no", - choices=["no", "fp16", "bf16"], - help=( - "Whether to use mixed precision. Choose" - "between fp16 and bf16 (bfloat16). Bf16 requires PyTorch >= 1.10." - "and an Nvidia Ampere GPU." - ), - ) - parser.add_argument( - "--allow_tf32", - action="store_true", - help=( - "Whether or not to allow TF32 on Ampere GPUs. Can be used to speed up training. For more information, see" - " https://pytorch.org/docs/stable/notes/cuda.html#tensorfloat-32-tf32-on-ampere-devices" - ), - ) - parser.add_argument( - "--report_to", - type=str, - default="tensorboard", - help=( - 'The integration to report the results and logs to. Supported platforms are `"tensorboard"`' - ' (default), `"wandb"` and `"comet_ml"`. Use `"all"` to report to all integrations.' - ), - ) - parser.add_argument( - "--validation_prompt", - type=str, - default=None, - help="A prompt that is used during validation to verify that the model is learning.", - ) - parser.add_argument( - "--num_validation_images", - type=int, - default=4, - help="Number of images that should be generated during validation with `validation_prompt`.", - ) - parser.add_argument( - "--validation_steps", - type=int, - default=100, - help=( - "Run validation every X steps. Validation consists of running the prompt" - " `args.validation_prompt` multiple times: `args.num_validation_images`" - " and logging the images." - ), - ) - parser.add_argument( - "--validation_epochs", - type=int, - default=None, - help=( - "Deprecated in favor of validation_steps. Run validation every X epochs. Validation consists of running the prompt" - " `args.validation_prompt` multiple times: `args.num_validation_images`" - " and logging the images." - ), - ) - parser.add_argument("--local_rank", type=int, default=-1, help="For distributed training: local_rank") - parser.add_argument( - "--checkpointing_steps", - type=int, - default=500, - help=( - "Save a checkpoint of the training state every X updates. These checkpoints are only suitable for resuming" - " training using `--resume_from_checkpoint`." - ), - ) - parser.add_argument( - "--checkpoints_total_limit", - type=int, - default=None, - help=("Max number of checkpoints to store."), - ) - parser.add_argument( - "--resume_from_checkpoint", - type=str, - default=None, - help=( - "Whether training should be resumed from a previous checkpoint. Use a path saved by" - ' `--checkpointing_steps`, or `"latest"` to automatically select the last available checkpoint.' - ), - ) - parser.add_argument( - "--enable_xformers_memory_efficient_attention", action="store_true", help="Whether or not to use xformers." - ) - parser.add_argument( - "--no_safe_serialization", - action="store_true", - help="If specified save the checkpoint not in `safetensors` format, but in original PyTorch format instead.", - ) - - args = parser.parse_args() - env_local_rank = int(os.environ.get("LOCAL_RANK", -1)) - if env_local_rank != -1 and env_local_rank != args.local_rank: - args.local_rank = env_local_rank - - if args.train_data_dir is None: - raise ValueError("You must specify a train data directory.") - - return args - - -imagenet_templates_small = [ - "a photo of a {}", - "a rendering of a {}", - "a cropped photo of the {}", - "the photo of a {}", - "a photo of a clean {}", - "a photo of a dirty {}", - "a dark photo of the {}", - "a photo of my {}", - "a photo of the cool {}", - "a close-up photo of a {}", - "a bright photo of the {}", - "a cropped photo of a {}", - "a photo of the {}", - "a good photo of the {}", - "a photo of one {}", - "a close-up photo of the {}", - "a rendition of the {}", - "a photo of the clean {}", - "a rendition of a {}", - "a photo of a nice {}", - "a good photo of a {}", - "a photo of the nice {}", - "a photo of the small {}", - "a photo of the weird {}", - "a photo of the large {}", - "a photo of a cool {}", - "a photo of a small {}", -] - -imagenet_style_templates_small = [ - "a painting in the style of {}", - "a rendering in the style of {}", - "a cropped painting in the style of {}", - "the painting in the style of {}", - "a clean painting in the style of {}", - "a dirty painting in the style of {}", - "a dark painting in the style of {}", - "a picture in the style of {}", - "a cool painting in the style of {}", - "a close-up painting in the style of {}", - "a bright painting in the style of {}", - "a cropped painting in the style of {}", - "a good painting in the style of {}", - "a close-up painting in the style of {}", - "a rendition in the style of {}", - "a nice painting in the style of {}", - "a small painting in the style of {}", - "a weird painting in the style of {}", - "a large painting in the style of {}", -] - - -class TextualInversionDataset(Dataset): - def __init__( - self, - data_root, - tokenizer, - learnable_property="object", # [object, style] - size=512, - repeats=100, - interpolation="bicubic", - flip_p=0.5, - set="train", - placeholder_token="*", - center_crop=False, - ): - self.data_root = data_root - self.tokenizer = tokenizer - self.learnable_property = learnable_property - self.size = size - self.placeholder_token = placeholder_token - self.center_crop = center_crop - self.flip_p = flip_p - - self.image_paths = [os.path.join(self.data_root, file_path) for file_path in os.listdir(self.data_root)] - - self.num_images = len(self.image_paths) - self._length = self.num_images - - if set == "train": - self._length = self.num_images * repeats - - self.interpolation = { - "linear": PIL_INTERPOLATION["linear"], - "bilinear": PIL_INTERPOLATION["bilinear"], - "bicubic": PIL_INTERPOLATION["bicubic"], - "lanczos": PIL_INTERPOLATION["lanczos"], - }[interpolation] - - self.templates = imagenet_style_templates_small if learnable_property == "style" else imagenet_templates_small - self.flip_transform = transforms.RandomHorizontalFlip(p=self.flip_p) - - def __len__(self): - return self._length - - def __getitem__(self, i): - example = {} - image = Image.open(self.image_paths[i % self.num_images]) - - if not image.mode == "RGB": - image = image.convert("RGB") - - placeholder_string = self.placeholder_token - text = random.choice(self.templates).format(placeholder_string) - - example["input_ids"] = self.tokenizer( - text, - padding="max_length", - truncation=True, - max_length=self.tokenizer.model_max_length, - return_tensors="pt", - ).input_ids[0] - - # default to score-sde preprocessing - img = np.array(image).astype(np.uint8) - - if self.center_crop: - crop = min(img.shape[0], img.shape[1]) - ( - h, - w, - ) = ( - img.shape[0], - img.shape[1], - ) - img = img[(h - crop) // 2 : (h + crop) // 2, (w - crop) // 2 : (w + crop) // 2] - - image = Image.fromarray(img) - image = image.resize((self.size, self.size), resample=self.interpolation) - - image = self.flip_transform(image) - image = np.array(image).astype(np.uint8) - image = (image / 127.5 - 1.0).astype(np.float32) - - example["pixel_values"] = torch.from_numpy(image).permute(2, 0, 1) - return example - - -def main(): - args = parse_args() - logging_dir = os.path.join(args.output_dir, args.logging_dir) - accelerator_project_config = ProjectConfiguration(project_dir=args.output_dir, logging_dir=logging_dir) - accelerator = Accelerator( - gradient_accumulation_steps=args.gradient_accumulation_steps, - mixed_precision=args.mixed_precision, - log_with=args.report_to, - project_config=accelerator_project_config, - ) - - if args.report_to == "wandb": - if not is_wandb_available(): - raise ImportError("Make sure to install wandb if you want to use it for logging during training.") - - # Make one log on every process with the configuration for debugging. - logging.basicConfig( - format="%(asctime)s - %(levelname)s - %(name)s - %(message)s", - datefmt="%m/%d/%Y %H:%M:%S", - level=logging.INFO, - ) - logger.info(accelerator.state, main_process_only=False) - if accelerator.is_local_main_process: - transformers.utils.logging.set_verbosity_warning() - diffusers.utils.logging.set_verbosity_info() - else: - transformers.utils.logging.set_verbosity_error() - diffusers.utils.logging.set_verbosity_error() - - # If passed along, set the training seed now. - if args.seed is not None: - set_seed(args.seed) - - # Handle the repository creation - if accelerator.is_main_process: - if args.output_dir is not None: - os.makedirs(args.output_dir, exist_ok=True) - - if args.push_to_hub: - repo_id = create_repo( - repo_id=args.hub_model_id or Path(args.output_dir).name, exist_ok=True, token=args.hub_token - ).repo_id - - # Load tokenizer - if args.tokenizer_name: - tokenizer = CLIPTokenizer.from_pretrained(args.tokenizer_name) - elif args.pretrained_model_name_or_path: - tokenizer = CLIPTokenizer.from_pretrained(args.pretrained_model_name_or_path, subfolder="tokenizer") - - # Load scheduler and models - noise_scheduler = DDPMScheduler.from_pretrained(args.pretrained_model_name_or_path, subfolder="scheduler") - text_encoder = CLIPTextModel.from_pretrained( - args.pretrained_model_name_or_path, subfolder="text_encoder", revision=args.revision - ) - vae = AutoencoderKL.from_pretrained(args.pretrained_model_name_or_path, subfolder="vae", revision=args.revision) - unet = UNet2DConditionModel.from_pretrained( - args.pretrained_model_name_or_path, subfolder="unet", revision=args.revision - ) - - # Add the placeholder token in tokenizer - placeholder_tokens = [args.placeholder_token] - - if args.num_vectors < 1: - raise ValueError(f"--num_vectors has to be larger or equal to 1, but is {args.num_vectors}") - - # add dummy tokens for multi-vector - additional_tokens = [] - for i in range(1, args.num_vectors): - additional_tokens.append(f"{args.placeholder_token}_{i}") - placeholder_tokens += additional_tokens - - num_added_tokens = tokenizer.add_tokens(placeholder_tokens) - if num_added_tokens != args.num_vectors: - raise ValueError( - f"The tokenizer already contains the token {args.placeholder_token}. Please pass a different" - " `placeholder_token` that is not already in the tokenizer." - ) - - # Convert the initializer_token, placeholder_token to ids - token_ids = tokenizer.encode(args.initializer_token, add_special_tokens=False) - # Check if initializer_token is a single token or a sequence of tokens - if len(token_ids) > 1: - raise ValueError("The initializer token must be a single token.") - - initializer_token_id = token_ids[0] - placeholder_token_ids = tokenizer.convert_tokens_to_ids(placeholder_tokens) - - # Resize the token embeddings as we are adding new special tokens to the tokenizer - text_encoder.resize_token_embeddings(len(tokenizer)) - - # Initialise the newly added placeholder token with the embeddings of the initializer token - token_embeds = text_encoder.get_input_embeddings().weight.data - with torch.no_grad(): - for token_id in placeholder_token_ids: - token_embeds[token_id] = token_embeds[initializer_token_id].clone() - - # Freeze vae and unet - vae.requires_grad_(False) - unet.requires_grad_(False) - # Freeze all parameters except for the token embeddings in text encoder - text_encoder.text_model.encoder.requires_grad_(False) - text_encoder.text_model.final_layer_norm.requires_grad_(False) - text_encoder.text_model.embeddings.position_embedding.requires_grad_(False) - - if args.gradient_checkpointing: - # Keep unet in train mode if we are using gradient checkpointing to save memory. - # The dropout cannot be != 0 so it doesn't matter if we are in eval or train mode. - unet.train() - text_encoder.gradient_checkpointing_enable() - unet.enable_gradient_checkpointing() - - if args.enable_xformers_memory_efficient_attention: - if is_xformers_available(): - import xformers - - xformers_version = version.parse(xformers.__version__) - if xformers_version == version.parse("0.0.16"): - logger.warn( - "xFormers 0.0.16 cannot be used for training in some GPUs. If you observe problems during training, please update xFormers to at least 0.0.17. See https://huggingface.co/docs/diffusers/main/en/optimization/xformers for more details." - ) - unet.enable_xformers_memory_efficient_attention() - else: - raise ValueError("xformers is not available. Make sure it is installed correctly") - - # Enable TF32 for faster training on Ampere GPUs, - # cf https://pytorch.org/docs/stable/notes/cuda.html#tensorfloat-32-tf32-on-ampere-devices - if args.allow_tf32: - torch.backends.cuda.matmul.allow_tf32 = True - - if args.scale_lr: - args.learning_rate = ( - args.learning_rate * args.gradient_accumulation_steps * args.train_batch_size * accelerator.num_processes - ) - - # Initialize the optimizer - optimizer = torch.optim.AdamW( - text_encoder.get_input_embeddings().parameters(), # only optimize the embeddings - lr=args.learning_rate, - betas=(args.adam_beta1, args.adam_beta2), - weight_decay=args.adam_weight_decay, - eps=args.adam_epsilon, - ) - - # Dataset and DataLoaders creation: - train_dataset = TextualInversionDataset( - data_root=args.train_data_dir, - tokenizer=tokenizer, - size=args.resolution, - placeholder_token=(" ".join(tokenizer.convert_ids_to_tokens(placeholder_token_ids))), - repeats=args.repeats, - learnable_property=args.learnable_property, - center_crop=args.center_crop, - set="train", - ) - train_dataloader = torch.utils.data.DataLoader( - train_dataset, batch_size=args.train_batch_size, shuffle=True, num_workers=args.dataloader_num_workers - ) - if args.validation_epochs is not None: - warnings.warn( - f"FutureWarning: You are doing logging with validation_epochs={args.validation_epochs}." - " Deprecated validation_epochs in favor of `validation_steps`" - f"Setting `args.validation_steps` to {args.validation_epochs * len(train_dataset)}", - FutureWarning, - stacklevel=2, - ) - args.validation_steps = args.validation_epochs * len(train_dataset) - - # Scheduler and math around the number of training steps. - overrode_max_train_steps = False - num_update_steps_per_epoch = math.ceil(len(train_dataloader) / args.gradient_accumulation_steps) - if args.max_train_steps is None: - args.max_train_steps = args.num_train_epochs * num_update_steps_per_epoch - overrode_max_train_steps = True - - lr_scheduler = get_scheduler( - args.lr_scheduler, - optimizer=optimizer, - num_warmup_steps=args.lr_warmup_steps * accelerator.num_processes, - num_training_steps=args.max_train_steps * accelerator.num_processes, - num_cycles=args.lr_num_cycles, - ) - - # Prepare everything with our `accelerator`. - text_encoder, optimizer, train_dataloader, lr_scheduler = accelerator.prepare( - text_encoder, optimizer, train_dataloader, lr_scheduler - ) - - # For mixed precision training we cast all non-trainable weigths (vae, non-lora text_encoder and non-lora unet) to half-precision - # as these weights are only used for inference, keeping weights in full precision is not required. - weight_dtype = torch.float32 - if accelerator.mixed_precision == "fp16": - weight_dtype = torch.float16 - elif accelerator.mixed_precision == "bf16": - weight_dtype = torch.bfloat16 - - # Move vae and unet to device and cast to weight_dtype - unet.to(accelerator.device, dtype=weight_dtype) - vae.to(accelerator.device, dtype=weight_dtype) - - # We need to recalculate our total training steps as the size of the training dataloader may have changed. - num_update_steps_per_epoch = math.ceil(len(train_dataloader) / args.gradient_accumulation_steps) - if overrode_max_train_steps: - args.max_train_steps = args.num_train_epochs * num_update_steps_per_epoch - # Afterwards we recalculate our number of training epochs - args.num_train_epochs = math.ceil(args.max_train_steps / num_update_steps_per_epoch) - - # We need to initialize the trackers we use, and also store our configuration. - # The trackers initializes automatically on the main process. - if accelerator.is_main_process: - accelerator.init_trackers("textual_inversion", config=vars(args)) - - # Train! - total_batch_size = args.train_batch_size * accelerator.num_processes * args.gradient_accumulation_steps - - logger.info("***** Running training *****") - logger.info(f" Num examples = {len(train_dataset)}") - logger.info(f" Num Epochs = {args.num_train_epochs}") - logger.info(f" Instantaneous batch size per device = {args.train_batch_size}") - logger.info(f" Total train batch size (w. parallel, distributed & accumulation) = {total_batch_size}") - logger.info(f" Gradient Accumulation steps = {args.gradient_accumulation_steps}") - logger.info(f" Total optimization steps = {args.max_train_steps}") - global_step = 0 - first_epoch = 0 - # Potentially load in the weights and states from a previous save - if args.resume_from_checkpoint: - if args.resume_from_checkpoint != "latest": - path = os.path.basename(args.resume_from_checkpoint) - else: - # Get the most recent checkpoint - dirs = os.listdir(args.output_dir) - dirs = [d for d in dirs if d.startswith("checkpoint")] - dirs = sorted(dirs, key=lambda x: int(x.split("-")[1])) - path = dirs[-1] if len(dirs) > 0 else None - - if path is None: - accelerator.print( - f"Checkpoint '{args.resume_from_checkpoint}' does not exist. Starting a new training run." - ) - args.resume_from_checkpoint = None - else: - accelerator.print(f"Resuming from checkpoint {path}") - accelerator.load_state(os.path.join(args.output_dir, path)) - global_step = int(path.split("-")[1]) - - resume_global_step = global_step * args.gradient_accumulation_steps - first_epoch = global_step // num_update_steps_per_epoch - resume_step = resume_global_step % (num_update_steps_per_epoch * args.gradient_accumulation_steps) - - # Only show the progress bar once on each machine. - progress_bar = tqdm(range(global_step, args.max_train_steps), disable=not accelerator.is_local_main_process) - progress_bar.set_description("Steps") - - # keep original embeddings as reference - orig_embeds_params = accelerator.unwrap_model(text_encoder).get_input_embeddings().weight.data.clone() - - for epoch in range(first_epoch, args.num_train_epochs): - text_encoder.train() - for step, batch in enumerate(train_dataloader): - # Skip steps until we reach the resumed step - if args.resume_from_checkpoint and epoch == first_epoch and step < resume_step: - if step % args.gradient_accumulation_steps == 0: - progress_bar.update(1) - continue - - with accelerator.accumulate(text_encoder): - # Convert images to latent space - latents = vae.encode(batch["pixel_values"].to(dtype=weight_dtype)).latent_dist.sample().detach() - latents = latents * vae.config.scaling_factor - - # Sample noise that we'll add to the latents - noise = torch.randn_like(latents) - bsz = latents.shape[0] - # Sample a random timestep for each image - timesteps = torch.randint(0, noise_scheduler.config.num_train_timesteps, (bsz,), device=latents.device) - timesteps = timesteps.long() - - # Add noise to the latents according to the noise magnitude at each timestep - # (this is the forward diffusion process) - noisy_latents = noise_scheduler.add_noise(latents, noise, timesteps) - - # Get the text embedding for conditioning - encoder_hidden_states = text_encoder(batch["input_ids"])[0].to(dtype=weight_dtype) - - # Predict the noise residual - model_pred = unet(noisy_latents, timesteps, encoder_hidden_states).sample - - # Get the target for loss depending on the prediction type - if noise_scheduler.config.prediction_type == "epsilon": - target = noise - elif noise_scheduler.config.prediction_type == "v_prediction": - target = noise_scheduler.get_velocity(latents, noise, timesteps) - else: - raise ValueError(f"Unknown prediction type {noise_scheduler.config.prediction_type}") - - loss = F.mse_loss(model_pred.float(), target.float(), reduction="mean") - - accelerator.backward(loss) - - optimizer.step() - lr_scheduler.step() - optimizer.zero_grad() - - # Let's make sure we don't update any embedding weights besides the newly added token - index_no_updates = torch.ones((len(tokenizer),), dtype=torch.bool) - index_no_updates[min(placeholder_token_ids) : max(placeholder_token_ids) + 1] = False - - with torch.no_grad(): - accelerator.unwrap_model(text_encoder).get_input_embeddings().weight[ - index_no_updates - ] = orig_embeds_params[index_no_updates] - - # Checks if the accelerator has performed an optimization step behind the scenes - if accelerator.sync_gradients: - images = [] - progress_bar.update(1) - global_step += 1 - if global_step % args.save_steps == 0: - weight_name = ( - f"learned_embeds-steps-{global_step}.bin" - if args.no_safe_serialization - else f"learned_embeds-steps-{global_step}.safetensors" - ) - save_path = os.path.join(args.output_dir, weight_name) - save_progress( - text_encoder, - placeholder_token_ids, - accelerator, - args, - save_path, - safe_serialization=not args.no_safe_serialization, - ) - - if accelerator.is_main_process: - if global_step % args.checkpointing_steps == 0: - # _before_ saving state, check if this save would set us over the `checkpoints_total_limit` - if args.checkpoints_total_limit is not None: - checkpoints = os.listdir(args.output_dir) - checkpoints = [d for d in checkpoints if d.startswith("checkpoint")] - checkpoints = sorted(checkpoints, key=lambda x: int(x.split("-")[1])) - - # before we save the new checkpoint, we need to have at _most_ `checkpoints_total_limit - 1` checkpoints - if len(checkpoints) >= args.checkpoints_total_limit: - num_to_remove = len(checkpoints) - args.checkpoints_total_limit + 1 - removing_checkpoints = checkpoints[0:num_to_remove] - - logger.info( - f"{len(checkpoints)} checkpoints already exist, removing {len(removing_checkpoints)} checkpoints" - ) - logger.info(f"removing checkpoints: {', '.join(removing_checkpoints)}") - - for removing_checkpoint in removing_checkpoints: - removing_checkpoint = os.path.join(args.output_dir, removing_checkpoint) - shutil.rmtree(removing_checkpoint) - - save_path = os.path.join(args.output_dir, f"checkpoint-{global_step}") - accelerator.save_state(save_path) - logger.info(f"Saved state to {save_path}") - - if args.validation_prompt is not None and global_step % args.validation_steps == 0: - images = log_validation( - text_encoder, tokenizer, unet, vae, args, accelerator, weight_dtype, epoch - ) - - logs = {"loss": loss.detach().item(), "lr": lr_scheduler.get_last_lr()[0]} - progress_bar.set_postfix(**logs) - accelerator.log(logs, step=global_step) - - if global_step >= args.max_train_steps: - break - # Create the pipeline using the trained modules and save it. - accelerator.wait_for_everyone() - if accelerator.is_main_process: - if args.push_to_hub and not args.save_as_full_pipeline: - logger.warn("Enabling full model saving because --push_to_hub=True was specified.") - save_full_model = True - else: - save_full_model = args.save_as_full_pipeline - if save_full_model: - pipeline = StableDiffusionPipeline.from_pretrained( - args.pretrained_model_name_or_path, - text_encoder=accelerator.unwrap_model(text_encoder), - vae=vae, - unet=unet, - tokenizer=tokenizer, - ) - pipeline.save_pretrained(args.output_dir) - # Save the newly trained embeddings - weight_name = "learned_embeds.bin" if args.no_safe_serialization else "learned_embeds.safetensors" - save_path = os.path.join(args.output_dir, weight_name) - save_progress( - text_encoder, - placeholder_token_ids, - accelerator, - args, - save_path, - safe_serialization=not args.no_safe_serialization, - ) - - if args.push_to_hub: - save_model_card( - repo_id, - images=images, - base_model=args.pretrained_model_name_or_path, - repo_folder=args.output_dir, - ) - upload_folder( - repo_id=repo_id, - folder_path=args.output_dir, - commit_message="End of training", - ignore_patterns=["step_*", "epoch_*"], - ) - - accelerator.end_training() - - -if __name__ == "__main__": - main() diff --git a/spaces/patgpt4/MusicGen/audiocraft/utils/__init__.py b/spaces/patgpt4/MusicGen/audiocraft/utils/__init__.py deleted file mode 100644 index 0952fcc3f57e34b3747962e9ebd6fc57aeea63fa..0000000000000000000000000000000000000000 --- a/spaces/patgpt4/MusicGen/audiocraft/utils/__init__.py +++ /dev/null @@ -1,5 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. diff --git a/spaces/phyloforfun/GreenSight/run_greensight.py b/spaces/phyloforfun/GreenSight/run_greensight.py deleted file mode 100644 index e70e370f8bc86cacf4e1f17f47366ad65b4d589f..0000000000000000000000000000000000000000 --- a/spaces/phyloforfun/GreenSight/run_greensight.py +++ /dev/null @@ -1,23 +0,0 @@ -import streamlit.web.cli as stcli -import os, sys - - -def resolve_path(path): - resolved_path = os.path.abspath(os.path.join(os.getcwd(), path)) - return resolved_path - - -if __name__ == "__main__": - dir_home = os.path.dirname(__file__) - - # pip install protobuf==3.20.0 - - sys.argv = [ - "streamlit", - "run", - resolve_path(os.path.join(dir_home,"app.py")), - "--global.developmentMode=false", - "--server.port=8519", - - ] - sys.exit(stcli.main()) \ No newline at end of file diff --git a/spaces/pierreguillou/whisper-demo-french/README.md b/spaces/pierreguillou/whisper-demo-french/README.md deleted file mode 100644 index 742b13bbe67e8ab8b592e9c47a2d96a9b4b6bc91..0000000000000000000000000000000000000000 --- a/spaces/pierreguillou/whisper-demo-french/README.md +++ /dev/null @@ -1,15 +0,0 @@ ---- -title: Whisper Demo in French -emoji: 🤫 -colorFrom: indigo -colorTo: red -sdk: gradio -sdk_version: 3.9.1 -app_file: app.py -pinned: false -tags: -- whisper-event -duplicated_from: whisper-event/whisper-demo ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/__main__.py b/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/__main__.py deleted file mode 100644 index 5991326115fe5026470165b387ba2bc78bceb006..0000000000000000000000000000000000000000 --- a/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/__main__.py +++ /dev/null @@ -1,24 +0,0 @@ -import os -import sys - -# Remove '' and current working directory from the first entry -# of sys.path, if present to avoid using current directory -# in pip commands check, freeze, install, list and show, -# when invoked as python -m pip -if sys.path[0] in ("", os.getcwd()): - sys.path.pop(0) - -# If we are running from a wheel, add the wheel to sys.path -# This allows the usage python pip-*.whl/pip install pip-*.whl -if __package__ == "": - # __file__ is pip-*.whl/pip/__main__.py - # first dirname call strips of '/__main__.py', second strips off '/pip' - # Resulting path is the name of the wheel itself - # Add that to sys.path so we can import pip - path = os.path.dirname(os.path.dirname(__file__)) - sys.path.insert(0, path) - -if __name__ == "__main__": - from pip._internal.cli.main import main as _main - - sys.exit(_main()) diff --git a/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/chardet/langrussianmodel.py b/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/chardet/langrussianmodel.py deleted file mode 100644 index 39a5388948ef12b69b65fbfa89a84c6ef4a4bfd6..0000000000000000000000000000000000000000 --- a/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/chardet/langrussianmodel.py +++ /dev/null @@ -1,5725 +0,0 @@ -from pip._vendor.chardet.sbcharsetprober import SingleByteCharSetModel - -# 3: Positive -# 2: Likely -# 1: Unlikely -# 0: Negative - -RUSSIAN_LANG_MODEL = { - 37: { # 'А' - 37: 0, # 'А' - 44: 1, # 'Б' - 33: 1, # 'В' - 46: 1, # 'Г' - 41: 1, # 'Д' - 48: 1, # 'Е' - 56: 1, # 'Ж' - 51: 1, # 'З' - 42: 1, # 'И' - 60: 1, # 'Й' - 36: 1, # 'К' - 49: 1, # 'Л' - 38: 1, # 'М' - 31: 2, # 'Н' - 34: 1, # 'О' - 35: 1, # 'П' - 45: 1, # 'Р' - 32: 1, # 'С' - 40: 1, # 'Т' - 52: 1, # 'У' - 53: 1, # 'Ф' - 55: 1, # 'Х' - 58: 1, # 'Ц' - 50: 1, # 'Ч' - 57: 1, # 'Ш' - 63: 1, # 'Щ' - 62: 0, # 'Ы' - 61: 0, # 'Ь' - 47: 0, # 'Э' - 59: 1, # 'Ю' - 43: 1, # 'Я' - 3: 1, # 'а' - 21: 2, # 'б' - 10: 2, # 'в' - 19: 2, # 'г' - 13: 2, # 'д' - 2: 0, # 'е' - 24: 1, # 'ж' - 20: 1, # 'з' - 4: 0, # 'и' - 23: 1, # 'й' - 11: 2, # 'к' - 8: 3, # 'л' - 12: 2, # 'м' - 5: 2, # 'н' - 1: 0, # 'о' - 15: 2, # 'п' - 9: 2, # 'р' - 7: 2, # 'с' - 6: 2, # 'т' - 14: 2, # 'у' - 39: 2, # 'ф' - 26: 2, # 'х' - 28: 0, # 'ц' - 22: 1, # 'ч' - 25: 2, # 'ш' - 29: 0, # 'щ' - 54: 0, # 'ъ' - 18: 0, # 'ы' - 17: 0, # 'ь' - 30: 1, # 'э' - 27: 0, # 'ю' - 16: 0, # 'я' - }, - 44: { # 'Б' - 37: 1, # 'А' - 44: 0, # 'Б' - 33: 1, # 'В' - 46: 1, # 'Г' - 41: 0, # 'Д' - 48: 1, # 'Е' - 56: 0, # 'Ж' - 51: 0, # 'З' - 42: 1, # 'И' - 60: 0, # 'Й' - 36: 0, # 'К' - 49: 1, # 'Л' - 38: 1, # 'М' - 31: 1, # 'Н' - 34: 1, # 'О' - 35: 0, # 'П' - 45: 1, # 'Р' - 32: 0, # 'С' - 40: 0, # 'Т' - 52: 1, # 'У' - 53: 0, # 'Ф' - 55: 0, # 'Х' - 58: 0, # 'Ц' - 50: 0, # 'Ч' - 57: 0, # 'Ш' - 63: 0, # 'Щ' - 62: 1, # 'Ы' - 61: 0, # 'Ь' - 47: 0, # 'Э' - 59: 0, # 'Ю' - 43: 1, # 'Я' - 3: 2, # 'а' - 21: 0, # 'б' - 10: 0, # 'в' - 19: 0, # 'г' - 13: 1, # 'д' - 2: 3, # 'е' - 24: 0, # 'ж' - 20: 0, # 'з' - 4: 2, # 'и' - 23: 0, # 'й' - 11: 0, # 'к' - 8: 2, # 'л' - 12: 0, # 'м' - 5: 0, # 'н' - 1: 3, # 'о' - 15: 0, # 'п' - 9: 2, # 'р' - 7: 0, # 'с' - 6: 0, # 'т' - 14: 2, # 'у' - 39: 0, # 'ф' - 26: 0, # 'х' - 28: 0, # 'ц' - 22: 0, # 'ч' - 25: 0, # 'ш' - 29: 0, # 'щ' - 54: 0, # 'ъ' - 18: 2, # 'ы' - 17: 1, # 'ь' - 30: 2, # 'э' - 27: 1, # 'ю' - 16: 1, # 'я' - }, - 33: { # 'В' - 37: 2, # 'А' - 44: 0, # 'Б' - 33: 1, # 'В' - 46: 0, # 'Г' - 41: 1, # 'Д' - 48: 1, # 'Е' - 56: 0, # 'Ж' - 51: 0, # 'З' - 42: 1, # 'И' - 60: 0, # 'Й' - 36: 1, # 'К' - 49: 1, # 'Л' - 38: 1, # 'М' - 31: 1, # 'Н' - 34: 1, # 'О' - 35: 1, # 'П' - 45: 1, # 'Р' - 32: 1, # 'С' - 40: 1, # 'Т' - 52: 1, # 'У' - 53: 0, # 'Ф' - 55: 0, # 'Х' - 58: 0, # 'Ц' - 50: 0, # 'Ч' - 57: 1, # 'Ш' - 63: 0, # 'Щ' - 62: 1, # 'Ы' - 61: 1, # 'Ь' - 47: 0, # 'Э' - 59: 0, # 'Ю' - 43: 1, # 'Я' - 3: 2, # 'а' - 21: 1, # 'б' - 10: 1, # 'в' - 19: 1, # 'г' - 13: 2, # 'д' - 2: 3, # 'е' - 24: 0, # 'ж' - 20: 2, # 'з' - 4: 2, # 'и' - 23: 0, # 'й' - 11: 1, # 'к' - 8: 2, # 'л' - 12: 2, # 'м' - 5: 2, # 'н' - 1: 3, # 'о' - 15: 2, # 'п' - 9: 2, # 'р' - 7: 3, # 'с' - 6: 2, # 'т' - 14: 2, # 'у' - 39: 0, # 'ф' - 26: 1, # 'х' - 28: 1, # 'ц' - 22: 2, # 'ч' - 25: 1, # 'ш' - 29: 0, # 'щ' - 54: 1, # 'ъ' - 18: 3, # 'ы' - 17: 1, # 'ь' - 30: 2, # 'э' - 27: 0, # 'ю' - 16: 1, # 'я' - }, - 46: { # 'Г' - 37: 1, # 'А' - 44: 1, # 'Б' - 33: 0, # 'В' - 46: 0, # 'Г' - 41: 1, # 'Д' - 48: 1, # 'Е' - 56: 0, # 'Ж' - 51: 0, # 'З' - 42: 1, # 'И' - 60: 0, # 'Й' - 36: 0, # 'К' - 49: 1, # 'Л' - 38: 1, # 'М' - 31: 1, # 'Н' - 34: 1, # 'О' - 35: 1, # 'П' - 45: 1, # 'Р' - 32: 0, # 'С' - 40: 0, # 'Т' - 52: 1, # 'У' - 53: 0, # 'Ф' - 55: 0, # 'Х' - 58: 0, # 'Ц' - 50: 0, # 'Ч' - 57: 0, # 'Ш' - 63: 0, # 'Щ' - 62: 0, # 'Ы' - 61: 0, # 'Ь' - 47: 0, # 'Э' - 59: 0, # 'Ю' - 43: 0, # 'Я' - 3: 2, # 'а' - 21: 0, # 'б' - 10: 1, # 'в' - 19: 0, # 'г' - 13: 2, # 'д' - 2: 2, # 'е' - 24: 0, # 'ж' - 20: 0, # 'з' - 4: 2, # 'и' - 23: 0, # 'й' - 11: 0, # 'к' - 8: 2, # 'л' - 12: 1, # 'м' - 5: 1, # 'н' - 1: 3, # 'о' - 15: 0, # 'п' - 9: 2, # 'р' - 7: 0, # 'с' - 6: 0, # 'т' - 14: 2, # 'у' - 39: 0, # 'ф' - 26: 0, # 'х' - 28: 0, # 'ц' - 22: 0, # 'ч' - 25: 0, # 'ш' - 29: 0, # 'щ' - 54: 0, # 'ъ' - 18: 0, # 'ы' - 17: 1, # 'ь' - 30: 1, # 'э' - 27: 1, # 'ю' - 16: 0, # 'я' - }, - 41: { # 'Д' - 37: 1, # 'А' - 44: 0, # 'Б' - 33: 1, # 'В' - 46: 0, # 'Г' - 41: 0, # 'Д' - 48: 2, # 'Е' - 56: 1, # 'Ж' - 51: 0, # 'З' - 42: 1, # 'И' - 60: 0, # 'Й' - 36: 1, # 'К' - 49: 1, # 'Л' - 38: 0, # 'М' - 31: 1, # 'Н' - 34: 1, # 'О' - 35: 0, # 'П' - 45: 1, # 'Р' - 32: 1, # 'С' - 40: 0, # 'Т' - 52: 1, # 'У' - 53: 0, # 'Ф' - 55: 0, # 'Х' - 58: 1, # 'Ц' - 50: 1, # 'Ч' - 57: 0, # 'Ш' - 63: 0, # 'Щ' - 62: 1, # 'Ы' - 61: 1, # 'Ь' - 47: 0, # 'Э' - 59: 0, # 'Ю' - 43: 1, # 'Я' - 3: 3, # 'а' - 21: 0, # 'б' - 10: 2, # 'в' - 19: 0, # 'г' - 13: 0, # 'д' - 2: 2, # 'е' - 24: 3, # 'ж' - 20: 1, # 'з' - 4: 2, # 'и' - 23: 0, # 'й' - 11: 0, # 'к' - 8: 2, # 'л' - 12: 1, # 'м' - 5: 1, # 'н' - 1: 3, # 'о' - 15: 0, # 'п' - 9: 2, # 'р' - 7: 0, # 'с' - 6: 0, # 'т' - 14: 2, # 'у' - 39: 0, # 'ф' - 26: 1, # 'х' - 28: 0, # 'ц' - 22: 0, # 'ч' - 25: 0, # 'ш' - 29: 0, # 'щ' - 54: 0, # 'ъ' - 18: 1, # 'ы' - 17: 1, # 'ь' - 30: 2, # 'э' - 27: 1, # 'ю' - 16: 1, # 'я' - }, - 48: { # 'Е' - 37: 1, # 'А' - 44: 1, # 'Б' - 33: 1, # 'В' - 46: 1, # 'Г' - 41: 1, # 'Д' - 48: 1, # 'Е' - 56: 1, # 'Ж' - 51: 1, # 'З' - 42: 1, # 'И' - 60: 1, # 'Й' - 36: 1, # 'К' - 49: 1, # 'Л' - 38: 1, # 'М' - 31: 2, # 'Н' - 34: 1, # 'О' - 35: 1, # 'П' - 45: 2, # 'Р' - 32: 2, # 'С' - 40: 1, # 'Т' - 52: 0, # 'У' - 53: 0, # 'Ф' - 55: 1, # 'Х' - 58: 1, # 'Ц' - 50: 1, # 'Ч' - 57: 1, # 'Ш' - 63: 1, # 'Щ' - 62: 0, # 'Ы' - 61: 0, # 'Ь' - 47: 0, # 'Э' - 59: 0, # 'Ю' - 43: 1, # 'Я' - 3: 0, # 'а' - 21: 0, # 'б' - 10: 2, # 'в' - 19: 2, # 'г' - 13: 2, # 'д' - 2: 2, # 'е' - 24: 1, # 'ж' - 20: 1, # 'з' - 4: 0, # 'и' - 23: 2, # 'й' - 11: 1, # 'к' - 8: 2, # 'л' - 12: 2, # 'м' - 5: 1, # 'н' - 1: 0, # 'о' - 15: 1, # 'п' - 9: 1, # 'р' - 7: 3, # 'с' - 6: 0, # 'т' - 14: 0, # 'у' - 39: 1, # 'ф' - 26: 1, # 'х' - 28: 0, # 'ц' - 22: 0, # 'ч' - 25: 1, # 'ш' - 29: 2, # 'щ' - 54: 0, # 'ъ' - 18: 0, # 'ы' - 17: 0, # 'ь' - 30: 0, # 'э' - 27: 1, # 'ю' - 16: 0, # 'я' - }, - 56: { # 'Ж' - 37: 1, # 'А' - 44: 0, # 'Б' - 33: 0, # 'В' - 46: 0, # 'Г' - 41: 1, # 'Д' - 48: 1, # 'Е' - 56: 0, # 'Ж' - 51: 1, # 'З' - 42: 1, # 'И' - 60: 0, # 'Й' - 36: 0, # 'К' - 49: 0, # 'Л' - 38: 0, # 'М' - 31: 1, # 'Н' - 34: 1, # 'О' - 35: 0, # 'П' - 45: 0, # 'Р' - 32: 0, # 'С' - 40: 0, # 'Т' - 52: 1, # 'У' - 53: 0, # 'Ф' - 55: 0, # 'Х' - 58: 0, # 'Ц' - 50: 0, # 'Ч' - 57: 0, # 'Ш' - 63: 0, # 'Щ' - 62: 0, # 'Ы' - 61: 0, # 'Ь' - 47: 0, # 'Э' - 59: 0, # 'Ю' - 43: 0, # 'Я' - 3: 2, # 'а' - 21: 1, # 'б' - 10: 0, # 'в' - 19: 1, # 'г' - 13: 1, # 'д' - 2: 2, # 'е' - 24: 1, # 'ж' - 20: 0, # 'з' - 4: 2, # 'и' - 23: 0, # 'й' - 11: 0, # 'к' - 8: 0, # 'л' - 12: 1, # 'м' - 5: 0, # 'н' - 1: 2, # 'о' - 15: 0, # 'п' - 9: 1, # 'р' - 7: 0, # 'с' - 6: 0, # 'т' - 14: 2, # 'у' - 39: 0, # 'ф' - 26: 0, # 'х' - 28: 0, # 'ц' - 22: 0, # 'ч' - 25: 0, # 'ш' - 29: 0, # 'щ' - 54: 0, # 'ъ' - 18: 0, # 'ы' - 17: 0, # 'ь' - 30: 0, # 'э' - 27: 2, # 'ю' - 16: 0, # 'я' - }, - 51: { # 'З' - 37: 1, # 'А' - 44: 0, # 'Б' - 33: 1, # 'В' - 46: 1, # 'Г' - 41: 1, # 'Д' - 48: 1, # 'Е' - 56: 0, # 'Ж' - 51: 0, # 'З' - 42: 1, # 'И' - 60: 0, # 'Й' - 36: 0, # 'К' - 49: 1, # 'Л' - 38: 1, # 'М' - 31: 1, # 'Н' - 34: 1, # 'О' - 35: 0, # 'П' - 45: 1, # 'Р' - 32: 0, # 'С' - 40: 0, # 'Т' - 52: 1, # 'У' - 53: 0, # 'Ф' - 55: 0, # 'Х' - 58: 0, # 'Ц' - 50: 0, # 'Ч' - 57: 0, # 'Ш' - 63: 0, # 'Щ' - 62: 1, # 'Ы' - 61: 1, # 'Ь' - 47: 0, # 'Э' - 59: 0, # 'Ю' - 43: 0, # 'Я' - 3: 3, # 'а' - 21: 1, # 'б' - 10: 2, # 'в' - 19: 0, # 'г' - 13: 2, # 'д' - 2: 2, # 'е' - 24: 0, # 'ж' - 20: 0, # 'з' - 4: 2, # 'и' - 23: 0, # 'й' - 11: 0, # 'к' - 8: 1, # 'л' - 12: 1, # 'м' - 5: 2, # 'н' - 1: 2, # 'о' - 15: 0, # 'п' - 9: 1, # 'р' - 7: 0, # 'с' - 6: 0, # 'т' - 14: 1, # 'у' - 39: 0, # 'ф' - 26: 0, # 'х' - 28: 0, # 'ц' - 22: 0, # 'ч' - 25: 0, # 'ш' - 29: 0, # 'щ' - 54: 0, # 'ъ' - 18: 1, # 'ы' - 17: 0, # 'ь' - 30: 0, # 'э' - 27: 0, # 'ю' - 16: 1, # 'я' - }, - 42: { # 'И' - 37: 1, # 'А' - 44: 1, # 'Б' - 33: 1, # 'В' - 46: 1, # 'Г' - 41: 1, # 'Д' - 48: 2, # 'Е' - 56: 1, # 'Ж' - 51: 1, # 'З' - 42: 1, # 'И' - 60: 1, # 'Й' - 36: 1, # 'К' - 49: 1, # 'Л' - 38: 1, # 'М' - 31: 1, # 'Н' - 34: 1, # 'О' - 35: 1, # 'П' - 45: 1, # 'Р' - 32: 2, # 'С' - 40: 1, # 'Т' - 52: 0, # 'У' - 53: 1, # 'Ф' - 55: 1, # 'Х' - 58: 1, # 'Ц' - 50: 1, # 'Ч' - 57: 0, # 'Ш' - 63: 1, # 'Щ' - 62: 0, # 'Ы' - 61: 0, # 'Ь' - 47: 0, # 'Э' - 59: 1, # 'Ю' - 43: 1, # 'Я' - 3: 1, # 'а' - 21: 2, # 'б' - 10: 2, # 'в' - 19: 2, # 'г' - 13: 2, # 'д' - 2: 2, # 'е' - 24: 0, # 'ж' - 20: 2, # 'з' - 4: 1, # 'и' - 23: 0, # 'й' - 11: 1, # 'к' - 8: 2, # 'л' - 12: 2, # 'м' - 5: 2, # 'н' - 1: 1, # 'о' - 15: 1, # 'п' - 9: 2, # 'р' - 7: 2, # 'с' - 6: 2, # 'т' - 14: 1, # 'у' - 39: 1, # 'ф' - 26: 2, # 'х' - 28: 0, # 'ц' - 22: 0, # 'ч' - 25: 1, # 'ш' - 29: 1, # 'щ' - 54: 0, # 'ъ' - 18: 0, # 'ы' - 17: 0, # 'ь' - 30: 0, # 'э' - 27: 1, # 'ю' - 16: 0, # 'я' - }, - 60: { # 'Й' - 37: 0, # 'А' - 44: 0, # 'Б' - 33: 0, # 'В' - 46: 0, # 'Г' - 41: 1, # 'Д' - 48: 0, # 'Е' - 56: 0, # 'Ж' - 51: 0, # 'З' - 42: 0, # 'И' - 60: 0, # 'Й' - 36: 1, # 'К' - 49: 1, # 'Л' - 38: 0, # 'М' - 31: 1, # 'Н' - 34: 0, # 'О' - 35: 0, # 'П' - 45: 0, # 'Р' - 32: 1, # 'С' - 40: 1, # 'Т' - 52: 0, # 'У' - 53: 0, # 'Ф' - 55: 1, # 'Х' - 58: 1, # 'Ц' - 50: 0, # 'Ч' - 57: 0, # 'Ш' - 63: 0, # 'Щ' - 62: 0, # 'Ы' - 61: 0, # 'Ь' - 47: 0, # 'Э' - 59: 0, # 'Ю' - 43: 0, # 'Я' - 3: 0, # 'а' - 21: 0, # 'б' - 10: 0, # 'в' - 19: 0, # 'г' - 13: 0, # 'д' - 2: 1, # 'е' - 24: 0, # 'ж' - 20: 0, # 'з' - 4: 0, # 'и' - 23: 0, # 'й' - 11: 0, # 'к' - 8: 0, # 'л' - 12: 0, # 'м' - 5: 0, # 'н' - 1: 2, # 'о' - 15: 0, # 'п' - 9: 0, # 'р' - 7: 0, # 'с' - 6: 0, # 'т' - 14: 0, # 'у' - 39: 0, # 'ф' - 26: 0, # 'х' - 28: 0, # 'ц' - 22: 0, # 'ч' - 25: 0, # 'ш' - 29: 0, # 'щ' - 54: 0, # 'ъ' - 18: 0, # 'ы' - 17: 0, # 'ь' - 30: 0, # 'э' - 27: 0, # 'ю' - 16: 0, # 'я' - }, - 36: { # 'К' - 37: 2, # 'А' - 44: 0, # 'Б' - 33: 1, # 'В' - 46: 0, # 'Г' - 41: 0, # 'Д' - 48: 1, # 'Е' - 56: 0, # 'Ж' - 51: 1, # 'З' - 42: 1, # 'И' - 60: 0, # 'Й' - 36: 0, # 'К' - 49: 1, # 'Л' - 38: 0, # 'М' - 31: 1, # 'Н' - 34: 2, # 'О' - 35: 1, # 'П' - 45: 1, # 'Р' - 32: 1, # 'С' - 40: 1, # 'Т' - 52: 1, # 'У' - 53: 0, # 'Ф' - 55: 0, # 'Х' - 58: 1, # 'Ц' - 50: 0, # 'Ч' - 57: 0, # 'Ш' - 63: 0, # 'Щ' - 62: 0, # 'Ы' - 61: 0, # 'Ь' - 47: 0, # 'Э' - 59: 0, # 'Ю' - 43: 0, # 'Я' - 3: 3, # 'а' - 21: 0, # 'б' - 10: 1, # 'в' - 19: 0, # 'г' - 13: 0, # 'д' - 2: 2, # 'е' - 24: 0, # 'ж' - 20: 0, # 'з' - 4: 2, # 'и' - 23: 0, # 'й' - 11: 0, # 'к' - 8: 2, # 'л' - 12: 0, # 'м' - 5: 1, # 'н' - 1: 3, # 'о' - 15: 0, # 'п' - 9: 2, # 'р' - 7: 2, # 'с' - 6: 2, # 'т' - 14: 2, # 'у' - 39: 0, # 'ф' - 26: 1, # 'х' - 28: 0, # 'ц' - 22: 0, # 'ч' - 25: 0, # 'ш' - 29: 0, # 'щ' - 54: 0, # 'ъ' - 18: 1, # 'ы' - 17: 1, # 'ь' - 30: 2, # 'э' - 27: 1, # 'ю' - 16: 0, # 'я' - }, - 49: { # 'Л' - 37: 2, # 'А' - 44: 0, # 'Б' - 33: 0, # 'В' - 46: 1, # 'Г' - 41: 0, # 'Д' - 48: 1, # 'Е' - 56: 1, # 'Ж' - 51: 0, # 'З' - 42: 1, # 'И' - 60: 0, # 'Й' - 36: 1, # 'К' - 49: 1, # 'Л' - 38: 1, # 'М' - 31: 0, # 'Н' - 34: 1, # 'О' - 35: 1, # 'П' - 45: 0, # 'Р' - 32: 1, # 'С' - 40: 1, # 'Т' - 52: 1, # 'У' - 53: 0, # 'Ф' - 55: 0, # 'Х' - 58: 0, # 'Ц' - 50: 1, # 'Ч' - 57: 0, # 'Ш' - 63: 0, # 'Щ' - 62: 1, # 'Ы' - 61: 1, # 'Ь' - 47: 0, # 'Э' - 59: 1, # 'Ю' - 43: 1, # 'Я' - 3: 2, # 'а' - 21: 0, # 'б' - 10: 0, # 'в' - 19: 1, # 'г' - 13: 0, # 'д' - 2: 2, # 'е' - 24: 1, # 'ж' - 20: 0, # 'з' - 4: 2, # 'и' - 23: 0, # 'й' - 11: 0, # 'к' - 8: 1, # 'л' - 12: 0, # 'м' - 5: 1, # 'н' - 1: 2, # 'о' - 15: 0, # 'п' - 9: 0, # 'р' - 7: 0, # 'с' - 6: 0, # 'т' - 14: 2, # 'у' - 39: 0, # 'ф' - 26: 1, # 'х' - 28: 0, # 'ц' - 22: 0, # 'ч' - 25: 0, # 'ш' - 29: 0, # 'щ' - 54: 0, # 'ъ' - 18: 1, # 'ы' - 17: 1, # 'ь' - 30: 2, # 'э' - 27: 2, # 'ю' - 16: 1, # 'я' - }, - 38: { # 'М' - 37: 1, # 'А' - 44: 1, # 'Б' - 33: 1, # 'В' - 46: 0, # 'Г' - 41: 0, # 'Д' - 48: 1, # 'Е' - 56: 0, # 'Ж' - 51: 0, # 'З' - 42: 1, # 'И' - 60: 0, # 'Й' - 36: 1, # 'К' - 49: 1, # 'Л' - 38: 1, # 'М' - 31: 1, # 'Н' - 34: 1, # 'О' - 35: 1, # 'П' - 45: 1, # 'Р' - 32: 1, # 'С' - 40: 1, # 'Т' - 52: 1, # 'У' - 53: 1, # 'Ф' - 55: 1, # 'Х' - 58: 0, # 'Ц' - 50: 0, # 'Ч' - 57: 0, # 'Ш' - 63: 0, # 'Щ' - 62: 1, # 'Ы' - 61: 0, # 'Ь' - 47: 1, # 'Э' - 59: 0, # 'Ю' - 43: 1, # 'Я' - 3: 3, # 'а' - 21: 0, # 'б' - 10: 0, # 'в' - 19: 1, # 'г' - 13: 0, # 'д' - 2: 2, # 'е' - 24: 0, # 'ж' - 20: 0, # 'з' - 4: 3, # 'и' - 23: 0, # 'й' - 11: 0, # 'к' - 8: 1, # 'л' - 12: 1, # 'м' - 5: 2, # 'н' - 1: 3, # 'о' - 15: 0, # 'п' - 9: 1, # 'р' - 7: 1, # 'с' - 6: 0, # 'т' - 14: 2, # 'у' - 39: 0, # 'ф' - 26: 0, # 'х' - 28: 0, # 'ц' - 22: 0, # 'ч' - 25: 0, # 'ш' - 29: 0, # 'щ' - 54: 0, # 'ъ' - 18: 3, # 'ы' - 17: 1, # 'ь' - 30: 2, # 'э' - 27: 1, # 'ю' - 16: 1, # 'я' - }, - 31: { # 'Н' - 37: 2, # 'А' - 44: 0, # 'Б' - 33: 0, # 'В' - 46: 1, # 'Г' - 41: 1, # 'Д' - 48: 1, # 'Е' - 56: 0, # 'Ж' - 51: 1, # 'З' - 42: 2, # 'И' - 60: 0, # 'Й' - 36: 1, # 'К' - 49: 0, # 'Л' - 38: 0, # 'М' - 31: 1, # 'Н' - 34: 1, # 'О' - 35: 0, # 'П' - 45: 1, # 'Р' - 32: 1, # 'С' - 40: 1, # 'Т' - 52: 1, # 'У' - 53: 1, # 'Ф' - 55: 1, # 'Х' - 58: 1, # 'Ц' - 50: 1, # 'Ч' - 57: 0, # 'Ш' - 63: 0, # 'Щ' - 62: 1, # 'Ы' - 61: 1, # 'Ь' - 47: 1, # 'Э' - 59: 0, # 'Ю' - 43: 1, # 'Я' - 3: 3, # 'а' - 21: 0, # 'б' - 10: 0, # 'в' - 19: 0, # 'г' - 13: 0, # 'д' - 2: 3, # 'е' - 24: 0, # 'ж' - 20: 0, # 'з' - 4: 3, # 'и' - 23: 0, # 'й' - 11: 0, # 'к' - 8: 0, # 'л' - 12: 0, # 'м' - 5: 0, # 'н' - 1: 3, # 'о' - 15: 0, # 'п' - 9: 1, # 'р' - 7: 0, # 'с' - 6: 0, # 'т' - 14: 3, # 'у' - 39: 0, # 'ф' - 26: 1, # 'х' - 28: 0, # 'ц' - 22: 0, # 'ч' - 25: 0, # 'ш' - 29: 0, # 'щ' - 54: 0, # 'ъ' - 18: 1, # 'ы' - 17: 2, # 'ь' - 30: 1, # 'э' - 27: 1, # 'ю' - 16: 1, # 'я' - }, - 34: { # 'О' - 37: 0, # 'А' - 44: 1, # 'Б' - 33: 1, # 'В' - 46: 1, # 'Г' - 41: 2, # 'Д' - 48: 1, # 'Е' - 56: 1, # 'Ж' - 51: 1, # 'З' - 42: 1, # 'И' - 60: 1, # 'Й' - 36: 1, # 'К' - 49: 2, # 'Л' - 38: 1, # 'М' - 31: 2, # 'Н' - 34: 1, # 'О' - 35: 1, # 'П' - 45: 2, # 'Р' - 32: 1, # 'С' - 40: 1, # 'Т' - 52: 1, # 'У' - 53: 1, # 'Ф' - 55: 1, # 'Х' - 58: 0, # 'Ц' - 50: 1, # 'Ч' - 57: 1, # 'Ш' - 63: 1, # 'Щ' - 62: 0, # 'Ы' - 61: 0, # 'Ь' - 47: 0, # 'Э' - 59: 0, # 'Ю' - 43: 1, # 'Я' - 3: 1, # 'а' - 21: 2, # 'б' - 10: 1, # 'в' - 19: 2, # 'г' - 13: 2, # 'д' - 2: 0, # 'е' - 24: 1, # 'ж' - 20: 1, # 'з' - 4: 0, # 'и' - 23: 1, # 'й' - 11: 2, # 'к' - 8: 2, # 'л' - 12: 1, # 'м' - 5: 3, # 'н' - 1: 0, # 'о' - 15: 2, # 'п' - 9: 2, # 'р' - 7: 2, # 'с' - 6: 2, # 'т' - 14: 1, # 'у' - 39: 1, # 'ф' - 26: 2, # 'х' - 28: 1, # 'ц' - 22: 2, # 'ч' - 25: 2, # 'ш' - 29: 1, # 'щ' - 54: 0, # 'ъ' - 18: 0, # 'ы' - 17: 0, # 'ь' - 30: 0, # 'э' - 27: 0, # 'ю' - 16: 0, # 'я' - }, - 35: { # 'П' - 37: 1, # 'А' - 44: 0, # 'Б' - 33: 0, # 'В' - 46: 0, # 'Г' - 41: 0, # 'Д' - 48: 1, # 'Е' - 56: 0, # 'Ж' - 51: 0, # 'З' - 42: 1, # 'И' - 60: 0, # 'Й' - 36: 0, # 'К' - 49: 1, # 'Л' - 38: 0, # 'М' - 31: 1, # 'Н' - 34: 1, # 'О' - 35: 1, # 'П' - 45: 2, # 'Р' - 32: 1, # 'С' - 40: 1, # 'Т' - 52: 1, # 'У' - 53: 0, # 'Ф' - 55: 0, # 'Х' - 58: 0, # 'Ц' - 50: 0, # 'Ч' - 57: 0, # 'Ш' - 63: 0, # 'Щ' - 62: 1, # 'Ы' - 61: 1, # 'Ь' - 47: 0, # 'Э' - 59: 0, # 'Ю' - 43: 1, # 'Я' - 3: 2, # 'а' - 21: 0, # 'б' - 10: 0, # 'в' - 19: 0, # 'г' - 13: 0, # 'д' - 2: 2, # 'е' - 24: 0, # 'ж' - 20: 0, # 'з' - 4: 2, # 'и' - 23: 0, # 'й' - 11: 0, # 'к' - 8: 2, # 'л' - 12: 0, # 'м' - 5: 1, # 'н' - 1: 3, # 'о' - 15: 0, # 'п' - 9: 3, # 'р' - 7: 1, # 'с' - 6: 1, # 'т' - 14: 2, # 'у' - 39: 1, # 'ф' - 26: 0, # 'х' - 28: 0, # 'ц' - 22: 0, # 'ч' - 25: 1, # 'ш' - 29: 0, # 'щ' - 54: 0, # 'ъ' - 18: 1, # 'ы' - 17: 2, # 'ь' - 30: 1, # 'э' - 27: 0, # 'ю' - 16: 2, # 'я' - }, - 45: { # 'Р' - 37: 2, # 'А' - 44: 1, # 'Б' - 33: 1, # 'В' - 46: 1, # 'Г' - 41: 1, # 'Д' - 48: 2, # 'Е' - 56: 1, # 'Ж' - 51: 0, # 'З' - 42: 2, # 'И' - 60: 0, # 'Й' - 36: 1, # 'К' - 49: 1, # 'Л' - 38: 1, # 'М' - 31: 1, # 'Н' - 34: 2, # 'О' - 35: 0, # 'П' - 45: 1, # 'Р' - 32: 1, # 'С' - 40: 1, # 'Т' - 52: 1, # 'У' - 53: 0, # 'Ф' - 55: 1, # 'Х' - 58: 1, # 'Ц' - 50: 1, # 'Ч' - 57: 1, # 'Ш' - 63: 0, # 'Щ' - 62: 1, # 'Ы' - 61: 1, # 'Ь' - 47: 1, # 'Э' - 59: 1, # 'Ю' - 43: 1, # 'Я' - 3: 3, # 'а' - 21: 0, # 'б' - 10: 1, # 'в' - 19: 0, # 'г' - 13: 0, # 'д' - 2: 2, # 'е' - 24: 1, # 'ж' - 20: 0, # 'з' - 4: 2, # 'и' - 23: 0, # 'й' - 11: 0, # 'к' - 8: 0, # 'л' - 12: 0, # 'м' - 5: 0, # 'н' - 1: 3, # 'о' - 15: 0, # 'п' - 9: 1, # 'р' - 7: 0, # 'с' - 6: 0, # 'т' - 14: 2, # 'у' - 39: 0, # 'ф' - 26: 0, # 'х' - 28: 0, # 'ц' - 22: 0, # 'ч' - 25: 0, # 'ш' - 29: 0, # 'щ' - 54: 0, # 'ъ' - 18: 2, # 'ы' - 17: 0, # 'ь' - 30: 1, # 'э' - 27: 1, # 'ю' - 16: 2, # 'я' - }, - 32: { # 'С' - 37: 1, # 'А' - 44: 1, # 'Б' - 33: 1, # 'В' - 46: 1, # 'Г' - 41: 1, # 'Д' - 48: 1, # 'Е' - 56: 0, # 'Ж' - 51: 0, # 'З' - 42: 1, # 'И' - 60: 0, # 'Й' - 36: 1, # 'К' - 49: 1, # 'Л' - 38: 1, # 'М' - 31: 1, # 'Н' - 34: 1, # 'О' - 35: 1, # 'П' - 45: 1, # 'Р' - 32: 1, # 'С' - 40: 2, # 'Т' - 52: 1, # 'У' - 53: 0, # 'Ф' - 55: 1, # 'Х' - 58: 1, # 'Ц' - 50: 1, # 'Ч' - 57: 1, # 'Ш' - 63: 0, # 'Щ' - 62: 1, # 'Ы' - 61: 1, # 'Ь' - 47: 1, # 'Э' - 59: 1, # 'Ю' - 43: 1, # 'Я' - 3: 2, # 'а' - 21: 1, # 'б' - 10: 2, # 'в' - 19: 1, # 'г' - 13: 2, # 'д' - 2: 3, # 'е' - 24: 1, # 'ж' - 20: 1, # 'з' - 4: 2, # 'и' - 23: 0, # 'й' - 11: 2, # 'к' - 8: 2, # 'л' - 12: 2, # 'м' - 5: 2, # 'н' - 1: 2, # 'о' - 15: 2, # 'п' - 9: 2, # 'р' - 7: 1, # 'с' - 6: 3, # 'т' - 14: 2, # 'у' - 39: 1, # 'ф' - 26: 1, # 'х' - 28: 1, # 'ц' - 22: 1, # 'ч' - 25: 0, # 'ш' - 29: 0, # 'щ' - 54: 1, # 'ъ' - 18: 1, # 'ы' - 17: 1, # 'ь' - 30: 2, # 'э' - 27: 1, # 'ю' - 16: 1, # 'я' - }, - 40: { # 'Т' - 37: 1, # 'А' - 44: 0, # 'Б' - 33: 1, # 'В' - 46: 0, # 'Г' - 41: 0, # 'Д' - 48: 1, # 'Е' - 56: 0, # 'Ж' - 51: 0, # 'З' - 42: 1, # 'И' - 60: 0, # 'Й' - 36: 1, # 'К' - 49: 1, # 'Л' - 38: 1, # 'М' - 31: 1, # 'Н' - 34: 2, # 'О' - 35: 0, # 'П' - 45: 1, # 'Р' - 32: 1, # 'С' - 40: 1, # 'Т' - 52: 1, # 'У' - 53: 0, # 'Ф' - 55: 0, # 'Х' - 58: 0, # 'Ц' - 50: 1, # 'Ч' - 57: 0, # 'Ш' - 63: 0, # 'Щ' - 62: 1, # 'Ы' - 61: 1, # 'Ь' - 47: 1, # 'Э' - 59: 1, # 'Ю' - 43: 1, # 'Я' - 3: 3, # 'а' - 21: 1, # 'б' - 10: 2, # 'в' - 19: 0, # 'г' - 13: 0, # 'д' - 2: 3, # 'е' - 24: 0, # 'ж' - 20: 0, # 'з' - 4: 2, # 'и' - 23: 0, # 'й' - 11: 1, # 'к' - 8: 1, # 'л' - 12: 0, # 'м' - 5: 0, # 'н' - 1: 3, # 'о' - 15: 0, # 'п' - 9: 2, # 'р' - 7: 1, # 'с' - 6: 0, # 'т' - 14: 2, # 'у' - 39: 0, # 'ф' - 26: 0, # 'х' - 28: 0, # 'ц' - 22: 0, # 'ч' - 25: 0, # 'ш' - 29: 1, # 'щ' - 54: 0, # 'ъ' - 18: 3, # 'ы' - 17: 1, # 'ь' - 30: 2, # 'э' - 27: 1, # 'ю' - 16: 1, # 'я' - }, - 52: { # 'У' - 37: 1, # 'А' - 44: 1, # 'Б' - 33: 1, # 'В' - 46: 1, # 'Г' - 41: 1, # 'Д' - 48: 1, # 'Е' - 56: 1, # 'Ж' - 51: 0, # 'З' - 42: 0, # 'И' - 60: 1, # 'Й' - 36: 1, # 'К' - 49: 1, # 'Л' - 38: 1, # 'М' - 31: 1, # 'Н' - 34: 1, # 'О' - 35: 1, # 'П' - 45: 1, # 'Р' - 32: 1, # 'С' - 40: 1, # 'Т' - 52: 0, # 'У' - 53: 0, # 'Ф' - 55: 1, # 'Х' - 58: 0, # 'Ц' - 50: 1, # 'Ч' - 57: 1, # 'Ш' - 63: 1, # 'Щ' - 62: 0, # 'Ы' - 61: 0, # 'Ь' - 47: 0, # 'Э' - 59: 1, # 'Ю' - 43: 0, # 'Я' - 3: 1, # 'а' - 21: 2, # 'б' - 10: 2, # 'в' - 19: 1, # 'г' - 13: 2, # 'д' - 2: 1, # 'е' - 24: 2, # 'ж' - 20: 2, # 'з' - 4: 2, # 'и' - 23: 1, # 'й' - 11: 1, # 'к' - 8: 2, # 'л' - 12: 2, # 'м' - 5: 1, # 'н' - 1: 2, # 'о' - 15: 1, # 'п' - 9: 2, # 'р' - 7: 2, # 'с' - 6: 2, # 'т' - 14: 0, # 'у' - 39: 1, # 'ф' - 26: 1, # 'х' - 28: 1, # 'ц' - 22: 2, # 'ч' - 25: 1, # 'ш' - 29: 1, # 'щ' - 54: 0, # 'ъ' - 18: 0, # 'ы' - 17: 0, # 'ь' - 30: 2, # 'э' - 27: 1, # 'ю' - 16: 0, # 'я' - }, - 53: { # 'Ф' - 37: 1, # 'А' - 44: 1, # 'Б' - 33: 0, # 'В' - 46: 0, # 'Г' - 41: 0, # 'Д' - 48: 1, # 'Е' - 56: 0, # 'Ж' - 51: 0, # 'З' - 42: 1, # 'И' - 60: 0, # 'Й' - 36: 0, # 'К' - 49: 1, # 'Л' - 38: 0, # 'М' - 31: 0, # 'Н' - 34: 1, # 'О' - 35: 0, # 'П' - 45: 1, # 'Р' - 32: 0, # 'С' - 40: 0, # 'Т' - 52: 1, # 'У' - 53: 0, # 'Ф' - 55: 0, # 'Х' - 58: 0, # 'Ц' - 50: 0, # 'Ч' - 57: 0, # 'Ш' - 63: 0, # 'Щ' - 62: 0, # 'Ы' - 61: 0, # 'Ь' - 47: 0, # 'Э' - 59: 0, # 'Ю' - 43: 0, # 'Я' - 3: 2, # 'а' - 21: 0, # 'б' - 10: 0, # 'в' - 19: 0, # 'г' - 13: 0, # 'д' - 2: 2, # 'е' - 24: 0, # 'ж' - 20: 0, # 'з' - 4: 2, # 'и' - 23: 0, # 'й' - 11: 0, # 'к' - 8: 2, # 'л' - 12: 0, # 'м' - 5: 0, # 'н' - 1: 2, # 'о' - 15: 0, # 'п' - 9: 2, # 'р' - 7: 0, # 'с' - 6: 1, # 'т' - 14: 2, # 'у' - 39: 0, # 'ф' - 26: 0, # 'х' - 28: 0, # 'ц' - 22: 0, # 'ч' - 25: 0, # 'ш' - 29: 0, # 'щ' - 54: 0, # 'ъ' - 18: 0, # 'ы' - 17: 1, # 'ь' - 30: 2, # 'э' - 27: 0, # 'ю' - 16: 0, # 'я' - }, - 55: { # 'Х' - 37: 1, # 'А' - 44: 0, # 'Б' - 33: 1, # 'В' - 46: 0, # 'Г' - 41: 0, # 'Д' - 48: 0, # 'Е' - 56: 0, # 'Ж' - 51: 0, # 'З' - 42: 1, # 'И' - 60: 0, # 'Й' - 36: 0, # 'К' - 49: 1, # 'Л' - 38: 1, # 'М' - 31: 1, # 'Н' - 34: 1, # 'О' - 35: 0, # 'П' - 45: 0, # 'Р' - 32: 0, # 'С' - 40: 0, # 'Т' - 52: 0, # 'У' - 53: 0, # 'Ф' - 55: 0, # 'Х' - 58: 0, # 'Ц' - 50: 0, # 'Ч' - 57: 0, # 'Ш' - 63: 0, # 'Щ' - 62: 0, # 'Ы' - 61: 0, # 'Ь' - 47: 0, # 'Э' - 59: 0, # 'Ю' - 43: 0, # 'Я' - 3: 2, # 'а' - 21: 0, # 'б' - 10: 2, # 'в' - 19: 0, # 'г' - 13: 0, # 'д' - 2: 2, # 'е' - 24: 0, # 'ж' - 20: 0, # 'з' - 4: 2, # 'и' - 23: 0, # 'й' - 11: 0, # 'к' - 8: 2, # 'л' - 12: 1, # 'м' - 5: 0, # 'н' - 1: 2, # 'о' - 15: 0, # 'п' - 9: 2, # 'р' - 7: 0, # 'с' - 6: 0, # 'т' - 14: 1, # 'у' - 39: 0, # 'ф' - 26: 0, # 'х' - 28: 0, # 'ц' - 22: 0, # 'ч' - 25: 0, # 'ш' - 29: 0, # 'щ' - 54: 0, # 'ъ' - 18: 0, # 'ы' - 17: 1, # 'ь' - 30: 1, # 'э' - 27: 0, # 'ю' - 16: 0, # 'я' - }, - 58: { # 'Ц' - 37: 1, # 'А' - 44: 0, # 'Б' - 33: 0, # 'В' - 46: 0, # 'Г' - 41: 0, # 'Д' - 48: 1, # 'Е' - 56: 0, # 'Ж' - 51: 0, # 'З' - 42: 1, # 'И' - 60: 0, # 'Й' - 36: 1, # 'К' - 49: 0, # 'Л' - 38: 0, # 'М' - 31: 0, # 'Н' - 34: 1, # 'О' - 35: 0, # 'П' - 45: 0, # 'Р' - 32: 0, # 'С' - 40: 0, # 'Т' - 52: 1, # 'У' - 53: 0, # 'Ф' - 55: 0, # 'Х' - 58: 0, # 'Ц' - 50: 0, # 'Ч' - 57: 0, # 'Ш' - 63: 0, # 'Щ' - 62: 1, # 'Ы' - 61: 0, # 'Ь' - 47: 0, # 'Э' - 59: 0, # 'Ю' - 43: 0, # 'Я' - 3: 1, # 'а' - 21: 0, # 'б' - 10: 1, # 'в' - 19: 0, # 'г' - 13: 0, # 'д' - 2: 2, # 'е' - 24: 0, # 'ж' - 20: 0, # 'з' - 4: 2, # 'и' - 23: 0, # 'й' - 11: 0, # 'к' - 8: 0, # 'л' - 12: 0, # 'м' - 5: 0, # 'н' - 1: 0, # 'о' - 15: 0, # 'п' - 9: 0, # 'р' - 7: 0, # 'с' - 6: 0, # 'т' - 14: 1, # 'у' - 39: 0, # 'ф' - 26: 0, # 'х' - 28: 0, # 'ц' - 22: 0, # 'ч' - 25: 0, # 'ш' - 29: 0, # 'щ' - 54: 0, # 'ъ' - 18: 1, # 'ы' - 17: 0, # 'ь' - 30: 0, # 'э' - 27: 1, # 'ю' - 16: 0, # 'я' - }, - 50: { # 'Ч' - 37: 1, # 'А' - 44: 0, # 'Б' - 33: 0, # 'В' - 46: 0, # 'Г' - 41: 0, # 'Д' - 48: 1, # 'Е' - 56: 0, # 'Ж' - 51: 0, # 'З' - 42: 1, # 'И' - 60: 0, # 'Й' - 36: 1, # 'К' - 49: 0, # 'Л' - 38: 0, # 'М' - 31: 1, # 'Н' - 34: 0, # 'О' - 35: 1, # 'П' - 45: 0, # 'Р' - 32: 0, # 'С' - 40: 1, # 'Т' - 52: 1, # 'У' - 53: 0, # 'Ф' - 55: 0, # 'Х' - 58: 0, # 'Ц' - 50: 0, # 'Ч' - 57: 0, # 'Ш' - 63: 0, # 'Щ' - 62: 0, # 'Ы' - 61: 1, # 'Ь' - 47: 0, # 'Э' - 59: 0, # 'Ю' - 43: 0, # 'Я' - 3: 2, # 'а' - 21: 0, # 'б' - 10: 0, # 'в' - 19: 0, # 'г' - 13: 0, # 'д' - 2: 2, # 'е' - 24: 0, # 'ж' - 20: 0, # 'з' - 4: 2, # 'и' - 23: 0, # 'й' - 11: 0, # 'к' - 8: 1, # 'л' - 12: 0, # 'м' - 5: 0, # 'н' - 1: 1, # 'о' - 15: 0, # 'п' - 9: 1, # 'р' - 7: 0, # 'с' - 6: 3, # 'т' - 14: 2, # 'у' - 39: 0, # 'ф' - 26: 0, # 'х' - 28: 0, # 'ц' - 22: 0, # 'ч' - 25: 0, # 'ш' - 29: 0, # 'щ' - 54: 0, # 'ъ' - 18: 0, # 'ы' - 17: 1, # 'ь' - 30: 0, # 'э' - 27: 0, # 'ю' - 16: 0, # 'я' - }, - 57: { # 'Ш' - 37: 1, # 'А' - 44: 0, # 'Б' - 33: 0, # 'В' - 46: 0, # 'Г' - 41: 0, # 'Д' - 48: 1, # 'Е' - 56: 0, # 'Ж' - 51: 0, # 'З' - 42: 1, # 'И' - 60: 0, # 'Й' - 36: 1, # 'К' - 49: 1, # 'Л' - 38: 0, # 'М' - 31: 1, # 'Н' - 34: 1, # 'О' - 35: 0, # 'П' - 45: 0, # 'Р' - 32: 0, # 'С' - 40: 0, # 'Т' - 52: 1, # 'У' - 53: 0, # 'Ф' - 55: 0, # 'Х' - 58: 0, # 'Ц' - 50: 0, # 'Ч' - 57: 0, # 'Ш' - 63: 0, # 'Щ' - 62: 0, # 'Ы' - 61: 0, # 'Ь' - 47: 0, # 'Э' - 59: 0, # 'Ю' - 43: 0, # 'Я' - 3: 2, # 'а' - 21: 0, # 'б' - 10: 1, # 'в' - 19: 0, # 'г' - 13: 0, # 'д' - 2: 2, # 'е' - 24: 0, # 'ж' - 20: 0, # 'з' - 4: 1, # 'и' - 23: 0, # 'й' - 11: 1, # 'к' - 8: 2, # 'л' - 12: 1, # 'м' - 5: 1, # 'н' - 1: 2, # 'о' - 15: 2, # 'п' - 9: 1, # 'р' - 7: 0, # 'с' - 6: 2, # 'т' - 14: 2, # 'у' - 39: 0, # 'ф' - 26: 1, # 'х' - 28: 0, # 'ц' - 22: 0, # 'ч' - 25: 1, # 'ш' - 29: 0, # 'щ' - 54: 0, # 'ъ' - 18: 0, # 'ы' - 17: 0, # 'ь' - 30: 1, # 'э' - 27: 0, # 'ю' - 16: 0, # 'я' - }, - 63: { # 'Щ' - 37: 1, # 'А' - 44: 0, # 'Б' - 33: 0, # 'В' - 46: 0, # 'Г' - 41: 0, # 'Д' - 48: 1, # 'Е' - 56: 0, # 'Ж' - 51: 0, # 'З' - 42: 1, # 'И' - 60: 0, # 'Й' - 36: 0, # 'К' - 49: 0, # 'Л' - 38: 0, # 'М' - 31: 0, # 'Н' - 34: 0, # 'О' - 35: 0, # 'П' - 45: 0, # 'Р' - 32: 0, # 'С' - 40: 0, # 'Т' - 52: 0, # 'У' - 53: 0, # 'Ф' - 55: 0, # 'Х' - 58: 0, # 'Ц' - 50: 0, # 'Ч' - 57: 0, # 'Ш' - 63: 0, # 'Щ' - 62: 0, # 'Ы' - 61: 1, # 'Ь' - 47: 0, # 'Э' - 59: 0, # 'Ю' - 43: 0, # 'Я' - 3: 1, # 'а' - 21: 0, # 'б' - 10: 0, # 'в' - 19: 0, # 'г' - 13: 0, # 'д' - 2: 1, # 'е' - 24: 0, # 'ж' - 20: 0, # 'з' - 4: 1, # 'и' - 23: 0, # 'й' - 11: 0, # 'к' - 8: 0, # 'л' - 12: 0, # 'м' - 5: 0, # 'н' - 1: 1, # 'о' - 15: 0, # 'п' - 9: 0, # 'р' - 7: 0, # 'с' - 6: 0, # 'т' - 14: 1, # 'у' - 39: 0, # 'ф' - 26: 0, # 'х' - 28: 0, # 'ц' - 22: 0, # 'ч' - 25: 0, # 'ш' - 29: 0, # 'щ' - 54: 0, # 'ъ' - 18: 0, # 'ы' - 17: 0, # 'ь' - 30: 0, # 'э' - 27: 0, # 'ю' - 16: 0, # 'я' - }, - 62: { # 'Ы' - 37: 0, # 'А' - 44: 0, # 'Б' - 33: 1, # 'В' - 46: 1, # 'Г' - 41: 0, # 'Д' - 48: 1, # 'Е' - 56: 0, # 'Ж' - 51: 0, # 'З' - 42: 0, # 'И' - 60: 1, # 'Й' - 36: 1, # 'К' - 49: 1, # 'Л' - 38: 1, # 'М' - 31: 1, # 'Н' - 34: 0, # 'О' - 35: 1, # 'П' - 45: 1, # 'Р' - 32: 1, # 'С' - 40: 1, # 'Т' - 52: 0, # 'У' - 53: 0, # 'Ф' - 55: 1, # 'Х' - 58: 1, # 'Ц' - 50: 0, # 'Ч' - 57: 1, # 'Ш' - 63: 0, # 'Щ' - 62: 0, # 'Ы' - 61: 0, # 'Ь' - 47: 0, # 'Э' - 59: 0, # 'Ю' - 43: 0, # 'Я' - 3: 0, # 'а' - 21: 0, # 'б' - 10: 0, # 'в' - 19: 0, # 'г' - 13: 0, # 'д' - 2: 0, # 'е' - 24: 0, # 'ж' - 20: 0, # 'з' - 4: 0, # 'и' - 23: 0, # 'й' - 11: 0, # 'к' - 8: 0, # 'л' - 12: 0, # 'м' - 5: 0, # 'н' - 1: 0, # 'о' - 15: 0, # 'п' - 9: 0, # 'р' - 7: 0, # 'с' - 6: 0, # 'т' - 14: 0, # 'у' - 39: 0, # 'ф' - 26: 0, # 'х' - 28: 0, # 'ц' - 22: 0, # 'ч' - 25: 0, # 'ш' - 29: 0, # 'щ' - 54: 0, # 'ъ' - 18: 0, # 'ы' - 17: 0, # 'ь' - 30: 0, # 'э' - 27: 0, # 'ю' - 16: 0, # 'я' - }, - 61: { # 'Ь' - 37: 0, # 'А' - 44: 1, # 'Б' - 33: 1, # 'В' - 46: 0, # 'Г' - 41: 1, # 'Д' - 48: 1, # 'Е' - 56: 0, # 'Ж' - 51: 0, # 'З' - 42: 1, # 'И' - 60: 0, # 'Й' - 36: 1, # 'К' - 49: 0, # 'Л' - 38: 1, # 'М' - 31: 1, # 'Н' - 34: 1, # 'О' - 35: 0, # 'П' - 45: 0, # 'Р' - 32: 1, # 'С' - 40: 0, # 'Т' - 52: 0, # 'У' - 53: 1, # 'Ф' - 55: 0, # 'Х' - 58: 0, # 'Ц' - 50: 0, # 'Ч' - 57: 1, # 'Ш' - 63: 0, # 'Щ' - 62: 0, # 'Ы' - 61: 0, # 'Ь' - 47: 0, # 'Э' - 59: 1, # 'Ю' - 43: 1, # 'Я' - 3: 0, # 'а' - 21: 0, # 'б' - 10: 0, # 'в' - 19: 0, # 'г' - 13: 0, # 'д' - 2: 0, # 'е' - 24: 0, # 'ж' - 20: 0, # 'з' - 4: 0, # 'и' - 23: 0, # 'й' - 11: 0, # 'к' - 8: 0, # 'л' - 12: 0, # 'м' - 5: 0, # 'н' - 1: 0, # 'о' - 15: 0, # 'п' - 9: 0, # 'р' - 7: 0, # 'с' - 6: 0, # 'т' - 14: 0, # 'у' - 39: 0, # 'ф' - 26: 0, # 'х' - 28: 0, # 'ц' - 22: 0, # 'ч' - 25: 0, # 'ш' - 29: 0, # 'щ' - 54: 0, # 'ъ' - 18: 0, # 'ы' - 17: 0, # 'ь' - 30: 0, # 'э' - 27: 0, # 'ю' - 16: 0, # 'я' - }, - 47: { # 'Э' - 37: 0, # 'А' - 44: 0, # 'Б' - 33: 1, # 'В' - 46: 0, # 'Г' - 41: 1, # 'Д' - 48: 0, # 'Е' - 56: 0, # 'Ж' - 51: 0, # 'З' - 42: 0, # 'И' - 60: 1, # 'Й' - 36: 1, # 'К' - 49: 1, # 'Л' - 38: 1, # 'М' - 31: 1, # 'Н' - 34: 0, # 'О' - 35: 1, # 'П' - 45: 1, # 'Р' - 32: 1, # 'С' - 40: 1, # 'Т' - 52: 0, # 'У' - 53: 0, # 'Ф' - 55: 0, # 'Х' - 58: 0, # 'Ц' - 50: 0, # 'Ч' - 57: 0, # 'Ш' - 63: 0, # 'Щ' - 62: 0, # 'Ы' - 61: 0, # 'Ь' - 47: 0, # 'Э' - 59: 0, # 'Ю' - 43: 0, # 'Я' - 3: 1, # 'а' - 21: 1, # 'б' - 10: 2, # 'в' - 19: 1, # 'г' - 13: 2, # 'д' - 2: 0, # 'е' - 24: 1, # 'ж' - 20: 0, # 'з' - 4: 0, # 'и' - 23: 2, # 'й' - 11: 2, # 'к' - 8: 2, # 'л' - 12: 2, # 'м' - 5: 2, # 'н' - 1: 0, # 'о' - 15: 1, # 'п' - 9: 2, # 'р' - 7: 1, # 'с' - 6: 3, # 'т' - 14: 1, # 'у' - 39: 1, # 'ф' - 26: 1, # 'х' - 28: 0, # 'ц' - 22: 0, # 'ч' - 25: 1, # 'ш' - 29: 0, # 'щ' - 54: 0, # 'ъ' - 18: 0, # 'ы' - 17: 0, # 'ь' - 30: 0, # 'э' - 27: 0, # 'ю' - 16: 0, # 'я' - }, - 59: { # 'Ю' - 37: 1, # 'А' - 44: 1, # 'Б' - 33: 0, # 'В' - 46: 0, # 'Г' - 41: 1, # 'Д' - 48: 0, # 'Е' - 56: 0, # 'Ж' - 51: 0, # 'З' - 42: 0, # 'И' - 60: 0, # 'Й' - 36: 0, # 'К' - 49: 0, # 'Л' - 38: 0, # 'М' - 31: 1, # 'Н' - 34: 0, # 'О' - 35: 0, # 'П' - 45: 1, # 'Р' - 32: 0, # 'С' - 40: 1, # 'Т' - 52: 0, # 'У' - 53: 0, # 'Ф' - 55: 0, # 'Х' - 58: 0, # 'Ц' - 50: 1, # 'Ч' - 57: 0, # 'Ш' - 63: 1, # 'Щ' - 62: 0, # 'Ы' - 61: 0, # 'Ь' - 47: 0, # 'Э' - 59: 0, # 'Ю' - 43: 0, # 'Я' - 3: 0, # 'а' - 21: 1, # 'б' - 10: 0, # 'в' - 19: 1, # 'г' - 13: 1, # 'д' - 2: 0, # 'е' - 24: 1, # 'ж' - 20: 0, # 'з' - 4: 0, # 'и' - 23: 0, # 'й' - 11: 1, # 'к' - 8: 2, # 'л' - 12: 1, # 'м' - 5: 2, # 'н' - 1: 0, # 'о' - 15: 1, # 'п' - 9: 1, # 'р' - 7: 1, # 'с' - 6: 0, # 'т' - 14: 0, # 'у' - 39: 0, # 'ф' - 26: 1, # 'х' - 28: 0, # 'ц' - 22: 0, # 'ч' - 25: 0, # 'ш' - 29: 0, # 'щ' - 54: 0, # 'ъ' - 18: 0, # 'ы' - 17: 0, # 'ь' - 30: 0, # 'э' - 27: 0, # 'ю' - 16: 0, # 'я' - }, - 43: { # 'Я' - 37: 0, # 'А' - 44: 0, # 'Б' - 33: 1, # 'В' - 46: 1, # 'Г' - 41: 0, # 'Д' - 48: 1, # 'Е' - 56: 0, # 'Ж' - 51: 0, # 'З' - 42: 1, # 'И' - 60: 0, # 'Й' - 36: 1, # 'К' - 49: 0, # 'Л' - 38: 0, # 'М' - 31: 1, # 'Н' - 34: 0, # 'О' - 35: 0, # 'П' - 45: 0, # 'Р' - 32: 1, # 'С' - 40: 1, # 'Т' - 52: 0, # 'У' - 53: 0, # 'Ф' - 55: 1, # 'Х' - 58: 0, # 'Ц' - 50: 1, # 'Ч' - 57: 0, # 'Ш' - 63: 1, # 'Щ' - 62: 0, # 'Ы' - 61: 0, # 'Ь' - 47: 0, # 'Э' - 59: 1, # 'Ю' - 43: 1, # 'Я' - 3: 0, # 'а' - 21: 1, # 'б' - 10: 1, # 'в' - 19: 1, # 'г' - 13: 1, # 'д' - 2: 0, # 'е' - 24: 0, # 'ж' - 20: 1, # 'з' - 4: 0, # 'и' - 23: 1, # 'й' - 11: 1, # 'к' - 8: 1, # 'л' - 12: 1, # 'м' - 5: 2, # 'н' - 1: 0, # 'о' - 15: 1, # 'п' - 9: 1, # 'р' - 7: 1, # 'с' - 6: 0, # 'т' - 14: 0, # 'у' - 39: 0, # 'ф' - 26: 1, # 'х' - 28: 0, # 'ц' - 22: 0, # 'ч' - 25: 1, # 'ш' - 29: 1, # 'щ' - 54: 0, # 'ъ' - 18: 0, # 'ы' - 17: 0, # 'ь' - 30: 0, # 'э' - 27: 0, # 'ю' - 16: 0, # 'я' - }, - 3: { # 'а' - 37: 0, # 'А' - 44: 0, # 'Б' - 33: 0, # 'В' - 46: 0, # 'Г' - 41: 0, # 'Д' - 48: 0, # 'Е' - 56: 0, # 'Ж' - 51: 0, # 'З' - 42: 1, # 'И' - 60: 0, # 'Й' - 36: 0, # 'К' - 49: 0, # 'Л' - 38: 0, # 'М' - 31: 1, # 'Н' - 34: 0, # 'О' - 35: 0, # 'П' - 45: 0, # 'Р' - 32: 0, # 'С' - 40: 0, # 'Т' - 52: 0, # 'У' - 53: 0, # 'Ф' - 55: 0, # 'Х' - 58: 0, # 'Ц' - 50: 0, # 'Ч' - 57: 0, # 'Ш' - 63: 0, # 'Щ' - 62: 0, # 'Ы' - 61: 0, # 'Ь' - 47: 0, # 'Э' - 59: 0, # 'Ю' - 43: 0, # 'Я' - 3: 2, # 'а' - 21: 3, # 'б' - 10: 3, # 'в' - 19: 3, # 'г' - 13: 3, # 'д' - 2: 3, # 'е' - 24: 3, # 'ж' - 20: 3, # 'з' - 4: 3, # 'и' - 23: 3, # 'й' - 11: 3, # 'к' - 8: 3, # 'л' - 12: 3, # 'м' - 5: 3, # 'н' - 1: 2, # 'о' - 15: 3, # 'п' - 9: 3, # 'р' - 7: 3, # 'с' - 6: 3, # 'т' - 14: 3, # 'у' - 39: 2, # 'ф' - 26: 3, # 'х' - 28: 3, # 'ц' - 22: 3, # 'ч' - 25: 3, # 'ш' - 29: 3, # 'щ' - 54: 0, # 'ъ' - 18: 0, # 'ы' - 17: 0, # 'ь' - 30: 2, # 'э' - 27: 3, # 'ю' - 16: 3, # 'я' - }, - 21: { # 'б' - 37: 0, # 'А' - 44: 0, # 'Б' - 33: 0, # 'В' - 46: 0, # 'Г' - 41: 0, # 'Д' - 48: 0, # 'Е' - 56: 0, # 'Ж' - 51: 0, # 'З' - 42: 0, # 'И' - 60: 0, # 'Й' - 36: 1, # 'К' - 49: 0, # 'Л' - 38: 0, # 'М' - 31: 0, # 'Н' - 34: 0, # 'О' - 35: 0, # 'П' - 45: 0, # 'Р' - 32: 0, # 'С' - 40: 0, # 'Т' - 52: 0, # 'У' - 53: 0, # 'Ф' - 55: 0, # 'Х' - 58: 0, # 'Ц' - 50: 0, # 'Ч' - 57: 0, # 'Ш' - 63: 0, # 'Щ' - 62: 0, # 'Ы' - 61: 0, # 'Ь' - 47: 0, # 'Э' - 59: 0, # 'Ю' - 43: 0, # 'Я' - 3: 3, # 'а' - 21: 2, # 'б' - 10: 2, # 'в' - 19: 1, # 'г' - 13: 2, # 'д' - 2: 3, # 'е' - 24: 2, # 'ж' - 20: 1, # 'з' - 4: 3, # 'и' - 23: 0, # 'й' - 11: 2, # 'к' - 8: 3, # 'л' - 12: 2, # 'м' - 5: 3, # 'н' - 1: 3, # 'о' - 15: 1, # 'п' - 9: 3, # 'р' - 7: 3, # 'с' - 6: 2, # 'т' - 14: 3, # 'у' - 39: 0, # 'ф' - 26: 2, # 'х' - 28: 1, # 'ц' - 22: 1, # 'ч' - 25: 2, # 'ш' - 29: 3, # 'щ' - 54: 2, # 'ъ' - 18: 3, # 'ы' - 17: 2, # 'ь' - 30: 1, # 'э' - 27: 2, # 'ю' - 16: 3, # 'я' - }, - 10: { # 'в' - 37: 0, # 'А' - 44: 0, # 'Б' - 33: 0, # 'В' - 46: 0, # 'Г' - 41: 0, # 'Д' - 48: 0, # 'Е' - 56: 0, # 'Ж' - 51: 0, # 'З' - 42: 0, # 'И' - 60: 0, # 'Й' - 36: 0, # 'К' - 49: 0, # 'Л' - 38: 0, # 'М' - 31: 0, # 'Н' - 34: 0, # 'О' - 35: 0, # 'П' - 45: 0, # 'Р' - 32: 0, # 'С' - 40: 0, # 'Т' - 52: 0, # 'У' - 53: 0, # 'Ф' - 55: 0, # 'Х' - 58: 0, # 'Ц' - 50: 0, # 'Ч' - 57: 0, # 'Ш' - 63: 0, # 'Щ' - 62: 0, # 'Ы' - 61: 0, # 'Ь' - 47: 0, # 'Э' - 59: 0, # 'Ю' - 43: 0, # 'Я' - 3: 3, # 'а' - 21: 2, # 'б' - 10: 2, # 'в' - 19: 2, # 'г' - 13: 3, # 'д' - 2: 3, # 'е' - 24: 1, # 'ж' - 20: 3, # 'з' - 4: 3, # 'и' - 23: 0, # 'й' - 11: 3, # 'к' - 8: 3, # 'л' - 12: 2, # 'м' - 5: 3, # 'н' - 1: 3, # 'о' - 15: 3, # 'п' - 9: 3, # 'р' - 7: 3, # 'с' - 6: 3, # 'т' - 14: 3, # 'у' - 39: 1, # 'ф' - 26: 2, # 'х' - 28: 2, # 'ц' - 22: 2, # 'ч' - 25: 3, # 'ш' - 29: 2, # 'щ' - 54: 2, # 'ъ' - 18: 3, # 'ы' - 17: 3, # 'ь' - 30: 1, # 'э' - 27: 1, # 'ю' - 16: 3, # 'я' - }, - 19: { # 'г' - 37: 0, # 'А' - 44: 0, # 'Б' - 33: 0, # 'В' - 46: 0, # 'Г' - 41: 0, # 'Д' - 48: 0, # 'Е' - 56: 0, # 'Ж' - 51: 0, # 'З' - 42: 0, # 'И' - 60: 0, # 'Й' - 36: 0, # 'К' - 49: 0, # 'Л' - 38: 0, # 'М' - 31: 0, # 'Н' - 34: 0, # 'О' - 35: 0, # 'П' - 45: 0, # 'Р' - 32: 0, # 'С' - 40: 0, # 'Т' - 52: 0, # 'У' - 53: 0, # 'Ф' - 55: 0, # 'Х' - 58: 0, # 'Ц' - 50: 0, # 'Ч' - 57: 0, # 'Ш' - 63: 0, # 'Щ' - 62: 0, # 'Ы' - 61: 0, # 'Ь' - 47: 0, # 'Э' - 59: 0, # 'Ю' - 43: 0, # 'Я' - 3: 3, # 'а' - 21: 1, # 'б' - 10: 2, # 'в' - 19: 1, # 'г' - 13: 3, # 'д' - 2: 3, # 'е' - 24: 0, # 'ж' - 20: 1, # 'з' - 4: 3, # 'и' - 23: 0, # 'й' - 11: 2, # 'к' - 8: 3, # 'л' - 12: 2, # 'м' - 5: 3, # 'н' - 1: 3, # 'о' - 15: 0, # 'п' - 9: 3, # 'р' - 7: 2, # 'с' - 6: 2, # 'т' - 14: 3, # 'у' - 39: 1, # 'ф' - 26: 1, # 'х' - 28: 1, # 'ц' - 22: 2, # 'ч' - 25: 1, # 'ш' - 29: 0, # 'щ' - 54: 0, # 'ъ' - 18: 1, # 'ы' - 17: 1, # 'ь' - 30: 1, # 'э' - 27: 1, # 'ю' - 16: 0, # 'я' - }, - 13: { # 'д' - 37: 0, # 'А' - 44: 0, # 'Б' - 33: 0, # 'В' - 46: 0, # 'Г' - 41: 0, # 'Д' - 48: 0, # 'Е' - 56: 0, # 'Ж' - 51: 0, # 'З' - 42: 0, # 'И' - 60: 0, # 'Й' - 36: 0, # 'К' - 49: 0, # 'Л' - 38: 0, # 'М' - 31: 0, # 'Н' - 34: 0, # 'О' - 35: 0, # 'П' - 45: 0, # 'Р' - 32: 0, # 'С' - 40: 0, # 'Т' - 52: 0, # 'У' - 53: 0, # 'Ф' - 55: 0, # 'Х' - 58: 0, # 'Ц' - 50: 0, # 'Ч' - 57: 0, # 'Ш' - 63: 0, # 'Щ' - 62: 0, # 'Ы' - 61: 0, # 'Ь' - 47: 0, # 'Э' - 59: 0, # 'Ю' - 43: 0, # 'Я' - 3: 3, # 'а' - 21: 2, # 'б' - 10: 3, # 'в' - 19: 2, # 'г' - 13: 2, # 'д' - 2: 3, # 'е' - 24: 2, # 'ж' - 20: 2, # 'з' - 4: 3, # 'и' - 23: 0, # 'й' - 11: 3, # 'к' - 8: 3, # 'л' - 12: 2, # 'м' - 5: 3, # 'н' - 1: 3, # 'о' - 15: 2, # 'п' - 9: 3, # 'р' - 7: 3, # 'с' - 6: 3, # 'т' - 14: 3, # 'у' - 39: 1, # 'ф' - 26: 2, # 'х' - 28: 3, # 'ц' - 22: 2, # 'ч' - 25: 2, # 'ш' - 29: 1, # 'щ' - 54: 2, # 'ъ' - 18: 3, # 'ы' - 17: 3, # 'ь' - 30: 1, # 'э' - 27: 2, # 'ю' - 16: 3, # 'я' - }, - 2: { # 'е' - 37: 0, # 'А' - 44: 0, # 'Б' - 33: 0, # 'В' - 46: 0, # 'Г' - 41: 0, # 'Д' - 48: 0, # 'Е' - 56: 0, # 'Ж' - 51: 0, # 'З' - 42: 0, # 'И' - 60: 0, # 'Й' - 36: 0, # 'К' - 49: 0, # 'Л' - 38: 0, # 'М' - 31: 0, # 'Н' - 34: 0, # 'О' - 35: 0, # 'П' - 45: 0, # 'Р' - 32: 0, # 'С' - 40: 0, # 'Т' - 52: 0, # 'У' - 53: 0, # 'Ф' - 55: 0, # 'Х' - 58: 0, # 'Ц' - 50: 0, # 'Ч' - 57: 0, # 'Ш' - 63: 0, # 'Щ' - 62: 0, # 'Ы' - 61: 0, # 'Ь' - 47: 0, # 'Э' - 59: 0, # 'Ю' - 43: 0, # 'Я' - 3: 2, # 'а' - 21: 3, # 'б' - 10: 3, # 'в' - 19: 3, # 'г' - 13: 3, # 'д' - 2: 3, # 'е' - 24: 3, # 'ж' - 20: 3, # 'з' - 4: 2, # 'и' - 23: 3, # 'й' - 11: 3, # 'к' - 8: 3, # 'л' - 12: 3, # 'м' - 5: 3, # 'н' - 1: 3, # 'о' - 15: 3, # 'п' - 9: 3, # 'р' - 7: 3, # 'с' - 6: 3, # 'т' - 14: 2, # 'у' - 39: 2, # 'ф' - 26: 3, # 'х' - 28: 3, # 'ц' - 22: 3, # 'ч' - 25: 3, # 'ш' - 29: 3, # 'щ' - 54: 0, # 'ъ' - 18: 0, # 'ы' - 17: 0, # 'ь' - 30: 1, # 'э' - 27: 2, # 'ю' - 16: 3, # 'я' - }, - 24: { # 'ж' - 37: 0, # 'А' - 44: 0, # 'Б' - 33: 0, # 'В' - 46: 0, # 'Г' - 41: 0, # 'Д' - 48: 0, # 'Е' - 56: 0, # 'Ж' - 51: 0, # 'З' - 42: 0, # 'И' - 60: 0, # 'Й' - 36: 0, # 'К' - 49: 0, # 'Л' - 38: 0, # 'М' - 31: 0, # 'Н' - 34: 0, # 'О' - 35: 0, # 'П' - 45: 0, # 'Р' - 32: 0, # 'С' - 40: 0, # 'Т' - 52: 0, # 'У' - 53: 0, # 'Ф' - 55: 0, # 'Х' - 58: 0, # 'Ц' - 50: 0, # 'Ч' - 57: 0, # 'Ш' - 63: 0, # 'Щ' - 62: 0, # 'Ы' - 61: 0, # 'Ь' - 47: 0, # 'Э' - 59: 0, # 'Ю' - 43: 0, # 'Я' - 3: 3, # 'а' - 21: 2, # 'б' - 10: 1, # 'в' - 19: 2, # 'г' - 13: 3, # 'д' - 2: 3, # 'е' - 24: 2, # 'ж' - 20: 1, # 'з' - 4: 3, # 'и' - 23: 0, # 'й' - 11: 2, # 'к' - 8: 2, # 'л' - 12: 1, # 'м' - 5: 3, # 'н' - 1: 2, # 'о' - 15: 1, # 'п' - 9: 2, # 'р' - 7: 2, # 'с' - 6: 1, # 'т' - 14: 3, # 'у' - 39: 1, # 'ф' - 26: 0, # 'х' - 28: 1, # 'ц' - 22: 2, # 'ч' - 25: 0, # 'ш' - 29: 0, # 'щ' - 54: 0, # 'ъ' - 18: 1, # 'ы' - 17: 2, # 'ь' - 30: 1, # 'э' - 27: 1, # 'ю' - 16: 1, # 'я' - }, - 20: { # 'з' - 37: 0, # 'А' - 44: 0, # 'Б' - 33: 0, # 'В' - 46: 0, # 'Г' - 41: 0, # 'Д' - 48: 0, # 'Е' - 56: 0, # 'Ж' - 51: 0, # 'З' - 42: 0, # 'И' - 60: 0, # 'Й' - 36: 0, # 'К' - 49: 0, # 'Л' - 38: 0, # 'М' - 31: 0, # 'Н' - 34: 0, # 'О' - 35: 0, # 'П' - 45: 0, # 'Р' - 32: 0, # 'С' - 40: 0, # 'Т' - 52: 0, # 'У' - 53: 0, # 'Ф' - 55: 0, # 'Х' - 58: 0, # 'Ц' - 50: 0, # 'Ч' - 57: 0, # 'Ш' - 63: 0, # 'Щ' - 62: 0, # 'Ы' - 61: 0, # 'Ь' - 47: 0, # 'Э' - 59: 0, # 'Ю' - 43: 0, # 'Я' - 3: 3, # 'а' - 21: 3, # 'б' - 10: 3, # 'в' - 19: 3, # 'г' - 13: 3, # 'д' - 2: 3, # 'е' - 24: 2, # 'ж' - 20: 2, # 'з' - 4: 3, # 'и' - 23: 0, # 'й' - 11: 3, # 'к' - 8: 3, # 'л' - 12: 3, # 'м' - 5: 3, # 'н' - 1: 3, # 'о' - 15: 0, # 'п' - 9: 3, # 'р' - 7: 2, # 'с' - 6: 2, # 'т' - 14: 3, # 'у' - 39: 0, # 'ф' - 26: 0, # 'х' - 28: 1, # 'ц' - 22: 2, # 'ч' - 25: 1, # 'ш' - 29: 0, # 'щ' - 54: 2, # 'ъ' - 18: 3, # 'ы' - 17: 2, # 'ь' - 30: 1, # 'э' - 27: 1, # 'ю' - 16: 3, # 'я' - }, - 4: { # 'и' - 37: 1, # 'А' - 44: 0, # 'Б' - 33: 0, # 'В' - 46: 0, # 'Г' - 41: 0, # 'Д' - 48: 0, # 'Е' - 56: 0, # 'Ж' - 51: 0, # 'З' - 42: 0, # 'И' - 60: 0, # 'Й' - 36: 0, # 'К' - 49: 0, # 'Л' - 38: 0, # 'М' - 31: 1, # 'Н' - 34: 0, # 'О' - 35: 0, # 'П' - 45: 0, # 'Р' - 32: 0, # 'С' - 40: 0, # 'Т' - 52: 0, # 'У' - 53: 0, # 'Ф' - 55: 0, # 'Х' - 58: 0, # 'Ц' - 50: 0, # 'Ч' - 57: 0, # 'Ш' - 63: 0, # 'Щ' - 62: 0, # 'Ы' - 61: 0, # 'Ь' - 47: 0, # 'Э' - 59: 0, # 'Ю' - 43: 0, # 'Я' - 3: 3, # 'а' - 21: 3, # 'б' - 10: 3, # 'в' - 19: 3, # 'г' - 13: 3, # 'д' - 2: 3, # 'е' - 24: 3, # 'ж' - 20: 3, # 'з' - 4: 3, # 'и' - 23: 3, # 'й' - 11: 3, # 'к' - 8: 3, # 'л' - 12: 3, # 'м' - 5: 3, # 'н' - 1: 3, # 'о' - 15: 3, # 'п' - 9: 3, # 'р' - 7: 3, # 'с' - 6: 3, # 'т' - 14: 2, # 'у' - 39: 2, # 'ф' - 26: 3, # 'х' - 28: 3, # 'ц' - 22: 3, # 'ч' - 25: 3, # 'ш' - 29: 3, # 'щ' - 54: 0, # 'ъ' - 18: 0, # 'ы' - 17: 0, # 'ь' - 30: 2, # 'э' - 27: 3, # 'ю' - 16: 3, # 'я' - }, - 23: { # 'й' - 37: 0, # 'А' - 44: 0, # 'Б' - 33: 0, # 'В' - 46: 0, # 'Г' - 41: 0, # 'Д' - 48: 0, # 'Е' - 56: 0, # 'Ж' - 51: 0, # 'З' - 42: 0, # 'И' - 60: 0, # 'Й' - 36: 0, # 'К' - 49: 0, # 'Л' - 38: 0, # 'М' - 31: 0, # 'Н' - 34: 0, # 'О' - 35: 0, # 'П' - 45: 0, # 'Р' - 32: 0, # 'С' - 40: 0, # 'Т' - 52: 0, # 'У' - 53: 0, # 'Ф' - 55: 0, # 'Х' - 58: 0, # 'Ц' - 50: 0, # 'Ч' - 57: 0, # 'Ш' - 63: 0, # 'Щ' - 62: 0, # 'Ы' - 61: 0, # 'Ь' - 47: 0, # 'Э' - 59: 0, # 'Ю' - 43: 0, # 'Я' - 3: 1, # 'а' - 21: 1, # 'б' - 10: 1, # 'в' - 19: 2, # 'г' - 13: 3, # 'д' - 2: 2, # 'е' - 24: 0, # 'ж' - 20: 2, # 'з' - 4: 1, # 'и' - 23: 0, # 'й' - 11: 2, # 'к' - 8: 2, # 'л' - 12: 2, # 'м' - 5: 3, # 'н' - 1: 2, # 'о' - 15: 1, # 'п' - 9: 2, # 'р' - 7: 3, # 'с' - 6: 3, # 'т' - 14: 1, # 'у' - 39: 2, # 'ф' - 26: 1, # 'х' - 28: 2, # 'ц' - 22: 3, # 'ч' - 25: 2, # 'ш' - 29: 1, # 'щ' - 54: 0, # 'ъ' - 18: 0, # 'ы' - 17: 0, # 'ь' - 30: 1, # 'э' - 27: 1, # 'ю' - 16: 2, # 'я' - }, - 11: { # 'к' - 37: 0, # 'А' - 44: 0, # 'Б' - 33: 0, # 'В' - 46: 0, # 'Г' - 41: 0, # 'Д' - 48: 0, # 'Е' - 56: 0, # 'Ж' - 51: 0, # 'З' - 42: 0, # 'И' - 60: 0, # 'Й' - 36: 0, # 'К' - 49: 0, # 'Л' - 38: 0, # 'М' - 31: 0, # 'Н' - 34: 0, # 'О' - 35: 0, # 'П' - 45: 0, # 'Р' - 32: 0, # 'С' - 40: 0, # 'Т' - 52: 0, # 'У' - 53: 0, # 'Ф' - 55: 0, # 'Х' - 58: 0, # 'Ц' - 50: 0, # 'Ч' - 57: 0, # 'Ш' - 63: 0, # 'Щ' - 62: 0, # 'Ы' - 61: 0, # 'Ь' - 47: 0, # 'Э' - 59: 0, # 'Ю' - 43: 0, # 'Я' - 3: 3, # 'а' - 21: 1, # 'б' - 10: 3, # 'в' - 19: 1, # 'г' - 13: 1, # 'д' - 2: 3, # 'е' - 24: 2, # 'ж' - 20: 2, # 'з' - 4: 3, # 'и' - 23: 0, # 'й' - 11: 2, # 'к' - 8: 3, # 'л' - 12: 1, # 'м' - 5: 3, # 'н' - 1: 3, # 'о' - 15: 0, # 'п' - 9: 3, # 'р' - 7: 3, # 'с' - 6: 3, # 'т' - 14: 3, # 'у' - 39: 1, # 'ф' - 26: 2, # 'х' - 28: 2, # 'ц' - 22: 1, # 'ч' - 25: 2, # 'ш' - 29: 0, # 'щ' - 54: 0, # 'ъ' - 18: 1, # 'ы' - 17: 1, # 'ь' - 30: 1, # 'э' - 27: 1, # 'ю' - 16: 1, # 'я' - }, - 8: { # 'л' - 37: 0, # 'А' - 44: 0, # 'Б' - 33: 0, # 'В' - 46: 0, # 'Г' - 41: 0, # 'Д' - 48: 0, # 'Е' - 56: 0, # 'Ж' - 51: 0, # 'З' - 42: 0, # 'И' - 60: 0, # 'Й' - 36: 0, # 'К' - 49: 0, # 'Л' - 38: 0, # 'М' - 31: 0, # 'Н' - 34: 0, # 'О' - 35: 0, # 'П' - 45: 0, # 'Р' - 32: 0, # 'С' - 40: 0, # 'Т' - 52: 0, # 'У' - 53: 0, # 'Ф' - 55: 0, # 'Х' - 58: 0, # 'Ц' - 50: 0, # 'Ч' - 57: 0, # 'Ш' - 63: 0, # 'Щ' - 62: 0, # 'Ы' - 61: 0, # 'Ь' - 47: 0, # 'Э' - 59: 0, # 'Ю' - 43: 0, # 'Я' - 3: 3, # 'а' - 21: 2, # 'б' - 10: 2, # 'в' - 19: 3, # 'г' - 13: 2, # 'д' - 2: 3, # 'е' - 24: 3, # 'ж' - 20: 2, # 'з' - 4: 3, # 'и' - 23: 0, # 'й' - 11: 3, # 'к' - 8: 3, # 'л' - 12: 2, # 'м' - 5: 3, # 'н' - 1: 3, # 'о' - 15: 2, # 'п' - 9: 1, # 'р' - 7: 3, # 'с' - 6: 2, # 'т' - 14: 3, # 'у' - 39: 2, # 'ф' - 26: 2, # 'х' - 28: 1, # 'ц' - 22: 3, # 'ч' - 25: 2, # 'ш' - 29: 1, # 'щ' - 54: 0, # 'ъ' - 18: 3, # 'ы' - 17: 3, # 'ь' - 30: 1, # 'э' - 27: 3, # 'ю' - 16: 3, # 'я' - }, - 12: { # 'м' - 37: 0, # 'А' - 44: 0, # 'Б' - 33: 0, # 'В' - 46: 0, # 'Г' - 41: 0, # 'Д' - 48: 0, # 'Е' - 56: 0, # 'Ж' - 51: 0, # 'З' - 42: 0, # 'И' - 60: 0, # 'Й' - 36: 0, # 'К' - 49: 0, # 'Л' - 38: 0, # 'М' - 31: 0, # 'Н' - 34: 0, # 'О' - 35: 0, # 'П' - 45: 0, # 'Р' - 32: 0, # 'С' - 40: 0, # 'Т' - 52: 0, # 'У' - 53: 0, # 'Ф' - 55: 0, # 'Х' - 58: 0, # 'Ц' - 50: 0, # 'Ч' - 57: 0, # 'Ш' - 63: 0, # 'Щ' - 62: 0, # 'Ы' - 61: 0, # 'Ь' - 47: 0, # 'Э' - 59: 0, # 'Ю' - 43: 0, # 'Я' - 3: 3, # 'а' - 21: 2, # 'б' - 10: 2, # 'в' - 19: 2, # 'г' - 13: 1, # 'д' - 2: 3, # 'е' - 24: 1, # 'ж' - 20: 1, # 'з' - 4: 3, # 'и' - 23: 0, # 'й' - 11: 2, # 'к' - 8: 3, # 'л' - 12: 2, # 'м' - 5: 3, # 'н' - 1: 3, # 'о' - 15: 2, # 'п' - 9: 2, # 'р' - 7: 3, # 'с' - 6: 2, # 'т' - 14: 3, # 'у' - 39: 2, # 'ф' - 26: 2, # 'х' - 28: 2, # 'ц' - 22: 2, # 'ч' - 25: 1, # 'ш' - 29: 1, # 'щ' - 54: 0, # 'ъ' - 18: 3, # 'ы' - 17: 2, # 'ь' - 30: 2, # 'э' - 27: 1, # 'ю' - 16: 3, # 'я' - }, - 5: { # 'н' - 37: 0, # 'А' - 44: 0, # 'Б' - 33: 0, # 'В' - 46: 0, # 'Г' - 41: 0, # 'Д' - 48: 0, # 'Е' - 56: 0, # 'Ж' - 51: 0, # 'З' - 42: 0, # 'И' - 60: 0, # 'Й' - 36: 0, # 'К' - 49: 0, # 'Л' - 38: 0, # 'М' - 31: 0, # 'Н' - 34: 0, # 'О' - 35: 0, # 'П' - 45: 0, # 'Р' - 32: 0, # 'С' - 40: 0, # 'Т' - 52: 0, # 'У' - 53: 0, # 'Ф' - 55: 0, # 'Х' - 58: 0, # 'Ц' - 50: 0, # 'Ч' - 57: 0, # 'Ш' - 63: 0, # 'Щ' - 62: 0, # 'Ы' - 61: 0, # 'Ь' - 47: 0, # 'Э' - 59: 0, # 'Ю' - 43: 0, # 'Я' - 3: 3, # 'а' - 21: 2, # 'б' - 10: 2, # 'в' - 19: 3, # 'г' - 13: 3, # 'д' - 2: 3, # 'е' - 24: 2, # 'ж' - 20: 2, # 'з' - 4: 3, # 'и' - 23: 0, # 'й' - 11: 3, # 'к' - 8: 2, # 'л' - 12: 1, # 'м' - 5: 3, # 'н' - 1: 3, # 'о' - 15: 1, # 'п' - 9: 2, # 'р' - 7: 3, # 'с' - 6: 3, # 'т' - 14: 3, # 'у' - 39: 2, # 'ф' - 26: 2, # 'х' - 28: 3, # 'ц' - 22: 3, # 'ч' - 25: 2, # 'ш' - 29: 2, # 'щ' - 54: 1, # 'ъ' - 18: 3, # 'ы' - 17: 3, # 'ь' - 30: 1, # 'э' - 27: 3, # 'ю' - 16: 3, # 'я' - }, - 1: { # 'о' - 37: 0, # 'А' - 44: 0, # 'Б' - 33: 0, # 'В' - 46: 0, # 'Г' - 41: 0, # 'Д' - 48: 0, # 'Е' - 56: 0, # 'Ж' - 51: 0, # 'З' - 42: 0, # 'И' - 60: 0, # 'Й' - 36: 0, # 'К' - 49: 0, # 'Л' - 38: 0, # 'М' - 31: 0, # 'Н' - 34: 0, # 'О' - 35: 0, # 'П' - 45: 0, # 'Р' - 32: 0, # 'С' - 40: 0, # 'Т' - 52: 0, # 'У' - 53: 0, # 'Ф' - 55: 0, # 'Х' - 58: 0, # 'Ц' - 50: 0, # 'Ч' - 57: 0, # 'Ш' - 63: 0, # 'Щ' - 62: 0, # 'Ы' - 61: 0, # 'Ь' - 47: 0, # 'Э' - 59: 0, # 'Ю' - 43: 0, # 'Я' - 3: 2, # 'а' - 21: 3, # 'б' - 10: 3, # 'в' - 19: 3, # 'г' - 13: 3, # 'д' - 2: 3, # 'е' - 24: 3, # 'ж' - 20: 3, # 'з' - 4: 3, # 'и' - 23: 3, # 'й' - 11: 3, # 'к' - 8: 3, # 'л' - 12: 3, # 'м' - 5: 3, # 'н' - 1: 3, # 'о' - 15: 3, # 'п' - 9: 3, # 'р' - 7: 3, # 'с' - 6: 3, # 'т' - 14: 2, # 'у' - 39: 2, # 'ф' - 26: 3, # 'х' - 28: 2, # 'ц' - 22: 3, # 'ч' - 25: 3, # 'ш' - 29: 3, # 'щ' - 54: 0, # 'ъ' - 18: 0, # 'ы' - 17: 0, # 'ь' - 30: 2, # 'э' - 27: 3, # 'ю' - 16: 3, # 'я' - }, - 15: { # 'п' - 37: 0, # 'А' - 44: 0, # 'Б' - 33: 0, # 'В' - 46: 0, # 'Г' - 41: 0, # 'Д' - 48: 0, # 'Е' - 56: 0, # 'Ж' - 51: 0, # 'З' - 42: 0, # 'И' - 60: 0, # 'Й' - 36: 0, # 'К' - 49: 0, # 'Л' - 38: 0, # 'М' - 31: 0, # 'Н' - 34: 0, # 'О' - 35: 0, # 'П' - 45: 0, # 'Р' - 32: 0, # 'С' - 40: 0, # 'Т' - 52: 0, # 'У' - 53: 0, # 'Ф' - 55: 0, # 'Х' - 58: 0, # 'Ц' - 50: 0, # 'Ч' - 57: 0, # 'Ш' - 63: 0, # 'Щ' - 62: 0, # 'Ы' - 61: 0, # 'Ь' - 47: 0, # 'Э' - 59: 0, # 'Ю' - 43: 0, # 'Я' - 3: 3, # 'а' - 21: 1, # 'б' - 10: 0, # 'в' - 19: 0, # 'г' - 13: 0, # 'д' - 2: 3, # 'е' - 24: 0, # 'ж' - 20: 0, # 'з' - 4: 3, # 'и' - 23: 0, # 'й' - 11: 2, # 'к' - 8: 3, # 'л' - 12: 1, # 'м' - 5: 3, # 'н' - 1: 3, # 'о' - 15: 2, # 'п' - 9: 3, # 'р' - 7: 2, # 'с' - 6: 2, # 'т' - 14: 3, # 'у' - 39: 1, # 'ф' - 26: 0, # 'х' - 28: 2, # 'ц' - 22: 2, # 'ч' - 25: 1, # 'ш' - 29: 1, # 'щ' - 54: 0, # 'ъ' - 18: 3, # 'ы' - 17: 2, # 'ь' - 30: 1, # 'э' - 27: 1, # 'ю' - 16: 3, # 'я' - }, - 9: { # 'р' - 37: 0, # 'А' - 44: 0, # 'Б' - 33: 0, # 'В' - 46: 0, # 'Г' - 41: 0, # 'Д' - 48: 0, # 'Е' - 56: 0, # 'Ж' - 51: 0, # 'З' - 42: 0, # 'И' - 60: 0, # 'Й' - 36: 0, # 'К' - 49: 0, # 'Л' - 38: 0, # 'М' - 31: 0, # 'Н' - 34: 0, # 'О' - 35: 0, # 'П' - 45: 0, # 'Р' - 32: 0, # 'С' - 40: 0, # 'Т' - 52: 0, # 'У' - 53: 0, # 'Ф' - 55: 0, # 'Х' - 58: 0, # 'Ц' - 50: 0, # 'Ч' - 57: 0, # 'Ш' - 63: 0, # 'Щ' - 62: 0, # 'Ы' - 61: 0, # 'Ь' - 47: 0, # 'Э' - 59: 0, # 'Ю' - 43: 0, # 'Я' - 3: 3, # 'а' - 21: 2, # 'б' - 10: 3, # 'в' - 19: 3, # 'г' - 13: 3, # 'д' - 2: 3, # 'е' - 24: 3, # 'ж' - 20: 2, # 'з' - 4: 3, # 'и' - 23: 0, # 'й' - 11: 3, # 'к' - 8: 2, # 'л' - 12: 3, # 'м' - 5: 3, # 'н' - 1: 3, # 'о' - 15: 2, # 'п' - 9: 2, # 'р' - 7: 3, # 'с' - 6: 3, # 'т' - 14: 3, # 'у' - 39: 2, # 'ф' - 26: 3, # 'х' - 28: 2, # 'ц' - 22: 2, # 'ч' - 25: 3, # 'ш' - 29: 2, # 'щ' - 54: 0, # 'ъ' - 18: 3, # 'ы' - 17: 3, # 'ь' - 30: 2, # 'э' - 27: 2, # 'ю' - 16: 3, # 'я' - }, - 7: { # 'с' - 37: 0, # 'А' - 44: 0, # 'Б' - 33: 0, # 'В' - 46: 0, # 'Г' - 41: 0, # 'Д' - 48: 0, # 'Е' - 56: 0, # 'Ж' - 51: 1, # 'З' - 42: 0, # 'И' - 60: 0, # 'Й' - 36: 0, # 'К' - 49: 0, # 'Л' - 38: 0, # 'М' - 31: 0, # 'Н' - 34: 0, # 'О' - 35: 0, # 'П' - 45: 0, # 'Р' - 32: 0, # 'С' - 40: 0, # 'Т' - 52: 0, # 'У' - 53: 0, # 'Ф' - 55: 0, # 'Х' - 58: 0, # 'Ц' - 50: 0, # 'Ч' - 57: 0, # 'Ш' - 63: 0, # 'Щ' - 62: 0, # 'Ы' - 61: 0, # 'Ь' - 47: 0, # 'Э' - 59: 0, # 'Ю' - 43: 0, # 'Я' - 3: 3, # 'а' - 21: 2, # 'б' - 10: 3, # 'в' - 19: 2, # 'г' - 13: 3, # 'д' - 2: 3, # 'е' - 24: 2, # 'ж' - 20: 2, # 'з' - 4: 3, # 'и' - 23: 0, # 'й' - 11: 3, # 'к' - 8: 3, # 'л' - 12: 3, # 'м' - 5: 3, # 'н' - 1: 3, # 'о' - 15: 3, # 'п' - 9: 3, # 'р' - 7: 3, # 'с' - 6: 3, # 'т' - 14: 3, # 'у' - 39: 2, # 'ф' - 26: 3, # 'х' - 28: 2, # 'ц' - 22: 3, # 'ч' - 25: 2, # 'ш' - 29: 1, # 'щ' - 54: 2, # 'ъ' - 18: 3, # 'ы' - 17: 3, # 'ь' - 30: 2, # 'э' - 27: 3, # 'ю' - 16: 3, # 'я' - }, - 6: { # 'т' - 37: 0, # 'А' - 44: 0, # 'Б' - 33: 0, # 'В' - 46: 0, # 'Г' - 41: 0, # 'Д' - 48: 0, # 'Е' - 56: 0, # 'Ж' - 51: 0, # 'З' - 42: 0, # 'И' - 60: 0, # 'Й' - 36: 0, # 'К' - 49: 0, # 'Л' - 38: 0, # 'М' - 31: 0, # 'Н' - 34: 0, # 'О' - 35: 0, # 'П' - 45: 0, # 'Р' - 32: 0, # 'С' - 40: 0, # 'Т' - 52: 0, # 'У' - 53: 0, # 'Ф' - 55: 0, # 'Х' - 58: 0, # 'Ц' - 50: 0, # 'Ч' - 57: 0, # 'Ш' - 63: 0, # 'Щ' - 62: 0, # 'Ы' - 61: 0, # 'Ь' - 47: 0, # 'Э' - 59: 0, # 'Ю' - 43: 0, # 'Я' - 3: 3, # 'а' - 21: 2, # 'б' - 10: 3, # 'в' - 19: 2, # 'г' - 13: 2, # 'д' - 2: 3, # 'е' - 24: 1, # 'ж' - 20: 1, # 'з' - 4: 3, # 'и' - 23: 0, # 'й' - 11: 3, # 'к' - 8: 3, # 'л' - 12: 2, # 'м' - 5: 3, # 'н' - 1: 3, # 'о' - 15: 2, # 'п' - 9: 3, # 'р' - 7: 3, # 'с' - 6: 2, # 'т' - 14: 3, # 'у' - 39: 2, # 'ф' - 26: 2, # 'х' - 28: 2, # 'ц' - 22: 2, # 'ч' - 25: 2, # 'ш' - 29: 2, # 'щ' - 54: 2, # 'ъ' - 18: 3, # 'ы' - 17: 3, # 'ь' - 30: 2, # 'э' - 27: 2, # 'ю' - 16: 3, # 'я' - }, - 14: { # 'у' - 37: 0, # 'А' - 44: 0, # 'Б' - 33: 0, # 'В' - 46: 0, # 'Г' - 41: 0, # 'Д' - 48: 0, # 'Е' - 56: 0, # 'Ж' - 51: 0, # 'З' - 42: 0, # 'И' - 60: 0, # 'Й' - 36: 0, # 'К' - 49: 0, # 'Л' - 38: 0, # 'М' - 31: 0, # 'Н' - 34: 0, # 'О' - 35: 0, # 'П' - 45: 0, # 'Р' - 32: 0, # 'С' - 40: 0, # 'Т' - 52: 0, # 'У' - 53: 0, # 'Ф' - 55: 0, # 'Х' - 58: 0, # 'Ц' - 50: 0, # 'Ч' - 57: 0, # 'Ш' - 63: 0, # 'Щ' - 62: 0, # 'Ы' - 61: 0, # 'Ь' - 47: 0, # 'Э' - 59: 0, # 'Ю' - 43: 0, # 'Я' - 3: 2, # 'а' - 21: 3, # 'б' - 10: 3, # 'в' - 19: 3, # 'г' - 13: 3, # 'д' - 2: 3, # 'е' - 24: 3, # 'ж' - 20: 3, # 'з' - 4: 2, # 'и' - 23: 2, # 'й' - 11: 3, # 'к' - 8: 3, # 'л' - 12: 3, # 'м' - 5: 3, # 'н' - 1: 2, # 'о' - 15: 3, # 'п' - 9: 3, # 'р' - 7: 3, # 'с' - 6: 3, # 'т' - 14: 1, # 'у' - 39: 2, # 'ф' - 26: 3, # 'х' - 28: 2, # 'ц' - 22: 3, # 'ч' - 25: 3, # 'ш' - 29: 3, # 'щ' - 54: 0, # 'ъ' - 18: 0, # 'ы' - 17: 0, # 'ь' - 30: 2, # 'э' - 27: 3, # 'ю' - 16: 2, # 'я' - }, - 39: { # 'ф' - 37: 0, # 'А' - 44: 0, # 'Б' - 33: 0, # 'В' - 46: 0, # 'Г' - 41: 0, # 'Д' - 48: 0, # 'Е' - 56: 0, # 'Ж' - 51: 0, # 'З' - 42: 0, # 'И' - 60: 0, # 'Й' - 36: 0, # 'К' - 49: 0, # 'Л' - 38: 0, # 'М' - 31: 0, # 'Н' - 34: 0, # 'О' - 35: 0, # 'П' - 45: 0, # 'Р' - 32: 0, # 'С' - 40: 0, # 'Т' - 52: 0, # 'У' - 53: 0, # 'Ф' - 55: 0, # 'Х' - 58: 0, # 'Ц' - 50: 0, # 'Ч' - 57: 0, # 'Ш' - 63: 0, # 'Щ' - 62: 0, # 'Ы' - 61: 0, # 'Ь' - 47: 0, # 'Э' - 59: 0, # 'Ю' - 43: 0, # 'Я' - 3: 3, # 'а' - 21: 1, # 'б' - 10: 0, # 'в' - 19: 1, # 'г' - 13: 0, # 'д' - 2: 3, # 'е' - 24: 0, # 'ж' - 20: 0, # 'з' - 4: 3, # 'и' - 23: 0, # 'й' - 11: 1, # 'к' - 8: 2, # 'л' - 12: 1, # 'м' - 5: 1, # 'н' - 1: 3, # 'о' - 15: 1, # 'п' - 9: 2, # 'р' - 7: 2, # 'с' - 6: 2, # 'т' - 14: 2, # 'у' - 39: 2, # 'ф' - 26: 0, # 'х' - 28: 0, # 'ц' - 22: 1, # 'ч' - 25: 1, # 'ш' - 29: 0, # 'щ' - 54: 0, # 'ъ' - 18: 2, # 'ы' - 17: 1, # 'ь' - 30: 2, # 'э' - 27: 1, # 'ю' - 16: 1, # 'я' - }, - 26: { # 'х' - 37: 0, # 'А' - 44: 0, # 'Б' - 33: 0, # 'В' - 46: 0, # 'Г' - 41: 0, # 'Д' - 48: 0, # 'Е' - 56: 0, # 'Ж' - 51: 0, # 'З' - 42: 0, # 'И' - 60: 0, # 'Й' - 36: 0, # 'К' - 49: 0, # 'Л' - 38: 0, # 'М' - 31: 0, # 'Н' - 34: 0, # 'О' - 35: 0, # 'П' - 45: 0, # 'Р' - 32: 0, # 'С' - 40: 0, # 'Т' - 52: 0, # 'У' - 53: 0, # 'Ф' - 55: 0, # 'Х' - 58: 0, # 'Ц' - 50: 0, # 'Ч' - 57: 0, # 'Ш' - 63: 0, # 'Щ' - 62: 0, # 'Ы' - 61: 0, # 'Ь' - 47: 0, # 'Э' - 59: 0, # 'Ю' - 43: 0, # 'Я' - 3: 3, # 'а' - 21: 0, # 'б' - 10: 3, # 'в' - 19: 1, # 'г' - 13: 1, # 'д' - 2: 2, # 'е' - 24: 0, # 'ж' - 20: 1, # 'з' - 4: 3, # 'и' - 23: 0, # 'й' - 11: 1, # 'к' - 8: 2, # 'л' - 12: 2, # 'м' - 5: 3, # 'н' - 1: 3, # 'о' - 15: 1, # 'п' - 9: 3, # 'р' - 7: 2, # 'с' - 6: 2, # 'т' - 14: 2, # 'у' - 39: 1, # 'ф' - 26: 1, # 'х' - 28: 1, # 'ц' - 22: 1, # 'ч' - 25: 2, # 'ш' - 29: 0, # 'щ' - 54: 1, # 'ъ' - 18: 0, # 'ы' - 17: 1, # 'ь' - 30: 1, # 'э' - 27: 1, # 'ю' - 16: 0, # 'я' - }, - 28: { # 'ц' - 37: 0, # 'А' - 44: 0, # 'Б' - 33: 0, # 'В' - 46: 0, # 'Г' - 41: 0, # 'Д' - 48: 0, # 'Е' - 56: 0, # 'Ж' - 51: 0, # 'З' - 42: 0, # 'И' - 60: 0, # 'Й' - 36: 0, # 'К' - 49: 0, # 'Л' - 38: 0, # 'М' - 31: 0, # 'Н' - 34: 0, # 'О' - 35: 0, # 'П' - 45: 0, # 'Р' - 32: 0, # 'С' - 40: 0, # 'Т' - 52: 0, # 'У' - 53: 0, # 'Ф' - 55: 0, # 'Х' - 58: 0, # 'Ц' - 50: 0, # 'Ч' - 57: 0, # 'Ш' - 63: 0, # 'Щ' - 62: 0, # 'Ы' - 61: 0, # 'Ь' - 47: 0, # 'Э' - 59: 0, # 'Ю' - 43: 0, # 'Я' - 3: 3, # 'а' - 21: 1, # 'б' - 10: 2, # 'в' - 19: 1, # 'г' - 13: 1, # 'д' - 2: 3, # 'е' - 24: 0, # 'ж' - 20: 1, # 'з' - 4: 3, # 'и' - 23: 0, # 'й' - 11: 2, # 'к' - 8: 1, # 'л' - 12: 1, # 'м' - 5: 1, # 'н' - 1: 3, # 'о' - 15: 0, # 'п' - 9: 1, # 'р' - 7: 0, # 'с' - 6: 1, # 'т' - 14: 3, # 'у' - 39: 0, # 'ф' - 26: 0, # 'х' - 28: 1, # 'ц' - 22: 0, # 'ч' - 25: 1, # 'ш' - 29: 0, # 'щ' - 54: 0, # 'ъ' - 18: 3, # 'ы' - 17: 1, # 'ь' - 30: 0, # 'э' - 27: 1, # 'ю' - 16: 0, # 'я' - }, - 22: { # 'ч' - 37: 0, # 'А' - 44: 0, # 'Б' - 33: 0, # 'В' - 46: 0, # 'Г' - 41: 0, # 'Д' - 48: 0, # 'Е' - 56: 0, # 'Ж' - 51: 0, # 'З' - 42: 0, # 'И' - 60: 0, # 'Й' - 36: 0, # 'К' - 49: 0, # 'Л' - 38: 0, # 'М' - 31: 0, # 'Н' - 34: 0, # 'О' - 35: 0, # 'П' - 45: 0, # 'Р' - 32: 0, # 'С' - 40: 0, # 'Т' - 52: 0, # 'У' - 53: 0, # 'Ф' - 55: 0, # 'Х' - 58: 0, # 'Ц' - 50: 0, # 'Ч' - 57: 0, # 'Ш' - 63: 0, # 'Щ' - 62: 0, # 'Ы' - 61: 0, # 'Ь' - 47: 0, # 'Э' - 59: 0, # 'Ю' - 43: 0, # 'Я' - 3: 3, # 'а' - 21: 1, # 'б' - 10: 1, # 'в' - 19: 0, # 'г' - 13: 0, # 'д' - 2: 3, # 'е' - 24: 1, # 'ж' - 20: 0, # 'з' - 4: 3, # 'и' - 23: 0, # 'й' - 11: 3, # 'к' - 8: 2, # 'л' - 12: 1, # 'м' - 5: 3, # 'н' - 1: 2, # 'о' - 15: 0, # 'п' - 9: 2, # 'р' - 7: 1, # 'с' - 6: 3, # 'т' - 14: 3, # 'у' - 39: 1, # 'ф' - 26: 1, # 'х' - 28: 0, # 'ц' - 22: 1, # 'ч' - 25: 2, # 'ш' - 29: 0, # 'щ' - 54: 0, # 'ъ' - 18: 0, # 'ы' - 17: 3, # 'ь' - 30: 0, # 'э' - 27: 0, # 'ю' - 16: 0, # 'я' - }, - 25: { # 'ш' - 37: 0, # 'А' - 44: 0, # 'Б' - 33: 0, # 'В' - 46: 0, # 'Г' - 41: 0, # 'Д' - 48: 0, # 'Е' - 56: 0, # 'Ж' - 51: 0, # 'З' - 42: 0, # 'И' - 60: 0, # 'Й' - 36: 0, # 'К' - 49: 0, # 'Л' - 38: 0, # 'М' - 31: 0, # 'Н' - 34: 0, # 'О' - 35: 0, # 'П' - 45: 0, # 'Р' - 32: 0, # 'С' - 40: 0, # 'Т' - 52: 0, # 'У' - 53: 0, # 'Ф' - 55: 0, # 'Х' - 58: 0, # 'Ц' - 50: 0, # 'Ч' - 57: 0, # 'Ш' - 63: 0, # 'Щ' - 62: 0, # 'Ы' - 61: 0, # 'Ь' - 47: 0, # 'Э' - 59: 0, # 'Ю' - 43: 0, # 'Я' - 3: 3, # 'а' - 21: 1, # 'б' - 10: 2, # 'в' - 19: 1, # 'г' - 13: 0, # 'д' - 2: 3, # 'е' - 24: 0, # 'ж' - 20: 0, # 'з' - 4: 3, # 'и' - 23: 0, # 'й' - 11: 3, # 'к' - 8: 3, # 'л' - 12: 2, # 'м' - 5: 3, # 'н' - 1: 3, # 'о' - 15: 2, # 'п' - 9: 2, # 'р' - 7: 1, # 'с' - 6: 2, # 'т' - 14: 3, # 'у' - 39: 2, # 'ф' - 26: 1, # 'х' - 28: 1, # 'ц' - 22: 1, # 'ч' - 25: 1, # 'ш' - 29: 0, # 'щ' - 54: 0, # 'ъ' - 18: 0, # 'ы' - 17: 3, # 'ь' - 30: 1, # 'э' - 27: 1, # 'ю' - 16: 0, # 'я' - }, - 29: { # 'щ' - 37: 0, # 'А' - 44: 0, # 'Б' - 33: 0, # 'В' - 46: 0, # 'Г' - 41: 0, # 'Д' - 48: 0, # 'Е' - 56: 0, # 'Ж' - 51: 0, # 'З' - 42: 0, # 'И' - 60: 0, # 'Й' - 36: 0, # 'К' - 49: 0, # 'Л' - 38: 0, # 'М' - 31: 0, # 'Н' - 34: 0, # 'О' - 35: 0, # 'П' - 45: 0, # 'Р' - 32: 0, # 'С' - 40: 0, # 'Т' - 52: 0, # 'У' - 53: 0, # 'Ф' - 55: 0, # 'Х' - 58: 0, # 'Ц' - 50: 0, # 'Ч' - 57: 0, # 'Ш' - 63: 0, # 'Щ' - 62: 0, # 'Ы' - 61: 0, # 'Ь' - 47: 0, # 'Э' - 59: 0, # 'Ю' - 43: 0, # 'Я' - 3: 3, # 'а' - 21: 0, # 'б' - 10: 1, # 'в' - 19: 0, # 'г' - 13: 0, # 'д' - 2: 3, # 'е' - 24: 0, # 'ж' - 20: 0, # 'з' - 4: 3, # 'и' - 23: 0, # 'й' - 11: 0, # 'к' - 8: 0, # 'л' - 12: 1, # 'м' - 5: 2, # 'н' - 1: 1, # 'о' - 15: 0, # 'п' - 9: 2, # 'р' - 7: 0, # 'с' - 6: 0, # 'т' - 14: 2, # 'у' - 39: 0, # 'ф' - 26: 0, # 'х' - 28: 0, # 'ц' - 22: 0, # 'ч' - 25: 0, # 'ш' - 29: 0, # 'щ' - 54: 0, # 'ъ' - 18: 0, # 'ы' - 17: 2, # 'ь' - 30: 0, # 'э' - 27: 0, # 'ю' - 16: 0, # 'я' - }, - 54: { # 'ъ' - 37: 0, # 'А' - 44: 0, # 'Б' - 33: 0, # 'В' - 46: 0, # 'Г' - 41: 0, # 'Д' - 48: 0, # 'Е' - 56: 0, # 'Ж' - 51: 0, # 'З' - 42: 0, # 'И' - 60: 0, # 'Й' - 36: 0, # 'К' - 49: 0, # 'Л' - 38: 0, # 'М' - 31: 0, # 'Н' - 34: 0, # 'О' - 35: 0, # 'П' - 45: 0, # 'Р' - 32: 0, # 'С' - 40: 0, # 'Т' - 52: 0, # 'У' - 53: 0, # 'Ф' - 55: 0, # 'Х' - 58: 0, # 'Ц' - 50: 0, # 'Ч' - 57: 0, # 'Ш' - 63: 0, # 'Щ' - 62: 0, # 'Ы' - 61: 0, # 'Ь' - 47: 0, # 'Э' - 59: 0, # 'Ю' - 43: 0, # 'Я' - 3: 0, # 'а' - 21: 0, # 'б' - 10: 0, # 'в' - 19: 0, # 'г' - 13: 0, # 'д' - 2: 2, # 'е' - 24: 0, # 'ж' - 20: 0, # 'з' - 4: 0, # 'и' - 23: 0, # 'й' - 11: 0, # 'к' - 8: 0, # 'л' - 12: 0, # 'м' - 5: 0, # 'н' - 1: 0, # 'о' - 15: 0, # 'п' - 9: 0, # 'р' - 7: 0, # 'с' - 6: 0, # 'т' - 14: 0, # 'у' - 39: 0, # 'ф' - 26: 0, # 'х' - 28: 0, # 'ц' - 22: 0, # 'ч' - 25: 0, # 'ш' - 29: 0, # 'щ' - 54: 0, # 'ъ' - 18: 0, # 'ы' - 17: 0, # 'ь' - 30: 0, # 'э' - 27: 1, # 'ю' - 16: 2, # 'я' - }, - 18: { # 'ы' - 37: 0, # 'А' - 44: 0, # 'Б' - 33: 0, # 'В' - 46: 0, # 'Г' - 41: 0, # 'Д' - 48: 0, # 'Е' - 56: 0, # 'Ж' - 51: 0, # 'З' - 42: 0, # 'И' - 60: 0, # 'Й' - 36: 0, # 'К' - 49: 0, # 'Л' - 38: 0, # 'М' - 31: 0, # 'Н' - 34: 0, # 'О' - 35: 0, # 'П' - 45: 0, # 'Р' - 32: 0, # 'С' - 40: 0, # 'Т' - 52: 0, # 'У' - 53: 0, # 'Ф' - 55: 0, # 'Х' - 58: 0, # 'Ц' - 50: 0, # 'Ч' - 57: 0, # 'Ш' - 63: 0, # 'Щ' - 62: 0, # 'Ы' - 61: 0, # 'Ь' - 47: 0, # 'Э' - 59: 0, # 'Ю' - 43: 0, # 'Я' - 3: 0, # 'а' - 21: 3, # 'б' - 10: 3, # 'в' - 19: 2, # 'г' - 13: 2, # 'д' - 2: 3, # 'е' - 24: 2, # 'ж' - 20: 2, # 'з' - 4: 2, # 'и' - 23: 3, # 'й' - 11: 3, # 'к' - 8: 3, # 'л' - 12: 3, # 'м' - 5: 3, # 'н' - 1: 1, # 'о' - 15: 3, # 'п' - 9: 3, # 'р' - 7: 3, # 'с' - 6: 3, # 'т' - 14: 1, # 'у' - 39: 0, # 'ф' - 26: 3, # 'х' - 28: 2, # 'ц' - 22: 3, # 'ч' - 25: 3, # 'ш' - 29: 2, # 'щ' - 54: 0, # 'ъ' - 18: 0, # 'ы' - 17: 0, # 'ь' - 30: 0, # 'э' - 27: 0, # 'ю' - 16: 2, # 'я' - }, - 17: { # 'ь' - 37: 0, # 'А' - 44: 0, # 'Б' - 33: 0, # 'В' - 46: 0, # 'Г' - 41: 0, # 'Д' - 48: 0, # 'Е' - 56: 0, # 'Ж' - 51: 0, # 'З' - 42: 0, # 'И' - 60: 0, # 'Й' - 36: 0, # 'К' - 49: 0, # 'Л' - 38: 0, # 'М' - 31: 0, # 'Н' - 34: 0, # 'О' - 35: 0, # 'П' - 45: 0, # 'Р' - 32: 0, # 'С' - 40: 0, # 'Т' - 52: 0, # 'У' - 53: 0, # 'Ф' - 55: 0, # 'Х' - 58: 0, # 'Ц' - 50: 0, # 'Ч' - 57: 0, # 'Ш' - 63: 0, # 'Щ' - 62: 0, # 'Ы' - 61: 0, # 'Ь' - 47: 0, # 'Э' - 59: 0, # 'Ю' - 43: 0, # 'Я' - 3: 0, # 'а' - 21: 2, # 'б' - 10: 2, # 'в' - 19: 2, # 'г' - 13: 2, # 'д' - 2: 3, # 'е' - 24: 1, # 'ж' - 20: 3, # 'з' - 4: 2, # 'и' - 23: 0, # 'й' - 11: 3, # 'к' - 8: 0, # 'л' - 12: 3, # 'м' - 5: 3, # 'н' - 1: 2, # 'о' - 15: 2, # 'п' - 9: 1, # 'р' - 7: 3, # 'с' - 6: 2, # 'т' - 14: 0, # 'у' - 39: 2, # 'ф' - 26: 1, # 'х' - 28: 2, # 'ц' - 22: 2, # 'ч' - 25: 3, # 'ш' - 29: 2, # 'щ' - 54: 0, # 'ъ' - 18: 0, # 'ы' - 17: 0, # 'ь' - 30: 1, # 'э' - 27: 3, # 'ю' - 16: 3, # 'я' - }, - 30: { # 'э' - 37: 0, # 'А' - 44: 0, # 'Б' - 33: 0, # 'В' - 46: 0, # 'Г' - 41: 0, # 'Д' - 48: 0, # 'Е' - 56: 0, # 'Ж' - 51: 0, # 'З' - 42: 0, # 'И' - 60: 0, # 'Й' - 36: 0, # 'К' - 49: 0, # 'Л' - 38: 1, # 'М' - 31: 1, # 'Н' - 34: 0, # 'О' - 35: 0, # 'П' - 45: 1, # 'Р' - 32: 1, # 'С' - 40: 0, # 'Т' - 52: 0, # 'У' - 53: 1, # 'Ф' - 55: 0, # 'Х' - 58: 0, # 'Ц' - 50: 0, # 'Ч' - 57: 0, # 'Ш' - 63: 0, # 'Щ' - 62: 0, # 'Ы' - 61: 0, # 'Ь' - 47: 0, # 'Э' - 59: 0, # 'Ю' - 43: 0, # 'Я' - 3: 0, # 'а' - 21: 1, # 'б' - 10: 1, # 'в' - 19: 1, # 'г' - 13: 2, # 'д' - 2: 1, # 'е' - 24: 0, # 'ж' - 20: 1, # 'з' - 4: 0, # 'и' - 23: 2, # 'й' - 11: 2, # 'к' - 8: 2, # 'л' - 12: 2, # 'м' - 5: 2, # 'н' - 1: 0, # 'о' - 15: 2, # 'п' - 9: 2, # 'р' - 7: 2, # 'с' - 6: 3, # 'т' - 14: 1, # 'у' - 39: 2, # 'ф' - 26: 1, # 'х' - 28: 0, # 'ц' - 22: 0, # 'ч' - 25: 1, # 'ш' - 29: 0, # 'щ' - 54: 0, # 'ъ' - 18: 0, # 'ы' - 17: 0, # 'ь' - 30: 1, # 'э' - 27: 1, # 'ю' - 16: 1, # 'я' - }, - 27: { # 'ю' - 37: 0, # 'А' - 44: 0, # 'Б' - 33: 0, # 'В' - 46: 0, # 'Г' - 41: 0, # 'Д' - 48: 0, # 'Е' - 56: 0, # 'Ж' - 51: 0, # 'З' - 42: 0, # 'И' - 60: 0, # 'Й' - 36: 0, # 'К' - 49: 0, # 'Л' - 38: 0, # 'М' - 31: 0, # 'Н' - 34: 0, # 'О' - 35: 0, # 'П' - 45: 0, # 'Р' - 32: 0, # 'С' - 40: 0, # 'Т' - 52: 0, # 'У' - 53: 0, # 'Ф' - 55: 0, # 'Х' - 58: 0, # 'Ц' - 50: 0, # 'Ч' - 57: 0, # 'Ш' - 63: 0, # 'Щ' - 62: 0, # 'Ы' - 61: 0, # 'Ь' - 47: 0, # 'Э' - 59: 0, # 'Ю' - 43: 0, # 'Я' - 3: 2, # 'а' - 21: 3, # 'б' - 10: 1, # 'в' - 19: 2, # 'г' - 13: 3, # 'д' - 2: 1, # 'е' - 24: 2, # 'ж' - 20: 2, # 'з' - 4: 1, # 'и' - 23: 1, # 'й' - 11: 2, # 'к' - 8: 2, # 'л' - 12: 2, # 'м' - 5: 2, # 'н' - 1: 1, # 'о' - 15: 2, # 'п' - 9: 2, # 'р' - 7: 3, # 'с' - 6: 3, # 'т' - 14: 0, # 'у' - 39: 1, # 'ф' - 26: 2, # 'х' - 28: 2, # 'ц' - 22: 2, # 'ч' - 25: 2, # 'ш' - 29: 3, # 'щ' - 54: 0, # 'ъ' - 18: 0, # 'ы' - 17: 0, # 'ь' - 30: 1, # 'э' - 27: 2, # 'ю' - 16: 1, # 'я' - }, - 16: { # 'я' - 37: 0, # 'А' - 44: 0, # 'Б' - 33: 0, # 'В' - 46: 0, # 'Г' - 41: 0, # 'Д' - 48: 0, # 'Е' - 56: 0, # 'Ж' - 51: 0, # 'З' - 42: 0, # 'И' - 60: 0, # 'Й' - 36: 0, # 'К' - 49: 0, # 'Л' - 38: 0, # 'М' - 31: 0, # 'Н' - 34: 0, # 'О' - 35: 0, # 'П' - 45: 0, # 'Р' - 32: 0, # 'С' - 40: 0, # 'Т' - 52: 0, # 'У' - 53: 0, # 'Ф' - 55: 0, # 'Х' - 58: 0, # 'Ц' - 50: 0, # 'Ч' - 57: 0, # 'Ш' - 63: 0, # 'Щ' - 62: 0, # 'Ы' - 61: 0, # 'Ь' - 47: 0, # 'Э' - 59: 0, # 'Ю' - 43: 0, # 'Я' - 3: 0, # 'а' - 21: 2, # 'б' - 10: 3, # 'в' - 19: 2, # 'г' - 13: 3, # 'д' - 2: 3, # 'е' - 24: 3, # 'ж' - 20: 3, # 'з' - 4: 2, # 'и' - 23: 2, # 'й' - 11: 3, # 'к' - 8: 3, # 'л' - 12: 3, # 'м' - 5: 3, # 'н' - 1: 0, # 'о' - 15: 2, # 'п' - 9: 2, # 'р' - 7: 3, # 'с' - 6: 3, # 'т' - 14: 1, # 'у' - 39: 1, # 'ф' - 26: 3, # 'х' - 28: 2, # 'ц' - 22: 2, # 'ч' - 25: 2, # 'ш' - 29: 3, # 'щ' - 54: 0, # 'ъ' - 18: 0, # 'ы' - 17: 0, # 'ь' - 30: 0, # 'э' - 27: 2, # 'ю' - 16: 2, # 'я' - }, -} - -# 255: Undefined characters that did not exist in training text -# 254: Carriage/Return -# 253: symbol (punctuation) that does not belong to word -# 252: 0 - 9 -# 251: Control characters - -# Character Mapping Table(s): -IBM866_RUSSIAN_CHAR_TO_ORDER = { - 0: 255, # '\x00' - 1: 255, # '\x01' - 2: 255, # '\x02' - 3: 255, # '\x03' - 4: 255, # '\x04' - 5: 255, # '\x05' - 6: 255, # '\x06' - 7: 255, # '\x07' - 8: 255, # '\x08' - 9: 255, # '\t' - 10: 254, # '\n' - 11: 255, # '\x0b' - 12: 255, # '\x0c' - 13: 254, # '\r' - 14: 255, # '\x0e' - 15: 255, # '\x0f' - 16: 255, # '\x10' - 17: 255, # '\x11' - 18: 255, # '\x12' - 19: 255, # '\x13' - 20: 255, # '\x14' - 21: 255, # '\x15' - 22: 255, # '\x16' - 23: 255, # '\x17' - 24: 255, # '\x18' - 25: 255, # '\x19' - 26: 255, # '\x1a' - 27: 255, # '\x1b' - 28: 255, # '\x1c' - 29: 255, # '\x1d' - 30: 255, # '\x1e' - 31: 255, # '\x1f' - 32: 253, # ' ' - 33: 253, # '!' - 34: 253, # '"' - 35: 253, # '#' - 36: 253, # '$' - 37: 253, # '%' - 38: 253, # '&' - 39: 253, # "'" - 40: 253, # '(' - 41: 253, # ')' - 42: 253, # '*' - 43: 253, # '+' - 44: 253, # ',' - 45: 253, # '-' - 46: 253, # '.' - 47: 253, # '/' - 48: 252, # '0' - 49: 252, # '1' - 50: 252, # '2' - 51: 252, # '3' - 52: 252, # '4' - 53: 252, # '5' - 54: 252, # '6' - 55: 252, # '7' - 56: 252, # '8' - 57: 252, # '9' - 58: 253, # ':' - 59: 253, # ';' - 60: 253, # '<' - 61: 253, # '=' - 62: 253, # '>' - 63: 253, # '?' - 64: 253, # '@' - 65: 142, # 'A' - 66: 143, # 'B' - 67: 144, # 'C' - 68: 145, # 'D' - 69: 146, # 'E' - 70: 147, # 'F' - 71: 148, # 'G' - 72: 149, # 'H' - 73: 150, # 'I' - 74: 151, # 'J' - 75: 152, # 'K' - 76: 74, # 'L' - 77: 153, # 'M' - 78: 75, # 'N' - 79: 154, # 'O' - 80: 155, # 'P' - 81: 156, # 'Q' - 82: 157, # 'R' - 83: 158, # 'S' - 84: 159, # 'T' - 85: 160, # 'U' - 86: 161, # 'V' - 87: 162, # 'W' - 88: 163, # 'X' - 89: 164, # 'Y' - 90: 165, # 'Z' - 91: 253, # '[' - 92: 253, # '\\' - 93: 253, # ']' - 94: 253, # '^' - 95: 253, # '_' - 96: 253, # '`' - 97: 71, # 'a' - 98: 172, # 'b' - 99: 66, # 'c' - 100: 173, # 'd' - 101: 65, # 'e' - 102: 174, # 'f' - 103: 76, # 'g' - 104: 175, # 'h' - 105: 64, # 'i' - 106: 176, # 'j' - 107: 177, # 'k' - 108: 77, # 'l' - 109: 72, # 'm' - 110: 178, # 'n' - 111: 69, # 'o' - 112: 67, # 'p' - 113: 179, # 'q' - 114: 78, # 'r' - 115: 73, # 's' - 116: 180, # 't' - 117: 181, # 'u' - 118: 79, # 'v' - 119: 182, # 'w' - 120: 183, # 'x' - 121: 184, # 'y' - 122: 185, # 'z' - 123: 253, # '{' - 124: 253, # '|' - 125: 253, # '}' - 126: 253, # '~' - 127: 253, # '\x7f' - 128: 37, # 'А' - 129: 44, # 'Б' - 130: 33, # 'В' - 131: 46, # 'Г' - 132: 41, # 'Д' - 133: 48, # 'Е' - 134: 56, # 'Ж' - 135: 51, # 'З' - 136: 42, # 'И' - 137: 60, # 'Й' - 138: 36, # 'К' - 139: 49, # 'Л' - 140: 38, # 'М' - 141: 31, # 'Н' - 142: 34, # 'О' - 143: 35, # 'П' - 144: 45, # 'Р' - 145: 32, # 'С' - 146: 40, # 'Т' - 147: 52, # 'У' - 148: 53, # 'Ф' - 149: 55, # 'Х' - 150: 58, # 'Ц' - 151: 50, # 'Ч' - 152: 57, # 'Ш' - 153: 63, # 'Щ' - 154: 70, # 'Ъ' - 155: 62, # 'Ы' - 156: 61, # 'Ь' - 157: 47, # 'Э' - 158: 59, # 'Ю' - 159: 43, # 'Я' - 160: 3, # 'а' - 161: 21, # 'б' - 162: 10, # 'в' - 163: 19, # 'г' - 164: 13, # 'д' - 165: 2, # 'е' - 166: 24, # 'ж' - 167: 20, # 'з' - 168: 4, # 'и' - 169: 23, # 'й' - 170: 11, # 'к' - 171: 8, # 'л' - 172: 12, # 'м' - 173: 5, # 'н' - 174: 1, # 'о' - 175: 15, # 'п' - 176: 191, # '░' - 177: 192, # '▒' - 178: 193, # '▓' - 179: 194, # '│' - 180: 195, # '┤' - 181: 196, # '╡' - 182: 197, # '╢' - 183: 198, # '╖' - 184: 199, # '╕' - 185: 200, # '╣' - 186: 201, # '║' - 187: 202, # '╗' - 188: 203, # '╝' - 189: 204, # '╜' - 190: 205, # '╛' - 191: 206, # '┐' - 192: 207, # '└' - 193: 208, # '┴' - 194: 209, # '┬' - 195: 210, # '├' - 196: 211, # '─' - 197: 212, # '┼' - 198: 213, # '╞' - 199: 214, # '╟' - 200: 215, # '╚' - 201: 216, # '╔' - 202: 217, # '╩' - 203: 218, # '╦' - 204: 219, # '╠' - 205: 220, # '═' - 206: 221, # '╬' - 207: 222, # '╧' - 208: 223, # '╨' - 209: 224, # '╤' - 210: 225, # '╥' - 211: 226, # '╙' - 212: 227, # '╘' - 213: 228, # '╒' - 214: 229, # '╓' - 215: 230, # '╫' - 216: 231, # '╪' - 217: 232, # '┘' - 218: 233, # '┌' - 219: 234, # '█' - 220: 235, # '▄' - 221: 236, # '▌' - 222: 237, # '▐' - 223: 238, # '▀' - 224: 9, # 'р' - 225: 7, # 'с' - 226: 6, # 'т' - 227: 14, # 'у' - 228: 39, # 'ф' - 229: 26, # 'х' - 230: 28, # 'ц' - 231: 22, # 'ч' - 232: 25, # 'ш' - 233: 29, # 'щ' - 234: 54, # 'ъ' - 235: 18, # 'ы' - 236: 17, # 'ь' - 237: 30, # 'э' - 238: 27, # 'ю' - 239: 16, # 'я' - 240: 239, # 'Ё' - 241: 68, # 'ё' - 242: 240, # 'Є' - 243: 241, # 'є' - 244: 242, # 'Ї' - 245: 243, # 'ї' - 246: 244, # 'Ў' - 247: 245, # 'ў' - 248: 246, # '°' - 249: 247, # '∙' - 250: 248, # '·' - 251: 249, # '√' - 252: 250, # '№' - 253: 251, # '¤' - 254: 252, # '■' - 255: 255, # '\xa0' -} - -IBM866_RUSSIAN_MODEL = SingleByteCharSetModel( - charset_name="IBM866", - language="Russian", - char_to_order_map=IBM866_RUSSIAN_CHAR_TO_ORDER, - language_model=RUSSIAN_LANG_MODEL, - typical_positive_ratio=0.976601, - keep_ascii_letters=False, - alphabet="ЁАБВГДЕЖЗИЙКЛМНОПРСТУФХЦЧШЩЪЫЬЭЮЯабвгдежзийклмнопрстуфхцчшщъыьэюяё", -) - -WINDOWS_1251_RUSSIAN_CHAR_TO_ORDER = { - 0: 255, # '\x00' - 1: 255, # '\x01' - 2: 255, # '\x02' - 3: 255, # '\x03' - 4: 255, # '\x04' - 5: 255, # '\x05' - 6: 255, # '\x06' - 7: 255, # '\x07' - 8: 255, # '\x08' - 9: 255, # '\t' - 10: 254, # '\n' - 11: 255, # '\x0b' - 12: 255, # '\x0c' - 13: 254, # '\r' - 14: 255, # '\x0e' - 15: 255, # '\x0f' - 16: 255, # '\x10' - 17: 255, # '\x11' - 18: 255, # '\x12' - 19: 255, # '\x13' - 20: 255, # '\x14' - 21: 255, # '\x15' - 22: 255, # '\x16' - 23: 255, # '\x17' - 24: 255, # '\x18' - 25: 255, # '\x19' - 26: 255, # '\x1a' - 27: 255, # '\x1b' - 28: 255, # '\x1c' - 29: 255, # '\x1d' - 30: 255, # '\x1e' - 31: 255, # '\x1f' - 32: 253, # ' ' - 33: 253, # '!' - 34: 253, # '"' - 35: 253, # '#' - 36: 253, # '$' - 37: 253, # '%' - 38: 253, # '&' - 39: 253, # "'" - 40: 253, # '(' - 41: 253, # ')' - 42: 253, # '*' - 43: 253, # '+' - 44: 253, # ',' - 45: 253, # '-' - 46: 253, # '.' - 47: 253, # '/' - 48: 252, # '0' - 49: 252, # '1' - 50: 252, # '2' - 51: 252, # '3' - 52: 252, # '4' - 53: 252, # '5' - 54: 252, # '6' - 55: 252, # '7' - 56: 252, # '8' - 57: 252, # '9' - 58: 253, # ':' - 59: 253, # ';' - 60: 253, # '<' - 61: 253, # '=' - 62: 253, # '>' - 63: 253, # '?' - 64: 253, # '@' - 65: 142, # 'A' - 66: 143, # 'B' - 67: 144, # 'C' - 68: 145, # 'D' - 69: 146, # 'E' - 70: 147, # 'F' - 71: 148, # 'G' - 72: 149, # 'H' - 73: 150, # 'I' - 74: 151, # 'J' - 75: 152, # 'K' - 76: 74, # 'L' - 77: 153, # 'M' - 78: 75, # 'N' - 79: 154, # 'O' - 80: 155, # 'P' - 81: 156, # 'Q' - 82: 157, # 'R' - 83: 158, # 'S' - 84: 159, # 'T' - 85: 160, # 'U' - 86: 161, # 'V' - 87: 162, # 'W' - 88: 163, # 'X' - 89: 164, # 'Y' - 90: 165, # 'Z' - 91: 253, # '[' - 92: 253, # '\\' - 93: 253, # ']' - 94: 253, # '^' - 95: 253, # '_' - 96: 253, # '`' - 97: 71, # 'a' - 98: 172, # 'b' - 99: 66, # 'c' - 100: 173, # 'd' - 101: 65, # 'e' - 102: 174, # 'f' - 103: 76, # 'g' - 104: 175, # 'h' - 105: 64, # 'i' - 106: 176, # 'j' - 107: 177, # 'k' - 108: 77, # 'l' - 109: 72, # 'm' - 110: 178, # 'n' - 111: 69, # 'o' - 112: 67, # 'p' - 113: 179, # 'q' - 114: 78, # 'r' - 115: 73, # 's' - 116: 180, # 't' - 117: 181, # 'u' - 118: 79, # 'v' - 119: 182, # 'w' - 120: 183, # 'x' - 121: 184, # 'y' - 122: 185, # 'z' - 123: 253, # '{' - 124: 253, # '|' - 125: 253, # '}' - 126: 253, # '~' - 127: 253, # '\x7f' - 128: 191, # 'Ђ' - 129: 192, # 'Ѓ' - 130: 193, # '‚' - 131: 194, # 'ѓ' - 132: 195, # '„' - 133: 196, # '…' - 134: 197, # '†' - 135: 198, # '‡' - 136: 199, # '€' - 137: 200, # '‰' - 138: 201, # 'Љ' - 139: 202, # '‹' - 140: 203, # 'Њ' - 141: 204, # 'Ќ' - 142: 205, # 'Ћ' - 143: 206, # 'Џ' - 144: 207, # 'ђ' - 145: 208, # '‘' - 146: 209, # '’' - 147: 210, # '“' - 148: 211, # '”' - 149: 212, # '•' - 150: 213, # '–' - 151: 214, # '—' - 152: 215, # None - 153: 216, # '™' - 154: 217, # 'љ' - 155: 218, # '›' - 156: 219, # 'њ' - 157: 220, # 'ќ' - 158: 221, # 'ћ' - 159: 222, # 'џ' - 160: 223, # '\xa0' - 161: 224, # 'Ў' - 162: 225, # 'ў' - 163: 226, # 'Ј' - 164: 227, # '¤' - 165: 228, # 'Ґ' - 166: 229, # '¦' - 167: 230, # '§' - 168: 231, # 'Ё' - 169: 232, # '©' - 170: 233, # 'Є' - 171: 234, # '«' - 172: 235, # '¬' - 173: 236, # '\xad' - 174: 237, # '®' - 175: 238, # 'Ї' - 176: 239, # '°' - 177: 240, # '±' - 178: 241, # 'І' - 179: 242, # 'і' - 180: 243, # 'ґ' - 181: 244, # 'µ' - 182: 245, # '¶' - 183: 246, # '·' - 184: 68, # 'ё' - 185: 247, # '№' - 186: 248, # 'є' - 187: 249, # '»' - 188: 250, # 'ј' - 189: 251, # 'Ѕ' - 190: 252, # 'ѕ' - 191: 253, # 'ї' - 192: 37, # 'А' - 193: 44, # 'Б' - 194: 33, # 'В' - 195: 46, # 'Г' - 196: 41, # 'Д' - 197: 48, # 'Е' - 198: 56, # 'Ж' - 199: 51, # 'З' - 200: 42, # 'И' - 201: 60, # 'Й' - 202: 36, # 'К' - 203: 49, # 'Л' - 204: 38, # 'М' - 205: 31, # 'Н' - 206: 34, # 'О' - 207: 35, # 'П' - 208: 45, # 'Р' - 209: 32, # 'С' - 210: 40, # 'Т' - 211: 52, # 'У' - 212: 53, # 'Ф' - 213: 55, # 'Х' - 214: 58, # 'Ц' - 215: 50, # 'Ч' - 216: 57, # 'Ш' - 217: 63, # 'Щ' - 218: 70, # 'Ъ' - 219: 62, # 'Ы' - 220: 61, # 'Ь' - 221: 47, # 'Э' - 222: 59, # 'Ю' - 223: 43, # 'Я' - 224: 3, # 'а' - 225: 21, # 'б' - 226: 10, # 'в' - 227: 19, # 'г' - 228: 13, # 'д' - 229: 2, # 'е' - 230: 24, # 'ж' - 231: 20, # 'з' - 232: 4, # 'и' - 233: 23, # 'й' - 234: 11, # 'к' - 235: 8, # 'л' - 236: 12, # 'м' - 237: 5, # 'н' - 238: 1, # 'о' - 239: 15, # 'п' - 240: 9, # 'р' - 241: 7, # 'с' - 242: 6, # 'т' - 243: 14, # 'у' - 244: 39, # 'ф' - 245: 26, # 'х' - 246: 28, # 'ц' - 247: 22, # 'ч' - 248: 25, # 'ш' - 249: 29, # 'щ' - 250: 54, # 'ъ' - 251: 18, # 'ы' - 252: 17, # 'ь' - 253: 30, # 'э' - 254: 27, # 'ю' - 255: 16, # 'я' -} - -WINDOWS_1251_RUSSIAN_MODEL = SingleByteCharSetModel( - charset_name="windows-1251", - language="Russian", - char_to_order_map=WINDOWS_1251_RUSSIAN_CHAR_TO_ORDER, - language_model=RUSSIAN_LANG_MODEL, - typical_positive_ratio=0.976601, - keep_ascii_letters=False, - alphabet="ЁАБВГДЕЖЗИЙКЛМНОПРСТУФХЦЧШЩЪЫЬЭЮЯабвгдежзийклмнопрстуфхцчшщъыьэюяё", -) - -IBM855_RUSSIAN_CHAR_TO_ORDER = { - 0: 255, # '\x00' - 1: 255, # '\x01' - 2: 255, # '\x02' - 3: 255, # '\x03' - 4: 255, # '\x04' - 5: 255, # '\x05' - 6: 255, # '\x06' - 7: 255, # '\x07' - 8: 255, # '\x08' - 9: 255, # '\t' - 10: 254, # '\n' - 11: 255, # '\x0b' - 12: 255, # '\x0c' - 13: 254, # '\r' - 14: 255, # '\x0e' - 15: 255, # '\x0f' - 16: 255, # '\x10' - 17: 255, # '\x11' - 18: 255, # '\x12' - 19: 255, # '\x13' - 20: 255, # '\x14' - 21: 255, # '\x15' - 22: 255, # '\x16' - 23: 255, # '\x17' - 24: 255, # '\x18' - 25: 255, # '\x19' - 26: 255, # '\x1a' - 27: 255, # '\x1b' - 28: 255, # '\x1c' - 29: 255, # '\x1d' - 30: 255, # '\x1e' - 31: 255, # '\x1f' - 32: 253, # ' ' - 33: 253, # '!' - 34: 253, # '"' - 35: 253, # '#' - 36: 253, # '$' - 37: 253, # '%' - 38: 253, # '&' - 39: 253, # "'" - 40: 253, # '(' - 41: 253, # ')' - 42: 253, # '*' - 43: 253, # '+' - 44: 253, # ',' - 45: 253, # '-' - 46: 253, # '.' - 47: 253, # '/' - 48: 252, # '0' - 49: 252, # '1' - 50: 252, # '2' - 51: 252, # '3' - 52: 252, # '4' - 53: 252, # '5' - 54: 252, # '6' - 55: 252, # '7' - 56: 252, # '8' - 57: 252, # '9' - 58: 253, # ':' - 59: 253, # ';' - 60: 253, # '<' - 61: 253, # '=' - 62: 253, # '>' - 63: 253, # '?' - 64: 253, # '@' - 65: 142, # 'A' - 66: 143, # 'B' - 67: 144, # 'C' - 68: 145, # 'D' - 69: 146, # 'E' - 70: 147, # 'F' - 71: 148, # 'G' - 72: 149, # 'H' - 73: 150, # 'I' - 74: 151, # 'J' - 75: 152, # 'K' - 76: 74, # 'L' - 77: 153, # 'M' - 78: 75, # 'N' - 79: 154, # 'O' - 80: 155, # 'P' - 81: 156, # 'Q' - 82: 157, # 'R' - 83: 158, # 'S' - 84: 159, # 'T' - 85: 160, # 'U' - 86: 161, # 'V' - 87: 162, # 'W' - 88: 163, # 'X' - 89: 164, # 'Y' - 90: 165, # 'Z' - 91: 253, # '[' - 92: 253, # '\\' - 93: 253, # ']' - 94: 253, # '^' - 95: 253, # '_' - 96: 253, # '`' - 97: 71, # 'a' - 98: 172, # 'b' - 99: 66, # 'c' - 100: 173, # 'd' - 101: 65, # 'e' - 102: 174, # 'f' - 103: 76, # 'g' - 104: 175, # 'h' - 105: 64, # 'i' - 106: 176, # 'j' - 107: 177, # 'k' - 108: 77, # 'l' - 109: 72, # 'm' - 110: 178, # 'n' - 111: 69, # 'o' - 112: 67, # 'p' - 113: 179, # 'q' - 114: 78, # 'r' - 115: 73, # 's' - 116: 180, # 't' - 117: 181, # 'u' - 118: 79, # 'v' - 119: 182, # 'w' - 120: 183, # 'x' - 121: 184, # 'y' - 122: 185, # 'z' - 123: 253, # '{' - 124: 253, # '|' - 125: 253, # '}' - 126: 253, # '~' - 127: 253, # '\x7f' - 128: 191, # 'ђ' - 129: 192, # 'Ђ' - 130: 193, # 'ѓ' - 131: 194, # 'Ѓ' - 132: 68, # 'ё' - 133: 195, # 'Ё' - 134: 196, # 'є' - 135: 197, # 'Є' - 136: 198, # 'ѕ' - 137: 199, # 'Ѕ' - 138: 200, # 'і' - 139: 201, # 'І' - 140: 202, # 'ї' - 141: 203, # 'Ї' - 142: 204, # 'ј' - 143: 205, # 'Ј' - 144: 206, # 'љ' - 145: 207, # 'Љ' - 146: 208, # 'њ' - 147: 209, # 'Њ' - 148: 210, # 'ћ' - 149: 211, # 'Ћ' - 150: 212, # 'ќ' - 151: 213, # 'Ќ' - 152: 214, # 'ў' - 153: 215, # 'Ў' - 154: 216, # 'џ' - 155: 217, # 'Џ' - 156: 27, # 'ю' - 157: 59, # 'Ю' - 158: 54, # 'ъ' - 159: 70, # 'Ъ' - 160: 3, # 'а' - 161: 37, # 'А' - 162: 21, # 'б' - 163: 44, # 'Б' - 164: 28, # 'ц' - 165: 58, # 'Ц' - 166: 13, # 'д' - 167: 41, # 'Д' - 168: 2, # 'е' - 169: 48, # 'Е' - 170: 39, # 'ф' - 171: 53, # 'Ф' - 172: 19, # 'г' - 173: 46, # 'Г' - 174: 218, # '«' - 175: 219, # '»' - 176: 220, # '░' - 177: 221, # '▒' - 178: 222, # '▓' - 179: 223, # '│' - 180: 224, # '┤' - 181: 26, # 'х' - 182: 55, # 'Х' - 183: 4, # 'и' - 184: 42, # 'И' - 185: 225, # '╣' - 186: 226, # '║' - 187: 227, # '╗' - 188: 228, # '╝' - 189: 23, # 'й' - 190: 60, # 'Й' - 191: 229, # '┐' - 192: 230, # '└' - 193: 231, # '┴' - 194: 232, # '┬' - 195: 233, # '├' - 196: 234, # '─' - 197: 235, # '┼' - 198: 11, # 'к' - 199: 36, # 'К' - 200: 236, # '╚' - 201: 237, # '╔' - 202: 238, # '╩' - 203: 239, # '╦' - 204: 240, # '╠' - 205: 241, # '═' - 206: 242, # '╬' - 207: 243, # '¤' - 208: 8, # 'л' - 209: 49, # 'Л' - 210: 12, # 'м' - 211: 38, # 'М' - 212: 5, # 'н' - 213: 31, # 'Н' - 214: 1, # 'о' - 215: 34, # 'О' - 216: 15, # 'п' - 217: 244, # '┘' - 218: 245, # '┌' - 219: 246, # '█' - 220: 247, # '▄' - 221: 35, # 'П' - 222: 16, # 'я' - 223: 248, # '▀' - 224: 43, # 'Я' - 225: 9, # 'р' - 226: 45, # 'Р' - 227: 7, # 'с' - 228: 32, # 'С' - 229: 6, # 'т' - 230: 40, # 'Т' - 231: 14, # 'у' - 232: 52, # 'У' - 233: 24, # 'ж' - 234: 56, # 'Ж' - 235: 10, # 'в' - 236: 33, # 'В' - 237: 17, # 'ь' - 238: 61, # 'Ь' - 239: 249, # '№' - 240: 250, # '\xad' - 241: 18, # 'ы' - 242: 62, # 'Ы' - 243: 20, # 'з' - 244: 51, # 'З' - 245: 25, # 'ш' - 246: 57, # 'Ш' - 247: 30, # 'э' - 248: 47, # 'Э' - 249: 29, # 'щ' - 250: 63, # 'Щ' - 251: 22, # 'ч' - 252: 50, # 'Ч' - 253: 251, # '§' - 254: 252, # '■' - 255: 255, # '\xa0' -} - -IBM855_RUSSIAN_MODEL = SingleByteCharSetModel( - charset_name="IBM855", - language="Russian", - char_to_order_map=IBM855_RUSSIAN_CHAR_TO_ORDER, - language_model=RUSSIAN_LANG_MODEL, - typical_positive_ratio=0.976601, - keep_ascii_letters=False, - alphabet="ЁАБВГДЕЖЗИЙКЛМНОПРСТУФХЦЧШЩЪЫЬЭЮЯабвгдежзийклмнопрстуфхцчшщъыьэюяё", -) - -KOI8_R_RUSSIAN_CHAR_TO_ORDER = { - 0: 255, # '\x00' - 1: 255, # '\x01' - 2: 255, # '\x02' - 3: 255, # '\x03' - 4: 255, # '\x04' - 5: 255, # '\x05' - 6: 255, # '\x06' - 7: 255, # '\x07' - 8: 255, # '\x08' - 9: 255, # '\t' - 10: 254, # '\n' - 11: 255, # '\x0b' - 12: 255, # '\x0c' - 13: 254, # '\r' - 14: 255, # '\x0e' - 15: 255, # '\x0f' - 16: 255, # '\x10' - 17: 255, # '\x11' - 18: 255, # '\x12' - 19: 255, # '\x13' - 20: 255, # '\x14' - 21: 255, # '\x15' - 22: 255, # '\x16' - 23: 255, # '\x17' - 24: 255, # '\x18' - 25: 255, # '\x19' - 26: 255, # '\x1a' - 27: 255, # '\x1b' - 28: 255, # '\x1c' - 29: 255, # '\x1d' - 30: 255, # '\x1e' - 31: 255, # '\x1f' - 32: 253, # ' ' - 33: 253, # '!' - 34: 253, # '"' - 35: 253, # '#' - 36: 253, # '$' - 37: 253, # '%' - 38: 253, # '&' - 39: 253, # "'" - 40: 253, # '(' - 41: 253, # ')' - 42: 253, # '*' - 43: 253, # '+' - 44: 253, # ',' - 45: 253, # '-' - 46: 253, # '.' - 47: 253, # '/' - 48: 252, # '0' - 49: 252, # '1' - 50: 252, # '2' - 51: 252, # '3' - 52: 252, # '4' - 53: 252, # '5' - 54: 252, # '6' - 55: 252, # '7' - 56: 252, # '8' - 57: 252, # '9' - 58: 253, # ':' - 59: 253, # ';' - 60: 253, # '<' - 61: 253, # '=' - 62: 253, # '>' - 63: 253, # '?' - 64: 253, # '@' - 65: 142, # 'A' - 66: 143, # 'B' - 67: 144, # 'C' - 68: 145, # 'D' - 69: 146, # 'E' - 70: 147, # 'F' - 71: 148, # 'G' - 72: 149, # 'H' - 73: 150, # 'I' - 74: 151, # 'J' - 75: 152, # 'K' - 76: 74, # 'L' - 77: 153, # 'M' - 78: 75, # 'N' - 79: 154, # 'O' - 80: 155, # 'P' - 81: 156, # 'Q' - 82: 157, # 'R' - 83: 158, # 'S' - 84: 159, # 'T' - 85: 160, # 'U' - 86: 161, # 'V' - 87: 162, # 'W' - 88: 163, # 'X' - 89: 164, # 'Y' - 90: 165, # 'Z' - 91: 253, # '[' - 92: 253, # '\\' - 93: 253, # ']' - 94: 253, # '^' - 95: 253, # '_' - 96: 253, # '`' - 97: 71, # 'a' - 98: 172, # 'b' - 99: 66, # 'c' - 100: 173, # 'd' - 101: 65, # 'e' - 102: 174, # 'f' - 103: 76, # 'g' - 104: 175, # 'h' - 105: 64, # 'i' - 106: 176, # 'j' - 107: 177, # 'k' - 108: 77, # 'l' - 109: 72, # 'm' - 110: 178, # 'n' - 111: 69, # 'o' - 112: 67, # 'p' - 113: 179, # 'q' - 114: 78, # 'r' - 115: 73, # 's' - 116: 180, # 't' - 117: 181, # 'u' - 118: 79, # 'v' - 119: 182, # 'w' - 120: 183, # 'x' - 121: 184, # 'y' - 122: 185, # 'z' - 123: 253, # '{' - 124: 253, # '|' - 125: 253, # '}' - 126: 253, # '~' - 127: 253, # '\x7f' - 128: 191, # '─' - 129: 192, # '│' - 130: 193, # '┌' - 131: 194, # '┐' - 132: 195, # '└' - 133: 196, # '┘' - 134: 197, # '├' - 135: 198, # '┤' - 136: 199, # '┬' - 137: 200, # '┴' - 138: 201, # '┼' - 139: 202, # '▀' - 140: 203, # '▄' - 141: 204, # '█' - 142: 205, # '▌' - 143: 206, # '▐' - 144: 207, # '░' - 145: 208, # '▒' - 146: 209, # '▓' - 147: 210, # '⌠' - 148: 211, # '■' - 149: 212, # '∙' - 150: 213, # '√' - 151: 214, # '≈' - 152: 215, # '≤' - 153: 216, # '≥' - 154: 217, # '\xa0' - 155: 218, # '⌡' - 156: 219, # '°' - 157: 220, # '²' - 158: 221, # '·' - 159: 222, # '÷' - 160: 223, # '═' - 161: 224, # '║' - 162: 225, # '╒' - 163: 68, # 'ё' - 164: 226, # '╓' - 165: 227, # '╔' - 166: 228, # '╕' - 167: 229, # '╖' - 168: 230, # '╗' - 169: 231, # '╘' - 170: 232, # '╙' - 171: 233, # '╚' - 172: 234, # '╛' - 173: 235, # '╜' - 174: 236, # '╝' - 175: 237, # '╞' - 176: 238, # '╟' - 177: 239, # '╠' - 178: 240, # '╡' - 179: 241, # 'Ё' - 180: 242, # '╢' - 181: 243, # '╣' - 182: 244, # '╤' - 183: 245, # '╥' - 184: 246, # '╦' - 185: 247, # '╧' - 186: 248, # '╨' - 187: 249, # '╩' - 188: 250, # '╪' - 189: 251, # '╫' - 190: 252, # '╬' - 191: 253, # '©' - 192: 27, # 'ю' - 193: 3, # 'а' - 194: 21, # 'б' - 195: 28, # 'ц' - 196: 13, # 'д' - 197: 2, # 'е' - 198: 39, # 'ф' - 199: 19, # 'г' - 200: 26, # 'х' - 201: 4, # 'и' - 202: 23, # 'й' - 203: 11, # 'к' - 204: 8, # 'л' - 205: 12, # 'м' - 206: 5, # 'н' - 207: 1, # 'о' - 208: 15, # 'п' - 209: 16, # 'я' - 210: 9, # 'р' - 211: 7, # 'с' - 212: 6, # 'т' - 213: 14, # 'у' - 214: 24, # 'ж' - 215: 10, # 'в' - 216: 17, # 'ь' - 217: 18, # 'ы' - 218: 20, # 'з' - 219: 25, # 'ш' - 220: 30, # 'э' - 221: 29, # 'щ' - 222: 22, # 'ч' - 223: 54, # 'ъ' - 224: 59, # 'Ю' - 225: 37, # 'А' - 226: 44, # 'Б' - 227: 58, # 'Ц' - 228: 41, # 'Д' - 229: 48, # 'Е' - 230: 53, # 'Ф' - 231: 46, # 'Г' - 232: 55, # 'Х' - 233: 42, # 'И' - 234: 60, # 'Й' - 235: 36, # 'К' - 236: 49, # 'Л' - 237: 38, # 'М' - 238: 31, # 'Н' - 239: 34, # 'О' - 240: 35, # 'П' - 241: 43, # 'Я' - 242: 45, # 'Р' - 243: 32, # 'С' - 244: 40, # 'Т' - 245: 52, # 'У' - 246: 56, # 'Ж' - 247: 33, # 'В' - 248: 61, # 'Ь' - 249: 62, # 'Ы' - 250: 51, # 'З' - 251: 57, # 'Ш' - 252: 47, # 'Э' - 253: 63, # 'Щ' - 254: 50, # 'Ч' - 255: 70, # 'Ъ' -} - -KOI8_R_RUSSIAN_MODEL = SingleByteCharSetModel( - charset_name="KOI8-R", - language="Russian", - char_to_order_map=KOI8_R_RUSSIAN_CHAR_TO_ORDER, - language_model=RUSSIAN_LANG_MODEL, - typical_positive_ratio=0.976601, - keep_ascii_letters=False, - alphabet="ЁАБВГДЕЖЗИЙКЛМНОПРСТУФХЦЧШЩЪЫЬЭЮЯабвгдежзийклмнопрстуфхцчшщъыьэюяё", -) - -MACCYRILLIC_RUSSIAN_CHAR_TO_ORDER = { - 0: 255, # '\x00' - 1: 255, # '\x01' - 2: 255, # '\x02' - 3: 255, # '\x03' - 4: 255, # '\x04' - 5: 255, # '\x05' - 6: 255, # '\x06' - 7: 255, # '\x07' - 8: 255, # '\x08' - 9: 255, # '\t' - 10: 254, # '\n' - 11: 255, # '\x0b' - 12: 255, # '\x0c' - 13: 254, # '\r' - 14: 255, # '\x0e' - 15: 255, # '\x0f' - 16: 255, # '\x10' - 17: 255, # '\x11' - 18: 255, # '\x12' - 19: 255, # '\x13' - 20: 255, # '\x14' - 21: 255, # '\x15' - 22: 255, # '\x16' - 23: 255, # '\x17' - 24: 255, # '\x18' - 25: 255, # '\x19' - 26: 255, # '\x1a' - 27: 255, # '\x1b' - 28: 255, # '\x1c' - 29: 255, # '\x1d' - 30: 255, # '\x1e' - 31: 255, # '\x1f' - 32: 253, # ' ' - 33: 253, # '!' - 34: 253, # '"' - 35: 253, # '#' - 36: 253, # '$' - 37: 253, # '%' - 38: 253, # '&' - 39: 253, # "'" - 40: 253, # '(' - 41: 253, # ')' - 42: 253, # '*' - 43: 253, # '+' - 44: 253, # ',' - 45: 253, # '-' - 46: 253, # '.' - 47: 253, # '/' - 48: 252, # '0' - 49: 252, # '1' - 50: 252, # '2' - 51: 252, # '3' - 52: 252, # '4' - 53: 252, # '5' - 54: 252, # '6' - 55: 252, # '7' - 56: 252, # '8' - 57: 252, # '9' - 58: 253, # ':' - 59: 253, # ';' - 60: 253, # '<' - 61: 253, # '=' - 62: 253, # '>' - 63: 253, # '?' - 64: 253, # '@' - 65: 142, # 'A' - 66: 143, # 'B' - 67: 144, # 'C' - 68: 145, # 'D' - 69: 146, # 'E' - 70: 147, # 'F' - 71: 148, # 'G' - 72: 149, # 'H' - 73: 150, # 'I' - 74: 151, # 'J' - 75: 152, # 'K' - 76: 74, # 'L' - 77: 153, # 'M' - 78: 75, # 'N' - 79: 154, # 'O' - 80: 155, # 'P' - 81: 156, # 'Q' - 82: 157, # 'R' - 83: 158, # 'S' - 84: 159, # 'T' - 85: 160, # 'U' - 86: 161, # 'V' - 87: 162, # 'W' - 88: 163, # 'X' - 89: 164, # 'Y' - 90: 165, # 'Z' - 91: 253, # '[' - 92: 253, # '\\' - 93: 253, # ']' - 94: 253, # '^' - 95: 253, # '_' - 96: 253, # '`' - 97: 71, # 'a' - 98: 172, # 'b' - 99: 66, # 'c' - 100: 173, # 'd' - 101: 65, # 'e' - 102: 174, # 'f' - 103: 76, # 'g' - 104: 175, # 'h' - 105: 64, # 'i' - 106: 176, # 'j' - 107: 177, # 'k' - 108: 77, # 'l' - 109: 72, # 'm' - 110: 178, # 'n' - 111: 69, # 'o' - 112: 67, # 'p' - 113: 179, # 'q' - 114: 78, # 'r' - 115: 73, # 's' - 116: 180, # 't' - 117: 181, # 'u' - 118: 79, # 'v' - 119: 182, # 'w' - 120: 183, # 'x' - 121: 184, # 'y' - 122: 185, # 'z' - 123: 253, # '{' - 124: 253, # '|' - 125: 253, # '}' - 126: 253, # '~' - 127: 253, # '\x7f' - 128: 37, # 'А' - 129: 44, # 'Б' - 130: 33, # 'В' - 131: 46, # 'Г' - 132: 41, # 'Д' - 133: 48, # 'Е' - 134: 56, # 'Ж' - 135: 51, # 'З' - 136: 42, # 'И' - 137: 60, # 'Й' - 138: 36, # 'К' - 139: 49, # 'Л' - 140: 38, # 'М' - 141: 31, # 'Н' - 142: 34, # 'О' - 143: 35, # 'П' - 144: 45, # 'Р' - 145: 32, # 'С' - 146: 40, # 'Т' - 147: 52, # 'У' - 148: 53, # 'Ф' - 149: 55, # 'Х' - 150: 58, # 'Ц' - 151: 50, # 'Ч' - 152: 57, # 'Ш' - 153: 63, # 'Щ' - 154: 70, # 'Ъ' - 155: 62, # 'Ы' - 156: 61, # 'Ь' - 157: 47, # 'Э' - 158: 59, # 'Ю' - 159: 43, # 'Я' - 160: 191, # '†' - 161: 192, # '°' - 162: 193, # 'Ґ' - 163: 194, # '£' - 164: 195, # '§' - 165: 196, # '•' - 166: 197, # '¶' - 167: 198, # 'І' - 168: 199, # '®' - 169: 200, # '©' - 170: 201, # '™' - 171: 202, # 'Ђ' - 172: 203, # 'ђ' - 173: 204, # '≠' - 174: 205, # 'Ѓ' - 175: 206, # 'ѓ' - 176: 207, # '∞' - 177: 208, # '±' - 178: 209, # '≤' - 179: 210, # '≥' - 180: 211, # 'і' - 181: 212, # 'µ' - 182: 213, # 'ґ' - 183: 214, # 'Ј' - 184: 215, # 'Є' - 185: 216, # 'є' - 186: 217, # 'Ї' - 187: 218, # 'ї' - 188: 219, # 'Љ' - 189: 220, # 'љ' - 190: 221, # 'Њ' - 191: 222, # 'њ' - 192: 223, # 'ј' - 193: 224, # 'Ѕ' - 194: 225, # '¬' - 195: 226, # '√' - 196: 227, # 'ƒ' - 197: 228, # '≈' - 198: 229, # '∆' - 199: 230, # '«' - 200: 231, # '»' - 201: 232, # '…' - 202: 233, # '\xa0' - 203: 234, # 'Ћ' - 204: 235, # 'ћ' - 205: 236, # 'Ќ' - 206: 237, # 'ќ' - 207: 238, # 'ѕ' - 208: 239, # '–' - 209: 240, # '—' - 210: 241, # '“' - 211: 242, # '”' - 212: 243, # '‘' - 213: 244, # '’' - 214: 245, # '÷' - 215: 246, # '„' - 216: 247, # 'Ў' - 217: 248, # 'ў' - 218: 249, # 'Џ' - 219: 250, # 'џ' - 220: 251, # '№' - 221: 252, # 'Ё' - 222: 68, # 'ё' - 223: 16, # 'я' - 224: 3, # 'а' - 225: 21, # 'б' - 226: 10, # 'в' - 227: 19, # 'г' - 228: 13, # 'д' - 229: 2, # 'е' - 230: 24, # 'ж' - 231: 20, # 'з' - 232: 4, # 'и' - 233: 23, # 'й' - 234: 11, # 'к' - 235: 8, # 'л' - 236: 12, # 'м' - 237: 5, # 'н' - 238: 1, # 'о' - 239: 15, # 'п' - 240: 9, # 'р' - 241: 7, # 'с' - 242: 6, # 'т' - 243: 14, # 'у' - 244: 39, # 'ф' - 245: 26, # 'х' - 246: 28, # 'ц' - 247: 22, # 'ч' - 248: 25, # 'ш' - 249: 29, # 'щ' - 250: 54, # 'ъ' - 251: 18, # 'ы' - 252: 17, # 'ь' - 253: 30, # 'э' - 254: 27, # 'ю' - 255: 255, # '€' -} - -MACCYRILLIC_RUSSIAN_MODEL = SingleByteCharSetModel( - charset_name="MacCyrillic", - language="Russian", - char_to_order_map=MACCYRILLIC_RUSSIAN_CHAR_TO_ORDER, - language_model=RUSSIAN_LANG_MODEL, - typical_positive_ratio=0.976601, - keep_ascii_letters=False, - alphabet="ЁАБВГДЕЖЗИЙКЛМНОПРСТУФХЦЧШЩЪЫЬЭЮЯабвгдежзийклмнопрстуфхцчшщъыьэюяё", -) - -ISO_8859_5_RUSSIAN_CHAR_TO_ORDER = { - 0: 255, # '\x00' - 1: 255, # '\x01' - 2: 255, # '\x02' - 3: 255, # '\x03' - 4: 255, # '\x04' - 5: 255, # '\x05' - 6: 255, # '\x06' - 7: 255, # '\x07' - 8: 255, # '\x08' - 9: 255, # '\t' - 10: 254, # '\n' - 11: 255, # '\x0b' - 12: 255, # '\x0c' - 13: 254, # '\r' - 14: 255, # '\x0e' - 15: 255, # '\x0f' - 16: 255, # '\x10' - 17: 255, # '\x11' - 18: 255, # '\x12' - 19: 255, # '\x13' - 20: 255, # '\x14' - 21: 255, # '\x15' - 22: 255, # '\x16' - 23: 255, # '\x17' - 24: 255, # '\x18' - 25: 255, # '\x19' - 26: 255, # '\x1a' - 27: 255, # '\x1b' - 28: 255, # '\x1c' - 29: 255, # '\x1d' - 30: 255, # '\x1e' - 31: 255, # '\x1f' - 32: 253, # ' ' - 33: 253, # '!' - 34: 253, # '"' - 35: 253, # '#' - 36: 253, # '$' - 37: 253, # '%' - 38: 253, # '&' - 39: 253, # "'" - 40: 253, # '(' - 41: 253, # ')' - 42: 253, # '*' - 43: 253, # '+' - 44: 253, # ',' - 45: 253, # '-' - 46: 253, # '.' - 47: 253, # '/' - 48: 252, # '0' - 49: 252, # '1' - 50: 252, # '2' - 51: 252, # '3' - 52: 252, # '4' - 53: 252, # '5' - 54: 252, # '6' - 55: 252, # '7' - 56: 252, # '8' - 57: 252, # '9' - 58: 253, # ':' - 59: 253, # ';' - 60: 253, # '<' - 61: 253, # '=' - 62: 253, # '>' - 63: 253, # '?' - 64: 253, # '@' - 65: 142, # 'A' - 66: 143, # 'B' - 67: 144, # 'C' - 68: 145, # 'D' - 69: 146, # 'E' - 70: 147, # 'F' - 71: 148, # 'G' - 72: 149, # 'H' - 73: 150, # 'I' - 74: 151, # 'J' - 75: 152, # 'K' - 76: 74, # 'L' - 77: 153, # 'M' - 78: 75, # 'N' - 79: 154, # 'O' - 80: 155, # 'P' - 81: 156, # 'Q' - 82: 157, # 'R' - 83: 158, # 'S' - 84: 159, # 'T' - 85: 160, # 'U' - 86: 161, # 'V' - 87: 162, # 'W' - 88: 163, # 'X' - 89: 164, # 'Y' - 90: 165, # 'Z' - 91: 253, # '[' - 92: 253, # '\\' - 93: 253, # ']' - 94: 253, # '^' - 95: 253, # '_' - 96: 253, # '`' - 97: 71, # 'a' - 98: 172, # 'b' - 99: 66, # 'c' - 100: 173, # 'd' - 101: 65, # 'e' - 102: 174, # 'f' - 103: 76, # 'g' - 104: 175, # 'h' - 105: 64, # 'i' - 106: 176, # 'j' - 107: 177, # 'k' - 108: 77, # 'l' - 109: 72, # 'm' - 110: 178, # 'n' - 111: 69, # 'o' - 112: 67, # 'p' - 113: 179, # 'q' - 114: 78, # 'r' - 115: 73, # 's' - 116: 180, # 't' - 117: 181, # 'u' - 118: 79, # 'v' - 119: 182, # 'w' - 120: 183, # 'x' - 121: 184, # 'y' - 122: 185, # 'z' - 123: 253, # '{' - 124: 253, # '|' - 125: 253, # '}' - 126: 253, # '~' - 127: 253, # '\x7f' - 128: 191, # '\x80' - 129: 192, # '\x81' - 130: 193, # '\x82' - 131: 194, # '\x83' - 132: 195, # '\x84' - 133: 196, # '\x85' - 134: 197, # '\x86' - 135: 198, # '\x87' - 136: 199, # '\x88' - 137: 200, # '\x89' - 138: 201, # '\x8a' - 139: 202, # '\x8b' - 140: 203, # '\x8c' - 141: 204, # '\x8d' - 142: 205, # '\x8e' - 143: 206, # '\x8f' - 144: 207, # '\x90' - 145: 208, # '\x91' - 146: 209, # '\x92' - 147: 210, # '\x93' - 148: 211, # '\x94' - 149: 212, # '\x95' - 150: 213, # '\x96' - 151: 214, # '\x97' - 152: 215, # '\x98' - 153: 216, # '\x99' - 154: 217, # '\x9a' - 155: 218, # '\x9b' - 156: 219, # '\x9c' - 157: 220, # '\x9d' - 158: 221, # '\x9e' - 159: 222, # '\x9f' - 160: 223, # '\xa0' - 161: 224, # 'Ё' - 162: 225, # 'Ђ' - 163: 226, # 'Ѓ' - 164: 227, # 'Є' - 165: 228, # 'Ѕ' - 166: 229, # 'І' - 167: 230, # 'Ї' - 168: 231, # 'Ј' - 169: 232, # 'Љ' - 170: 233, # 'Њ' - 171: 234, # 'Ћ' - 172: 235, # 'Ќ' - 173: 236, # '\xad' - 174: 237, # 'Ў' - 175: 238, # 'Џ' - 176: 37, # 'А' - 177: 44, # 'Б' - 178: 33, # 'В' - 179: 46, # 'Г' - 180: 41, # 'Д' - 181: 48, # 'Е' - 182: 56, # 'Ж' - 183: 51, # 'З' - 184: 42, # 'И' - 185: 60, # 'Й' - 186: 36, # 'К' - 187: 49, # 'Л' - 188: 38, # 'М' - 189: 31, # 'Н' - 190: 34, # 'О' - 191: 35, # 'П' - 192: 45, # 'Р' - 193: 32, # 'С' - 194: 40, # 'Т' - 195: 52, # 'У' - 196: 53, # 'Ф' - 197: 55, # 'Х' - 198: 58, # 'Ц' - 199: 50, # 'Ч' - 200: 57, # 'Ш' - 201: 63, # 'Щ' - 202: 70, # 'Ъ' - 203: 62, # 'Ы' - 204: 61, # 'Ь' - 205: 47, # 'Э' - 206: 59, # 'Ю' - 207: 43, # 'Я' - 208: 3, # 'а' - 209: 21, # 'б' - 210: 10, # 'в' - 211: 19, # 'г' - 212: 13, # 'д' - 213: 2, # 'е' - 214: 24, # 'ж' - 215: 20, # 'з' - 216: 4, # 'и' - 217: 23, # 'й' - 218: 11, # 'к' - 219: 8, # 'л' - 220: 12, # 'м' - 221: 5, # 'н' - 222: 1, # 'о' - 223: 15, # 'п' - 224: 9, # 'р' - 225: 7, # 'с' - 226: 6, # 'т' - 227: 14, # 'у' - 228: 39, # 'ф' - 229: 26, # 'х' - 230: 28, # 'ц' - 231: 22, # 'ч' - 232: 25, # 'ш' - 233: 29, # 'щ' - 234: 54, # 'ъ' - 235: 18, # 'ы' - 236: 17, # 'ь' - 237: 30, # 'э' - 238: 27, # 'ю' - 239: 16, # 'я' - 240: 239, # '№' - 241: 68, # 'ё' - 242: 240, # 'ђ' - 243: 241, # 'ѓ' - 244: 242, # 'є' - 245: 243, # 'ѕ' - 246: 244, # 'і' - 247: 245, # 'ї' - 248: 246, # 'ј' - 249: 247, # 'љ' - 250: 248, # 'њ' - 251: 249, # 'ћ' - 252: 250, # 'ќ' - 253: 251, # '§' - 254: 252, # 'ў' - 255: 255, # 'џ' -} - -ISO_8859_5_RUSSIAN_MODEL = SingleByteCharSetModel( - charset_name="ISO-8859-5", - language="Russian", - char_to_order_map=ISO_8859_5_RUSSIAN_CHAR_TO_ORDER, - language_model=RUSSIAN_LANG_MODEL, - typical_positive_ratio=0.976601, - keep_ascii_letters=False, - alphabet="ЁАБВГДЕЖЗИЙКЛМНОПРСТУФХЦЧШЩЪЫЬЭЮЯабвгдежзийклмнопрстуфхцчшщъыьэюяё", -) diff --git a/spaces/pknez/face-swap-docker/roop/processors/frame/__init__.py b/spaces/pknez/face-swap-docker/roop/processors/frame/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/fastapi/dependencies/__init__.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/fastapi/dependencies/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/cli/cli.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/cli/cli.py deleted file mode 100644 index d3eaa8e0b7190139a6a48f16b0f5daeef16dadf1..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/cli/cli.py +++ /dev/null @@ -1,39 +0,0 @@ -import sys - -import typer -from gradio_client.cli import deploy_discord # type: ignore -from rich.console import Console - -from .commands import custom_component, deploy, print_environment_info, reload - -app = typer.Typer() -app.command("environment", help="Print Gradio environment information.")( - print_environment_info -) -app.command( - "deploy", - help="Deploy a Gradio app to Spaces. Must be called within the directory you would like to deploy.", -)(deploy) -app.command("deploy-discord", help="Deploy a Gradio app to Discord.")( - deploy_discord.main -) - - -def cli(): - args = sys.argv[1:] - if len(args) == 0: - raise ValueError("No file specified.") - if args[0] in {"deploy", "environment", "deploy-discord"}: - app() - elif args[0] in {"cc", "component"}: - sys.argv = sys.argv[1:] - custom_component() - elif args[0] in {"build", "dev", "create", "show", "publish", "install"}: - try: - error = f"gradio {args[0]} is not a valid command. Did you mean `gradio cc {args[0]}` or `gradio component {args[0]}`?." - raise ValueError(error) - except ValueError: - console = Console() - console.print_exception() - else: - typer.run(reload) diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/templates/cdn/assets/Example-d55a7a8d.js b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/templates/cdn/assets/Example-d55a7a8d.js deleted file mode 100644 index 37eb47ed880979a3b61e304e08edabafaf9b4f3c..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/templates/cdn/assets/Example-d55a7a8d.js +++ /dev/null @@ -1,2 +0,0 @@ -const{SvelteComponent:c,append:u,attr:d,detach:g,element:o,init:v,insert:r,noop:f,safe_not_equal:y,set_data:m,text:b,toggle_class:i}=window.__gradio__svelte__internal;function h(a){let e,n;return{c(){e=o("div"),n=b(a[0]),d(e,"class","svelte-1ayixqk"),i(e,"table",a[1]==="table"),i(e,"gallery",a[1]==="gallery"),i(e,"selected",a[2])},m(t,l){r(t,e,l),u(e,n)},p(t,[l]){l&1&&m(n,t[0]),l&2&&i(e,"table",t[1]==="table"),l&2&&i(e,"gallery",t[1]==="gallery"),l&4&&i(e,"selected",t[2])},i:f,o:f,d(t){t&&g(e)}}}function q(a,e,n){let{value:t}=e,{type:l}=e,{selected:_=!1}=e;return a.$$set=s=>{"value"in s&&n(0,t=s.value),"type"in s&&n(1,l=s.type),"selected"in s&&n(2,_=s.selected)},[t,l,_]}class w extends c{constructor(e){super(),v(this,e,q,h,y,{value:0,type:1,selected:2})}}export{w as default}; -//# sourceMappingURL=Example-d55a7a8d.js.map diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/templates/frontend/assets/file-url-595a5096.js b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/templates/frontend/assets/file-url-595a5096.js deleted file mode 100644 index 748a333cf2aa1674e791933fa3c4c08517a5c72a..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/templates/frontend/assets/file-url-595a5096.js +++ /dev/null @@ -1,2 +0,0 @@ -import{g as s}from"./Index-c74a8b7c.js";function h(t){return t.host===window.location.host||t.host==="localhost:7860"||t.host==="127.0.0.1:7860"||t.host==="lite.local"}async function u(t){if(t==null)return t;const o=new URL(t);if(!h(o)||o.protocol!=="http:"&&o.protocol!=="https:")return t;const r=s();if(r==null)return t;const n=o.pathname;return r.httpRequest({method:"GET",path:n,headers:{},query_string:""}).then(e=>{if(e.status!==200)throw new Error(`Failed to get file ${n} from the Wasm worker.`);const l=new Blob([e.body],{type:e.headers["Content-Type"]});return URL.createObjectURL(l)})}export{u as r}; -//# sourceMappingURL=file-url-595a5096.js.map diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/h11/tests/__init__.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/h11/tests/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/ma/tests/test_deprecations.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/ma/tests/test_deprecations.py deleted file mode 100644 index 40c8418f5c1809130672dca46e8c43469692da09..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/ma/tests/test_deprecations.py +++ /dev/null @@ -1,84 +0,0 @@ -"""Test deprecation and future warnings. - -""" -import pytest -import numpy as np -from numpy.testing import assert_warns -from numpy.ma.testutils import assert_equal -from numpy.ma.core import MaskedArrayFutureWarning -import io -import textwrap - -class TestArgsort: - """ gh-8701 """ - def _test_base(self, argsort, cls): - arr_0d = np.array(1).view(cls) - argsort(arr_0d) - - arr_1d = np.array([1, 2, 3]).view(cls) - argsort(arr_1d) - - # argsort has a bad default for >1d arrays - arr_2d = np.array([[1, 2], [3, 4]]).view(cls) - result = assert_warns( - np.ma.core.MaskedArrayFutureWarning, argsort, arr_2d) - assert_equal(result, argsort(arr_2d, axis=None)) - - # should be no warnings for explicitly specifying it - argsort(arr_2d, axis=None) - argsort(arr_2d, axis=-1) - - def test_function_ndarray(self): - return self._test_base(np.ma.argsort, np.ndarray) - - def test_function_maskedarray(self): - return self._test_base(np.ma.argsort, np.ma.MaskedArray) - - def test_method(self): - return self._test_base(np.ma.MaskedArray.argsort, np.ma.MaskedArray) - - -class TestMinimumMaximum: - - def test_axis_default(self): - # NumPy 1.13, 2017-05-06 - - data1d = np.ma.arange(6) - data2d = data1d.reshape(2, 3) - - ma_min = np.ma.minimum.reduce - ma_max = np.ma.maximum.reduce - - # check that the default axis is still None, but warns on 2d arrays - result = assert_warns(MaskedArrayFutureWarning, ma_max, data2d) - assert_equal(result, ma_max(data2d, axis=None)) - - result = assert_warns(MaskedArrayFutureWarning, ma_min, data2d) - assert_equal(result, ma_min(data2d, axis=None)) - - # no warnings on 1d, as both new and old defaults are equivalent - result = ma_min(data1d) - assert_equal(result, ma_min(data1d, axis=None)) - assert_equal(result, ma_min(data1d, axis=0)) - - result = ma_max(data1d) - assert_equal(result, ma_max(data1d, axis=None)) - assert_equal(result, ma_max(data1d, axis=0)) - - -class TestFromtextfile: - def test_fromtextfile_delimitor(self): - # NumPy 1.22.0, 2021-09-23 - - textfile = io.StringIO(textwrap.dedent( - """ - A,B,C,D - 'string 1';1;1.0;'mixed column' - 'string 2';2;2.0; - 'string 3';3;3.0;123 - 'string 4';4;4.0;3.14 - """ - )) - - with pytest.warns(DeprecationWarning): - result = np.ma.mrecords.fromtextfile(textfile, delimitor=';') diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/_version.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/_version.py deleted file mode 100644 index 5d610b5e1ea7eb473922a6ac5094e0f67847568f..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/_version.py +++ /dev/null @@ -1,692 +0,0 @@ -# This file helps to compute a version number in source trees obtained from -# git-archive tarball (such as those provided by githubs download-from-tag -# feature). Distribution tarballs (built by setup.py sdist) and build -# directories (produced by setup.py build) will contain a much shorter file -# that just contains the computed version number. - -# This file is released into the public domain. -# Generated by versioneer-0.28 -# https://github.com/python-versioneer/python-versioneer - -"""Git implementation of _version.py.""" - -import errno -import functools -import os -import re -import subprocess -import sys -from typing import Callable - - -def get_keywords(): - """Get the keywords needed to look up the version information.""" - # these strings will be replaced by git during git-archive. - # setup.py/versioneer.py will grep for the variable names, so they must - # each be defined on a line of their own. _version.py will just call - # get_keywords(). - git_refnames = "$Format:%d$" - git_full = "$Format:%H$" - git_date = "$Format:%ci$" - keywords = {"refnames": git_refnames, "full": git_full, "date": git_date} - return keywords - - -class VersioneerConfig: - """Container for Versioneer configuration parameters.""" - - -def get_config(): - """Create, populate and return the VersioneerConfig() object.""" - # these strings are filled in when 'setup.py versioneer' creates - # _version.py - cfg = VersioneerConfig() - cfg.VCS = "git" - cfg.style = "pep440" - cfg.tag_prefix = "v" - cfg.parentdir_prefix = "pandas-" - cfg.versionfile_source = "pandas/_version.py" - cfg.verbose = False - return cfg - - -class NotThisMethod(Exception): - """Exception raised if a method is not valid for the current scenario.""" - - -LONG_VERSION_PY: dict[str, str] = {} -HANDLERS: dict[str, dict[str, Callable]] = {} - - -def register_vcs_handler(vcs, method): # decorator - """Create decorator to mark a method as the handler of a VCS.""" - - def decorate(f): - """Store f in HANDLERS[vcs][method].""" - if vcs not in HANDLERS: - HANDLERS[vcs] = {} - HANDLERS[vcs][method] = f - return f - - return decorate - - -def run_command(commands, args, cwd=None, verbose=False, hide_stderr=False, env=None): - """Call the given command(s).""" - assert isinstance(commands, list) - process = None - - popen_kwargs = {} - if sys.platform == "win32": - # This hides the console window if pythonw.exe is used - startupinfo = subprocess.STARTUPINFO() - startupinfo.dwFlags |= subprocess.STARTF_USESHOWWINDOW - popen_kwargs["startupinfo"] = startupinfo - - for command in commands: - dispcmd = str([command] + args) - try: - # remember shell=False, so use git.cmd on windows, not just git - process = subprocess.Popen( - [command] + args, - cwd=cwd, - env=env, - stdout=subprocess.PIPE, - stderr=(subprocess.PIPE if hide_stderr else None), - **popen_kwargs, - ) - break - except OSError: - e = sys.exc_info()[1] - if e.errno == errno.ENOENT: - continue - if verbose: - print(f"unable to run {dispcmd}") - print(e) - return None, None - else: - if verbose: - print(f"unable to find command, tried {commands}") - return None, None - stdout = process.communicate()[0].strip().decode() - if process.returncode != 0: - if verbose: - print(f"unable to run {dispcmd} (error)") - print(f"stdout was {stdout}") - return None, process.returncode - return stdout, process.returncode - - -def versions_from_parentdir(parentdir_prefix, root, verbose): - """Try to determine the version from the parent directory name. - - Source tarballs conventionally unpack into a directory that includes both - the project name and a version string. We will also support searching up - two directory levels for an appropriately named parent directory - """ - rootdirs = [] - - for _ in range(3): - dirname = os.path.basename(root) - if dirname.startswith(parentdir_prefix): - return { - "version": dirname[len(parentdir_prefix) :], - "full-revisionid": None, - "dirty": False, - "error": None, - "date": None, - } - rootdirs.append(root) - root = os.path.dirname(root) # up a level - - if verbose: - print( - f"Tried directories {str(rootdirs)} \ - but none started with prefix {parentdir_prefix}" - ) - raise NotThisMethod("rootdir doesn't start with parentdir_prefix") - - -@register_vcs_handler("git", "get_keywords") -def git_get_keywords(versionfile_abs): - """Extract version information from the given file.""" - # the code embedded in _version.py can just fetch the value of these - # keywords. When used from setup.py, we don't want to import _version.py, - # so we do it with a regexp instead. This function is not used from - # _version.py. - keywords = {} - try: - with open(versionfile_abs, encoding="utf-8") as fobj: - for line in fobj: - if line.strip().startswith("git_refnames ="): - mo = re.search(r'=\s*"(.*)"', line) - if mo: - keywords["refnames"] = mo.group(1) - if line.strip().startswith("git_full ="): - mo = re.search(r'=\s*"(.*)"', line) - if mo: - keywords["full"] = mo.group(1) - if line.strip().startswith("git_date ="): - mo = re.search(r'=\s*"(.*)"', line) - if mo: - keywords["date"] = mo.group(1) - except OSError: - pass - return keywords - - -@register_vcs_handler("git", "keywords") -def git_versions_from_keywords(keywords, tag_prefix, verbose): - """Get version information from git keywords.""" - if "refnames" not in keywords: - raise NotThisMethod("Short version file found") - date = keywords.get("date") - if date is not None: - # Use only the last line. Previous lines may contain GPG signature - # information. - date = date.splitlines()[-1] - - # git-2.2.0 added "%cI", which expands to an ISO-8601 -compliant - # datestamp. However we prefer "%ci" (which expands to an "ISO-8601 - # -like" string, which we must then edit to make compliant), because - # it's been around since git-1.5.3, and it's too difficult to - # discover which version we're using, or to work around using an - # older one. - date = date.strip().replace(" ", "T", 1).replace(" ", "", 1) - refnames = keywords["refnames"].strip() - if refnames.startswith("$Format"): - if verbose: - print("keywords are unexpanded, not using") - raise NotThisMethod("unexpanded keywords, not a git-archive tarball") - refs = {r.strip() for r in refnames.strip("()").split(",")} - # starting in git-1.8.3, tags are listed as "tag: foo-1.0" instead of - # just "foo-1.0". If we see a "tag: " prefix, prefer those. - TAG = "tag: " - tags = {r[len(TAG) :] for r in refs if r.startswith(TAG)} - if not tags: - # Either we're using git < 1.8.3, or there really are no tags. We use - # a heuristic: assume all version tags have a digit. The old git %d - # expansion behaves like git log --decorate=short and strips out the - # refs/heads/ and refs/tags/ prefixes that would let us distinguish - # between branches and tags. By ignoring refnames without digits, we - # filter out many common branch names like "release" and - # "stabilization", as well as "HEAD" and "master". - tags = {r for r in refs if re.search(r"\d", r)} - if verbose: - print(f"discarding '{','.join(refs - tags)}', no digits") - if verbose: - print(f"likely tags: {','.join(sorted(tags))}") - for ref in sorted(tags): - # sorting will prefer e.g. "2.0" over "2.0rc1" - if ref.startswith(tag_prefix): - r = ref[len(tag_prefix) :] - # Filter out refs that exactly match prefix or that don't start - # with a number once the prefix is stripped (mostly a concern - # when prefix is '') - if not re.match(r"\d", r): - continue - if verbose: - print(f"picking {r}") - return { - "version": r, - "full-revisionid": keywords["full"].strip(), - "dirty": False, - "error": None, - "date": date, - } - # no suitable tags, so version is "0+unknown", but full hex is still there - if verbose: - print("no suitable tags, using unknown + full revision id") - return { - "version": "0+unknown", - "full-revisionid": keywords["full"].strip(), - "dirty": False, - "error": "no suitable tags", - "date": None, - } - - -@register_vcs_handler("git", "pieces_from_vcs") -def git_pieces_from_vcs(tag_prefix, root, verbose, runner=run_command): - """Get version from 'git describe' in the root of the source tree. - - This only gets called if the git-archive 'subst' keywords were *not* - expanded, and _version.py hasn't already been rewritten with a short - version string, meaning we're inside a checked out source tree. - """ - GITS = ["git"] - if sys.platform == "win32": - GITS = ["git.cmd", "git.exe"] - - # GIT_DIR can interfere with correct operation of Versioneer. - # It may be intended to be passed to the Versioneer-versioned project, - # but that should not change where we get our version from. - env = os.environ.copy() - env.pop("GIT_DIR", None) - runner = functools.partial(runner, env=env) - - _, rc = runner(GITS, ["rev-parse", "--git-dir"], cwd=root, hide_stderr=not verbose) - if rc != 0: - if verbose: - print(f"Directory {root} not under git control") - raise NotThisMethod("'git rev-parse --git-dir' returned error") - - # if there is a tag matching tag_prefix, this yields TAG-NUM-gHEX[-dirty] - # if there isn't one, this yields HEX[-dirty] (no NUM) - describe_out, rc = runner( - GITS, - [ - "describe", - "--tags", - "--dirty", - "--always", - "--long", - "--match", - f"{tag_prefix}[[:digit:]]*", - ], - cwd=root, - ) - # --long was added in git-1.5.5 - if describe_out is None: - raise NotThisMethod("'git describe' failed") - describe_out = describe_out.strip() - full_out, rc = runner(GITS, ["rev-parse", "HEAD"], cwd=root) - if full_out is None: - raise NotThisMethod("'git rev-parse' failed") - full_out = full_out.strip() - - pieces = {} - pieces["long"] = full_out - pieces["short"] = full_out[:7] # maybe improved later - pieces["error"] = None - - branch_name, rc = runner(GITS, ["rev-parse", "--abbrev-ref", "HEAD"], cwd=root) - # --abbrev-ref was added in git-1.6.3 - if rc != 0 or branch_name is None: - raise NotThisMethod("'git rev-parse --abbrev-ref' returned error") - branch_name = branch_name.strip() - - if branch_name == "HEAD": - # If we aren't exactly on a branch, pick a branch which represents - # the current commit. If all else fails, we are on a branchless - # commit. - branches, rc = runner(GITS, ["branch", "--contains"], cwd=root) - # --contains was added in git-1.5.4 - if rc != 0 or branches is None: - raise NotThisMethod("'git branch --contains' returned error") - branches = branches.split("\n") - - # Remove the first line if we're running detached - if "(" in branches[0]: - branches.pop(0) - - # Strip off the leading "* " from the list of branches. - branches = [branch[2:] for branch in branches] - if "master" in branches: - branch_name = "master" - elif not branches: - branch_name = None - else: - # Pick the first branch that is returned. Good or bad. - branch_name = branches[0] - - pieces["branch"] = branch_name - - # parse describe_out. It will be like TAG-NUM-gHEX[-dirty] or HEX[-dirty] - # TAG might have hyphens. - git_describe = describe_out - - # look for -dirty suffix - dirty = git_describe.endswith("-dirty") - pieces["dirty"] = dirty - if dirty: - git_describe = git_describe[: git_describe.rindex("-dirty")] - - # now we have TAG-NUM-gHEX or HEX - - if "-" in git_describe: - # TAG-NUM-gHEX - mo = re.search(r"^(.+)-(\d+)-g([0-9a-f]+)$", git_describe) - if not mo: - # unparsable. Maybe git-describe is misbehaving? - pieces["error"] = f"unable to parse git-describe output: '{describe_out}'" - return pieces - - # tag - full_tag = mo.group(1) - if not full_tag.startswith(tag_prefix): - if verbose: - fmt = "tag '%s' doesn't start with prefix '%s'" - print(fmt % (full_tag, tag_prefix)) - pieces[ - "error" - ] = f"tag '{full_tag}' doesn't start with prefix '{tag_prefix}'" - return pieces - pieces["closest-tag"] = full_tag[len(tag_prefix) :] - - # distance: number of commits since tag - pieces["distance"] = int(mo.group(2)) - - # commit: short hex revision ID - pieces["short"] = mo.group(3) - - else: - # HEX: no tags - pieces["closest-tag"] = None - out, rc = runner(GITS, ["rev-list", "HEAD", "--left-right"], cwd=root) - pieces["distance"] = len(out.split()) # total number of commits - - # commit date: see ISO-8601 comment in git_versions_from_keywords() - date = runner(GITS, ["show", "-s", "--format=%ci", "HEAD"], cwd=root)[0].strip() - # Use only the last line. Previous lines may contain GPG signature - # information. - date = date.splitlines()[-1] - pieces["date"] = date.strip().replace(" ", "T", 1).replace(" ", "", 1) - - return pieces - - -def plus_or_dot(pieces): - """Return a + if we don't already have one, else return a .""" - if "+" in pieces.get("closest-tag", ""): - return "." - return "+" - - -def render_pep440(pieces): - """Build up version string, with post-release "local version identifier". - - Our goal: TAG[+DISTANCE.gHEX[.dirty]] . Note that if you - get a tagged build and then dirty it, you'll get TAG+0.gHEX.dirty - - Exceptions: - 1: no tags. git_describe was just HEX. 0+untagged.DISTANCE.gHEX[.dirty] - """ - if pieces["closest-tag"]: - rendered = pieces["closest-tag"] - if pieces["distance"] or pieces["dirty"]: - rendered += plus_or_dot(pieces) - rendered += f"{pieces['distance']}.g{pieces['short']}" - if pieces["dirty"]: - rendered += ".dirty" - else: - # exception #1 - rendered = f"0+untagged.{pieces['distance']}.g{pieces['short']}" - if pieces["dirty"]: - rendered += ".dirty" - return rendered - - -def render_pep440_branch(pieces): - """TAG[[.dev0]+DISTANCE.gHEX[.dirty]] . - - The ".dev0" means not master branch. Note that .dev0 sorts backwards - (a feature branch will appear "older" than the master branch). - - Exceptions: - 1: no tags. 0[.dev0]+untagged.DISTANCE.gHEX[.dirty] - """ - if pieces["closest-tag"]: - rendered = pieces["closest-tag"] - if pieces["distance"] or pieces["dirty"]: - if pieces["branch"] != "master": - rendered += ".dev0" - rendered += plus_or_dot(pieces) - rendered += f"{pieces['distance']}.g{pieces['short']}" - if pieces["dirty"]: - rendered += ".dirty" - else: - # exception #1 - rendered = "0" - if pieces["branch"] != "master": - rendered += ".dev0" - rendered += f"+untagged.{pieces['distance']}.g{pieces['short']}" - if pieces["dirty"]: - rendered += ".dirty" - return rendered - - -def pep440_split_post(ver): - """Split pep440 version string at the post-release segment. - - Returns the release segments before the post-release and the - post-release version number (or -1 if no post-release segment is present). - """ - vc = str.split(ver, ".post") - return vc[0], int(vc[1] or 0) if len(vc) == 2 else None - - -def render_pep440_pre(pieces): - """TAG[.postN.devDISTANCE] -- No -dirty. - - Exceptions: - 1: no tags. 0.post0.devDISTANCE - """ - if pieces["closest-tag"]: - if pieces["distance"]: - # update the post release segment - tag_version, post_version = pep440_split_post(pieces["closest-tag"]) - rendered = tag_version - if post_version is not None: - rendered += f".post{post_version + 1}.dev{pieces['distance']}" - else: - rendered += f".post0.dev{pieces['distance']}" - else: - # no commits, use the tag as the version - rendered = pieces["closest-tag"] - else: - # exception #1 - rendered = f"0.post0.dev{pieces['distance']}" - return rendered - - -def render_pep440_post(pieces): - """TAG[.postDISTANCE[.dev0]+gHEX] . - - The ".dev0" means dirty. Note that .dev0 sorts backwards - (a dirty tree will appear "older" than the corresponding clean one), - but you shouldn't be releasing software with -dirty anyways. - - Exceptions: - 1: no tags. 0.postDISTANCE[.dev0] - """ - if pieces["closest-tag"]: - rendered = pieces["closest-tag"] - if pieces["distance"] or pieces["dirty"]: - rendered += f".post{pieces['distance']}" - if pieces["dirty"]: - rendered += ".dev0" - rendered += plus_or_dot(pieces) - rendered += f"g{pieces['short']}" - else: - # exception #1 - rendered = f"0.post{pieces['distance']}" - if pieces["dirty"]: - rendered += ".dev0" - rendered += f"+g{pieces['short']}" - return rendered - - -def render_pep440_post_branch(pieces): - """TAG[.postDISTANCE[.dev0]+gHEX[.dirty]] . - - The ".dev0" means not master branch. - - Exceptions: - 1: no tags. 0.postDISTANCE[.dev0]+gHEX[.dirty] - """ - if pieces["closest-tag"]: - rendered = pieces["closest-tag"] - if pieces["distance"] or pieces["dirty"]: - rendered += f".post{pieces['distance']}" - if pieces["branch"] != "master": - rendered += ".dev0" - rendered += plus_or_dot(pieces) - rendered += f"g{pieces['short']}" - if pieces["dirty"]: - rendered += ".dirty" - else: - # exception #1 - rendered = f"0.post{pieces['distance']}" - if pieces["branch"] != "master": - rendered += ".dev0" - rendered += f"+g{pieces['short']}" - if pieces["dirty"]: - rendered += ".dirty" - return rendered - - -def render_pep440_old(pieces): - """TAG[.postDISTANCE[.dev0]] . - - The ".dev0" means dirty. - - Exceptions: - 1: no tags. 0.postDISTANCE[.dev0] - """ - if pieces["closest-tag"]: - rendered = pieces["closest-tag"] - if pieces["distance"] or pieces["dirty"]: - rendered += f"0.post{pieces['distance']}" - if pieces["dirty"]: - rendered += ".dev0" - else: - # exception #1 - rendered = f"0.post{pieces['distance']}" - if pieces["dirty"]: - rendered += ".dev0" - return rendered - - -def render_git_describe(pieces): - """TAG[-DISTANCE-gHEX][-dirty]. - - Like 'git describe --tags --dirty --always'. - - Exceptions: - 1: no tags. HEX[-dirty] (note: no 'g' prefix) - """ - if pieces["closest-tag"]: - rendered = pieces["closest-tag"] - if pieces["distance"]: - rendered += f"-{pieces['distance']}-g{pieces['short']}" - else: - # exception #1 - rendered = pieces["short"] - if pieces["dirty"]: - rendered += "-dirty" - return rendered - - -def render_git_describe_long(pieces): - """TAG-DISTANCE-gHEX[-dirty]. - - Like 'git describe --tags --dirty --always -long'. - The distance/hash is unconditional. - - Exceptions: - 1: no tags. HEX[-dirty] (note: no 'g' prefix) - """ - if pieces["closest-tag"]: - rendered = pieces["closest-tag"] - rendered += f"-{pieces['distance']}-g{pieces['short']}" - else: - # exception #1 - rendered = pieces["short"] - if pieces["dirty"]: - rendered += "-dirty" - return rendered - - -def render(pieces, style): - """Render the given version pieces into the requested style.""" - if pieces["error"]: - return { - "version": "unknown", - "full-revisionid": pieces.get("long"), - "dirty": None, - "error": pieces["error"], - "date": None, - } - - if not style or style == "default": - style = "pep440" # the default - - if style == "pep440": - rendered = render_pep440(pieces) - elif style == "pep440-branch": - rendered = render_pep440_branch(pieces) - elif style == "pep440-pre": - rendered = render_pep440_pre(pieces) - elif style == "pep440-post": - rendered = render_pep440_post(pieces) - elif style == "pep440-post-branch": - rendered = render_pep440_post_branch(pieces) - elif style == "pep440-old": - rendered = render_pep440_old(pieces) - elif style == "git-describe": - rendered = render_git_describe(pieces) - elif style == "git-describe-long": - rendered = render_git_describe_long(pieces) - else: - raise ValueError(f"unknown style '{style}'") - - return { - "version": rendered, - "full-revisionid": pieces["long"], - "dirty": pieces["dirty"], - "error": None, - "date": pieces.get("date"), - } - - -def get_versions(): - """Get version information or return default if unable to do so.""" - # I am in _version.py, which lives at ROOT/VERSIONFILE_SOURCE. If we have - # __file__, we can work backwards from there to the root. Some - # py2exe/bbfreeze/non-CPython implementations don't do __file__, in which - # case we can only use expanded keywords. - - cfg = get_config() - verbose = cfg.verbose - - try: - return git_versions_from_keywords(get_keywords(), cfg.tag_prefix, verbose) - except NotThisMethod: - pass - - try: - root = os.path.realpath(__file__) - # versionfile_source is the relative path from the top of the source - # tree (where the .git directory might live) to this file. Invert - # this to find the root from __file__. - for _ in cfg.versionfile_source.split("/"): - root = os.path.dirname(root) - except NameError: - return { - "version": "0+unknown", - "full-revisionid": None, - "dirty": None, - "error": "unable to find root of source tree", - "date": None, - } - - try: - pieces = git_pieces_from_vcs(cfg.tag_prefix, root, verbose) - return render(pieces, cfg.style) - except NotThisMethod: - pass - - try: - if cfg.parentdir_prefix: - return versions_from_parentdir(cfg.parentdir_prefix, root, verbose) - except NotThisMethod: - pass - - return { - "version": "0+unknown", - "full-revisionid": None, - "dirty": None, - "error": "unable to compute version", - "date": None, - } diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/core/computation/align.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/core/computation/align.py deleted file mode 100644 index 85d412d044ba8393338fc5d8cb68938d9a9b8d31..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/core/computation/align.py +++ /dev/null @@ -1,213 +0,0 @@ -""" -Core eval alignment algorithms. -""" -from __future__ import annotations - -from functools import ( - partial, - wraps, -) -from typing import ( - TYPE_CHECKING, - Callable, -) -import warnings - -import numpy as np - -from pandas.errors import PerformanceWarning -from pandas.util._exceptions import find_stack_level - -from pandas.core.dtypes.generic import ( - ABCDataFrame, - ABCSeries, -) - -from pandas.core.base import PandasObject -import pandas.core.common as com -from pandas.core.computation.common import result_type_many - -if TYPE_CHECKING: - from collections.abc import Sequence - - from pandas._typing import F - - from pandas.core.generic import NDFrame - from pandas.core.indexes.api import Index - - -def _align_core_single_unary_op( - term, -) -> tuple[partial | type[NDFrame], dict[str, Index] | None]: - typ: partial | type[NDFrame] - axes: dict[str, Index] | None = None - - if isinstance(term.value, np.ndarray): - typ = partial(np.asanyarray, dtype=term.value.dtype) - else: - typ = type(term.value) - if hasattr(term.value, "axes"): - axes = _zip_axes_from_type(typ, term.value.axes) - - return typ, axes - - -def _zip_axes_from_type( - typ: type[NDFrame], new_axes: Sequence[Index] -) -> dict[str, Index]: - return {name: new_axes[i] for i, name in enumerate(typ._AXIS_ORDERS)} - - -def _any_pandas_objects(terms) -> bool: - """ - Check a sequence of terms for instances of PandasObject. - """ - return any(isinstance(term.value, PandasObject) for term in terms) - - -def _filter_special_cases(f) -> Callable[[F], F]: - @wraps(f) - def wrapper(terms): - # single unary operand - if len(terms) == 1: - return _align_core_single_unary_op(terms[0]) - - term_values = (term.value for term in terms) - - # we don't have any pandas objects - if not _any_pandas_objects(terms): - return result_type_many(*term_values), None - - return f(terms) - - return wrapper - - -@_filter_special_cases -def _align_core(terms): - term_index = [i for i, term in enumerate(terms) if hasattr(term.value, "axes")] - term_dims = [terms[i].value.ndim for i in term_index] - - from pandas import Series - - ndims = Series(dict(zip(term_index, term_dims))) - - # initial axes are the axes of the largest-axis'd term - biggest = terms[ndims.idxmax()].value - typ = biggest._constructor - axes = biggest.axes - naxes = len(axes) - gt_than_one_axis = naxes > 1 - - for value in (terms[i].value for i in term_index): - is_series = isinstance(value, ABCSeries) - is_series_and_gt_one_axis = is_series and gt_than_one_axis - - for axis, items in enumerate(value.axes): - if is_series_and_gt_one_axis: - ax, itm = naxes - 1, value.index - else: - ax, itm = axis, items - - if not axes[ax].is_(itm): - axes[ax] = axes[ax].join(itm, how="outer") - - for i, ndim in ndims.items(): - for axis, items in zip(range(ndim), axes): - ti = terms[i].value - - if hasattr(ti, "reindex"): - transpose = isinstance(ti, ABCSeries) and naxes > 1 - reindexer = axes[naxes - 1] if transpose else items - - term_axis_size = len(ti.axes[axis]) - reindexer_size = len(reindexer) - - ordm = np.log10(max(1, abs(reindexer_size - term_axis_size))) - if ordm >= 1 and reindexer_size >= 10000: - w = ( - f"Alignment difference on axis {axis} is larger " - f"than an order of magnitude on term {repr(terms[i].name)}, " - f"by more than {ordm:.4g}; performance may suffer." - ) - warnings.warn( - w, category=PerformanceWarning, stacklevel=find_stack_level() - ) - - obj = ti.reindex(reindexer, axis=axis, copy=False) - terms[i].update(obj) - - terms[i].update(terms[i].value.values) - - return typ, _zip_axes_from_type(typ, axes) - - -def align_terms(terms): - """ - Align a set of terms. - """ - try: - # flatten the parse tree (a nested list, really) - terms = list(com.flatten(terms)) - except TypeError: - # can't iterate so it must just be a constant or single variable - if isinstance(terms.value, (ABCSeries, ABCDataFrame)): - typ = type(terms.value) - return typ, _zip_axes_from_type(typ, terms.value.axes) - return np.result_type(terms.type), None - - # if all resolved variables are numeric scalars - if all(term.is_scalar for term in terms): - return result_type_many(*(term.value for term in terms)).type, None - - # perform the main alignment - typ, axes = _align_core(terms) - return typ, axes - - -def reconstruct_object(typ, obj, axes, dtype): - """ - Reconstruct an object given its type, raw value, and possibly empty - (None) axes. - - Parameters - ---------- - typ : object - A type - obj : object - The value to use in the type constructor - axes : dict - The axes to use to construct the resulting pandas object - - Returns - ------- - ret : typ - An object of type ``typ`` with the value `obj` and possible axes - `axes`. - """ - try: - typ = typ.type - except AttributeError: - pass - - res_t = np.result_type(obj.dtype, dtype) - - if not isinstance(typ, partial) and issubclass(typ, PandasObject): - return typ(obj, dtype=res_t, **axes) - - # special case for pathological things like ~True/~False - if hasattr(res_t, "type") and typ == np.bool_ and res_t != np.bool_: - ret_value = res_t.type(obj) - else: - ret_value = typ(obj).astype(res_t) - # The condition is to distinguish 0-dim array (returned in case of - # scalar) and 1 element array - # e.g. np.array(0) and np.array([0]) - if ( - len(obj.shape) == 1 - and len(obj) == 1 - and not isinstance(ret_value, np.ndarray) - ): - ret_value = np.array([ret_value]).astype(res_t) - - return ret_value diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/core/internals/api.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/core/internals/api.py deleted file mode 100644 index 10e6b76e985b37fe6bafabb6971e6340069b20d3..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/core/internals/api.py +++ /dev/null @@ -1,107 +0,0 @@ -""" -This is a pseudo-public API for downstream libraries. We ask that downstream -authors - -1) Try to avoid using internals directly altogether, and failing that, -2) Use only functions exposed here (or in core.internals) - -""" -from __future__ import annotations - -from typing import TYPE_CHECKING - -import numpy as np - -from pandas._libs.internals import BlockPlacement - -from pandas.core.dtypes.common import pandas_dtype -from pandas.core.dtypes.dtypes import ( - DatetimeTZDtype, - PeriodDtype, -) - -from pandas.core.arrays import DatetimeArray -from pandas.core.construction import extract_array -from pandas.core.internals.blocks import ( - Block, - DatetimeTZBlock, - ExtensionBlock, - check_ndim, - ensure_block_shape, - extract_pandas_array, - get_block_type, - maybe_coerce_values, -) - -if TYPE_CHECKING: - from pandas._typing import Dtype - - -def make_block( - values, placement, klass=None, ndim=None, dtype: Dtype | None = None -) -> Block: - """ - This is a pseudo-public analogue to blocks.new_block. - - We ask that downstream libraries use this rather than any fully-internal - APIs, including but not limited to: - - - core.internals.blocks.make_block - - Block.make_block - - Block.make_block_same_class - - Block.__init__ - """ - if dtype is not None: - dtype = pandas_dtype(dtype) - - values, dtype = extract_pandas_array(values, dtype, ndim) - - if klass is ExtensionBlock and isinstance(values.dtype, PeriodDtype): - # GH-44681 changed PeriodArray to be stored in the 2D - # NDArrayBackedExtensionBlock instead of ExtensionBlock - # -> still allow ExtensionBlock to be passed in this case for back compat - klass = None - - if klass is None: - dtype = dtype or values.dtype - klass = get_block_type(dtype) - - elif klass is DatetimeTZBlock and not isinstance(values.dtype, DatetimeTZDtype): - # pyarrow calls get here - values = DatetimeArray._simple_new( - # error: Argument "dtype" to "_simple_new" of "DatetimeArray" has - # incompatible type "Union[ExtensionDtype, dtype[Any], None]"; - # expected "Union[dtype[datetime64], DatetimeTZDtype]" - values, - dtype=dtype, # type: ignore[arg-type] - ) - - if not isinstance(placement, BlockPlacement): - placement = BlockPlacement(placement) - - ndim = maybe_infer_ndim(values, placement, ndim) - if isinstance(values.dtype, (PeriodDtype, DatetimeTZDtype)): - # GH#41168 ensure we can pass 1D dt64tz values - # More generally, any EA dtype that isn't is_1d_only_ea_dtype - values = extract_array(values, extract_numpy=True) - values = ensure_block_shape(values, ndim) - - check_ndim(values, placement, ndim) - values = maybe_coerce_values(values) - return klass(values, ndim=ndim, placement=placement) - - -def maybe_infer_ndim(values, placement: BlockPlacement, ndim: int | None) -> int: - """ - If `ndim` is not provided, infer it from placement and values. - """ - if ndim is None: - # GH#38134 Block constructor now assumes ndim is not None - if not isinstance(values.dtype, np.dtype): - if len(placement) != 1: - ndim = 1 - else: - ndim = 2 - else: - ndim = values.ndim - return ndim diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/plotting/_matplotlib/boxplot.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/plotting/_matplotlib/boxplot.py deleted file mode 100644 index 83cb8a6ab67dd4ae75ded3b8a6d3e572876d8634..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/plotting/_matplotlib/boxplot.py +++ /dev/null @@ -1,550 +0,0 @@ -from __future__ import annotations - -from typing import ( - TYPE_CHECKING, - Literal, - NamedTuple, -) -import warnings - -from matplotlib.artist import setp -import numpy as np - -from pandas.util._exceptions import find_stack_level - -from pandas.core.dtypes.common import is_dict_like -from pandas.core.dtypes.missing import remove_na_arraylike - -import pandas as pd -import pandas.core.common as com - -from pandas.io.formats.printing import pprint_thing -from pandas.plotting._matplotlib.core import ( - LinePlot, - MPLPlot, -) -from pandas.plotting._matplotlib.groupby import create_iter_data_given_by -from pandas.plotting._matplotlib.style import get_standard_colors -from pandas.plotting._matplotlib.tools import ( - create_subplots, - flatten_axes, - maybe_adjust_figure, -) - -if TYPE_CHECKING: - from collections.abc import Collection - - from matplotlib.axes import Axes - from matplotlib.lines import Line2D - - from pandas._typing import MatplotlibColor - - -class BoxPlot(LinePlot): - @property - def _kind(self) -> Literal["box"]: - return "box" - - _layout_type = "horizontal" - - _valid_return_types = (None, "axes", "dict", "both") - - class BP(NamedTuple): - # namedtuple to hold results - ax: Axes - lines: dict[str, list[Line2D]] - - def __init__(self, data, return_type: str = "axes", **kwargs) -> None: - if return_type not in self._valid_return_types: - raise ValueError("return_type must be {None, 'axes', 'dict', 'both'}") - - self.return_type = return_type - # Do not call LinePlot.__init__ which may fill nan - MPLPlot.__init__(self, data, **kwargs) # pylint: disable=non-parent-init-called - - def _args_adjust(self) -> None: - if self.subplots: - # Disable label ax sharing. Otherwise, all subplots shows last - # column label - if self.orientation == "vertical": - self.sharex = False - else: - self.sharey = False - - # error: Signature of "_plot" incompatible with supertype "MPLPlot" - @classmethod - def _plot( # type: ignore[override] - cls, ax, y, column_num=None, return_type: str = "axes", **kwds - ): - if y.ndim == 2: - y = [remove_na_arraylike(v) for v in y] - # Boxplot fails with empty arrays, so need to add a NaN - # if any cols are empty - # GH 8181 - y = [v if v.size > 0 else np.array([np.nan]) for v in y] - else: - y = remove_na_arraylike(y) - bp = ax.boxplot(y, **kwds) - - if return_type == "dict": - return bp, bp - elif return_type == "both": - return cls.BP(ax=ax, lines=bp), bp - else: - return ax, bp - - def _validate_color_args(self): - if "color" in self.kwds: - if self.colormap is not None: - warnings.warn( - "'color' and 'colormap' cannot be used " - "simultaneously. Using 'color'", - stacklevel=find_stack_level(), - ) - self.color = self.kwds.pop("color") - - if isinstance(self.color, dict): - valid_keys = ["boxes", "whiskers", "medians", "caps"] - for key in self.color: - if key not in valid_keys: - raise ValueError( - f"color dict contains invalid key '{key}'. " - f"The key must be either {valid_keys}" - ) - else: - self.color = None - - # get standard colors for default - colors = get_standard_colors(num_colors=3, colormap=self.colormap, color=None) - # use 2 colors by default, for box/whisker and median - # flier colors isn't needed here - # because it can be specified by ``sym`` kw - self._boxes_c = colors[0] - self._whiskers_c = colors[0] - self._medians_c = colors[2] - self._caps_c = colors[0] - - def _get_colors( - self, - num_colors=None, - color_kwds: dict[str, MatplotlibColor] - | MatplotlibColor - | Collection[MatplotlibColor] - | None = "color", - ) -> None: - pass - - def maybe_color_bp(self, bp) -> None: - if isinstance(self.color, dict): - boxes = self.color.get("boxes", self._boxes_c) - whiskers = self.color.get("whiskers", self._whiskers_c) - medians = self.color.get("medians", self._medians_c) - caps = self.color.get("caps", self._caps_c) - else: - # Other types are forwarded to matplotlib - # If None, use default colors - boxes = self.color or self._boxes_c - whiskers = self.color or self._whiskers_c - medians = self.color or self._medians_c - caps = self.color or self._caps_c - - # GH 30346, when users specifying those arguments explicitly, our defaults - # for these four kwargs should be overridden; if not, use Pandas settings - if not self.kwds.get("boxprops"): - setp(bp["boxes"], color=boxes, alpha=1) - if not self.kwds.get("whiskerprops"): - setp(bp["whiskers"], color=whiskers, alpha=1) - if not self.kwds.get("medianprops"): - setp(bp["medians"], color=medians, alpha=1) - if not self.kwds.get("capprops"): - setp(bp["caps"], color=caps, alpha=1) - - def _make_plot(self) -> None: - if self.subplots: - self._return_obj = pd.Series(dtype=object) - - # Re-create iterated data if `by` is assigned by users - data = ( - create_iter_data_given_by(self.data, self._kind) - if self.by is not None - else self.data - ) - - for i, (label, y) in enumerate(self._iter_data(data=data)): - ax = self._get_ax(i) - kwds = self.kwds.copy() - - # When by is applied, show title for subplots to know which group it is - # just like df.boxplot, and need to apply T on y to provide right input - if self.by is not None: - y = y.T - ax.set_title(pprint_thing(label)) - - # When `by` is assigned, the ticklabels will become unique grouped - # values, instead of label which is used as subtitle in this case. - ticklabels = [ - pprint_thing(col) for col in self.data.columns.levels[0] - ] - else: - ticklabels = [pprint_thing(label)] - - ret, bp = self._plot( - ax, y, column_num=i, return_type=self.return_type, **kwds - ) - self.maybe_color_bp(bp) - self._return_obj[label] = ret - self._set_ticklabels(ax, ticklabels) - else: - y = self.data.values.T - ax = self._get_ax(0) - kwds = self.kwds.copy() - - ret, bp = self._plot( - ax, y, column_num=0, return_type=self.return_type, **kwds - ) - self.maybe_color_bp(bp) - self._return_obj = ret - - labels = [left for left, _ in self._iter_data()] - labels = [pprint_thing(left) for left in labels] - if not self.use_index: - labels = [pprint_thing(key) for key in range(len(labels))] - self._set_ticklabels(ax, labels) - - def _set_ticklabels(self, ax: Axes, labels: list[str]) -> None: - if self.orientation == "vertical": - ax.set_xticklabels(labels) - else: - ax.set_yticklabels(labels) - - def _make_legend(self) -> None: - pass - - def _post_plot_logic(self, ax, data) -> None: - # GH 45465: make sure that the boxplot doesn't ignore xlabel/ylabel - if self.xlabel: - ax.set_xlabel(pprint_thing(self.xlabel)) - if self.ylabel: - ax.set_ylabel(pprint_thing(self.ylabel)) - - @property - def orientation(self) -> Literal["horizontal", "vertical"]: - if self.kwds.get("vert", True): - return "vertical" - else: - return "horizontal" - - @property - def result(self): - if self.return_type is None: - return super().result - else: - return self._return_obj - - -def _grouped_plot_by_column( - plotf, - data, - columns=None, - by=None, - numeric_only: bool = True, - grid: bool = False, - figsize: tuple[float, float] | None = None, - ax=None, - layout=None, - return_type=None, - **kwargs, -): - grouped = data.groupby(by, observed=False) - if columns is None: - if not isinstance(by, (list, tuple)): - by = [by] - columns = data._get_numeric_data().columns.difference(by) - naxes = len(columns) - fig, axes = create_subplots( - naxes=naxes, - sharex=kwargs.pop("sharex", True), - sharey=kwargs.pop("sharey", True), - figsize=figsize, - ax=ax, - layout=layout, - ) - - _axes = flatten_axes(axes) - - # GH 45465: move the "by" label based on "vert" - xlabel, ylabel = kwargs.pop("xlabel", None), kwargs.pop("ylabel", None) - if kwargs.get("vert", True): - xlabel = xlabel or by - else: - ylabel = ylabel or by - - ax_values = [] - - for i, col in enumerate(columns): - ax = _axes[i] - gp_col = grouped[col] - keys, values = zip(*gp_col) - re_plotf = plotf(keys, values, ax, xlabel=xlabel, ylabel=ylabel, **kwargs) - ax.set_title(col) - ax_values.append(re_plotf) - ax.grid(grid) - - result = pd.Series(ax_values, index=columns, copy=False) - - # Return axes in multiplot case, maybe revisit later # 985 - if return_type is None: - result = axes - - byline = by[0] if len(by) == 1 else by - fig.suptitle(f"Boxplot grouped by {byline}") - maybe_adjust_figure(fig, bottom=0.15, top=0.9, left=0.1, right=0.9, wspace=0.2) - - return result - - -def boxplot( - data, - column=None, - by=None, - ax=None, - fontsize: int | None = None, - rot: int = 0, - grid: bool = True, - figsize: tuple[float, float] | None = None, - layout=None, - return_type=None, - **kwds, -): - import matplotlib.pyplot as plt - - # validate return_type: - if return_type not in BoxPlot._valid_return_types: - raise ValueError("return_type must be {'axes', 'dict', 'both'}") - - if isinstance(data, pd.Series): - data = data.to_frame("x") - column = "x" - - def _get_colors(): - # num_colors=3 is required as method maybe_color_bp takes the colors - # in positions 0 and 2. - # if colors not provided, use same defaults as DataFrame.plot.box - result = get_standard_colors(num_colors=3) - result = np.take(result, [0, 0, 2]) - result = np.append(result, "k") - - colors = kwds.pop("color", None) - if colors: - if is_dict_like(colors): - # replace colors in result array with user-specified colors - # taken from the colors dict parameter - # "boxes" value placed in position 0, "whiskers" in 1, etc. - valid_keys = ["boxes", "whiskers", "medians", "caps"] - key_to_index = dict(zip(valid_keys, range(4))) - for key, value in colors.items(): - if key in valid_keys: - result[key_to_index[key]] = value - else: - raise ValueError( - f"color dict contains invalid key '{key}'. " - f"The key must be either {valid_keys}" - ) - else: - result.fill(colors) - - return result - - def maybe_color_bp(bp, **kwds) -> None: - # GH 30346, when users specifying those arguments explicitly, our defaults - # for these four kwargs should be overridden; if not, use Pandas settings - if not kwds.get("boxprops"): - setp(bp["boxes"], color=colors[0], alpha=1) - if not kwds.get("whiskerprops"): - setp(bp["whiskers"], color=colors[1], alpha=1) - if not kwds.get("medianprops"): - setp(bp["medians"], color=colors[2], alpha=1) - if not kwds.get("capprops"): - setp(bp["caps"], color=colors[3], alpha=1) - - def plot_group(keys, values, ax: Axes, **kwds): - # GH 45465: xlabel/ylabel need to be popped out before plotting happens - xlabel, ylabel = kwds.pop("xlabel", None), kwds.pop("ylabel", None) - if xlabel: - ax.set_xlabel(pprint_thing(xlabel)) - if ylabel: - ax.set_ylabel(pprint_thing(ylabel)) - - keys = [pprint_thing(x) for x in keys] - values = [np.asarray(remove_na_arraylike(v), dtype=object) for v in values] - bp = ax.boxplot(values, **kwds) - if fontsize is not None: - ax.tick_params(axis="both", labelsize=fontsize) - - # GH 45465: x/y are flipped when "vert" changes - is_vertical = kwds.get("vert", True) - ticks = ax.get_xticks() if is_vertical else ax.get_yticks() - if len(ticks) != len(keys): - i, remainder = divmod(len(ticks), len(keys)) - assert remainder == 0, remainder - keys *= i - if is_vertical: - ax.set_xticklabels(keys, rotation=rot) - else: - ax.set_yticklabels(keys, rotation=rot) - maybe_color_bp(bp, **kwds) - - # Return axes in multiplot case, maybe revisit later # 985 - if return_type == "dict": - return bp - elif return_type == "both": - return BoxPlot.BP(ax=ax, lines=bp) - else: - return ax - - colors = _get_colors() - if column is None: - columns = None - elif isinstance(column, (list, tuple)): - columns = column - else: - columns = [column] - - if by is not None: - # Prefer array return type for 2-D plots to match the subplot layout - # https://github.com/pandas-dev/pandas/pull/12216#issuecomment-241175580 - result = _grouped_plot_by_column( - plot_group, - data, - columns=columns, - by=by, - grid=grid, - figsize=figsize, - ax=ax, - layout=layout, - return_type=return_type, - **kwds, - ) - else: - if return_type is None: - return_type = "axes" - if layout is not None: - raise ValueError("The 'layout' keyword is not supported when 'by' is None") - - if ax is None: - rc = {"figure.figsize": figsize} if figsize is not None else {} - with plt.rc_context(rc): - ax = plt.gca() - data = data._get_numeric_data() - naxes = len(data.columns) - if naxes == 0: - raise ValueError( - "boxplot method requires numerical columns, nothing to plot." - ) - if columns is None: - columns = data.columns - else: - data = data[columns] - - result = plot_group(columns, data.values.T, ax, **kwds) - ax.grid(grid) - - return result - - -def boxplot_frame( - self, - column=None, - by=None, - ax=None, - fontsize: int | None = None, - rot: int = 0, - grid: bool = True, - figsize: tuple[float, float] | None = None, - layout=None, - return_type=None, - **kwds, -): - import matplotlib.pyplot as plt - - ax = boxplot( - self, - column=column, - by=by, - ax=ax, - fontsize=fontsize, - grid=grid, - rot=rot, - figsize=figsize, - layout=layout, - return_type=return_type, - **kwds, - ) - plt.draw_if_interactive() - return ax - - -def boxplot_frame_groupby( - grouped, - subplots: bool = True, - column=None, - fontsize: int | None = None, - rot: int = 0, - grid: bool = True, - ax=None, - figsize: tuple[float, float] | None = None, - layout=None, - sharex: bool = False, - sharey: bool = True, - **kwds, -): - if subplots is True: - naxes = len(grouped) - fig, axes = create_subplots( - naxes=naxes, - squeeze=False, - ax=ax, - sharex=sharex, - sharey=sharey, - figsize=figsize, - layout=layout, - ) - axes = flatten_axes(axes) - - ret = pd.Series(dtype=object) - - for (key, group), ax in zip(grouped, axes): - d = group.boxplot( - ax=ax, column=column, fontsize=fontsize, rot=rot, grid=grid, **kwds - ) - ax.set_title(pprint_thing(key)) - ret.loc[key] = d - maybe_adjust_figure(fig, bottom=0.15, top=0.9, left=0.1, right=0.9, wspace=0.2) - else: - keys, frames = zip(*grouped) - if grouped.axis == 0: - df = pd.concat(frames, keys=keys, axis=1) - elif len(frames) > 1: - df = frames[0].join(frames[1::]) - else: - df = frames[0] - - # GH 16748, DataFrameGroupby fails when subplots=False and `column` argument - # is assigned, and in this case, since `df` here becomes MI after groupby, - # so we need to couple the keys (grouped values) and column (original df - # column) together to search for subset to plot - if column is not None: - column = com.convert_to_list_like(column) - multi_key = pd.MultiIndex.from_product([keys, column]) - column = list(multi_key.values) - ret = df.boxplot( - column=column, - fontsize=fontsize, - rot=rot, - grid=grid, - ax=ax, - figsize=figsize, - layout=layout, - **kwds, - ) - return ret diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/tslibs/test_conversion.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/tslibs/test_conversion.py deleted file mode 100644 index c1ab0ba0b5e6f40b27bdbab2f195840313224472..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/tslibs/test_conversion.py +++ /dev/null @@ -1,160 +0,0 @@ -from datetime import datetime - -import numpy as np -import pytest -from pytz import UTC - -from pandas._libs.tslibs import ( - OutOfBoundsTimedelta, - astype_overflowsafe, - conversion, - iNaT, - timezones, - tz_convert_from_utc, - tzconversion, -) - -from pandas import ( - Timestamp, - date_range, -) -import pandas._testing as tm - - -def _compare_utc_to_local(tz_didx): - def f(x): - return tzconversion.tz_convert_from_utc_single(x, tz_didx.tz) - - result = tz_convert_from_utc(tz_didx.asi8, tz_didx.tz) - expected = np.vectorize(f)(tz_didx.asi8) - - tm.assert_numpy_array_equal(result, expected) - - -def _compare_local_to_utc(tz_didx, naive_didx): - # Check that tz_localize behaves the same vectorized and pointwise. - err1 = err2 = None - try: - result = tzconversion.tz_localize_to_utc(naive_didx.asi8, tz_didx.tz) - err1 = None - except Exception as err: - err1 = err - - try: - expected = naive_didx.map(lambda x: x.tz_localize(tz_didx.tz)).asi8 - except Exception as err: - err2 = err - - if err1 is not None: - assert type(err1) == type(err2) - else: - assert err2 is None - tm.assert_numpy_array_equal(result, expected) - - -def test_tz_localize_to_utc_copies(): - # GH#46460 - arr = np.arange(5, dtype="i8") - result = tz_convert_from_utc(arr, tz=UTC) - tm.assert_numpy_array_equal(result, arr) - assert not np.shares_memory(arr, result) - - result = tz_convert_from_utc(arr, tz=None) - tm.assert_numpy_array_equal(result, arr) - assert not np.shares_memory(arr, result) - - -def test_tz_convert_single_matches_tz_convert_hourly(tz_aware_fixture): - tz = tz_aware_fixture - tz_didx = date_range("2014-03-01", "2015-01-10", freq="H", tz=tz) - naive_didx = date_range("2014-03-01", "2015-01-10", freq="H") - - _compare_utc_to_local(tz_didx) - _compare_local_to_utc(tz_didx, naive_didx) - - -@pytest.mark.parametrize("freq", ["D", "A"]) -def test_tz_convert_single_matches_tz_convert(tz_aware_fixture, freq): - tz = tz_aware_fixture - tz_didx = date_range("2018-01-01", "2020-01-01", freq=freq, tz=tz) - naive_didx = date_range("2018-01-01", "2020-01-01", freq=freq) - - _compare_utc_to_local(tz_didx) - _compare_local_to_utc(tz_didx, naive_didx) - - -@pytest.mark.parametrize( - "arr", - [ - pytest.param(np.array([], dtype=np.int64), id="empty"), - pytest.param(np.array([iNaT], dtype=np.int64), id="all_nat"), - ], -) -def test_tz_convert_corner(arr): - result = tz_convert_from_utc(arr, timezones.maybe_get_tz("Asia/Tokyo")) - tm.assert_numpy_array_equal(result, arr) - - -def test_tz_convert_readonly(): - # GH#35530 - arr = np.array([0], dtype=np.int64) - arr.setflags(write=False) - result = tz_convert_from_utc(arr, UTC) - tm.assert_numpy_array_equal(result, arr) - - -@pytest.mark.parametrize("copy", [True, False]) -@pytest.mark.parametrize("dtype", ["M8[ns]", "M8[s]"]) -def test_length_zero_copy(dtype, copy): - arr = np.array([], dtype=dtype) - result = astype_overflowsafe(arr, copy=copy, dtype=np.dtype("M8[ns]")) - if copy: - assert not np.shares_memory(result, arr) - elif arr.dtype == result.dtype: - assert result is arr - else: - assert not np.shares_memory(result, arr) - - -def test_ensure_datetime64ns_bigendian(): - # GH#29684 - arr = np.array([np.datetime64(1, "ms")], dtype=">M8[ms]") - result = astype_overflowsafe(arr, dtype=np.dtype("M8[ns]")) - - expected = np.array([np.datetime64(1, "ms")], dtype="M8[ns]") - tm.assert_numpy_array_equal(result, expected) - - -def test_ensure_timedelta64ns_overflows(): - arr = np.arange(10).astype("m8[Y]") * 100 - msg = r"Cannot convert 300 years to timedelta64\[ns\] without overflow" - with pytest.raises(OutOfBoundsTimedelta, match=msg): - astype_overflowsafe(arr, dtype=np.dtype("m8[ns]")) - - -class SubDatetime(datetime): - pass - - -@pytest.mark.parametrize( - "dt, expected", - [ - pytest.param( - Timestamp("2000-01-01"), Timestamp("2000-01-01", tz=UTC), id="timestamp" - ), - pytest.param( - datetime(2000, 1, 1), datetime(2000, 1, 1, tzinfo=UTC), id="datetime" - ), - pytest.param( - SubDatetime(2000, 1, 1), - SubDatetime(2000, 1, 1, tzinfo=UTC), - id="subclassed_datetime", - ), - ], -) -def test_localize_pydatetime_dt_types(dt, expected): - # GH 25851 - # ensure that subclassed datetime works with - # localize_pydatetime - result = conversion.localize_pydatetime(dt, UTC) - assert result == expected diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pygments/lexers/gdscript.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pygments/lexers/gdscript.py deleted file mode 100644 index 0f4f6d4315b7e1fac10cfc8db37e4276134c1b94..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pygments/lexers/gdscript.py +++ /dev/null @@ -1,188 +0,0 @@ -""" - pygments.lexers.gdscript - ~~~~~~~~~~~~~~~~~~~~~~~~ - - Lexer for GDScript. - - Modified by Daniel J. Ramirez based on the original - python.py. - - :copyright: Copyright 2006-2023 by the Pygments team, see AUTHORS. - :license: BSD, see LICENSE for details. -""" - -import re - -from pygments.lexer import RegexLexer, include, bygroups, default, words, \ - combined -from pygments.token import Text, Comment, Operator, Keyword, Name, String, \ - Number, Punctuation, Whitespace - -__all__ = ["GDScriptLexer"] - - -class GDScriptLexer(RegexLexer): - """ - For GDScript source code. - """ - - name = "GDScript" - url = 'https://www.godotengine.org' - aliases = ["gdscript", "gd"] - filenames = ["*.gd"] - mimetypes = ["text/x-gdscript", "application/x-gdscript"] - - def innerstring_rules(ttype): - return [ - # the old style '%s' % (...) string formatting - (r"%(\(\w+\))?[-#0 +]*([0-9]+|[*])?(\.([0-9]+|[*]))?" - "[hlL]?[E-GXc-giorsux%]", - String.Interpol), - # backslashes, quotes and formatting signs must be parsed one at a time - (r'[^\\\'"%\n]+', ttype), - (r'[\'"\\]', ttype), - # unhandled string formatting sign - (r"%", ttype), - # newlines are an error (use "nl" state) - ] - - tokens = { - "root": [ - (r"\n", Whitespace), - (r'^(\s*)([rRuUbB]{,2})("""(?:.|\n)*?""")', - bygroups(Whitespace, String.Affix, String.Doc)), - (r"^(\s*)([rRuUbB]{,2})('''(?:.|\n)*?''')", - bygroups(Whitespace, String.Affix, String.Doc)), - (r"[^\S\n]+", Whitespace), - (r"#.*$", Comment.Single), - (r"[]{}:(),;[]", Punctuation), - (r"(\\)(\n)", bygroups(Text, Whitespace)), - (r"\\", Text), - (r"(in|and|or|not)\b", Operator.Word), - (r"!=|==|<<|>>|&&|\+=|-=|\*=|/=|%=|&=|\|=|\|\||[-~+/*%=<>&^.!|$]", - Operator), - include("keywords"), - (r"(func)(\s+)", bygroups(Keyword, Whitespace), "funcname"), - (r"(class)(\s+)", bygroups(Keyword, Whitespace), "classname"), - include("builtins"), - ('([rR]|[uUbB][rR]|[rR][uUbB])(""")', - bygroups(String.Affix, String.Double), - "tdqs"), - ("([rR]|[uUbB][rR]|[rR][uUbB])(''')", - bygroups(String.Affix, String.Single), - "tsqs"), - ('([rR]|[uUbB][rR]|[rR][uUbB])(")', - bygroups(String.Affix, String.Double), - "dqs"), - ("([rR]|[uUbB][rR]|[rR][uUbB])(')", - bygroups(String.Affix, String.Single), - "sqs"), - ('([uUbB]?)(""")', - bygroups(String.Affix, String.Double), - combined("stringescape", "tdqs")), - ("([uUbB]?)(''')", - bygroups(String.Affix, String.Single), - combined("stringescape", "tsqs")), - ('([uUbB]?)(")', - bygroups(String.Affix, String.Double), - combined("stringescape", "dqs")), - ("([uUbB]?)(')", - bygroups(String.Affix, String.Single), - combined("stringescape", "sqs")), - include("name"), - include("numbers"), - ], - "keywords": [ - (words(("and", "in", "not", "or", "as", "breakpoint", "class", - "class_name", "extends", "is", "func", "setget", "signal", - "tool", "const", "enum", "export", "onready", "static", - "var", "break", "continue", "if", "elif", "else", "for", - "pass", "return", "match", "while", "remote", "master", - "puppet", "remotesync", "mastersync", "puppetsync"), - suffix=r"\b"), Keyword), - ], - "builtins": [ - (words(("Color8", "ColorN", "abs", "acos", "asin", "assert", "atan", - "atan2", "bytes2var", "ceil", "char", "clamp", "convert", - "cos", "cosh", "db2linear", "decimals", "dectime", "deg2rad", - "dict2inst", "ease", "exp", "floor", "fmod", "fposmod", - "funcref", "hash", "inst2dict", "instance_from_id", "is_inf", - "is_nan", "lerp", "linear2db", "load", "log", "max", "min", - "nearest_po2", "pow", "preload", "print", "print_stack", - "printerr", "printraw", "prints", "printt", "rad2deg", - "rand_range", "rand_seed", "randf", "randi", "randomize", - "range", "round", "seed", "sign", "sin", "sinh", "sqrt", - "stepify", "str", "str2var", "tan", "tan", "tanh", - "type_exist", "typeof", "var2bytes", "var2str", "weakref", - "yield"), prefix=r"(?Duplicate SpaceDuplicate the Space and run securely with your OpenAI API Key
          ') - with gr.Column(scale=4): - status_display = gr.Markdown(get_geoip(), elem_id="status_display") - - with gr.Row().style(equal_height=True): - with gr.Column(scale=5): - with gr.Row(): - chatbot = gr.Chatbot(elem_id="chuanhu_chatbot").style(height="100%") - with gr.Row(): - with gr.Column(scale=12): - user_input = gr.Textbox( - show_label=False, placeholder="在这里输入" - ).style(container=False) - with gr.Column(min_width=70, scale=1): - submitBtn = gr.Button("发送", variant="primary") - cancelBtn = gr.Button("取消", variant="secondary", visible=False) - with gr.Row(): - emptyBtn = gr.Button( - "🧹 新的对话", - ) - retryBtn = gr.Button("🔄 重新生成") - delFirstBtn = gr.Button("🗑️ 删除最旧对话") - delLastBtn = gr.Button("🗑️ 删除最新对话") - reduceTokenBtn = gr.Button("♻️ 总结对话") - - with gr.Column(): - with gr.Column(min_width=50, scale=1): - with gr.Tab(label="ChatGPT"): - keyTxt = gr.Textbox( - show_label=True, - placeholder=f"OpenAI API-key...", - value=hide_middle_chars(my_api_key), - type="password", - visible=not HIDE_MY_KEY, - label="API-Key", - ) - usageTxt = gr.Markdown("**发送消息** 或 **提交key** 以显示额度", elem_id="usage_display") - model_select_dropdown = gr.Dropdown( - label="选择模型", choices=MODELS, multiselect=False, value=MODELS[0] - ) - use_streaming_checkbox = gr.Checkbox( - label="实时传输回答", value=True, visible=enable_streaming_option - ) - use_websearch_checkbox = gr.Checkbox(label="使用在线搜索", value=False) - language_select_dropdown = gr.Dropdown( - label="选择回复语言(针对搜索&索引功能)", - choices=REPLY_LANGUAGES, - multiselect=False, - value=REPLY_LANGUAGES[0], - ) - index_files = gr.Files(label="上传索引文件", type="file", multiple=True) - - with gr.Tab(label="Prompt"): - systemPromptTxt = gr.Textbox( - show_label=True, - placeholder=f"在这里输入System Prompt...", - label="System prompt", - value=initial_prompt, - lines=10, - ).style(container=False) - with gr.Accordion(label="加载Prompt模板", open=True): - with gr.Column(): - with gr.Row(): - with gr.Column(scale=6): - templateFileSelectDropdown = gr.Dropdown( - label="选择Prompt模板集合文件", - choices=get_template_names(plain=True), - multiselect=False, - value=get_template_names(plain=True)[0], - ).style(container=False) - with gr.Column(scale=1): - templateRefreshBtn = gr.Button("🔄 刷新") - with gr.Row(): - with gr.Column(): - templateSelectDropdown = gr.Dropdown( - label="从Prompt模板中加载", - choices=load_template( - get_template_names(plain=True)[0], mode=1 - ), - multiselect=False, - value=load_template( - get_template_names(plain=True)[0], mode=1 - )[0], - ).style(container=False) - - with gr.Tab(label="保存/加载"): - with gr.Accordion(label="保存/加载对话历史记录", open=True): - with gr.Column(): - with gr.Row(): - with gr.Column(scale=6): - historyFileSelectDropdown = gr.Dropdown( - label="从列表中加载对话", - choices=get_history_names(plain=True), - multiselect=False, - value=get_history_names(plain=True)[0], - ) - with gr.Column(scale=1): - historyRefreshBtn = gr.Button("🔄 刷新") - with gr.Row(): - with gr.Column(scale=6): - saveFileName = gr.Textbox( - show_label=True, - placeholder=f"设置文件名: 默认为.json,可选为.md", - label="设置保存文件名", - value="对话历史记录", - ).style(container=True) - with gr.Column(scale=1): - saveHistoryBtn = gr.Button("💾 保存对话") - exportMarkdownBtn = gr.Button("📝 导出为Markdown") - gr.Markdown("默认保存于history文件夹") - with gr.Row(): - with gr.Column(): - downloadFile = gr.File(interactive=True) - - with gr.Tab(label="高级"): - gr.Markdown("# ⚠️ 务必谨慎更改 ⚠️\n\n如果无法使用请恢复默认设置") - default_btn = gr.Button("🔙 恢复默认设置") - - with gr.Accordion("参数", open=False): - top_p = gr.Slider( - minimum=-0, - maximum=1.0, - value=1.0, - step=0.05, - interactive=True, - label="Top-p", - ) - temperature = gr.Slider( - minimum=-0, - maximum=2.0, - value=1.0, - step=0.1, - interactive=True, - label="Temperature", - ) - - with gr.Accordion("网络设置", open=False, visible=False): - apiurlTxt = gr.Textbox( - show_label=True, - placeholder=f"在这里输入API地址...", - label="API地址", - value="https://api.openai.com/v1/chat/completions", - lines=2, - ) - changeAPIURLBtn = gr.Button("🔄 切换API地址") - proxyTxt = gr.Textbox( - show_label=True, - placeholder=f"在这里输入代理地址...", - label="代理地址(示例:http://127.0.0.1:10809)", - value="", - lines=2, - ) - changeProxyBtn = gr.Button("🔄 设置代理地址") - - gr.Markdown(description) - gr.HTML(footer.format(versions=versions_html()), elem_id="footer") - chatgpt_predict_args = dict( - fn=predict, - inputs=[ - user_api_key, - systemPromptTxt, - history, - user_question, - chatbot, - token_count, - top_p, - temperature, - use_streaming_checkbox, - model_select_dropdown, - use_websearch_checkbox, - index_files, - language_select_dropdown, - ], - outputs=[chatbot, history, status_display, token_count], - show_progress=True, - ) - - start_outputing_args = dict( - fn=start_outputing, - inputs=[], - outputs=[submitBtn, cancelBtn], - show_progress=True, - ) - - end_outputing_args = dict( - fn=end_outputing, inputs=[], outputs=[submitBtn, cancelBtn] - ) - - reset_textbox_args = dict( - fn=reset_textbox, inputs=[], outputs=[user_input] - ) - - transfer_input_args = dict( - fn=transfer_input, inputs=[user_input], outputs=[user_question, user_input, submitBtn, cancelBtn], show_progress=True - ) - - get_usage_args = dict( - fn=get_usage, inputs=[user_api_key], outputs=[usageTxt], show_progress=False - ) - - - # Chatbot - cancelBtn.click(cancel_outputing, [], []) - - user_input.submit(**transfer_input_args).then(**chatgpt_predict_args).then(**end_outputing_args) - user_input.submit(**get_usage_args) - - submitBtn.click(**transfer_input_args).then(**chatgpt_predict_args).then(**end_outputing_args) - submitBtn.click(**get_usage_args) - - emptyBtn.click( - reset_state, - outputs=[chatbot, history, token_count, status_display], - show_progress=True, - ) - emptyBtn.click(**reset_textbox_args) - - retryBtn.click(**start_outputing_args).then( - retry, - [ - user_api_key, - systemPromptTxt, - history, - chatbot, - token_count, - top_p, - temperature, - use_streaming_checkbox, - model_select_dropdown, - language_select_dropdown, - ], - [chatbot, history, status_display, token_count], - show_progress=True, - ).then(**end_outputing_args) - retryBtn.click(**get_usage_args) - - delFirstBtn.click( - delete_first_conversation, - [history, token_count], - [history, token_count, status_display], - ) - - delLastBtn.click( - delete_last_conversation, - [chatbot, history, token_count], - [chatbot, history, token_count, status_display], - show_progress=True, - ) - - reduceTokenBtn.click( - reduce_token_size, - [ - user_api_key, - systemPromptTxt, - history, - chatbot, - token_count, - top_p, - temperature, - gr.State(sum(token_count.value[-4:])), - model_select_dropdown, - language_select_dropdown, - ], - [chatbot, history, status_display, token_count], - show_progress=True, - ) - reduceTokenBtn.click(**get_usage_args) - - # ChatGPT - keyTxt.change(submit_key, keyTxt, [user_api_key, status_display]).then(**get_usage_args) - keyTxt.submit(**get_usage_args) - - # Template - templateRefreshBtn.click(get_template_names, None, [templateFileSelectDropdown]) - templateFileSelectDropdown.change( - load_template, - [templateFileSelectDropdown], - [promptTemplates, templateSelectDropdown], - show_progress=True, - ) - templateSelectDropdown.change( - get_template_content, - [promptTemplates, templateSelectDropdown, systemPromptTxt], - [systemPromptTxt], - show_progress=True, - ) - - # S&L - saveHistoryBtn.click( - save_chat_history, - [saveFileName, systemPromptTxt, history, chatbot], - downloadFile, - show_progress=True, - ) - saveHistoryBtn.click(get_history_names, None, [historyFileSelectDropdown]) - exportMarkdownBtn.click( - export_markdown, - [saveFileName, systemPromptTxt, history, chatbot], - downloadFile, - show_progress=True, - ) - historyRefreshBtn.click(get_history_names, None, [historyFileSelectDropdown]) - historyFileSelectDropdown.change( - load_chat_history, - [historyFileSelectDropdown, systemPromptTxt, history, chatbot], - [saveFileName, systemPromptTxt, history, chatbot], - show_progress=True, - ) - downloadFile.change( - load_chat_history, - [downloadFile, systemPromptTxt, history, chatbot], - [saveFileName, systemPromptTxt, history, chatbot], - ) - - # Advanced - default_btn.click( - reset_default, [], [apiurlTxt, proxyTxt, status_display], show_progress=True - ) - changeAPIURLBtn.click( - change_api_url, - [apiurlTxt], - [status_display], - show_progress=True, - ) - changeProxyBtn.click( - change_proxy, - [proxyTxt], - [status_display], - show_progress=True, - ) - -logging.info( - colorama.Back.GREEN - + "\n川虎的温馨提示:访问 http://localhost:7860 查看界面" - + colorama.Style.RESET_ALL -) -# 默认开启本地服务器,默认可以直接从IP访问,默认不创建公开分享链接 -demo.title = "川虎ChatGPT 🚀" - -if __name__ == "__main__": - reload_javascript() - # if running in Docker - if dockerflag: - if authflag: - demo.queue(concurrency_count=CONCURRENT_COUNT).launch( - server_name="0.0.0.0", - server_port=7860, - auth=auth_list, - favicon_path="./assets/favicon.ico", - ) - else: - demo.queue(concurrency_count=CONCURRENT_COUNT).launch( - server_name="0.0.0.0", - server_port=7860, - share=False, - favicon_path="./assets/favicon.ico", - ) - # if not running in Docker - else: - if authflag: - demo.queue(concurrency_count=CONCURRENT_COUNT).launch( - share=False, - auth=auth_list, - favicon_path="./assets/favicon.ico", - inbrowser=True, - ) - else: - demo.queue(concurrency_count=CONCURRENT_COUNT).launch( - share=False, favicon_path="./assets/favicon.ico", inbrowser=True - ) # 改为 share=True 可以创建公开分享链接 - # demo.queue(concurrency_count=CONCURRENT_COUNT).launch(server_name="0.0.0.0", server_port=7860, share=False) # 可自定义端口 - # demo.queue(concurrency_count=CONCURRENT_COUNT).launch(server_name="0.0.0.0", server_port=7860,auth=("在这里填写用户名", "在这里填写密码")) # 可设置用户名与密码 - # demo.queue(concurrency_count=CONCURRENT_COUNT).launch(auth=("在这里填写用户名", "在这里填写密码")) # 适合Nginx反向代理 diff --git a/spaces/pustozerov/poc-handwriting-ocr/modules/ocr_model_en/helpers.py b/spaces/pustozerov/poc-handwriting-ocr/modules/ocr_model_en/helpers.py deleted file mode 100644 index 0918f278e99bb48525f308f2ba79c6016a771fa4..0000000000000000000000000000000000000000 --- a/spaces/pustozerov/poc-handwriting-ocr/modules/ocr_model_en/helpers.py +++ /dev/null @@ -1,44 +0,0 @@ -# -*- coding: utf-8 -*- -""" -Helper functions for ocr project -""" -import cv2 -import matplotlib.pyplot as plt -import numpy as np - -SMALL_HEIGHT = 800 - - -def implt(img, cmp=None, t=''): - """Show image using plt.""" - plt.imshow(img, cmap=cmp) - plt.title(t) - plt.show() - - -def resize(img, height=SMALL_HEIGHT, always=False): - """Resize image to given height.""" - if img.shape[0] > height or always: - rat = height / img.shape[0] - return cv2.resize(img, (int(rat * img.shape[1]), height)) - - return img - - -def ratio(img, height=SMALL_HEIGHT): - """Getting scale ratio.""" - return img.shape[0] / height - - -def img_extend(img, shape): - """Extend 2D image (numpy array) in vertical and horizontal direction. - Shape of result image will match 'shape' - Args: - img: image to be extended - shape: shape (tuple) of result image - Returns: - Extended image - """ - x = np.zeros(shape, np.uint8) - x[:img.shape[0], :img.shape[1]] = img - return x diff --git a/spaces/qingxu98/academic-chatgpt-beta/crazy_functional.py b/spaces/qingxu98/academic-chatgpt-beta/crazy_functional.py deleted file mode 100644 index 6f4d37ee7703b1de37bbe326ddd4fa2a990de67e..0000000000000000000000000000000000000000 --- a/spaces/qingxu98/academic-chatgpt-beta/crazy_functional.py +++ /dev/null @@ -1,192 +0,0 @@ -from toolbox import HotReload # HotReload 的意思是热更新,修改函数插件后,不需要重启程序,代码直接生效 - - -def get_crazy_functions(): - ###################### 第一组插件 ########################### - from crazy_functions.读文章写摘要 import 读文章写摘要 - from crazy_functions.生成函数注释 import 批量生成函数注释 - from crazy_functions.解析项目源代码 import 解析项目本身 - from crazy_functions.解析项目源代码 import 解析一个Python项目 - from crazy_functions.解析项目源代码 import 解析一个C项目的头文件 - from crazy_functions.解析项目源代码 import 解析一个C项目 - from crazy_functions.解析项目源代码 import 解析一个Golang项目 - from crazy_functions.解析项目源代码 import 解析一个Java项目 - from crazy_functions.解析项目源代码 import 解析一个Rect项目 - from crazy_functions.高级功能函数模板 import 高阶功能模板函数 - from crazy_functions.代码重写为全英文_多线程 import 全项目切换英文 - from crazy_functions.Latex全文润色 import Latex英文润色 - from crazy_functions.询问多个大语言模型 import 同时问询 - from crazy_functions.解析项目源代码 import 解析一个Lua项目 - from crazy_functions.解析项目源代码 import 解析一个CSharp项目 - from crazy_functions.总结word文档 import 总结word文档 - function_plugins = { - - "解析整个Python项目": { - "Color": "stop", # 按钮颜色 - "Function": HotReload(解析一个Python项目) - }, - "批量总结Word文档": { - "Color": "stop", - "Function": HotReload(总结word文档) - }, - "解析整个C++项目头文件": { - "Color": "stop", # 按钮颜色 - "AsButton": False, # 加入下拉菜单中 - "Function": HotReload(解析一个C项目的头文件) - }, - "解析整个C++项目(.cpp/.hpp/.c/.h)": { - "Color": "stop", # 按钮颜色 - "AsButton": False, # 加入下拉菜单中 - "Function": HotReload(解析一个C项目) - }, - "解析整个Go项目": { - "Color": "stop", # 按钮颜色 - "AsButton": False, # 加入下拉菜单中 - "Function": HotReload(解析一个Golang项目) - }, - "解析整个Java项目": { - "Color": "stop", # 按钮颜色 - "AsButton": False, # 加入下拉菜单中 - "Function": HotReload(解析一个Java项目) - }, - "解析整个React项目": { - "Color": "stop", # 按钮颜色 - "AsButton": False, # 加入下拉菜单中 - "Function": HotReload(解析一个Rect项目) - }, - "解析整个Lua项目": { - "Color": "stop", # 按钮颜色 - "AsButton": False, # 加入下拉菜单中 - "Function": HotReload(解析一个Lua项目) - }, - "解析整个CSharp项目": { - "Color": "stop", # 按钮颜色 - "AsButton": False, # 加入下拉菜单中 - "Function": HotReload(解析一个CSharp项目) - }, - "读Tex论文写摘要": { - "Color": "stop", # 按钮颜色 - "Function": HotReload(读文章写摘要) - }, - "批量生成函数注释": { - "Color": "stop", # 按钮颜色 - "Function": HotReload(批量生成函数注释) - }, - "[多线程Demo] 解析此项目本身(源码自译解)": { - "Function": HotReload(解析项目本身) - }, - "[多线程demo] 把本项目源代码切换成全英文": { - # HotReload 的意思是热更新,修改函数插件代码后,不需要重启程序,代码直接生效 - "AsButton": False, # 加入下拉菜单中 - "Function": HotReload(全项目切换英文) - }, - "[函数插件模板Demo] 历史上的今天": { - # HotReload 的意思是热更新,修改函数插件代码后,不需要重启程序,代码直接生效 - "Function": HotReload(高阶功能模板函数) - }, - - } - ###################### 第二组插件 ########################### - # [第二组插件]: 经过充分测试 - from crazy_functions.批量总结PDF文档 import 批量总结PDF文档 - from crazy_functions.批量总结PDF文档pdfminer import 批量总结PDF文档pdfminer - from crazy_functions.批量翻译PDF文档_多线程 import 批量翻译PDF文档 - from crazy_functions.谷歌检索小助手 import 谷歌检索小助手 - from crazy_functions.理解PDF文档内容 import 理解PDF文档内容标准文件输入 - from crazy_functions.Latex全文润色 import Latex中文润色 - from crazy_functions.Latex全文翻译 import Latex中译英 - from crazy_functions.Latex全文翻译 import Latex英译中 - from crazy_functions.批量Markdown翻译 import Markdown中译英 - from crazy_functions.批量Markdown翻译 import Markdown英译中 - - function_plugins.update({ - "批量翻译PDF文档(多线程)": { - "Color": "stop", - "AsButton": True, # 加入下拉菜单中 - "Function": HotReload(批量翻译PDF文档) - }, - "询问多个GPT模型": { - "Color": "stop", # 按钮颜色 - "Function": HotReload(同时问询) - }, - "[测试功能] 批量总结PDF文档": { - "Color": "stop", - "AsButton": False, # 加入下拉菜单中 - # HotReload 的意思是热更新,修改函数插件代码后,不需要重启程序,代码直接生效 - "Function": HotReload(批量总结PDF文档) - }, - "[测试功能] 批量总结PDF文档pdfminer": { - "Color": "stop", - "AsButton": False, # 加入下拉菜单中 - "Function": HotReload(批量总结PDF文档pdfminer) - }, - "谷歌学术检索助手(输入谷歌学术搜索页url)": { - "Color": "stop", - "AsButton": False, # 加入下拉菜单中 - "Function": HotReload(谷歌检索小助手) - }, - - "理解PDF文档内容 (模仿ChatPDF)": { - # HotReload 的意思是热更新,修改函数插件代码后,不需要重启程序,代码直接生效 - "Color": "stop", - "AsButton": False, # 加入下拉菜单中 - "Function": HotReload(理解PDF文档内容标准文件输入) - }, - "[测试功能] 英文Latex项目全文润色(输入路径或上传压缩包)": { - # HotReload 的意思是热更新,修改函数插件代码后,不需要重启程序,代码直接生效 - "Color": "stop", - "AsButton": False, # 加入下拉菜单中 - "Function": HotReload(Latex英文润色) - }, - "[测试功能] 中文Latex项目全文润色(输入路径或上传压缩包)": { - # HotReload 的意思是热更新,修改函数插件代码后,不需要重启程序,代码直接生效 - "Color": "stop", - "AsButton": False, # 加入下拉菜单中 - "Function": HotReload(Latex中文润色) - }, - "[测试功能] Latex项目全文中译英(输入路径或上传压缩包)": { - # HotReload 的意思是热更新,修改函数插件代码后,不需要重启程序,代码直接生效 - "Color": "stop", - "AsButton": False, # 加入下拉菜单中 - "Function": HotReload(Latex中译英) - }, - "[测试功能] Latex项目全文英译中(输入路径或上传压缩包)": { - # HotReload 的意思是热更新,修改函数插件代码后,不需要重启程序,代码直接生效 - "Color": "stop", - "AsButton": False, # 加入下拉菜单中 - "Function": HotReload(Latex英译中) - }, - "[测试功能] 批量Markdown中译英(输入路径或上传压缩包)": { - # HotReload 的意思是热更新,修改函数插件代码后,不需要重启程序,代码直接生效 - "Color": "stop", - "AsButton": False, # 加入下拉菜单中 - "Function": HotReload(Markdown中译英) - }, - "[测试功能] 批量Markdown英译中(输入路径或上传压缩包)": { - # HotReload 的意思是热更新,修改函数插件代码后,不需要重启程序,代码直接生效 - "Color": "stop", - "AsButton": False, # 加入下拉菜单中 - "Function": HotReload(Markdown英译中) - }, - - }) - - ###################### 第三组插件 ########################### - # [第三组插件]: 尚未充分测试的函数插件,放在这里 - try: - from crazy_functions.下载arxiv论文翻译摘要 import 下载arxiv论文并翻译摘要 - function_plugins.update({ - "一键下载arxiv论文并翻译摘要(先在input输入编号,如1812.10695)": { - "Color": "stop", - "AsButton": False, # 加入下拉菜单中 - "Function": HotReload(下载arxiv论文并翻译摘要) - } - }) - - except Exception as err: - print(f'[下载arxiv论文并翻译摘要] 插件导入失败 {str(err)}') - - - - ###################### 第n组插件 ########################### - return function_plugins diff --git a/spaces/qingxu98/gpt-academic/request_llm/bridge_azure_test.py b/spaces/qingxu98/gpt-academic/request_llm/bridge_azure_test.py deleted file mode 100644 index edc68f747d650e20a9e42d65dbcac1923d5cb192..0000000000000000000000000000000000000000 --- a/spaces/qingxu98/gpt-academic/request_llm/bridge_azure_test.py +++ /dev/null @@ -1,241 +0,0 @@ -""" - 该文件中主要包含三个函数 - - 不具备多线程能力的函数: - 1. predict: 正常对话时使用,具备完备的交互功能,不可多线程 - - 具备多线程调用能力的函数 - 2. predict_no_ui:高级实验性功能模块调用,不会实时显示在界面上,参数简单,可以多线程并行,方便实现复杂的功能逻辑 - 3. predict_no_ui_long_connection:在实验过程中发现调用predict_no_ui处理长文档时,和openai的连接容易断掉,这个函数用stream的方式解决这个问题,同样支持多线程 -""" - -import logging -import traceback -import importlib -import openai -import time - - -# 读取config.py文件中关于AZURE OPENAI API的信息 -from toolbox import get_conf, update_ui, clip_history, trimmed_format_exc -TIMEOUT_SECONDS, MAX_RETRY, AZURE_ENGINE, AZURE_ENDPOINT, AZURE_API_VERSION, AZURE_API_KEY = \ - get_conf('TIMEOUT_SECONDS', 'MAX_RETRY',"AZURE_ENGINE","AZURE_ENDPOINT", "AZURE_API_VERSION", "AZURE_API_KEY") - - -def get_full_error(chunk, stream_response): - """ - 获取完整的从Openai返回的报错 - """ - while True: - try: - chunk += next(stream_response) - except: - break - return chunk - -def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_prompt='', stream = True, additional_fn=None): - """ - 发送至azure openai api,流式获取输出。 - 用于基础的对话功能。 - inputs 是本次问询的输入 - top_p, temperature是chatGPT的内部调优参数 - history 是之前的对话列表(注意无论是inputs还是history,内容太长了都会触发token数量溢出的错误) - chatbot 为WebUI中显示的对话列表,修改它,然后yeild出去,可以直接修改对话界面内容 - additional_fn代表点击的哪个按钮,按钮见functional.py - """ - print(llm_kwargs["llm_model"]) - - if additional_fn is not None: - import core_functional - importlib.reload(core_functional) # 热更新prompt - core_functional = core_functional.get_core_functions() - if "PreProcess" in core_functional[additional_fn]: inputs = core_functional[additional_fn]["PreProcess"](inputs) # 获取预处理函数(如果有的话) - inputs = core_functional[additional_fn]["Prefix"] + inputs + core_functional[additional_fn]["Suffix"] - - raw_input = inputs - logging.info(f'[raw_input] {raw_input}') - chatbot.append((inputs, "")) - yield from update_ui(chatbot=chatbot, history=history, msg="等待响应") # 刷新界面 - - - payload = generate_azure_payload(inputs, llm_kwargs, history, system_prompt, stream) - - history.append(inputs); history.append("") - - retry = 0 - while True: - try: - - openai.api_type = "azure" - openai.api_version = AZURE_API_VERSION - openai.api_base = AZURE_ENDPOINT - openai.api_key = AZURE_API_KEY - response = openai.ChatCompletion.create(timeout=TIMEOUT_SECONDS, **payload);break - - except: - retry += 1 - chatbot[-1] = ((chatbot[-1][0], "获取response失败,重试中。。。")) - retry_msg = f",正在重试 ({retry}/{MAX_RETRY}) ……" if MAX_RETRY > 0 else "" - yield from update_ui(chatbot=chatbot, history=history, msg="请求超时"+retry_msg) # 刷新界面 - if retry > MAX_RETRY: raise TimeoutError - - gpt_replying_buffer = "" - is_head_of_the_stream = True - if stream: - - stream_response = response - - while True: - try: - chunk = next(stream_response) - - except StopIteration: - from toolbox import regular_txt_to_markdown; tb_str = '```\n' + trimmed_format_exc() + '```' - chatbot[-1] = (chatbot[-1][0], f"[Local Message] 远程返回错误: \n\n{tb_str} \n\n{regular_txt_to_markdown(chunk)}") - yield from update_ui(chatbot=chatbot, history=history, msg="远程返回错误:" + chunk) # 刷新界面 - return - - if is_head_of_the_stream and (r'"object":"error"' not in chunk): - # 数据流的第一帧不携带content - is_head_of_the_stream = False; continue - - if chunk: - #print(chunk) - try: - if "delta" in chunk["choices"][0]: - if chunk["choices"][0]["finish_reason"] == "stop": - logging.info(f'[response] {gpt_replying_buffer}') - break - status_text = f"finish_reason: {chunk['choices'][0]['finish_reason']}" - gpt_replying_buffer = gpt_replying_buffer + chunk["choices"][0]["delta"]["content"] - - history[-1] = gpt_replying_buffer - chatbot[-1] = (history[-2], history[-1]) - yield from update_ui(chatbot=chatbot, history=history, msg=status_text) # 刷新界面 - - except Exception as e: - traceback.print_exc() - yield from update_ui(chatbot=chatbot, history=history, msg="Json解析不合常规") # 刷新界面 - chunk = get_full_error(chunk, stream_response) - - error_msg = chunk - yield from update_ui(chatbot=chatbot, history=history, msg="Json异常" + error_msg) # 刷新界面 - return - - -def predict_no_ui_long_connection(inputs, llm_kwargs, history=[], sys_prompt="", observe_window=None, console_slience=False): - """ - 发送至AZURE OPENAI API,等待回复,一次性完成,不显示中间过程。但内部用stream的方法避免中途网线被掐。 - inputs: - 是本次问询的输入 - sys_prompt: - 系统静默prompt - llm_kwargs: - chatGPT的内部调优参数 - history: - 是之前的对话列表 - observe_window = None: - 用于负责跨越线程传递已经输出的部分,大部分时候仅仅为了fancy的视觉效果,留空即可。observe_window[0]:观测窗。observe_window[1]:看门狗 - """ - watch_dog_patience = 5 # 看门狗的耐心, 设置5秒即可 - payload = generate_azure_payload(inputs, llm_kwargs, history, system_prompt=sys_prompt, stream=True) - retry = 0 - while True: - - try: - openai.api_type = "azure" - openai.api_version = AZURE_API_VERSION - openai.api_base = AZURE_ENDPOINT - openai.api_key = AZURE_API_KEY - response = openai.ChatCompletion.create(timeout=TIMEOUT_SECONDS, **payload);break - - except: - retry += 1 - traceback.print_exc() - if retry > MAX_RETRY: raise TimeoutError - if MAX_RETRY!=0: print(f'请求超时,正在重试 ({retry}/{MAX_RETRY}) ……') - - - stream_response = response - result = '' - while True: - try: chunk = next(stream_response) - except StopIteration: - break - except: - chunk = next(stream_response) # 失败了,重试一次?再失败就没办法了。 - - if len(chunk)==0: continue - if not chunk.startswith('data:'): - error_msg = get_full_error(chunk, stream_response) - if "reduce the length" in error_msg: - raise ConnectionAbortedError("AZURE OPENAI API拒绝了请求:" + error_msg) - else: - raise RuntimeError("AZURE OPENAI API拒绝了请求:" + error_msg) - if ('data: [DONE]' in chunk): break - - delta = chunk["delta"] - if len(delta) == 0: break - if "role" in delta: continue - if "content" in delta: - result += delta["content"] - if not console_slience: print(delta["content"], end='') - if observe_window is not None: - # 观测窗,把已经获取的数据显示出去 - if len(observe_window) >= 1: observe_window[0] += delta["content"] - # 看门狗,如果超过期限没有喂狗,则终止 - if len(observe_window) >= 2: - if (time.time()-observe_window[1]) > watch_dog_patience: - raise RuntimeError("用户取消了程序。") - else: raise RuntimeError("意外Json结构:"+delta) - if chunk['finish_reason'] == 'length': - raise ConnectionAbortedError("正常结束,但显示Token不足,导致输出不完整,请削减单次输入的文本量。") - return result - - -def generate_azure_payload(inputs, llm_kwargs, history, system_prompt, stream): - """ - 整合所有信息,选择LLM模型,生成 azure openai api请求,为发送请求做准备 - """ - - conversation_cnt = len(history) // 2 - - messages = [{"role": "system", "content": system_prompt}] - if conversation_cnt: - for index in range(0, 2*conversation_cnt, 2): - what_i_have_asked = {} - what_i_have_asked["role"] = "user" - what_i_have_asked["content"] = history[index] - what_gpt_answer = {} - what_gpt_answer["role"] = "assistant" - what_gpt_answer["content"] = history[index+1] - if what_i_have_asked["content"] != "": - if what_gpt_answer["content"] == "": continue - messages.append(what_i_have_asked) - messages.append(what_gpt_answer) - else: - messages[-1]['content'] = what_gpt_answer['content'] - - what_i_ask_now = {} - what_i_ask_now["role"] = "user" - what_i_ask_now["content"] = inputs - messages.append(what_i_ask_now) - - payload = { - "model": llm_kwargs['llm_model'], - "messages": messages, - "temperature": llm_kwargs['temperature'], # 1.0, - "top_p": llm_kwargs['top_p'], # 1.0, - "n": 1, - "stream": stream, - "presence_penalty": 0, - "frequency_penalty": 0, - "engine": AZURE_ENGINE - } - try: - print(f" {llm_kwargs['llm_model']} : {conversation_cnt} : {inputs[:100]} ..........") - except: - print('输入中可能存在乱码。') - return payload - - diff --git a/spaces/quidiaMuxgu/Expedit-SAM/Download Absolute Zero Full Movie In Hindi Dubbed In Mp4 _BEST_.md b/spaces/quidiaMuxgu/Expedit-SAM/Download Absolute Zero Full Movie In Hindi Dubbed In Mp4 _BEST_.md deleted file mode 100644 index d593eb09a8feb6e750957581cd5b6d67a830fa67..0000000000000000000000000000000000000000 --- a/spaces/quidiaMuxgu/Expedit-SAM/Download Absolute Zero Full Movie In Hindi Dubbed In Mp4 _BEST_.md +++ /dev/null @@ -1,32 +0,0 @@ -

          Download Absolute Zero full movie in hindi dubbed in Mp4


          Download Ziphttps://geags.com/2uCsoJ



          -
          -with his portrayal of Tony Stark, is widely recognized and regarded by many fans as the quintessential performance of the actor. - -After the release of the first Iron Man film, Downey, Jr. was nominated for Best Actor at the 73rd Golden Globe Awards, winning for his role in . - -Its sequel Iron Man 2, was released in 2010, and received positive reviews. The film was nominated for six Academy Awards and won two including Best Visual Effects. It has also grossed over $1 billion at the box office, making it the highest-grossing film in the series. - -In 2014, it was announced that Downey, Jr. would be portraying Tony Stark/Iron Man in the Marvel Cinematic Universe film series, with Iron Man being the first. - -The film series consists of three MCU films and two spin-off films, with the most recent, , being released in May 2018. - -On May 2, 2020, Marvel announced that Downey, Jr. will reprise his role as Tony Stark / Iron Man in a fourth solo Marvel Cinematic Universe film, , scheduled to be released on May 3, 2022. - -In 2007, Downey, Jr. was ranked #5 in Empire Magazine's list of The Top 50 Movie Stars. - -Education - -Downey, Jr. studied theatre at Brigham Young University from 1995 to 1999, where he wrote and starred in five one-man shows. His acting teacher, Mark Schantz, said Downey, Jr. was an "outstanding performer" and "an intense and very serious actor" who "appeared to be very willing to learn his craft." - -Downey, Jr. graduated with a bachelor's degree in journalism from the University of Southern California. He returned to Brigham Young University in 2003 to receive his master's degree in public administration. - -Downey, Jr. also has a master's degree in physical therapy from the University of California at Irvine. - -Downey, Jr. is a member of The Church of Jesus Christ of Latter-day Saints (LDS Church). In an interview for , he stated that "Before I was baptized a [Latter-day Saint] we used to sneak into Joseph Smith's house when he was asleep and read the Book of Mormon by flashlight." He has been a member of the LDS Church for the majority of his life, being baptized in 1988. - -Personal life - -Downey, Jr. is married to actress May 4fefd39f24
          -
          -
          -

          diff --git a/spaces/quidiaMuxgu/Expedit-SAM/Emedia Card Designer Crack Key Keygen Free.md b/spaces/quidiaMuxgu/Expedit-SAM/Emedia Card Designer Crack Key Keygen Free.md deleted file mode 100644 index b27973fc0696a3582bd870b66cb3bed23db09bc6..0000000000000000000000000000000000000000 --- a/spaces/quidiaMuxgu/Expedit-SAM/Emedia Card Designer Crack Key Keygen Free.md +++ /dev/null @@ -1,6 +0,0 @@ -

          Emedia Card Designer Crack Key Keygen Free


          Download File ••• https://geags.com/2uCsFr



          -
          - 3cee63e6c2
          -
          -
          -

          diff --git a/spaces/r3gm/Aesthetic_RVC_Inference_HF/lib/tools/gui/gui_v1.py b/spaces/r3gm/Aesthetic_RVC_Inference_HF/lib/tools/gui/gui_v1.py deleted file mode 100644 index 65087853a85bcc16fcbb1911d04519d27a38f0bd..0000000000000000000000000000000000000000 --- a/spaces/r3gm/Aesthetic_RVC_Inference_HF/lib/tools/gui/gui_v1.py +++ /dev/null @@ -1,701 +0,0 @@ -import os -import logging -import sys -from dotenv import load_dotenv - -load_dotenv() - -os.environ["OMP_NUM_THREADS"] = "4" -if sys.platform == "darwin": - os.environ["PYTORCH_ENABLE_MPS_FALLBACK"] = "1" - -now_dir = os.getcwd() -sys.path.append(now_dir) -import multiprocessing - -logger = logging.getLogger(__name__) - - -class Harvest(multiprocessing.Process): - def __init__(self, inp_q, opt_q): - multiprocessing.Process.__init__(self) - self.inp_q = inp_q - self.opt_q = opt_q - - def run(self): - import numpy as np - import pyworld - - while 1: - idx, x, res_f0, n_cpu, ts = self.inp_q.get() - f0, t = pyworld.harvest( - x.astype(np.double), - fs=16000, - f0_ceil=1100, - f0_floor=50, - frame_period=10, - ) - res_f0[idx] = f0 - if len(res_f0.keys()) >= n_cpu: - self.opt_q.put(ts) - - -if __name__ == "__main__": - import json - import re - import threading - import time - from multiprocessing import Queue, cpu_count - - import librosa - from tools.torchgate import TorchGate - import numpy as np - import PySimpleGUI as sg - import sounddevice as sd - import torch - import torch.nn.functional as F - import torchaudio.transforms as tat - - import tools.rvc_for_realtime as rvc_for_realtime - from assets.i18n.i18n import I18nAuto - - i18n = I18nAuto() - device = rvc_for_realtime.config.device - - current_dir = os.getcwd() - inp_q = Queue() - opt_q = Queue() - n_cpu = min(cpu_count(), 8) - for _ in range(n_cpu): - Harvest(inp_q, opt_q).start() - - class GUIConfig: - def __init__(self) -> None: - self.pth_path: str = "" - self.index_path: str = "" - self.pitch: int = 0 - self.samplerate: int = 40000 - self.block_time: float = 1.0 # s - self.buffer_num: int = 1 - self.threhold: int = -60 - self.crossfade_time: float = 0.04 - self.extra_time: float = 2.0 - self.I_noise_reduce = False - self.O_noise_reduce = False - self.rms_mix_rate = 0.0 - self.index_rate = 0.3 - self.n_cpu = min(n_cpu, 6) - self.f0method = "harvest" - self.sg_input_device = "" - self.sg_output_device = "" - - class GUI: - def __init__(self) -> None: - self.config = GUIConfig() - self.flag_vc = False - - self.launcher() - - def load(self): - input_devices, output_devices, _, _ = self.get_devices() - try: - with open("assets/configs/config.json", "r") as j: - data = json.load(j) - data["pm"] = data["f0method"] == "pm" - data["harvest"] = data["f0method"] == "harvest" - data["crepe"] = data["f0method"] == "crepe" - data["rmvpe"] = data["f0method"] == "rmvpe" - except: - with open("assets/configs/config.json", "w") as j: - data = { - "pth_path": " ", - "index_path": " ", - "sg_input_device": input_devices[sd.default.device[0]], - "sg_output_device": output_devices[sd.default.device[1]], - "threhold": "-60", - "pitch": "0", - "index_rate": "0", - "rms_mix_rate": "0", - "block_time": "0.25", - "crossfade_length": "0.04", - "extra_time": "2", - "f0method": "rmvpe", - } - data["pm"] = data["f0method"] == "pm" - data["harvest"] = data["f0method"] == "harvest" - data["crepe"] = data["f0method"] == "crepe" - data["rmvpe"] = data["f0method"] == "rmvpe" - return data - - def launcher(self): - data = self.load() - sg.theme("LightBlue3") - input_devices, output_devices, _, _ = self.get_devices() - layout = [ - [ - sg.Frame( - title=i18n("加载模型"), - layout=[ - [ - sg.Input( - default_text=data.get("pth_path", ""), - key="pth_path", - ), - sg.FileBrowse( - i18n("选择.pth文件"), - initial_folder=os.path.join( - os.getcwd(), "assets/weights" - ), - file_types=((". pth"),), - ), - ], - [ - sg.Input( - default_text=data.get("index_path", ""), - key="index_path", - ), - sg.FileBrowse( - i18n("选择.index文件"), - initial_folder=os.path.join(os.getcwd(), "logs"), - file_types=((". index"),), - ), - ], - ], - ) - ], - [ - sg.Frame( - layout=[ - [ - sg.Text(i18n("输入设备")), - sg.Combo( - input_devices, - key="sg_input_device", - default_value=data.get("sg_input_device", ""), - ), - ], - [ - sg.Text(i18n("输出设备")), - sg.Combo( - output_devices, - key="sg_output_device", - default_value=data.get("sg_output_device", ""), - ), - ], - [sg.Button(i18n("重载设备列表"), key="reload_devices")], - ], - title=i18n("音频设备(请使用同种类驱动)"), - ) - ], - [ - sg.Frame( - layout=[ - [ - sg.Text(i18n("响应阈值")), - sg.Slider( - range=(-60, 0), - key="threhold", - resolution=1, - orientation="h", - default_value=data.get("threhold", "-60"), - enable_events=True, - ), - ], - [ - sg.Text(i18n("音调设置")), - sg.Slider( - range=(-24, 24), - key="pitch", - resolution=1, - orientation="h", - default_value=data.get("pitch", "0"), - enable_events=True, - ), - ], - [ - sg.Text(i18n("Index Rate")), - sg.Slider( - range=(0.0, 1.0), - key="index_rate", - resolution=0.01, - orientation="h", - default_value=data.get("index_rate", "0"), - enable_events=True, - ), - ], - [ - sg.Text(i18n("响度因子")), - sg.Slider( - range=(0.0, 1.0), - key="rms_mix_rate", - resolution=0.01, - orientation="h", - default_value=data.get("rms_mix_rate", "0"), - enable_events=True, - ), - ], - [ - sg.Text(i18n("音高算法")), - sg.Radio( - "pm", - "f0method", - key="pm", - default=data.get("pm", "") == True, - enable_events=True, - ), - sg.Radio( - "harvest", - "f0method", - key="harvest", - default=data.get("harvest", "") == True, - enable_events=True, - ), - sg.Radio( - "crepe", - "f0method", - key="crepe", - default=data.get("crepe", "") == True, - enable_events=True, - ), - sg.Radio( - "rmvpe", - "f0method", - key="rmvpe", - default=data.get("rmvpe", "") == True, - enable_events=True, - ), - ], - ], - title=i18n("常规设置"), - ), - sg.Frame( - layout=[ - [ - sg.Text(i18n("采样长度")), - sg.Slider( - range=(0.05, 2.4), - key="block_time", - resolution=0.01, - orientation="h", - default_value=data.get("block_time", "0.25"), - enable_events=True, - ), - ], - [ - sg.Text(i18n("harvest进程数")), - sg.Slider( - range=(1, n_cpu), - key="n_cpu", - resolution=1, - orientation="h", - default_value=data.get( - "n_cpu", min(self.config.n_cpu, n_cpu) - ), - enable_events=True, - ), - ], - [ - sg.Text(i18n("淡入淡出长度")), - sg.Slider( - range=(0.01, 0.15), - key="crossfade_length", - resolution=0.01, - orientation="h", - default_value=data.get("crossfade_length", "0.04"), - enable_events=True, - ), - ], - [ - sg.Text(i18n("额外推理时长")), - sg.Slider( - range=(0.05, 5.00), - key="extra_time", - resolution=0.01, - orientation="h", - default_value=data.get("extra_time", "2.0"), - enable_events=True, - ), - ], - [ - sg.Checkbox( - i18n("输入降噪"), - key="I_noise_reduce", - enable_events=True, - ), - sg.Checkbox( - i18n("输出降噪"), - key="O_noise_reduce", - enable_events=True, - ), - ], - ], - title=i18n("性能设置"), - ), - ], - [ - sg.Button(i18n("开始音频转换"), key="start_vc"), - sg.Button(i18n("停止音频转换"), key="stop_vc"), - sg.Text(i18n("推理时间(ms):")), - sg.Text("0", key="infer_time"), - ], - ] - self.window = sg.Window("RVC - GUI", layout=layout, finalize=True) - self.event_handler() - - def event_handler(self): - while True: - event, values = self.window.read() - if event == sg.WINDOW_CLOSED: - self.flag_vc = False - exit() - if event == "reload_devices": - prev_input = self.window["sg_input_device"].get() - prev_output = self.window["sg_output_device"].get() - input_devices, output_devices, _, _ = self.get_devices(update=True) - if prev_input not in input_devices: - self.config.sg_input_device = input_devices[0] - else: - self.config.sg_input_device = prev_input - self.window["sg_input_device"].Update(values=input_devices) - self.window["sg_input_device"].Update( - value=self.config.sg_input_device - ) - if prev_output not in output_devices: - self.config.sg_output_device = output_devices[0] - else: - self.config.sg_output_device = prev_output - self.window["sg_output_device"].Update(values=output_devices) - self.window["sg_output_device"].Update( - value=self.config.sg_output_device - ) - if event == "start_vc" and self.flag_vc == False: - if self.set_values(values) == True: - logger.info("Use CUDA: %s", torch.cuda.is_available()) - self.start_vc() - settings = { - "pth_path": values["pth_path"], - "index_path": values["index_path"], - "sg_input_device": values["sg_input_device"], - "sg_output_device": values["sg_output_device"], - "threhold": values["threhold"], - "pitch": values["pitch"], - "rms_mix_rate": values["rms_mix_rate"], - "index_rate": values["index_rate"], - "block_time": values["block_time"], - "crossfade_length": values["crossfade_length"], - "extra_time": values["extra_time"], - "n_cpu": values["n_cpu"], - "f0method": ["pm", "harvest", "crepe", "rmvpe"][ - [ - values["pm"], - values["harvest"], - values["crepe"], - values["rmvpe"], - ].index(True) - ], - } - with open("assets/configs/config.json", "w") as j: - json.dump(settings, j) - if event == "stop_vc" and self.flag_vc == True: - self.flag_vc = False - - # Parameter hot update - if event == "threhold": - self.config.threhold = values["threhold"] - elif event == "pitch": - self.config.pitch = values["pitch"] - if hasattr(self, "rvc"): - self.rvc.change_key(values["pitch"]) - elif event == "index_rate": - self.config.index_rate = values["index_rate"] - if hasattr(self, "rvc"): - self.rvc.change_index_rate(values["index_rate"]) - elif event == "rms_mix_rate": - self.config.rms_mix_rate = values["rms_mix_rate"] - elif event in ["pm", "harvest", "crepe", "rmvpe"]: - self.config.f0method = event - elif event == "I_noise_reduce": - self.config.I_noise_reduce = values["I_noise_reduce"] - elif event == "O_noise_reduce": - self.config.O_noise_reduce = values["O_noise_reduce"] - elif event != "start_vc" and self.flag_vc == True: - # Other parameters do not support hot update - self.flag_vc = False - - def set_values(self, values): - if len(values["pth_path"].strip()) == 0: - sg.popup(i18n("请选择pth文件")) - return False - if len(values["index_path"].strip()) == 0: - sg.popup(i18n("请选择index文件")) - return False - pattern = re.compile("[^\x00-\x7F]+") - if pattern.findall(values["pth_path"]): - sg.popup(i18n("pth文件路径不可包含中文")) - return False - if pattern.findall(values["index_path"]): - sg.popup(i18n("index文件路径不可包含中文")) - return False - self.set_devices(values["sg_input_device"], values["sg_output_device"]) - self.config.pth_path = values["pth_path"] - self.config.index_path = values["index_path"] - self.config.threhold = values["threhold"] - self.config.pitch = values["pitch"] - self.config.block_time = values["block_time"] - self.config.crossfade_time = values["crossfade_length"] - self.config.extra_time = values["extra_time"] - self.config.I_noise_reduce = values["I_noise_reduce"] - self.config.O_noise_reduce = values["O_noise_reduce"] - self.config.rms_mix_rate = values["rms_mix_rate"] - self.config.index_rate = values["index_rate"] - self.config.n_cpu = values["n_cpu"] - self.config.f0method = ["pm", "harvest", "crepe", "rmvpe"][ - [ - values["pm"], - values["harvest"], - values["crepe"], - values["rmvpe"], - ].index(True) - ] - return True - - def start_vc(self): - torch.cuda.empty_cache() - self.flag_vc = True - self.rvc = rvc_for_realtime.RVC( - self.config.pitch, - self.config.pth_path, - self.config.index_path, - self.config.index_rate, - self.config.n_cpu, - inp_q, - opt_q, - device, - self.rvc if hasattr(self, "rvc") else None - ) - self.config.samplerate = self.rvc.tgt_sr - self.zc = self.rvc.tgt_sr // 100 - self.block_frame = int(np.round(self.config.block_time * self.config.samplerate / self.zc)) * self.zc - self.block_frame_16k = 160 * self.block_frame // self.zc - self.crossfade_frame = int(np.round(self.config.crossfade_time * self.config.samplerate / self.zc)) * self.zc - self.sola_search_frame = self.zc - self.extra_frame = int(np.round(self.config.extra_time * self.config.samplerate / self.zc)) * self.zc - self.input_wav: torch.Tensor = torch.zeros( - self.extra_frame - + self.crossfade_frame - + self.sola_search_frame - + self.block_frame, - device=device, - dtype=torch.float32, - ) - self.input_wav_res: torch.Tensor= torch.zeros(160 * self.input_wav.shape[0] // self.zc, device=device,dtype=torch.float32) - self.pitch: np.ndarray = np.zeros( - self.input_wav.shape[0] // self.zc, - dtype="int32", - ) - self.pitchf: np.ndarray = np.zeros( - self.input_wav.shape[0] // self.zc, - dtype="float64", - ) - self.sola_buffer: torch.Tensor = torch.zeros( - self.crossfade_frame, device=device, dtype=torch.float32 - ) - self.nr_buffer: torch.Tensor = self.sola_buffer.clone() - self.output_buffer: torch.Tensor = self.input_wav.clone() - self.res_buffer: torch.Tensor = torch.zeros(2 * self.zc, device=device,dtype=torch.float32) - self.valid_rate = 1 - (self.extra_frame - 1) / self.input_wav.shape[0] - self.fade_in_window: torch.Tensor = ( - torch.sin( - 0.5 - * np.pi - * torch.linspace( - 0.0, - 1.0, - steps=self.crossfade_frame, - device=device, - dtype=torch.float32, - ) - ) - ** 2 - ) - self.fade_out_window: torch.Tensor = 1 - self.fade_in_window - self.resampler = tat.Resample( - orig_freq=self.config.samplerate, new_freq=16000, dtype=torch.float32 - ).to(device) - self.tg = TorchGate(sr=self.config.samplerate, n_fft=4*self.zc, prop_decrease=0.9).to(device) - thread_vc = threading.Thread(target=self.soundinput) - thread_vc.start() - - def soundinput(self): - """ - 接受音频输入 - """ - channels = 1 if sys.platform == "darwin" else 2 - with sd.Stream( - channels=channels, - callback=self.audio_callback, - blocksize=self.block_frame, - samplerate=self.config.samplerate, - dtype="float32", - ): - while self.flag_vc: - time.sleep(self.config.block_time) - logger.debug("Audio block passed.") - logger.debug("ENDing VC") - - def audio_callback( - self, indata: np.ndarray, outdata: np.ndarray, frames, times, status - ): - """ - 音频处理 - """ - start_time = time.perf_counter() - indata = librosa.to_mono(indata.T) - if self.config.threhold > -60: - rms = librosa.feature.rms( - y=indata, frame_length=4*self.zc, hop_length=self.zc - ) - db_threhold = ( - librosa.amplitude_to_db(rms, ref=1.0)[0] < self.config.threhold - ) - for i in range(db_threhold.shape[0]): - if db_threhold[i]: - indata[i * self.zc : (i + 1) * self.zc] = 0 - self.input_wav[: -self.block_frame] = self.input_wav[self.block_frame :].clone() - self.input_wav[-self.block_frame: ] = torch.from_numpy(indata).to(device) - self.input_wav_res[ : -self.block_frame_16k] = self.input_wav_res[self.block_frame_16k :].clone() - # input noise reduction and resampling - if self.config.I_noise_reduce: - input_wav = self.input_wav[-self.crossfade_frame -self.block_frame-2*self.zc: ] - input_wav = self.tg(input_wav.unsqueeze(0), self.input_wav.unsqueeze(0))[0, 2*self.zc:] - input_wav[: self.crossfade_frame] *= self.fade_in_window - input_wav[: self.crossfade_frame] += self.nr_buffer * self.fade_out_window - self.nr_buffer[:] = input_wav[-self.crossfade_frame: ] - input_wav = torch.cat((self.res_buffer[:], input_wav[: self.block_frame])) - self.res_buffer[:] = input_wav[-2*self.zc: ] - self.input_wav_res[-self.block_frame_16k-160: ] = self.resampler(input_wav)[160: ] - else: - self.input_wav_res[-self.block_frame_16k-160: ] = self.resampler(self.input_wav[-self.block_frame-2*self.zc: ])[160: ] - # infer - f0_extractor_frame = self.block_frame_16k + 800 - if self.config.f0method == 'rmvpe': - f0_extractor_frame = 5120 * ((f0_extractor_frame - 1) // 5120 + 1) - infer_wav = self.rvc.infer( - self.input_wav_res, - self.input_wav_res[-f0_extractor_frame :].cpu().numpy(), - self.block_frame_16k, - self.valid_rate, - self.pitch, - self.pitchf, - self.config.f0method, - ) - infer_wav = infer_wav[ - -self.crossfade_frame - self.sola_search_frame - self.block_frame : - ] - # output noise reduction - if self.config.O_noise_reduce: - self.output_buffer[: -self.block_frame] = self.output_buffer[self.block_frame :].clone() - self.output_buffer[-self.block_frame: ] = infer_wav[-self.block_frame:] - infer_wav = self.tg(infer_wav.unsqueeze(0), self.output_buffer.unsqueeze(0)).squeeze(0) - # volume envelop mixing - if self.config.rms_mix_rate < 1: - rms1 = librosa.feature.rms( - y=self.input_wav_res[-160*infer_wav.shape[0]//self.zc :].cpu().numpy(), - frame_length=640, - hop_length=160, - ) - rms1 = torch.from_numpy(rms1).to(device) - rms1 = F.interpolate( - rms1.unsqueeze(0), size=infer_wav.shape[0] + 1, mode="linear",align_corners=True, - )[0,0,:-1] - rms2 = librosa.feature.rms( - y=infer_wav[:].cpu().numpy(), frame_length=4*self.zc, hop_length=self.zc - ) - rms2 = torch.from_numpy(rms2).to(device) - rms2 = F.interpolate( - rms2.unsqueeze(0), size=infer_wav.shape[0] + 1, mode="linear",align_corners=True, - )[0,0,:-1] - rms2 = torch.max(rms2, torch.zeros_like(rms2) + 1e-3) - infer_wav *= torch.pow(rms1 / rms2, torch.tensor(1 - self.config.rms_mix_rate)) - # SOLA algorithm from https://github.com/yxlllc/DDSP-SVC - conv_input = infer_wav[None, None, : self.crossfade_frame + self.sola_search_frame] - cor_nom = F.conv1d(conv_input, self.sola_buffer[None, None, :]) - cor_den = torch.sqrt( - F.conv1d(conv_input ** 2, torch.ones(1, 1, self.crossfade_frame, device=device)) + 1e-8) - if sys.platform == "darwin": - _, sola_offset = torch.max(cor_nom[0, 0] / cor_den[0, 0]) - sola_offset = sola_offset.item() - else: - sola_offset = torch.argmax(cor_nom[0, 0] / cor_den[0, 0]) - logger.debug("sola_offset = %d", int(sola_offset)) - infer_wav = infer_wav[sola_offset: sola_offset + self.block_frame + self.crossfade_frame] - infer_wav[: self.crossfade_frame] *= self.fade_in_window - infer_wav[: self.crossfade_frame] += self.sola_buffer *self.fade_out_window - self.sola_buffer[:] = infer_wav[-self.crossfade_frame:] - if sys.platform == "darwin": - outdata[:] = infer_wav[:-self.crossfade_frame].cpu().numpy()[:, np.newaxis] - else: - outdata[:] = infer_wav[:-self.crossfade_frame].repeat(2, 1).t().cpu().numpy() - total_time = time.perf_counter() - start_time - self.window["infer_time"].update(int(total_time * 1000)) - logger.info("Infer time: %.2f", total_time) - - def get_devices(self, update: bool = True): - """获取设备列表""" - if update: - sd._terminate() - sd._initialize() - devices = sd.query_devices() - hostapis = sd.query_hostapis() - for hostapi in hostapis: - for device_idx in hostapi["devices"]: - devices[device_idx]["hostapi_name"] = hostapi["name"] - input_devices = [ - f"{d['name']} ({d['hostapi_name']})" - for d in devices - if d["max_input_channels"] > 0 - ] - output_devices = [ - f"{d['name']} ({d['hostapi_name']})" - for d in devices - if d["max_output_channels"] > 0 - ] - input_devices_indices = [ - d["index"] if "index" in d else d["name"] - for d in devices - if d["max_input_channels"] > 0 - ] - output_devices_indices = [ - d["index"] if "index" in d else d["name"] - for d in devices - if d["max_output_channels"] > 0 - ] - return ( - input_devices, - output_devices, - input_devices_indices, - output_devices_indices, - ) - - def set_devices(self, input_device, output_device): - """设置输出设备""" - ( - input_devices, - output_devices, - input_device_indices, - output_device_indices, - ) = self.get_devices() - sd.default.device[0] = input_device_indices[ - input_devices.index(input_device) - ] - sd.default.device[1] = output_device_indices[ - output_devices.index(output_device) - ] - logger.info( - "Input device: %s:%s", str(sd.default.device[0]), input_device - ) - logger.info( - "Output device: %s:%s", str(sd.default.device[1]), output_device - ) - - gui = GUI() \ No newline at end of file diff --git a/spaces/r3gm/RVC_HF/Makefile b/spaces/r3gm/RVC_HF/Makefile deleted file mode 100644 index 44de020e6feb7fcd58016d7c3c736681f533b597..0000000000000000000000000000000000000000 --- a/spaces/r3gm/RVC_HF/Makefile +++ /dev/null @@ -1,63 +0,0 @@ -.PHONY: -.ONESHELL: - -help: ## Show this help and exit - @grep -hE '^[A-Za-z0-9_ \-]*?:.*##.*$$' $(MAKEFILE_LIST) | sort | awk 'BEGIN {FS = ":.*?## "}; {printf "\033[36m%-30s\033[0m %s\n", $$1, $$2}' - -install: ## Install dependencies (Do everytime you start up a paperspace machine) - apt-get -y install build-essential python3-dev ffmpeg - pip install --upgrade setuptools wheel - pip install --upgrade pip - pip install faiss-gpu fairseq gradio ffmpeg ffmpeg-python praat-parselmouth pyworld numpy==1.23.5 numba==0.56.4 librosa==0.9.1 - pip install -r requirements.txt - pip install --upgrade lxml - apt-get update - apt -y install -qq aria2 - -basev1: ## Download version 1 pre-trained models (Do only once after cloning the fork) - mkdir -p pretrained uvr5_weights - git pull - aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained/D32k.pth -d pretrained -o D32k.pth - aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained/D40k.pth -d pretrained -o D40k.pth - aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained/D48k.pth -d pretrained -o D48k.pth - aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained/G32k.pth -d pretrained -o G32k.pth - aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained/G40k.pth -d pretrained -o G40k.pth - aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained/G48k.pth -d pretrained -o G48k.pth - aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained/f0D32k.pth -d pretrained -o f0D32k.pth - aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained/f0D40k.pth -d pretrained -o f0D40k.pth - aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained/f0D48k.pth -d pretrained -o f0D48k.pth - aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained/f0G32k.pth -d pretrained -o f0G32k.pth - aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained/f0G40k.pth -d pretrained -o f0G40k.pth - aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained/f0G48k.pth -d pretrained -o f0G48k.pth - aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/uvr5_weights/HP2-人声vocals+非人声instrumentals.pth -d uvr5_weights -o HP2-人声vocals+非人声instrumentals.pth - aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/uvr5_weights/HP5-主旋律人声vocals+其他instrumentals.pth -d uvr5_weights -o HP5-主旋律人声vocals+其他instrumentals.pth - aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/hubert_base.pt -d ./ -o hubert_base.pt - -basev2: ## Download version 2 pre-trained models (Do only once after cloning the fork) - mkdir -p pretrained_v2 uvr5_weights - git pull - aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained_v2/D32k.pth -d pretrained_v2 -o D32k.pth - aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained_v2/D40k.pth -d pretrained_v2 -o D40k.pth - aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained_v2/D48k.pth -d pretrained_v2 -o D48k.pth - aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained_v2/G32k.pth -d pretrained_v2 -o G32k.pth - aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained_v2/G40k.pth -d pretrained_v2 -o G40k.pth - aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained_v2/G48k.pth -d pretrained_v2 -o G48k.pth - aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained_v2/f0D32k.pth -d pretrained_v2 -o f0D32k.pth - aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained_v2/f0D40k.pth -d pretrained_v2 -o f0D40k.pth - aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained_v2/f0D48k.pth -d pretrained_v2 -o f0D48k.pth - aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained_v2/f0G32k.pth -d pretrained_v2 -o f0G32k.pth - aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained_v2/f0G40k.pth -d pretrained_v2 -o f0G40k.pth - aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained_v2/f0G48k.pth -d pretrained_v2 -o f0G48k.pth - aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/uvr5_weights/HP2-人声vocals+非人声instrumentals.pth -d uvr5_weights -o HP2-人声vocals+非人声instrumentals.pth - aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/uvr5_weights/HP5-主旋律人声vocals+其他instrumentals.pth -d uvr5_weights -o HP5-主旋律人声vocals+其他instrumentals.pth - aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/hubert_base.pt -d ./ -o hubert_base.pt - -run-ui: ## Run the python GUI - python infer-web.py --paperspace --pycmd python - -run-cli: ## Run the python CLI - python infer-web.py --pycmd python --is_cli - -tensorboard: ## Start the tensorboard (Run on separate terminal) - echo https://tensorboard-$$(hostname).clg07azjl.paperspacegradient.com - tensorboard --logdir logs --bind_all \ No newline at end of file diff --git a/spaces/r3gm/RVC_HF/i18n/scan_i18n.py b/spaces/r3gm/RVC_HF/i18n/scan_i18n.py deleted file mode 100644 index f3e52cf4f9f06d78877d77d2353f666aa759e36f..0000000000000000000000000000000000000000 --- a/spaces/r3gm/RVC_HF/i18n/scan_i18n.py +++ /dev/null @@ -1,75 +0,0 @@ -import ast -import glob -import json -from collections import OrderedDict - - -def extract_i18n_strings(node): - i18n_strings = [] - - if ( - isinstance(node, ast.Call) - and isinstance(node.func, ast.Name) - and node.func.id == "i18n" - ): - for arg in node.args: - if isinstance(arg, ast.Str): - i18n_strings.append(arg.s) - - for child_node in ast.iter_child_nodes(node): - i18n_strings.extend(extract_i18n_strings(child_node)) - - return i18n_strings - - -# scan the directory for all .py files (recursively) -# for each file, parse the code into an AST -# for each AST, extract the i18n strings - -strings = [] -for filename in glob.iglob("**/*.py", recursive=True): - with open(filename, "r") as f: - code = f.read() - if "I18nAuto" in code: - tree = ast.parse(code) - i18n_strings = extract_i18n_strings(tree) - print(filename, len(i18n_strings)) - strings.extend(i18n_strings) -code_keys = set(strings) -""" -n_i18n.py -gui_v1.py 26 -app.py 16 -infer-web.py 147 -scan_i18n.py 0 -i18n.py 0 -lib/train/process_ckpt.py 1 -""" -print() -print("Total unique:", len(code_keys)) - - -standard_file = "i18n/locale/zh_CN.json" -with open(standard_file, "r", encoding="utf-8") as f: - standard_data = json.load(f, object_pairs_hook=OrderedDict) -standard_keys = set(standard_data.keys()) - -# Define the standard file name -unused_keys = standard_keys - code_keys -print("Unused keys:", len(unused_keys)) -for unused_key in unused_keys: - print("\t", unused_key) - -missing_keys = code_keys - standard_keys -print("Missing keys:", len(missing_keys)) -for missing_key in missing_keys: - print("\t", missing_key) - -code_keys_dict = OrderedDict() -for s in strings: - code_keys_dict[s] = s - -# write back -with open(standard_file, "w", encoding="utf-8") as f: - json.dump(code_keys_dict, f, ensure_ascii=False, indent=4, sort_keys=True) - f.write("\n") diff --git a/spaces/radames/Candle-BERT-Semantic-Similarity-Wasm/utils.js b/spaces/radames/Candle-BERT-Semantic-Similarity-Wasm/utils.js deleted file mode 100644 index e6d42ad85a3d66b49fbd2e5876f1552a02f24727..0000000000000000000000000000000000000000 --- a/spaces/radames/Candle-BERT-Semantic-Similarity-Wasm/utils.js +++ /dev/null @@ -1,109 +0,0 @@ -export async function getEmbeddings( - worker, - weightsURL, - tokenizerURL, - configURL, - modelID, - sentences, - updateStatus = null -) { - return new Promise((resolve, reject) => { - worker.postMessage({ - weightsURL, - tokenizerURL, - configURL, - modelID, - sentences, - }); - function messageHandler(event) { - if ("error" in event.data) { - worker.removeEventListener("message", messageHandler); - reject(new Error(event.data.error)); - } - if (event.data.status === "complete") { - worker.removeEventListener("message", messageHandler); - resolve(event.data); - } - if (updateStatus) updateStatus(event.data); - } - worker.addEventListener("message", messageHandler); - }); -} - -const MODELS = { - intfloat_e5_small_v2: { - base_url: "https://huggingface.co/intfloat/e5-small-v2/resolve/main/", - search_prefix: "query: ", - document_prefix: "passage: ", - }, - intfloat_e5_base_v2: { - base_url: "https://huggingface.co/intfloat/e5-base-v2/resolve/main/", - search_prefix: "query: ", - document_prefix: "passage:", - }, - intfloat_multilingual_e5_small: { - base_url: - "https://huggingface.co/intfloat/multilingual-e5-small/resolve/main/", - search_prefix: "query: ", - document_prefix: "passage: ", - }, - sentence_transformers_all_MiniLM_L6_v2: { - base_url: - "https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2/resolve/refs%2Fpr%2F21/", - search_prefix: "", - document_prefix: "", - }, - sentence_transformers_all_MiniLM_L12_v2: { - base_url: - "https://huggingface.co/sentence-transformers/all-MiniLM-L12-v2/resolve/refs%2Fpr%2F4/", - search_prefix: "", - document_prefix: "", - }, - gte_tiny: { - base_url: "https://huggingface.co/TaylorAI/gte-tiny/resolve/refs%2Fpr%2F2/", - search_prefix: "", - document_prefix: "", - }, - bge_micro: { - base_url: "https://huggingface.co/TaylorAI/bge-micro/resolve/refs%2Fpr%2F1/", - search_prefix: "", - document_prefix: "", - }, -}; -export function getModelInfo(id) { - return { - modelURL: MODELS[id].base_url + "model.safetensors", - configURL: MODELS[id].base_url + "config.json", - tokenizerURL: MODELS[id].base_url + "tokenizer.json", - search_prefix: MODELS[id].search_prefix, - document_prefix: MODELS[id].document_prefix, - }; -} - -export function cosineSimilarity(vec1, vec2) { - const dot = vec1.reduce((acc, val, i) => acc + val * vec2[i], 0); - const a = Math.sqrt(vec1.reduce((acc, val) => acc + val * val, 0)); - const b = Math.sqrt(vec2.reduce((acc, val) => acc + val * val, 0)); - return dot / (a * b); -} -export async function getWikiText(article) { - // thanks to wikipedia for the API - const URL = `https://en.wikipedia.org/w/api.php?action=query&prop=extracts&exlimit=1&titles=${article}&explaintext=1&exsectionformat=plain&format=json&origin=*`; - return fetch(URL, { - method: "GET", - headers: { - Accept: "application/json", - }, - }) - .then((r) => r.json()) - .then((data) => { - const pages = data.query.pages; - const pageId = Object.keys(pages)[0]; - const extract = pages[pageId].extract; - if (extract === undefined || extract === "") { - throw new Error("No article found"); - } - return extract; - }) - .catch((error) => console.error("Error:", error)); -} diff --git a/spaces/radames/OpenAI-CLIP-JavaScript/enable-threads.js b/spaces/radames/OpenAI-CLIP-JavaScript/enable-threads.js deleted file mode 100644 index 955e31f60fdafe90ebdfcc69324d84fe108cc16c..0000000000000000000000000000000000000000 --- a/spaces/radames/OpenAI-CLIP-JavaScript/enable-threads.js +++ /dev/null @@ -1,75 +0,0 @@ -// NOTE: This file creates a service worker that cross-origin-isolates the page (read more here: https://web.dev/coop-coep/) which allows us to use wasm threads. -// Normally you would set the COOP and COEP headers on the server to do this, but Github Pages doesn't allow this, so this is a hack to do that. - -/* Edited version of: coi-serviceworker v0.1.6 - Guido Zuidhof, licensed under MIT */ -// From here: https://github.com/gzuidhof/coi-serviceworker -if(typeof window === 'undefined') { - self.addEventListener("install", () => self.skipWaiting()); - self.addEventListener("activate", e => e.waitUntil(self.clients.claim())); - - async function handleFetch(request) { - if(request.cache === "only-if-cached" && request.mode !== "same-origin") { - return; - } - - if(request.mode === "no-cors") { // We need to set `credentials` to "omit" for no-cors requests, per this comment: https://bugs.chromium.org/p/chromium/issues/detail?id=1309901#c7 - request = new Request(request.url, { - cache: request.cache, - credentials: "omit", - headers: request.headers, - integrity: request.integrity, - destination: request.destination, - keepalive: request.keepalive, - method: request.method, - mode: request.mode, - redirect: request.redirect, - referrer: request.referrer, - referrerPolicy: request.referrerPolicy, - signal: request.signal, - }); - } - - let r = await fetch(request).catch(e => console.error(e)); - - if(r.status === 0) { - return r; - } - - const headers = new Headers(r.headers); - headers.set("Cross-Origin-Embedder-Policy", "credentialless"); // or: require-corp - headers.set("Cross-Origin-Opener-Policy", "same-origin"); - - return new Response(r.body, { status: r.status, statusText: r.statusText, headers }); - } - - self.addEventListener("fetch", function(e) { - e.respondWith(handleFetch(e.request)); // respondWith must be executed synchonously (but can be passed a Promise) - }); - -} else { - (async function() { - if(window.crossOriginIsolated !== false) return; - - let registration = await navigator.serviceWorker.register(window.document.currentScript.src).catch(e => console.error("COOP/COEP Service Worker failed to register:", e)); - if(registration) { - console.log("COOP/COEP Service Worker registered", registration.scope); - - registration.addEventListener("updatefound", () => { - console.log("Reloading page to make use of updated COOP/COEP Service Worker."); - window.location.reload(); - }); - - // If the registration is active, but it's not controlling the page - if(registration.active && !navigator.serviceWorker.controller) { - console.log("Reloading page to make use of COOP/COEP Service Worker."); - window.location.reload(); - } - } - })(); -} - -// Code to deregister: -// let registrations = await navigator.serviceWorker.getRegistrations(); -// for(let registration of registrations) { -// await registration.unregister(); -// } diff --git a/spaces/raedeXanto/academic-chatgpt-beta/ Din 53505.pdf .md b/spaces/raedeXanto/academic-chatgpt-beta/ Din 53505.pdf .md deleted file mode 100644 index b010e4d8d1519c1cac74e69ae0c850e522995fae..0000000000000000000000000000000000000000 --- a/spaces/raedeXanto/academic-chatgpt-beta/ Din 53505.pdf .md +++ /dev/null @@ -1,161 +0,0 @@ - -

          Norma Din 53505.pdf: A Comprehensive Guide to Shore Hardness Testing of Rubber

          -

          If you work with rubber products or materials, you probably know how important it is to measure their hardness. Hardness is a property that reflects the resistance of a material to deformation under a given force. It can affect many aspects of rubber performance, such as elasticity, abrasion resistance, durability, etc.

          -

          Norma Din 53505.pdf


          DOWNLOAD ►►► https://tinourl.com/2uL2IC



          -

          But how do you measure the hardness of rubber? And what standard do you use to ensure accuracy and consistency?

          -

          In this article, we will introduce you to Norma Din 53505.pdf, a German standard that specifies the Shore hardness testing of rubber test pieces and products. We will explain what this standard is, what it covers, how it works, and why it is useful for your applications. We will also compare it with other standards for hardness testing of rubber, such as ISO 868, ISO 7619-1, DIN 53519-1, DIN 53519-2, etc.

          -

          By the end of this article, you will have a clear understanding of Norma Din 53505.pdf and how to use it for your rubber hardness testing needs.

          -

          Shore hardness testing of rubber

          -

          Shore hardness testing is one of the most common methods for measuring the hardness of rubber. It is based on the principle of indentation hardness, which means that a harder material will make a smaller indentation on a softer material when pressed with a certain force.

          -

          Shore hardness testing uses a device called a durometer, which consists of a spring-loaded indenter that is pressed against the surface of the rubber sample. The depth of penetration of the indenter is measured by a scale or a dial on the durometer. The higher the reading on the scale or dial, the harder the rubber.

          -

          There are different types of durometers that can be used for Shore hardness testing, depending on the hardness range of the rubber. The most common types are type A and type D durometers.

          -

          Norma Din 53505 standard for rubber hardness testing
          -Norma Din 53505 pdf download free
          -Norma Din 53505 shore A and D scales
          -Norma Din 53505 comparison with ASTM D2240
          -Norma Din 53505 compliance for rubber products
          -Norma Din 53505 online calculator for hardness conversion
          -Norma Din 53505 specification and requirements
          -Norma Din 53505 test method and procedure
          -Norma Din 53505 application and examples
          -Norma Din 53505 advantages and disadvantages
          -Norma Din 53505 history and development
          -Norma Din 53505 revision and update
          -Norma Din 53505 certification and accreditation
          -Norma Din 53505 interpretation and analysis
          -Norma Din 53505 measurement uncertainty and error
          -Norma Din 53505 calibration and verification
          -Norma Din 53505 equipment and instruments
          -Norma Din 53505 units and symbols
          -Norma Din 53505 formula and calculation
          -Norma Din 53505 graphs and charts
          -Norma Din 53505 correlation and regression
          -Norma Din 53505 factors and variables
          -Norma Din 53505 range and tolerance
          -Norma Din 53505 influence and significance
          -Norma Din 53505 validation and verification
          -Norma Din 53505 reference and citation
          -Norma Din 53505 summary and conclusion
          -Norma Din 53505 review and feedback
          -Norma Din 53505 questions and answers
          -Norma Din 53505 tips and tricks
          -Norma Din 53505 guide and tutorial
          -Norma Din 53505 video and audio
          -Norma Din 53505 ebook and course
          -Norma Din 53505 blog and article
          -Norma Din 53505 forum and community
          -Norma Din 53505 news and events
          -Norma Din 53505 case study and research paper
          -Norma Din 53505 book and magazine
          -Norma Din 53505 software and tool
          -Norma Din 53505 website and link
          -Norma Din 53505 sample and template
          -Norma Din 53505 data and statistics
          -Norma Din 53505 report and document
          -Norma Din 53505 presentation and slide show
          -Norma Din 53505 image and photo
          -Norma Din 53505 infographic and diagram
          -Norma Din 53505 animation and simulation
          -Norma Din 53505 game and quiz

          -

          Type A durometers

          -

          Type A durometers are suitable for testing in the hardness range from 10 to 90 Shore A. They have a truncated cone-shaped indenter with a diameter of 0.79 mm at its tip and an angle of 35° at its base. The indenter is pressed against the rubber sample with a force of 0.822 N.

          -

          Type A durometers are used for testing soft to medium-hard rubbers, such as natural rubber, neoprene, nitrile, silicone, etc.

          -

          Type D durometers

          -

          Type D durometers are suitable for testing in the high hardness range from 20 to 90 Shore D. They have a cone-shaped indenter with a diameter of 0.1 mm at its tip and an angle of 30° at its base. The indenter is pressed against the rubber sample with a force of 4.448 N.

          -

          Type D durometers are used for testing hard rubbers, such as ebonite, vulcanized rubber, thermoplastic elastomers, etc.

          -

          Advantages and disadvantages of Shore hardness testing

          -

          Some of the advantages of Shore hardness testing are:

          -
            -
          • It is simple, fast, and inexpensive.
          • -
          • It can be performed on any shape or size of rubber sample or product.
          • -
          • It can provide a good indication of the elastic modulus and other mechanical properties of rubber.
          • -
          • It can be used for quality control and comparison purposes.
          • -
          -

          Some of the disadvantages of Shore hardness testing are:

          -
            -
          • It is influenced by many factors, such as temperature, humidity, surface condition, thickness, curvature, etc.
          • -
          • It has limited accuracy and precision due to variations in measurement procedures and equipment.
          • -
          • It does not provide information on other aspects of rubber behavior, such as creep, fatigue, fracture toughness, etc.
          • -
          • It may not correlate well with other methods or standards for hardness testing.
          • -
          -

          Test parameters and procedures

          -

          In order to perform Shore hardness testing according to Norma Din 53505.pdf, you need to follow some specific test parameters and procedures. These include:

          -

          Test pieces and products

          -

          The test pieces and products that can be tested according to Norma Din 53505.pdf are rubber materials that have a thickness of at least 6 mm. If the thickness is less than 6 mm, the test piece or product should be backed by a rigid support. The test piece or product should also have a smooth and flat surface that is free of dust, dirt, grease, or other contaminants.

          -

          The test pieces and products can be of any shape or size, as long as they can fit under the durometer and allow for at least three measurements at different locations. The distance between the measurements should be at least 6 mm for type A durometers and 3 mm for type D durometers. The distance from the edge of the test piece or product should be at least 12 mm for type A durometers and 6 mm for type D durometers.

          -

          Test conditions and equipment

          -

          The test conditions and equipment that are required for Norma Din 53505.pdf are as follows:

          -
            -
          • The standard laboratory temperature should be 23 °C ± 2 °C.
          • -
          • The standard laboratory humidity should be 50 % ± 5 %.
          • -
          • The durometer should conform to the specifications given in ISO 868 or ISO 7619-1, depending on the type of durometer used.
          • -
          • The durometer should be calibrated regularly according to the manufacturer's instructions.
          • -
          • The durometer should be mounted on a stand that can hold it vertically and apply a constant pressure on the test piece or product.
          • -
          • The stand should have a rigid base that can support the test piece or product without deformation.
          • -
          • The stand should have an adjustable height that can accommodate different thicknesses of test pieces or products.
          • -
          • The stand should have a device that can indicate when the indenter has reached its maximum penetration depth.
          • -
          -

          Steps and methods for performing the test

          -

          The steps and methods for performing the test according to Norma Din 53505.pdf are as follows:

          -
            -
          1. Prepare the test pieces or products according to ISO 23529.
          2. -
          3. Condition the test pieces or products at the standard laboratory temperature and humidity for at least 16 h before testing.
          4. -
          5. Place the test piece or product on the base of the stand and adjust the height of the durometer so that the indenter is just touching the surface of the test piece or product.
          6. -
          7. Apply a force on the durometer to press the indenter into the test piece or product until it reaches its maximum penetration depth. The force should be applied gradually and uniformly within 1 s.
          8. -
          9. Read and record the hardness value on the scale or dial of the durometer after 3 s from applying the force. If possible, use a device that can automatically record the hardness value.
          10. -
          11. Release the force on the durometer and remove it from the test piece or product.
          12. -
          13. Repeat steps 3 to 6 for at least two more measurements at different locations on the test piece or product. The measurements should be spaced evenly over the surface of the test piece or product.
          14. -
          15. Calculate and report the arithmetic mean of the three measurements as the Shore hardness value of the test piece or product. Round off the result to the nearest whole number.
          16. -
          -

          Precision data and results

          -

          The precision data and results obtained from Norma Din 53505.pdf are based on an interlaboratory test conducted in 1985 with 14 laboratories participating. The test involved testing three qualities of rubber with different hardness levels using type A durometers. The results are shown in table 1 below.

          - - - - - -
          Rubber qualityMean hardness (Shore A)Repeatability limit (r)Reproducibility limit (R)
          A401,02,5
          B600,82,0
          C800,71,8
          -

          The repeatability limit (r) is defined as the maximum difference between two measurements obtained by one operator using one apparatus under repeatability conditions (same method, same operator, same apparatus, same laboratory, and short interval of time).

          -

          The reproducibility limit (R) is defined as the maximum difference between two measurements obtained by different operators using different apparatus in different laboratories under reproducibility conditions (same method, same sample, same test conditions, and short interval of time).

          -

          The precision data and results are useful for assessing the variability and uncertainty of the Shore hardness measurements and for comparing them with other methods or standards.

          -

          Comparison with other standards

          -

          Norma Din 53505.pdf is not the only standard for Shore hardness testing of rubber. There are other standards that have similar or different specifications and requirements for this method. Some of the most relevant standards are:

          -
            -
          • ISO 868: Plastics and ebonite - Determination of indentation hardness by means of a durometer (Shore hardness). This standard is identical to Norma Din 53505.pdf in terms of the types of durometers, the test conditions, and the test procedures. However, it also covers the testing of plastics and ebonite materials, which are not included in Norma Din 53505.pdf.
          • -
          • ISO 7619-1: Rubber, vulcanized or thermoplastic - Determination of indentation hardness - Part 1: Durometer method (Shore hardness). This standard is also identical to Norma Din 53505.pdf in terms of the types of durometers, the test conditions, and the test procedures. However, it also specifies additional requirements for the calibration and verification of the durometers, as well as for the reporting of the test results.
          • -
          • DIN 53519-1: Testing of rubber - Determination of indentation hardness by means of a ball indenter. This standard specifies a different method for measuring the hardness of rubber in the middle hardness range. It uses a ball indenter with a diameter of 2.5 mm that is pressed into the rubber sample with a force of 0.49 N. The depth of penetration of the ball indenter is measured by a dial gauge or a digital display. The hardness value is calculated from the depth of penetration and expressed in IRHD (International Rubber Hardness Degrees).
          • -
          • DIN 53519-2: Testing of rubber - Determination of indentation hardness by means of a ball indenter - Small test pieces. This standard specifies a modified method for measuring the hardness of rubber test pieces that are too small to be tested according to DIN 53519-1. It uses a ball indenter with a diameter of 2.5 mm that is pressed into the rubber sample with a force of 0.49 N. The depth of penetration of the ball indenter is measured by an optical device or a digital display. The hardness value is calculated from the depth of penetration and expressed in IRHD.
          • -
          -

          The comparison between Norma Din 53505.pdf and other standards for Shore hardness testing of rubber shows that there are some similarities and differences in terms of the scope, the equipment, the procedures, and the results. Depending on the purpose and the preference of the user, one standard may be more suitable than another for a specific application or industry.

          -

          Conclusion

          -

          In this article, we have provided you with a comprehensive guide to Norma Din 53505.pdf, a German standard that specifies the Shore hardness testing of rubber test pieces and products. We have explained what this standard is, what it covers, how it works, and why it is useful for your applications. We have also compared it with other standards for hardness testing of rubber, such as ISO 868, ISO 7619-1, DIN 53519-1, DIN 53519-2, etc.

          -

          We hope that this article has helped you to understand Norma Din 53505.pdf better and to use it effectively for your rubber hardness testing needs. If you have any questions or feedback about this article or this standard, please feel free to contact us anytime.

          -

          FAQs

          -

          Here are some frequently asked questions about Norma Din 53505.pdf:

          -
            -
          1. What is Shore hardness?
          2. -

            Shore hardness is a property that reflects the resistance of a material to deformation under a given force. It is measured by pressing an indenter into the surface of the material and reading the depth of penetration on a scale or dial.

            -
          3. What are type A and type D durometers?
          4. -

            Type A and type D durometers are devices that are used for Shore hardness testing of rubber. Type A durometers have a truncated cone-shaped indenter that is suitable for testing soft to medium-hard rubbers. Type D durometers have a cone-shaped indenter that is suitable for testing hard rubbers.

            -
          5. What are repeatability limit and reproducibility limit?
          6. -

            Repeatability limit and reproducibility limit are statistical parameters that indicate the variability and uncertainty of the Shore hardness measurements. The repeatability limit is the maximum difference between two measurements obtained by the same operator using the same apparatus in the same laboratory. The reproducibility limit is the maximum difference between two measurements obtained by different operators using different apparatus in different laboratories.

            -
          7. What is IRHD?
          8. -

            IRHD stands for International Rubber Hardness Degrees. It is another method for measuring the hardness of rubber in the middle hardness range. It uses a ball indenter with a diameter of 2.5 mm that is pressed into the rubber sample with a force of 5.4 N. The depth of penetration of the ball indenter is measured by an optical device or a digital display. The hardness value is calculated from the depth of penetration and expressed in IRHD.

            -
          9. What are the advantages and disadvantages of Norma Din 53505.pdf?
          10. -

            Some of the advantages of Norma Din 53505.pdf are:

            -
              -
            • It is simple, fast, and inexpensive.
            • -
            • It can be performed on any shape or size of rubber sample or product.
            • -
            • It can provide a good indication of the elastic modulus and other mechanical properties of rubber.
            • -
            • It can be used for quality control and comparison purposes.
            • -
            -

            Some of the disadvantages of Norma Din 53505.pdf are:

            -
              -
            • It is influenced by many factors, such as temperature, humidity, surface condition, thickness, curvature, etc.
            • -
            • It has limited accuracy and precision due to variations in measurement procedures and equipment.
            • -
            • It does not provide information on other aspects of rubber behavior, such as creep, fatigue, fracture toughness, etc.
            • -
            • It may not correlate well with other methods or standards for hardness testing.
            • -
            -

            0a6ba089eb
            -
            -
            \ No newline at end of file diff --git a/spaces/raedeXanto/academic-chatgpt-beta/HD Online Player (Mary Kom 1080p movies download) - Experience the thrill of the world boxing championships.md b/spaces/raedeXanto/academic-chatgpt-beta/HD Online Player (Mary Kom 1080p movies download) - Experience the thrill of the world boxing championships.md deleted file mode 100644 index adfb170bb633a241c281986c0b2d9e114e98aca5..0000000000000000000000000000000000000000 --- a/spaces/raedeXanto/academic-chatgpt-beta/HD Online Player (Mary Kom 1080p movies download) - Experience the thrill of the world boxing championships.md +++ /dev/null @@ -1,170 +0,0 @@ - -

            HD Online Player (Mary Kom 1080p movies download)

            -

            If you are a fan of biographical sports drama films, you might have heard of Mary Kom, a 2014 Indian movie based on the life of the legendary boxer and Olympic medalist Mangte Chungneijang Mary Kom. The film stars Priyanka Chopra Jonas in the lead role and was directed by Omung Kumar. The film received critical acclaim and was a commercial success, earning over ₹1.04 billion worldwide. It also won several awards, including the National Film Award for Best Popular Film Providing Wholesome Entertainment.

            -

            HD Online Player (Mary Kom 1080p movies download)


            DOWNLOADhttps://tinourl.com/2uL59l



            -

            But if you missed the chance to watch this inspiring film on the big screen, don't worry. You can still enjoy it in high definition (HD) quality on your own device. In this article, we will tell you why you should watch Mary Kom in 1080p resolution, and how to download it using various HD online players. So, let's get started!

            -

            Introduction

            -

            What is Mary Kom?

            -

            Mary Kom is a biographical film that chronicles the life and achievements of Mary Kom, a female boxer from Manipur who overcame poverty, social stigma, and personal challenges to become a six-time world champion and an Olympic bronze medalist. The film follows her journey from a young girl who dreams of becoming a boxer to a married woman who balances her family and career. The film also depicts her comeback after giving birth to twins and facing a near-fatal injury.

            -

            Why watch Mary Kom in 1080p?

            -

            Watching Mary Kom in 1080p resolution has many benefits. First of all, you will be able to appreciate the cinematography and the production design of the film better. The film was shot in various locations, including Manipur, Mumbai, Dharamshala, and Manali. The film also features realistic boxing scenes that were choreographed by Hollywood stunt director Rob Miller. Watching these scenes in HD quality will make you feel like you are in the ring with Mary Kom.

            -

            Watch Mary Kom full movie online in HD quality
            -How to stream Mary Kom 1080p on your device
            -Download Mary Kom movie in high definition for free
            -Best sites to watch Mary Kom online HD
            -Mary Kom HD online player with subtitles
            -Mary Kom 1080p movies download link
            -Stream Mary Kom movie online without ads
            -Where to watch Mary Kom full HD online
            -Mary Kom movie download in 1080p resolution
            -Mary Kom HD online player for PC
            -Watch Mary Kom online HD with English subtitles
            -Download Mary Kom movie in HD for offline viewing
            -Mary Kom 1080p movies download torrent
            -Mary Kom HD online player for Android
            -How to watch Mary Kom full movie in HD quality
            -Mary Kom movie download in HD format
            -Watch Mary Kom online HD free without registration
            -Mary Kom 1080p movies download direct link
            -Mary Kom HD online player for iOS
            -Stream Mary Kom movie in HD on your TV
            -Watch Mary Kom online HD with Hindi subtitles
            -Download Mary Kom movie in 1080p quality
            -Mary Kom 1080p movies download free
            -Mary Kom HD online player for Mac
            -Watch Mary Kom full movie in high definition online
            -Download Mary Kom movie in HD with subtitles
            -Watch Mary Kom online HD on Netflix
            -Mary Kom 1080p movies download mp4
            -Mary Kom HD online player for Windows
            -Stream Mary Kom movie in 1080p on your phone
            -Watch Mary Kom online HD with Tamil subtitles
            -Download Mary Kom movie in HD from YouTube
            -Mary Kom 1080p movies download mkv
            -Mary Kom HD online player for Linux
            -Watch Mary Kom full movie in 1080p online free
            -Download Mary Kom movie in HD from Google Drive
            -Watch Mary Kom online HD on Amazon Prime Video
            -Mary Kom 1080p movies download avi
            -Mary Kom HD online player for Chromebook
            -Stream Mary Kom movie in HD with surround sound
            -Watch Mary Kom online HD with Telugu subtitles
            -Download Mary Kom movie in 1080p from Dropbox
            -Mary Kom 1080p movies download rarbg
            -Mary Kom HD online player for Firestick
            -Watch Mary Kom full movie in HD on Disney Plus Hotstar

            -

            Secondly, you will be able to enjoy the performance of Priyanka Chopra Jonas more. She underwent rigorous training and physical transformation to portray Mary Kom convincingly. She also learned Manipuri language and culture to get into the character. Her performance was praised by critics and audiences alike, and she won several awards for it. Watching her act in 1080p resolution will make you appreciate her dedication and talent more.

            -

            Thirdly, you will be able to experience the emotions and messages of the film more deeply. The film is not just about boxing, but also about courage, perseverance, passion, and love. The film shows how Mary Kom fought against all odds to pursue her dream and make her country proud. The film also shows how she received support from her husband, coach, family, and friends along the way. The film will inspire you to chase your own dreams and overcome your own challenges.

            -

            How to download Mary Kom in 1080p?

            -

            To download Mary Kom in 1080p resolution, you will need an HD online player that can stream or download the film from a reliable source. There are many options available on the internet, but not all of them are safe or legal. Some of them may contain viruses or malware that can harm your device or steal your data. Some of them may also violate the copyright laws and infringe on the rights of the filmmakers and distributors.

            -

            To avoid these risks, you should choose an HD online player that is trustworthy and reputable. Here are some of the best HD online players that you can use to watch or download Mary Kom in 1080p resolution:

            -

            HD Online Player Options

            -

            YouTube

            -

            Pros and Cons

            -

            YouTube is one of the most popular and widely used HD online players in the world. It has millions of videos on various topics, including movies, music, sports, education, entertainment, and more. You can watch or download any video on YouTube for free using your browser or app.

            -

            The pros of using YouTube are:

            -
              -
            • You can access a large collection of videos on any topic or genre.
            • -
            • You can watch or download videos in different resolutions, including 1080p.
            • -
            • You can use subtitles or captions in different languages.
            • -
            • You can share or comment on videos with other users.
            • -
            • You can create playlists or watchlists of your favorite videos.
            • -
            -

            The cons of using YouTube are:

            -
              -
            • You may encounter ads or interruptions while watching or downloading videos.
            • -
            • You may not find some videos due to copyright issues or geo-restrictions.
            • -
            • You may need a stable internet connection and enough storage space to watch or download videos.
            • -
            • You may need a Google account to access some features or functions.
            • -
            -

            How to use YouTube to watch Mary Kom in 1080p?

            -

            To use YouTube to watch Mary Kom in 1080p resolution, follow these steps:

            -
              -
            1. Open your browser or app and go to www.youtube.com.
            2. -
            3. In the search bar, type "Mary Kom full movie" and hit enter.
            4. -
            5. From the results, choose the video that has "HD" or "1080p" in its title or description.
            6. -
            7. Click on the video to start playing it.
            8. -
            9. To adjust the resolution, click on the gear icon at the bottom right corner of the video player.
            10. -
            11. Select "Quality" and choose "1080p" from the options.
            12. -
            13. To download the video for offline viewing, click on the download icon at the bottom right corner of the video player.
            14. -
            15. Select "Download" and choose "1080p" from the options.
            16. -
            17. The video will be downloaded to your device's storage or memory card.
            18. -
            -

            Netflix

            -

            Pros and Cons

            -

            Netflix is one of the most popular and widely used HD online players in the world. It has thousands of movies and shows on various genres and languages. You can watch or download any movie or show on Netflix for a monthly subscription fee using your browser or app.

            -

            The pros of using Netflix are:

            -
              -
            • You can access a large collection of movies and shows on any genre or language.
            • -
            • You can watch or download movies and shows in different resolutions, including 1080p.
            • -
            • You can use subtitles or captions in different languages.
            • -
            • You can share or rate movies and shows with other users.
            • -
            • You can create profiles or lists of your favorite movies and shows.
            • -
            -

            The cons of using Netflix are:

            -
              -
            • You may encounter ads or interruptions while watching or downloading movies and shows.
            • -
            • You may not find some movies or shows due to licensing issues or geo-restrictions.
            • -
            • You may need a stable internet connection and enough storage space to watch or download movies and shows.
            • -
            • You may need a Netflix account and a valid payment method to access some features or functions.
            • -
            -

            How to use Netflix to watch Mary Kom in 1080p?

            -

            To use Netflix to watch Mary Kom in 1080p resolution, follow these steps:

            -
              -
            1. Open your browser or app and go to www.netflix.com.
            2. -
            3. If you have a Netflix account, sign in with your email address and password. If you don't have a Netflix account, sign up for one with your email address and payment method.
            4. -
            5. In the search bar, type "Mary Kom" and hit enter.
            6. -
            7. From the results, choose the movie that has "HD" or "1080p" in its title or description.
            8. -
            9. Click on the movie to start playing it.
            10. -
            11. To adjust the resolution, click on the gear icon at the bottom right corner of the movie player.
            12. -
            13. Select "Quality" and choose "High" from the options.
            14. - movie player. -
            15. Select "Download" and choose "High" from the options.
            16. -
            17. The movie will be downloaded to your device's storage or memory card.
            18. -
            -

            Amazon Prime Video

            -

            Pros and Cons

            -

            Amazon Prime Video is one of the most popular and widely used HD online players in the world. It has thousands of movies and shows on various genres and languages. You can watch or download any movie or show on Amazon Prime Video for a monthly or annual subscription fee using your browser or app.

            -

            The pros of using Amazon Prime Video are:

            -
              -
            • You can access a large collection of movies and shows on any genre or language.
            • -
            • You can watch or download movies and shows in different resolutions, including 1080p.
            • -
            • You can use subtitles or captions in different languages.
            • -
            • You can share or comment on movies and shows with other users.
            • -
            • You can create watchlists or recommendations of your favorite movies and shows.
            • -
            -

            The cons of using Amazon Prime Video are:

            -
              -
            • You may encounter ads or interruptions while watching or downloading movies and shows.
            • -
            • You may not find some movies or shows due to licensing issues or geo-restrictions.
            • -
            • You may need a stable internet connection and enough storage space to watch or download movies and shows.
            • -
            • You may need an Amazon account and a valid payment method to access some features or functions.
            • -
            -

            How to use Amazon Prime Video to watch Mary Kom in 1080p?

            -

            To use Amazon Prime Video to watch Mary Kom in 1080p resolution, follow these steps:

            -
              -
            1. Open your browser or app and go to www.amazon.com/prime-video.
            2. -
            3. If you have an Amazon account, sign in with your email address and password. If you don't have an Amazon account, sign up for one with your email address and payment method.
            4. -
            5. In the search bar, type "Mary Kom" and hit enter.
            6. -
            7. From the results, choose the movie that has "HD" or "1080p" in its title or description.
            8. -
            9. Click on the movie to start playing it.
            10. -
            11. To adjust the resolution, click on the gear icon at the bottom right corner of the movie player.
            12. -
            13. Select "Quality" and choose "Best" from the options.
            14. -
            15. To download the movie for offline viewing, click on the download icon at the bottom right corner of the movie player.
            16. -
            17. Select "Download" and choose "Best" from the options.
            18. -
            19. The movie will be downloaded to your device's storage or memory card.
            20. -
            -

            Conclusion

            -

            Mary Kom is a biographical film that tells the inspiring story of Mary Kom, a female boxer from India who became a world champion and an Olympic medalist. The film features Priyanka Chopra Jonas as Mary Kom, who delivers a remarkable performance. The film also showcases the beauty and culture of Manipur, as well as the thrilling action of boxing. The film is worth watching in 1080p resolution, as it enhances the visual and audio quality of the film. You can watch or download Mary Kom in 1080p resolution using various HD online players, such as YouTube, Netflix, or Amazon Prime Video. However, you should be careful about choosing a safe and legal HD online player that respects the rights of the filmmakers and distributors. We hope this article has helped you learn more about Mary Kom and how to watch it in 1080p resolution. Happy watching!

            -

            FAQs

            -

            Here are some frequently asked questions about Mary Kom and HD online players:

            -
              -
            1. Q: When was Mary Kom released?
              A: Mary Kom was released on September 5, 2014 in India and on September 12, 2014 in other countries.
            2. -
            3. Q: How long is Mary Kom?
              A: Mary Kom is 122 minutes long.
            4. -
            5. Q: Who directed Mary Kom?
              A: Mary Kom was directed by Omung Kumar, who also directed Sarbjit (2016) and Bhoomi (2017).
            6. -
            7. Q: Who produced Mary Kom?
              A: Mary Kom was produced by Sanjay Leela Bhansali, who also produced Devdas (2002), Bajirao Mastani (2015), and Padmaavat (2018).
            8. -
            9. Q: What are some other biographical films based on Indian sports personalities?
              A: Some other biographical films based on Indian sports personalities are Bhaag Milkha Bhaag (2013), Dangal (2016), M.S. Dhoni: The Untold Story (2016), Soorma (2018), and Saina (2021).
            10. -
            -

            0a6ba089eb
            -
            -
            \ No newline at end of file diff --git a/spaces/rajesh1729/youtube-video-transcription-with-whisper/app.py b/spaces/rajesh1729/youtube-video-transcription-with-whisper/app.py deleted file mode 100644 index 680e53a7e09596ff97e073fa0510cf254cf21a0b..0000000000000000000000000000000000000000 --- a/spaces/rajesh1729/youtube-video-transcription-with-whisper/app.py +++ /dev/null @@ -1,46 +0,0 @@ -import whisper -from pytube import YouTube -from transformers import pipeline -import gradio as gr -import os - -model = whisper.load_model("base") -summarizer = pipeline("summarization") - -def get_audio(url): - yt = YouTube(url) - video = yt.streams.filter(only_audio=True).first() - out_file=video.download(output_path=".") - base, ext = os.path.splitext(out_file) - new_file = base+'.mp3' - os.rename(out_file, new_file) - a = new_file - return a - -def get_text(url): - result = model.transcribe(get_audio(url)) - return result['text'] - -def get_summary(url): - article = get_text(url) - b = summarizer(article) - b = b[0]['summary_text'] - return b - -with gr.Blocks() as demo: - gr.Markdown("

            Youtube video transcription with OpenAI's Whisper

            ") - gr.Markdown("
            Enter the link of any youtube video to get the transcription of the video and a summary of the video in the form of text.
            ") - with gr.Tab('Get the transcription of any Youtube video'): - with gr.Row(): - input_text_1 = gr.Textbox(placeholder='Enter the Youtube video URL', label='URL') - output_text_1 = gr.Textbox(placeholder='Transcription of the video', label='Transcription') - result_button_1 = gr.Button('Get Transcription') - with gr.Tab('Summary of Youtube video'): - with gr.Row(): - input_text = gr.Textbox(placeholder='Enter the Youtube video URL', label='URL') - output_text = gr.Textbox(placeholder='Summary text of the Youtube Video', label='Summary') - result_button = gr.Button('Get Summary') - - result_button.click(get_summary, inputs = input_text, outputs = output_text) - result_button_1.click(get_text, inputs = input_text_1, outputs = output_text_1) -demo.launch(debug=True) \ No newline at end of file diff --git a/spaces/rajeshradhakrishnan/malayalam-tamil/app.py b/spaces/rajeshradhakrishnan/malayalam-tamil/app.py deleted file mode 100644 index 148d3c5bba724fd1d06c3a8b10b05edf8fecdd07..0000000000000000000000000000000000000000 --- a/spaces/rajeshradhakrishnan/malayalam-tamil/app.py +++ /dev/null @@ -1,46 +0,0 @@ -import requests -import json -import gradio as gr - -uri = "https://ai4bharat-indictrans-indic2indic.hf.space/api/predict" - - -examples = [ - ["വടശ്ശേരിക്കര-ഗവി"], - ["തിരുവനന്തപുരം-എറണാകുളം"], - ["നെന്മാറ - നെല്ലിയാമ്പതി"] - ] - -def transIndic(text, srclang, trgLang): - response = requests.post( - uri, - json={ - "data": [text, srclang, trgLang] - } - ) - data = "" - if 200 == response.status_code : - output = json.loads(response.text) - data = output['data'] - return data - -def translate_ML_TM(malayalam_text): - - ml_tm = transIndic(malayalam_text, "Malayalam","Tamil") - - return ml_tm[0] - -interface = gr.Interface( - translate_ML_TM, - inputs="textbox", - outputs='label', - theme="default", - title="Malayalam to Tamil Translater", - description="Try to translate മലയാളം to தமிழ் ? Input a few malayalam text and verify whether the model translated it appropriately!", - article="

            മലയാളം - தமிழ் | Demo Application

            ", - examples=examples, - cache_examples=False, - # live=True, - - ) -interface.launch(debug=True,share=False) \ No newline at end of file diff --git a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/Singh-Is-Bling-Full-Movie-Hd-Free-Download-Filmywap-Hindi.md b/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/Singh-Is-Bling-Full-Movie-Hd-Free-Download-Filmywap-Hindi.md deleted file mode 100644 index a9c179c068457796f13b1e48aecf921817193da5..0000000000000000000000000000000000000000 --- a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/Singh-Is-Bling-Full-Movie-Hd-Free-Download-Filmywap-Hindi.md +++ /dev/null @@ -1,41 +0,0 @@ -## Singh Is Bling Full Movie Hd Free Download Filmywap Hindi - - - -**Singh Is Bling Full Movie Hd Free Download Filmywap Hindi ✦✦✦ [https://www.google.com/url?q=https%3A%2F%2Furluss.com%2F2twEBc&sa=D&sntz=1&usg=AOvVaw2peAIf5B1ycj1K7r5jtTER](https://www.google.com/url?q=https%3A%2F%2Furluss.com%2F2twEBc&sa=D&sntz=1&usg=AOvVaw2peAIf5B1ycj1K7r5jtTER)** - - - - Here is the title and article with html formatting for the keyword "Singh Is Bling Full Movie Hd Free Download Filmywap Hindi": - -# Singh Is Bling Full Movie Hd Free Download Filmywap Hindi - - - -Singh Is Bling is a 2015 Bollywood comedy film starring Akshay Kumar, Amy Jackson, Lara Dutta and Kay Kay Menon. The film is directed by Prabhu Deva and produced by Ashvini Yardi and Jayantilal Gada. The film follows the adventures of Raftaar Singh, a fun-loving Sikh who falls in love with a Romanian girl named Sara. - - - -If you are looking for a way to watch Singh Is Bling full movie hd free download filmywap Hindi, then you are in luck. There are many websites that offer this service, but not all of them are safe and legal. Some of them may contain viruses, malware, or other harmful content that can damage your device or compromise your privacy. Therefore, you should be careful and choose a reliable and trustworthy source to download or stream Singh Is Bling full movie hd free filmywap Hindi. - - - -One of the best options to watch Singh Is Bling full movie hd free download filmywap Hindi is to use an online streaming platform that has a license to show the film. This way, you can enjoy the movie in high quality and without any interruptions or ads. Some of the popular platforms that have Singh Is Bling full movie hd free download filmywap Hindi are: - - - -- Amazon Prime Video: This is a subscription-based service that offers a wide range of movies and shows, including Singh Is Bling full movie hd free download filmywap Hindi. You can watch the movie on any device that supports Prime Video, such as smartphones, tablets, laptops, smart TVs, etc. You can also download the movie offline and watch it later. Prime Video also has other benefits, such as free delivery, exclusive deals, music streaming, etc. - -- Netflix: This is another subscription-based service that has a huge collection of movies and shows, including Singh Is Bling full movie hd free download filmywap Hindi. You can watch the movie on any device that supports Netflix, such as smartphones, tablets, laptops, smart TVs, etc. You can also download the movie offline and watch it later. Netflix also has other features, such as personalized recommendations, profiles, parental controls, etc. - -- Hotstar: This is a free service that offers a variety of movies and shows, including Singh Is Bling full movie hd free download filmywap Hindi. You can watch the movie on any device that supports Hotstar, such as smartphones, tablets, laptops, smart TVs, etc. You can also download the movie offline and watch it later. Hotstar also has other content, such as sports, news, live TV channels, etc. - - - -These are some of the best ways to watch Singh Is Bling full movie hd free download filmywap Hindi online legally and safely. However, if you still want to use other websites that claim to offer Singh Is Bling full movie hd free download filmywap Hindi, then you should be aware of the risks involved. These websites may not have the proper rights to show the film and may be violating the copyright laws. They may also have poor quality videos or audio or may require you to register or pay for their services. Moreover, they may expose you to viruses, malware, or other harmful content that can harm your device or steal your personal information. - - - -Therefore, we advise you to avoid using such websites and instead use the official and licensed platforms to watch Singh Is Bling full movie hd free download filmywap Hindi online. This way, you can enjoy the movie without any worries and support the filmmakers and actors who worked hard to make it. - - 1b8d091108 \ No newline at end of file diff --git a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Cube Escape Paradox - Chapter 2 Crack Serial Key.md b/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Cube Escape Paradox - Chapter 2 Crack Serial Key.md deleted file mode 100644 index 89246c6fe6bcc78f5c504f10c84238f2c3d6b9f5..0000000000000000000000000000000000000000 --- a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Cube Escape Paradox - Chapter 2 Crack Serial Key.md +++ /dev/null @@ -1,6 +0,0 @@ - -

            Wheatley may be an amalgam of multiple characters. He is both designed, programmed, and voiced by the same man: Matt Firmin. Firmin has also worked with other Portal co-creator Erik Wolpaw on the storyboards and sound design for the Portal series, both in the first game and Portal 2. In a 2010 interview, Firmin stated that he originally came up with the idea for Wheatley's voice in Portal when he was watching a 5-year old version of Portal, and he thought the use of the jokey voice would be funny. He wanted Wheatley to refer to GLaDOS as a "lovely girl" instead of a "hostile force", and to use the word "dude" instead of the line in Portal 2 where the robot companion of that game's protagonist refers to Wheatley as "Bro" in a mocking way. Wheatley's look is also thought by Firmin to be modeled on the character named "Art", whom he worked with on the storyboards and voiced for the first Portal. Wheatley's role in Portal 2 is expanded considerably, with multiple sections devoted to explaining his background.
            Cube Escape: Paradox - Chapter 2 Crack Serial Key

            Wheatley is believed to be created in the body of the "thin crust" of Lava Tubes, where GLaDOS was born.
            Category: Science, Science Fiction, Technology

            -

            The problem of memory, of course, is not the same as the problem of the past. The past is the present, and the future, more than now. Its the problem of the urgency of memory, of the need to remember. But we want to remember, even if we remember, the past. We want to remember even if we forget. How can we do this? How can we remember our present? To recall, to remember a past is to help us to cope with a future in which we will not be there. Only Uncle Vespas, Barbara Harris comments, the circle itself, the one we are moving in and through, the corporeal and corporealized, allows us to know that an ending is coming. If it doesnt, we fear it. Were in an endless loop, unable to stop, would drive ourselves crazy. Were in a time paradox, rushing into the future to no end, would drive us to despair.

            -

            Cube Escape: Paradox - Chapter 2 Crack Serial Key


            Download Zip ===> https://urlgoal.com/2uCKSV



            899543212b
            -
            -
            \ No newline at end of file diff --git a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Dany Gamepad Driver Gp-400 12.md b/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Dany Gamepad Driver Gp-400 12.md deleted file mode 100644 index b036492d0d7ff9b89cc03a6ba66c5f0c94af0858..0000000000000000000000000000000000000000 --- a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Dany Gamepad Driver Gp-400 12.md +++ /dev/null @@ -1,28 +0,0 @@ -

            Dany Gamepad Driver Gp-400 12


            DOWNLOAD ★★★★★ https://urlgoal.com/2uCM5O



            -
            -The dany gamepad driver gp-400 12 12 2017 is a input driver of the oasysg.com it's one of the dany gamepad driver gp-400 12 12 2017 that we publish on our website. - -Screens from the forthcoming game of thunder. - -Screens from the new game of thunder. The best place to buy at the dany gamepad driver gp-400 12 12 2017. If you are trying to get specific information about dany gamepad driver gp-400 12 12 2017 or an online store that offers dany gamepad driver gp-400 12 12 2017. There are many websites offering goods which are designed for your own personal or practical use. You can make use of the reviews, discussions and price comparisons on the shopping site in order to decide if you can find the dany gamepad driver gp-400 12 12 2017 or not. If you would like some assist, it is possible to ask friends who already purchase dany gamepad driver gp-400 12 12 2017 or a shop that has the product to help you make your decision. - -dany gamepad driver gp-400 12 12 2017 - -Published on 12/12/2017 11:35:29 by admin - -We have a total of 126 products to choose from for your purchase or order. You can find dany gamepad driver gp-400 12 12 2017 at many popular stores including more than 11 online stores. - -Scroll down to discover all the dany gamepad driver gp-400 12 12 2017 we have on offer today. We sell the best quality dany gamepad driver gp-400 12 12 2017 at a great price. The dany gamepad driver gp-400 12 12 2017 range has been carefully selected and only the best products are listed. Order the dany gamepad driver gp-400 12 12 2017 you need today with complete confidence. If you have previously ordered from this website, you will know you can always rely on us for all of your future shopping. Our 24/7 customer service is second to none and we are always here to help.var baseProperty = require('./_baseProperty'), - - basePropertyDeep = require('./_basePropertyDeep'), - - isKey = require('./_isKey'), - - toKey = require('./_toKey'); - -/** - - * Creates a function that returns the property value 4fefd39f24
            -
            -
            -

            diff --git a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Fiza Part 1 In Hindi Free Download 1080p FREE.md b/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Fiza Part 1 In Hindi Free Download 1080p FREE.md deleted file mode 100644 index e2638839b2556232bce5254b5a900f4c3301acb8..0000000000000000000000000000000000000000 --- a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Fiza Part 1 In Hindi Free Download 1080p FREE.md +++ /dev/null @@ -1,10 +0,0 @@ -
            -

            literacy
            strategies like books across oregon are helping children read and learn. through a grant program, special olympics has expanded this effort to local schools. watch on youtube download mp4, 5 min (32.5 mb) watch on youtube download mp4, 5 min (33.6 mb)

            -

            no matter where you come from
            special olympics helps people get involved in athletics and physical activity, regardless of age, ethnicity or disability. special olympics usa provides opportunities for children with intellectual disabilities to participate in sports from head to toe, including those who are blind or visually impaired. watch on youtube download mov, 5 min (20.2 mb)

            -

            Fiza part 1 in hindi free download 1080p


            DOWNLOAD ---> https://urlgoal.com/2uCKyR



            -

            training
            at special olympics, no athlete is overlooked. the athlete development initiative has created opportunities for athletes with intellectual disabilities and athletes from marginalized communities to get the training they need to compete successfully.

            -

            the final part includes clips from usa today's inside special olympics video of the 2014 usa games including crawford's olympic bobsled team and dylan pavey's volleyball team. this is a great way to bring students into the special olympics picture.

            -

            special olympics is a global movement of people with intellectual disabilities. 
            the idea for the movement began nearly 30 years ago, in 1972, when mary t. dye, an olympic swimmer , found herself in a world where people with intellectual disabilities were being ignored, humiliated , or even killed. dye was inspired by athletes like natalie roberts , who became her mentor.

            -

            899543212b
            -
            -
            \ No newline at end of file diff --git a/spaces/remotewith/image-to-text-app/README.md b/spaces/remotewith/image-to-text-app/README.md deleted file mode 100644 index a77fc0bba5c48bd7254e5e13fb79e00e08462716..0000000000000000000000000000000000000000 --- a/spaces/remotewith/image-to-text-app/README.md +++ /dev/null @@ -1,15 +0,0 @@ ---- -title: Image To Text App -emoji: 🌖 -colorFrom: gray -colorTo: pink -sdk: streamlit -sdk_version: 1.21.0 -app_file: app.py -pinned: false -license: afl-3.0 -duplicated_from: Younghak/image-to-text-app ---- - -# image2text-app -Demo of huggingface spaces deployment of a streamlit python app diff --git a/spaces/rockeycoss/Prompt-Segment-Anything-Demo/mmdet/core/bbox/assigners/max_iou_assigner.py b/spaces/rockeycoss/Prompt-Segment-Anything-Demo/mmdet/core/bbox/assigners/max_iou_assigner.py deleted file mode 100644 index 676421f7653f37e936c7152ed64bebe80564d147..0000000000000000000000000000000000000000 --- a/spaces/rockeycoss/Prompt-Segment-Anything-Demo/mmdet/core/bbox/assigners/max_iou_assigner.py +++ /dev/null @@ -1,218 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch - -from ..builder import BBOX_ASSIGNERS -from ..iou_calculators import build_iou_calculator -from .assign_result import AssignResult -from .base_assigner import BaseAssigner - - -@BBOX_ASSIGNERS.register_module() -class MaxIoUAssigner(BaseAssigner): - """Assign a corresponding gt bbox or background to each bbox. - - Each proposals will be assigned with `-1`, or a semi-positive integer - indicating the ground truth index. - - - -1: negative sample, no assigned gt - - semi-positive integer: positive sample, index (0-based) of assigned gt - - Args: - pos_iou_thr (float): IoU threshold for positive bboxes. - neg_iou_thr (float or tuple): IoU threshold for negative bboxes. - min_pos_iou (float): Minimum iou for a bbox to be considered as a - positive bbox. Positive samples can have smaller IoU than - pos_iou_thr due to the 4th step (assign max IoU sample to each gt). - `min_pos_iou` is set to avoid assigning bboxes that have extremely - small iou with GT as positive samples. It brings about 0.3 mAP - improvements in 1x schedule but does not affect the performance of - 3x schedule. More comparisons can be found in - `PR #7464 `_. - gt_max_assign_all (bool): Whether to assign all bboxes with the same - highest overlap with some gt to that gt. - ignore_iof_thr (float): IoF threshold for ignoring bboxes (if - `gt_bboxes_ignore` is specified). Negative values mean not - ignoring any bboxes. - ignore_wrt_candidates (bool): Whether to compute the iof between - `bboxes` and `gt_bboxes_ignore`, or the contrary. - match_low_quality (bool): Whether to allow low quality matches. This is - usually allowed for RPN and single stage detectors, but not allowed - in the second stage. Details are demonstrated in Step 4. - gpu_assign_thr (int): The upper bound of the number of GT for GPU - assign. When the number of gt is above this threshold, will assign - on CPU device. Negative values mean not assign on CPU. - """ - - def __init__(self, - pos_iou_thr, - neg_iou_thr, - min_pos_iou=.0, - gt_max_assign_all=True, - ignore_iof_thr=-1, - ignore_wrt_candidates=True, - match_low_quality=True, - gpu_assign_thr=-1, - iou_calculator=dict(type='BboxOverlaps2D')): - self.pos_iou_thr = pos_iou_thr - self.neg_iou_thr = neg_iou_thr - self.min_pos_iou = min_pos_iou - self.gt_max_assign_all = gt_max_assign_all - self.ignore_iof_thr = ignore_iof_thr - self.ignore_wrt_candidates = ignore_wrt_candidates - self.gpu_assign_thr = gpu_assign_thr - self.match_low_quality = match_low_quality - self.iou_calculator = build_iou_calculator(iou_calculator) - - def assign(self, bboxes, gt_bboxes, gt_bboxes_ignore=None, gt_labels=None): - """Assign gt to bboxes. - - This method assign a gt bbox to every bbox (proposal/anchor), each bbox - will be assigned with -1, or a semi-positive number. -1 means negative - sample, semi-positive number is the index (0-based) of assigned gt. - The assignment is done in following steps, the order matters. - - 1. assign every bbox to the background - 2. assign proposals whose iou with all gts < neg_iou_thr to 0 - 3. for each bbox, if the iou with its nearest gt >= pos_iou_thr, - assign it to that bbox - 4. for each gt bbox, assign its nearest proposals (may be more than - one) to itself - - Args: - bboxes (Tensor): Bounding boxes to be assigned, shape(n, 4). - gt_bboxes (Tensor): Groundtruth boxes, shape (k, 4). - gt_bboxes_ignore (Tensor, optional): Ground truth bboxes that are - labelled as `ignored`, e.g., crowd boxes in COCO. - gt_labels (Tensor, optional): Label of gt_bboxes, shape (k, ). - - Returns: - :obj:`AssignResult`: The assign result. - - Example: - >>> self = MaxIoUAssigner(0.5, 0.5) - >>> bboxes = torch.Tensor([[0, 0, 10, 10], [10, 10, 20, 20]]) - >>> gt_bboxes = torch.Tensor([[0, 0, 10, 9]]) - >>> assign_result = self.assign(bboxes, gt_bboxes) - >>> expected_gt_inds = torch.LongTensor([1, 0]) - >>> assert torch.all(assign_result.gt_inds == expected_gt_inds) - """ - assign_on_cpu = True if (self.gpu_assign_thr > 0) and ( - gt_bboxes.shape[0] > self.gpu_assign_thr) else False - # compute overlap and assign gt on CPU when number of GT is large - if assign_on_cpu: - device = bboxes.device - bboxes = bboxes.cpu() - gt_bboxes = gt_bboxes.cpu() - if gt_bboxes_ignore is not None: - gt_bboxes_ignore = gt_bboxes_ignore.cpu() - if gt_labels is not None: - gt_labels = gt_labels.cpu() - - overlaps = self.iou_calculator(gt_bboxes, bboxes) - - if (self.ignore_iof_thr > 0 and gt_bboxes_ignore is not None - and gt_bboxes_ignore.numel() > 0 and bboxes.numel() > 0): - if self.ignore_wrt_candidates: - ignore_overlaps = self.iou_calculator( - bboxes, gt_bboxes_ignore, mode='iof') - ignore_max_overlaps, _ = ignore_overlaps.max(dim=1) - else: - ignore_overlaps = self.iou_calculator( - gt_bboxes_ignore, bboxes, mode='iof') - ignore_max_overlaps, _ = ignore_overlaps.max(dim=0) - overlaps[:, ignore_max_overlaps > self.ignore_iof_thr] = -1 - - assign_result = self.assign_wrt_overlaps(overlaps, gt_labels) - if assign_on_cpu: - assign_result.gt_inds = assign_result.gt_inds.to(device) - assign_result.max_overlaps = assign_result.max_overlaps.to(device) - if assign_result.labels is not None: - assign_result.labels = assign_result.labels.to(device) - return assign_result - - def assign_wrt_overlaps(self, overlaps, gt_labels=None): - """Assign w.r.t. the overlaps of bboxes with gts. - - Args: - overlaps (Tensor): Overlaps between k gt_bboxes and n bboxes, - shape(k, n). - gt_labels (Tensor, optional): Labels of k gt_bboxes, shape (k, ). - - Returns: - :obj:`AssignResult`: The assign result. - """ - num_gts, num_bboxes = overlaps.size(0), overlaps.size(1) - - # 1. assign -1 by default - assigned_gt_inds = overlaps.new_full((num_bboxes, ), - -1, - dtype=torch.long) - - if num_gts == 0 or num_bboxes == 0: - # No ground truth or boxes, return empty assignment - max_overlaps = overlaps.new_zeros((num_bboxes, )) - if num_gts == 0: - # No truth, assign everything to background - assigned_gt_inds[:] = 0 - if gt_labels is None: - assigned_labels = None - else: - assigned_labels = overlaps.new_full((num_bboxes, ), - -1, - dtype=torch.long) - return AssignResult( - num_gts, - assigned_gt_inds, - max_overlaps, - labels=assigned_labels) - - # for each anchor, which gt best overlaps with it - # for each anchor, the max iou of all gts - max_overlaps, argmax_overlaps = overlaps.max(dim=0) - # for each gt, which anchor best overlaps with it - # for each gt, the max iou of all proposals - gt_max_overlaps, gt_argmax_overlaps = overlaps.max(dim=1) - - # 2. assign negative: below - # the negative inds are set to be 0 - if isinstance(self.neg_iou_thr, float): - assigned_gt_inds[(max_overlaps >= 0) - & (max_overlaps < self.neg_iou_thr)] = 0 - elif isinstance(self.neg_iou_thr, tuple): - assert len(self.neg_iou_thr) == 2 - assigned_gt_inds[(max_overlaps >= self.neg_iou_thr[0]) - & (max_overlaps < self.neg_iou_thr[1])] = 0 - - # 3. assign positive: above positive IoU threshold - pos_inds = max_overlaps >= self.pos_iou_thr - assigned_gt_inds[pos_inds] = argmax_overlaps[pos_inds] + 1 - - if self.match_low_quality: - # Low-quality matching will overwrite the assigned_gt_inds assigned - # in Step 3. Thus, the assigned gt might not be the best one for - # prediction. - # For example, if bbox A has 0.9 and 0.8 iou with GT bbox 1 & 2, - # bbox 1 will be assigned as the best target for bbox A in step 3. - # However, if GT bbox 2's gt_argmax_overlaps = A, bbox A's - # assigned_gt_inds will be overwritten to be bbox 2. - # This might be the reason that it is not used in ROI Heads. - for i in range(num_gts): - if gt_max_overlaps[i] >= self.min_pos_iou: - if self.gt_max_assign_all: - max_iou_inds = overlaps[i, :] == gt_max_overlaps[i] - assigned_gt_inds[max_iou_inds] = i + 1 - else: - assigned_gt_inds[gt_argmax_overlaps[i]] = i + 1 - - if gt_labels is not None: - assigned_labels = assigned_gt_inds.new_full((num_bboxes, ), -1) - pos_inds = torch.nonzero( - assigned_gt_inds > 0, as_tuple=False).squeeze() - if pos_inds.numel() > 0: - assigned_labels[pos_inds] = gt_labels[ - assigned_gt_inds[pos_inds] - 1] - else: - assigned_labels = None - - return AssignResult( - num_gts, assigned_gt_inds, max_overlaps, labels=assigned_labels) diff --git a/spaces/ronvolutional/sk-node/app/svelte.config.js b/spaces/ronvolutional/sk-node/app/svelte.config.js deleted file mode 100644 index ce4ff82e7a1ea8364be9e015aabba26858e95ff7..0000000000000000000000000000000000000000 --- a/spaces/ronvolutional/sk-node/app/svelte.config.js +++ /dev/null @@ -1,15 +0,0 @@ -import adapter from '@sveltejs/adapter-node'; -import preprocess from 'svelte-preprocess'; - -/** @type {import('@sveltejs/kit').Config} */ -const config = { - // Consult https://github.com/sveltejs/svelte-preprocess - // for more information about preprocessors - preprocess: preprocess(), - - kit: { - adapter: adapter() - } -}; - -export default config; diff --git a/spaces/saefro991/aet_demo/app.py b/spaces/saefro991/aet_demo/app.py deleted file mode 100644 index 7fa5074bf1915bd8fa1527287f9f699ab582636d..0000000000000000000000000000000000000000 --- a/spaces/saefro991/aet_demo/app.py +++ /dev/null @@ -1,125 +0,0 @@ -import pathlib -import yaml -import torch -import torchaudio -import numpy as np -from lightning_module import SSLDualLightningModule -import gradio as gr -import subprocess -import requests - -def normalize_waveform(wav, sr, db=-3): - wav, _ = torchaudio.sox_effects.apply_effects_tensor( - wav.unsqueeze(0), - sr, - [["norm", "{}".format(db)]], - ) - return wav.squeeze(0) - -def download_file_from_google_drive(id, destination): - URL = "https://docs.google.com/uc?export=download" - - session = requests.Session() - - response = session.get(URL, params = { 'id' : id }, stream = True) - token = get_confirm_token(response) - - if token: - params = { 'id' : id, 'confirm' : token } - response = session.get(URL, params = params, stream = True) - - save_response_content(response, destination) - -def get_confirm_token(response): - for key, value in response.cookies.items(): - if key.startswith('download_warning'): - return value - - return None - -def save_response_content(response, destination): - CHUNK_SIZE = 32768 - - with open(destination, "wb") as f: - for chunk in response.iter_content(CHUNK_SIZE): - if chunk: # filter out keep-alive new chunks - f.write(chunk) - -def calc_spectrogram(wav, config): - spec_module = torchaudio.transforms.MelSpectrogram( - sample_rate=config["preprocess"]["sampling_rate"], - n_fft=config["preprocess"]["fft_length"], - win_length=config["preprocess"]["frame_length"], - hop_length=config["preprocess"]["frame_shift"], - f_min=config["preprocess"]["fmin"], - f_max=config["preprocess"]["fmax"], - n_mels=config["preprocess"]["n_mels"], - power=1, - center=True, - norm="slaney", - mel_scale="slaney", - ) - specs = spec_module(wav) - log_spec = torch.log( - torch.clamp_min(specs, config["preprocess"]["min_magnitude"]) - * config["preprocess"]["comp_factor"] - ).to(torch.float32) - return log_spec - -def transfer(audio): - device = torch.device("cuda" if torch.cuda.is_available() else "cpu") - - wp_src = pathlib.Path("aet_sample/src.wav") - wav_src, sr = torchaudio.load(wp_src) - sr_inp, wav_tar = audio - wav_tar = wav_tar / (np.max(np.abs(wav_tar)) * 1.1) - wav_tar = torch.from_numpy(wav_tar.astype(np.float32)) - resampler = torchaudio.transforms.Resample( - orig_freq=sr_inp, - new_freq=sr, - ) - wav_tar = resampler(wav_tar) - config_path = pathlib.Path("configs/test/melspec/ssl_tono.yaml") - config = yaml.load(open(config_path, "r"), Loader=yaml.FullLoader) - - melspec_src = calc_spectrogram( - normalize_waveform(wav_src.squeeze(0), sr), config - ) - wav_tar = normalize_waveform(wav_tar.squeeze(0), sr) - ckpt_path = pathlib.Path("tono_aet_melspec.ckpt").resolve() - src_model = SSLDualLightningModule(config).load_from_checkpoint( - checkpoint_path=ckpt_path, - config=config, - strict=False - ).eval() - - encoder_src = src_model.encoder.to(device) - channelfeats_src = src_model.channelfeats.to(device) - channel_src = src_model.channel.to(device) - - with torch.no_grad(): - _, enc_hidden_src = encoder_src( - melspec_src.unsqueeze(0).unsqueeze(1).transpose(2, 3).to(device) - ) - chfeats_src = channelfeats_src(enc_hidden_src) - wav_transfer = channel_src(wav_tar.unsqueeze(0), chfeats_src) - wav_transfer = wav_transfer.cpu().detach().numpy()[0, :] - return sr, wav_transfer - -if __name__ == "__main__": - subprocess.run(["curl", "-OL", "https://sarulab.sakura.ne.jp/saeki/selfremaster/pretrained/tono_aet_melspec.ckpt"]) - download_file_from_google_drive("10OJ2iznutxzp8MEIS6lBVaIS_g5c_70V", "hifigan/hifigan_melspec_universal") - - iface = gr.Interface( - transfer, - "audio", - gr.outputs.Audio(type="numpy"), - examples=[ - ["aet_sample/tar.wav"] - ], - layout="horizontal", - title='Audio effect transfer with SelfRemaster', - description='Extracting the channel feature of a historical audio recording with a pretrained SelfRemaster and adding it to any high-quality audio. (Source audio is aet_sample/src.wav)' - ) - - iface.launch() \ No newline at end of file diff --git a/spaces/sagelewis71/ai-lawyer/app.py b/spaces/sagelewis71/ai-lawyer/app.py deleted file mode 100644 index b1dffa071c88d4cf7707c59e5592e53a0fc61c1e..0000000000000000000000000000000000000000 --- a/spaces/sagelewis71/ai-lawyer/app.py +++ /dev/null @@ -1,42 +0,0 @@ -import streamlit as st -import requests -from typing import Optional - -BASE_API_URL = "https://langflow-railway-production-b80d.up.railway.app/api/v1/process" -FLOW_ID = "eb8c05ac-f207-4092-a466-bc0cb0766c38" -# You can tweak the flow by adding a tweaks dictionary -# e.g {"OpenAI-XXXXX": {"model_name": "gpt-4"}} -TWEAKS = { - "ChatOpenAI-arS7O": {}, - "LLMChain-uo1Qh": {}, - "PromptTemplate-mpDUS": {}, - "ConversationBufferMemory-ycM8U": {} -} - -def run_flow(inputs: dict, flow_id: str, tweaks: Optional[dict] = None) -> dict: - """ - Run a flow with a given message and optional tweaks. - - :param message: The message to send to the flow - :param flow_id: The ID of the flow to run - :param tweaks: Optional tweaks to customize the flow - :return: The JSON response from the flow - """ - api_url = f"{BASE_API_URL}/{flow_id}" - - payload = {"inputs": inputs} - - if tweaks: - payload["tweaks"] = tweaks - - response = requests.post(api_url, json=payload) - return response.json() - -# Streamlit interface -st.title('Ask an AI Lawyer') -text_input = st.text_input('Enter your text:') -if st.button('Submit'): - inputs = {"text": text_input} - response = run_flow(inputs, flow_id=FLOW_ID, tweaks=TWEAKS) - result_text = response.get('result', {}).get('text', 'No response') - st.markdown(f'**Response:** {result_text}') diff --git a/spaces/sanchit-gandhi/whisper-jax/README.md b/spaces/sanchit-gandhi/whisper-jax/README.md deleted file mode 100644 index ae3ed678dacbaa89099ae154669d3e1288e849cf..0000000000000000000000000000000000000000 --- a/spaces/sanchit-gandhi/whisper-jax/README.md +++ /dev/null @@ -1,10 +0,0 @@ ---- -title: Whisper JAX -emoji: ⚡️ -colorFrom: yellow -colorTo: indigo -sdk: docker -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/sayakpaul/pokemon-sd-kerascv/README.md b/spaces/sayakpaul/pokemon-sd-kerascv/README.md deleted file mode 100644 index 54e4b7b935402a9b86667fa1a880da7774872aa9..0000000000000000000000000000000000000000 --- a/spaces/sayakpaul/pokemon-sd-kerascv/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Generate Custom Pokemons with Stable Diffusion -emoji: 🐶 -colorFrom: indigo -colorTo: indigo -sdk: gradio -sdk_version: 3.15.0 -app_file: app.py -pinned: false -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/scedlatioru/img-to-music/example/Download I Am Legend 2 !!HOT!!.md b/spaces/scedlatioru/img-to-music/example/Download I Am Legend 2 !!HOT!!.md deleted file mode 100644 index 7d12d38b2bc9d8ea87380d46686b5eb74ba88ebc..0000000000000000000000000000000000000000 --- a/spaces/scedlatioru/img-to-music/example/Download I Am Legend 2 !!HOT!!.md +++ /dev/null @@ -1,9 +0,0 @@ -

            Download I Am Legend 2


            DOWNLOAD --->>> https://gohhs.com/2uEAAZ



            - -Eventually, it was announced that a prequel was in development, but nothing came of it. There are still people ... - I think that everyone has long been refueled and ready to travel, - said Anna, when they got up from their chairs. Let's go back to our table and get our things and then go and find our vehicle. She was right, and as soon as they returned, Anna quickly gathered her things and threw them on the table. “Come on,” she said, pulling them aside. — I know that you all love to ride like a breeze, but I would like you to be on our side and not get hurt during the journey. "I Am Legend" was a critical and commercial hit for star Will Smith, it still hasn't received a sequel or... April 21, 2020 — Despite the 2007 sci-fi horror film I Am Legend was a critical and commercial hit for star Will Smith, and has yet to receive a sequel or remake. -Will Smith wanted to do I Am Legend in another year, but that didn't happen. -According to the Hollywood Reporter, Smith wants the film to come out this year. -Film producer Simon Kinberg and director Francis Lawrence did not respond to a request for comment. 8a78ff9644
            -
            -
            -

            diff --git a/spaces/scedlatioru/img-to-music/example/HD Online Player (Avatar20093DBluRay1080pHSBSDTSx264mk) [WORK].md b/spaces/scedlatioru/img-to-music/example/HD Online Player (Avatar20093DBluRay1080pHSBSDTSx264mk) [WORK].md deleted file mode 100644 index 88da89080f417a4f5f32e52c6eda812f5cbba9b6..0000000000000000000000000000000000000000 --- a/spaces/scedlatioru/img-to-music/example/HD Online Player (Avatar20093DBluRay1080pHSBSDTSx264mk) [WORK].md +++ /dev/null @@ -1,6 +0,0 @@ -

            HD Online Player (Avatar20093DBluRay1080pHSBSDTSx264mk)


            Download ★★★★★ https://gohhs.com/2uEAAN



            - -... Golpo Bangla Pdf Download =LINK= https://seesaawiki.jp/ervidmanear/d/HD Online Player (Avatar20093DBluRay1080pHSBSDTSx264mk) ... 1fdad05405
            -
            -
            -

            diff --git a/spaces/scedlatioru/img-to-music/example/Iata Airport Development Reference Manual 9th Edition Zip.md b/spaces/scedlatioru/img-to-music/example/Iata Airport Development Reference Manual 9th Edition Zip.md deleted file mode 100644 index 5d6764bedf36a481e6f545b373ba47d473c759ce..0000000000000000000000000000000000000000 --- a/spaces/scedlatioru/img-to-music/example/Iata Airport Development Reference Manual 9th Edition Zip.md +++ /dev/null @@ -1,52 +0,0 @@ - -

            Iata Airport Development Reference Manual 9th Edition Zip

            -

            If you are looking for a comprehensive guide for airport design and planning, you may want to download the Iata Airport Development Reference Manual 9th Edition Zip file. This file contains the latest edition of the ADRM, which is a joint publication from Airports Council International (ACI) and IATA. The ADRM is recognized globally as the aviation industry’s guide for planning new airports or expanding existing infrastructure.

            -

            Iata Airport Development Reference Manual 9th Edition Zip


            Download »»» https://gohhs.com/2uEyUJ



            -

            The ADRM covers all aspects of airport development, from forecasting and planning to design and construction. It provides best practices and recommendations from leading industry experts on how to develop sustainable and efficient ‘world class airports’. The ADRM also helps you to achieve optimal airport performance through the use of the latest in airport development best practices and principles.

            -

            The Iata Airport Development Reference Manual 9th Edition Zip file contains many updates and new sections that reflect the changing needs and challenges of the airport industry. Some of the updates include:

            -
              -
            • Newly developed formulas for airport capacity calculation
            • -
            • Updated Level of Service guidelines
            • -
            • Extended information on Baggage Handling Systems, MARS stands, Airport Simulation as well as Passenger Terminal Wayfinding and Signage
            • -
            • New sections on: Airport Technology, Commercial Development, Airport Transfer, Jet Fuel Infrastructure and Operational Readiness
            • -
            -

            The Iata Airport Development Reference Manual 9th Edition Zip file is available in e-formats. You can also order a print format (in combo only) from the IATA website. The file size is about 300 MB and it requires a password to open. You can get the password by contacting IATA or ACI.

            -

            The Iata Airport Development Reference Manual 9th Edition Zip file is a valuable resource for anyone involved in airport development, whether you are an airport operator, planner, designer, consultant, regulator, or airline representative. By downloading and using the ADRM, you will be able to plan and design airports that are affordable, demand driven, fit for purpose, flexible, efficient to operate, and linked to a master plan.

            -

            In this article, we will explore some of the key features and benefits of the ADRM and how it can help you to achieve your airport development goals. We will also provide some examples of how the ADRM has been applied in real-world projects and what lessons can be learned from them.

            -

            Key Features and Benefits of the ADRM

            -

            The ADRM is a comprehensive and user-friendly manual that covers all aspects of airport development, from conceptual planning to operational readiness. It is divided into 14 chapters, each focusing on a specific topic or area of airport development. The chapters are:

            -
              -
            1. Airport Development Process
            2. -
            3. Airport Master Planning
            4. -
            5. Airport Capacity and Demand
            6. -
            7. Airport Level of Service
            8. -
            9. Airport Layout and Configuration
            10. -
            11. Airside Facilities
            12. -
            13. Passenger Terminal Facilities
            14. -
            15. Landside Facilities
            16. -
            17. Support Facilities
            18. -
            19. Airport Technology
            20. -
            21. Commercial Development
            22. -
            23. Airport Transfer
            24. -
            25. Jet Fuel Infrastructure
            26. -
            27. Operational Readiness
            28. -
            -

            The ADRM provides clear and concise guidance on how to plan and design each of these aspects, using diagrams, tables, charts, formulas, and examples. It also provides references to other relevant standards and documents that can supplement the information in the ADRM.

            -

            -

            The ADRM is based on the principles of collaboration and consultation among all stakeholders involved in airport development, including airports, airlines, governments, regulators, consultants, contractors, and users. The ADRM aims to balance the interests and needs of all parties and to ensure that airport development is affordable, demand driven, fit for purpose, flexible, efficient to operate, and linked to a master plan.

            -

            The ADRM also takes into account the current and future trends and challenges that affect the airport industry, such as environmental sustainability, security, safety, innovation, digitalization, customer experience, and resilience. The ADRM provides recommendations on how to address these issues and how to incorporate them into the airport development process.

            -

            By using the ADRM as a guide for airport development, you will be able to:

            -
              -
            • Plan and design airports that meet the current and future needs of the aviation industry and the traveling public
            • -
            • Optimize the use of available resources and minimize the environmental impact of airport development
            • -
            • Enhance the operational efficiency and performance of airports and airlines
            • -
            • Improve the level of service and customer satisfaction at airports
            • -
            • Create value-added opportunities for commercial development at airports
            • -
            • Ensure a smooth transition from construction to operation of new or expanded airport facilities
            • -
            -

            Conclusion

            -

            Airport development is a complex and challenging process that requires careful planning and design, as well as effective collaboration and consultation among all stakeholders. The ADRM is a valuable resource that can help you to achieve your airport development goals and to create sustainable and efficient ‘world class airports’.

            -

            The Iata Airport Development Reference Manual 9th Edition Zip file contains the latest edition of the ADRM, which has been updated and improved to reflect the changing needs and challenges of the airport industry. You can download the file from the IATA website and use it as a guide for your airport development projects.

            -

            We hope that this article has given you an overview of the ADRM and its benefits for airport development. If you have any questions or comments, please feel free to contact us. We would love to hear from you.

            3cee63e6c2
            -
            -
            \ No newline at end of file diff --git a/spaces/scedlatioru/img-to-music/example/Paragon HFS For Windows 11.0.0.175 Incl Crack Serial Key __FULL__ Keygen.md b/spaces/scedlatioru/img-to-music/example/Paragon HFS For Windows 11.0.0.175 Incl Crack Serial Key __FULL__ Keygen.md deleted file mode 100644 index b90142dc7635722e22fee78068e3eb22712beec4..0000000000000000000000000000000000000000 --- a/spaces/scedlatioru/img-to-music/example/Paragon HFS For Windows 11.0.0.175 Incl Crack Serial Key __FULL__ Keygen.md +++ /dev/null @@ -1,64 +0,0 @@ -

            Paragon HFS for Windows 11.0.0.175 Incl Crack Serial Key keygen


            Download Zip ☆☆☆☆☆ https://gohhs.com/2uEAmf



            - -e cinci usluri compatibile - -sudo apt install -y apache2 - -mkdir web/ /var/www - -sudo chown -R www-data:www-data web/ /var/www - -sudo chown -R root:root web/ /var/www - -apt install apache2 - -cd /var/www - -/usr/bin/apache2ctl -S - -You will get information that apache has started. - -computername, ipaddress, port - -HELP - -Advertise your IP adress - -Use the -A parameter: - -iptables -A INPUT -s 172.16.10.5 -j ACCEPT - -ADVERTISE your IP - -You can use iptables to advertise your IP adress. You need to go to your terminal and type: - -iptables -A INPUT -p icmp -j ACCEPT - -ADVERTISE your IP with iptables. You need to go to your terminal and type: - -You will get information that iptables has started. - -IP ADDRESS - -NAME - -ALLOW - -DENY - -This is to prevent someone to use your IP adress - -iptables -I INPUT 1 -p tcp -s 172.16.10.5 --dport 80 -j DROP - -iptables -A INPUT -p icmp -s 172.16.10.5 --icmp-type 11 -j ACCEPT - -Prevent others to use your IP - -You can use iptables to restrict your IP to only people whom you want to allow to use it. You need to go to your terminal and type: - -iptables -A INPUT -i eth0 -p icmp -m state --state new -j ACCEPT - -INPUT P 4fefd39f24
            -
            -
            -

            diff --git a/spaces/sczhou/CodeFormer/CodeFormer/facelib/detection/retinaface/retinaface_net.py b/spaces/sczhou/CodeFormer/CodeFormer/facelib/detection/retinaface/retinaface_net.py deleted file mode 100644 index ab6aa82d3e9055a838f1f9076b12f05fdfc154d0..0000000000000000000000000000000000000000 --- a/spaces/sczhou/CodeFormer/CodeFormer/facelib/detection/retinaface/retinaface_net.py +++ /dev/null @@ -1,196 +0,0 @@ -import torch -import torch.nn as nn -import torch.nn.functional as F - - -def conv_bn(inp, oup, stride=1, leaky=0): - return nn.Sequential( - nn.Conv2d(inp, oup, 3, stride, 1, bias=False), nn.BatchNorm2d(oup), - nn.LeakyReLU(negative_slope=leaky, inplace=True)) - - -def conv_bn_no_relu(inp, oup, stride): - return nn.Sequential( - nn.Conv2d(inp, oup, 3, stride, 1, bias=False), - nn.BatchNorm2d(oup), - ) - - -def conv_bn1X1(inp, oup, stride, leaky=0): - return nn.Sequential( - nn.Conv2d(inp, oup, 1, stride, padding=0, bias=False), nn.BatchNorm2d(oup), - nn.LeakyReLU(negative_slope=leaky, inplace=True)) - - -def conv_dw(inp, oup, stride, leaky=0.1): - return nn.Sequential( - nn.Conv2d(inp, inp, 3, stride, 1, groups=inp, bias=False), - nn.BatchNorm2d(inp), - nn.LeakyReLU(negative_slope=leaky, inplace=True), - nn.Conv2d(inp, oup, 1, 1, 0, bias=False), - nn.BatchNorm2d(oup), - nn.LeakyReLU(negative_slope=leaky, inplace=True), - ) - - -class SSH(nn.Module): - - def __init__(self, in_channel, out_channel): - super(SSH, self).__init__() - assert out_channel % 4 == 0 - leaky = 0 - if (out_channel <= 64): - leaky = 0.1 - self.conv3X3 = conv_bn_no_relu(in_channel, out_channel // 2, stride=1) - - self.conv5X5_1 = conv_bn(in_channel, out_channel // 4, stride=1, leaky=leaky) - self.conv5X5_2 = conv_bn_no_relu(out_channel // 4, out_channel // 4, stride=1) - - self.conv7X7_2 = conv_bn(out_channel // 4, out_channel // 4, stride=1, leaky=leaky) - self.conv7x7_3 = conv_bn_no_relu(out_channel // 4, out_channel // 4, stride=1) - - def forward(self, input): - conv3X3 = self.conv3X3(input) - - conv5X5_1 = self.conv5X5_1(input) - conv5X5 = self.conv5X5_2(conv5X5_1) - - conv7X7_2 = self.conv7X7_2(conv5X5_1) - conv7X7 = self.conv7x7_3(conv7X7_2) - - out = torch.cat([conv3X3, conv5X5, conv7X7], dim=1) - out = F.relu(out) - return out - - -class FPN(nn.Module): - - def __init__(self, in_channels_list, out_channels): - super(FPN, self).__init__() - leaky = 0 - if (out_channels <= 64): - leaky = 0.1 - self.output1 = conv_bn1X1(in_channels_list[0], out_channels, stride=1, leaky=leaky) - self.output2 = conv_bn1X1(in_channels_list[1], out_channels, stride=1, leaky=leaky) - self.output3 = conv_bn1X1(in_channels_list[2], out_channels, stride=1, leaky=leaky) - - self.merge1 = conv_bn(out_channels, out_channels, leaky=leaky) - self.merge2 = conv_bn(out_channels, out_channels, leaky=leaky) - - def forward(self, input): - # names = list(input.keys()) - # input = list(input.values()) - - output1 = self.output1(input[0]) - output2 = self.output2(input[1]) - output3 = self.output3(input[2]) - - up3 = F.interpolate(output3, size=[output2.size(2), output2.size(3)], mode='nearest') - output2 = output2 + up3 - output2 = self.merge2(output2) - - up2 = F.interpolate(output2, size=[output1.size(2), output1.size(3)], mode='nearest') - output1 = output1 + up2 - output1 = self.merge1(output1) - - out = [output1, output2, output3] - return out - - -class MobileNetV1(nn.Module): - - def __init__(self): - super(MobileNetV1, self).__init__() - self.stage1 = nn.Sequential( - conv_bn(3, 8, 2, leaky=0.1), # 3 - conv_dw(8, 16, 1), # 7 - conv_dw(16, 32, 2), # 11 - conv_dw(32, 32, 1), # 19 - conv_dw(32, 64, 2), # 27 - conv_dw(64, 64, 1), # 43 - ) - self.stage2 = nn.Sequential( - conv_dw(64, 128, 2), # 43 + 16 = 59 - conv_dw(128, 128, 1), # 59 + 32 = 91 - conv_dw(128, 128, 1), # 91 + 32 = 123 - conv_dw(128, 128, 1), # 123 + 32 = 155 - conv_dw(128, 128, 1), # 155 + 32 = 187 - conv_dw(128, 128, 1), # 187 + 32 = 219 - ) - self.stage3 = nn.Sequential( - conv_dw(128, 256, 2), # 219 +3 2 = 241 - conv_dw(256, 256, 1), # 241 + 64 = 301 - ) - self.avg = nn.AdaptiveAvgPool2d((1, 1)) - self.fc = nn.Linear(256, 1000) - - def forward(self, x): - x = self.stage1(x) - x = self.stage2(x) - x = self.stage3(x) - x = self.avg(x) - # x = self.model(x) - x = x.view(-1, 256) - x = self.fc(x) - return x - - -class ClassHead(nn.Module): - - def __init__(self, inchannels=512, num_anchors=3): - super(ClassHead, self).__init__() - self.num_anchors = num_anchors - self.conv1x1 = nn.Conv2d(inchannels, self.num_anchors * 2, kernel_size=(1, 1), stride=1, padding=0) - - def forward(self, x): - out = self.conv1x1(x) - out = out.permute(0, 2, 3, 1).contiguous() - - return out.view(out.shape[0], -1, 2) - - -class BboxHead(nn.Module): - - def __init__(self, inchannels=512, num_anchors=3): - super(BboxHead, self).__init__() - self.conv1x1 = nn.Conv2d(inchannels, num_anchors * 4, kernel_size=(1, 1), stride=1, padding=0) - - def forward(self, x): - out = self.conv1x1(x) - out = out.permute(0, 2, 3, 1).contiguous() - - return out.view(out.shape[0], -1, 4) - - -class LandmarkHead(nn.Module): - - def __init__(self, inchannels=512, num_anchors=3): - super(LandmarkHead, self).__init__() - self.conv1x1 = nn.Conv2d(inchannels, num_anchors * 10, kernel_size=(1, 1), stride=1, padding=0) - - def forward(self, x): - out = self.conv1x1(x) - out = out.permute(0, 2, 3, 1).contiguous() - - return out.view(out.shape[0], -1, 10) - - -def make_class_head(fpn_num=3, inchannels=64, anchor_num=2): - classhead = nn.ModuleList() - for i in range(fpn_num): - classhead.append(ClassHead(inchannels, anchor_num)) - return classhead - - -def make_bbox_head(fpn_num=3, inchannels=64, anchor_num=2): - bboxhead = nn.ModuleList() - for i in range(fpn_num): - bboxhead.append(BboxHead(inchannels, anchor_num)) - return bboxhead - - -def make_landmark_head(fpn_num=3, inchannels=64, anchor_num=2): - landmarkhead = nn.ModuleList() - for i in range(fpn_num): - landmarkhead.append(LandmarkHead(inchannels, anchor_num)) - return landmarkhead diff --git a/spaces/segments-tobias/conex/espnet/transform/add_deltas.py b/spaces/segments-tobias/conex/espnet/transform/add_deltas.py deleted file mode 100644 index 93f941c5f04ffb84776c2fcafd59229b6b5e8fd4..0000000000000000000000000000000000000000 --- a/spaces/segments-tobias/conex/espnet/transform/add_deltas.py +++ /dev/null @@ -1,34 +0,0 @@ -import numpy as np - - -def delta(feat, window): - assert window > 0 - delta_feat = np.zeros_like(feat) - for i in range(1, window + 1): - delta_feat[:-i] += i * feat[i:] - delta_feat[i:] += -i * feat[:-i] - delta_feat[-i:] += i * feat[-1] - delta_feat[:i] += -i * feat[0] - delta_feat /= 2 * sum(i ** 2 for i in range(1, window + 1)) - return delta_feat - - -def add_deltas(x, window=2, order=2): - feats = [x] - for _ in range(order): - feats.append(delta(feats[-1], window)) - return np.concatenate(feats, axis=1) - - -class AddDeltas(object): - def __init__(self, window=2, order=2): - self.window = window - self.order = order - - def __repr__(self): - return "{name}(window={window}, order={order}".format( - name=self.__class__.__name__, window=self.window, order=self.order - ) - - def __call__(self, x): - return add_deltas(x, window=self.window, order=self.order) diff --git a/spaces/serdaryildiz/TRCaptionNet/demo.py b/spaces/serdaryildiz/TRCaptionNet/demo.py deleted file mode 100644 index 43725ed0df6d629636ee08b3d2638d2cc0d9e74e..0000000000000000000000000000000000000000 --- a/spaces/serdaryildiz/TRCaptionNet/demo.py +++ /dev/null @@ -1,54 +0,0 @@ -import argparse -import glob -import os - -import cv2 -import numpy -import torch -from PIL import Image - -from Model import TRCaptionNet, clip_transform - - -def demo(opt): - preprocess = clip_transform(224) - model = TRCaptionNet({ - "max_length": 35, - "clip": "ViT-L/14", - "bert": "dbmdz/bert-base-turkish-cased", - "proj": True, - "proj_num_head": 16 - }) - device = torch.device(opt.device) - model.load_state_dict(torch.load(opt.model_ckpt, map_location=device)["model"], strict=True) - model = model.to(device) - model.eval() - - image_paths = glob.glob(os.path.join(opt.input_dir, '*.jpg')) - - for image_path in sorted(image_paths): - img_name = image_path.split('/')[-1] - img0 = Image.open(image_path) - batch = preprocess(img0).unsqueeze(0).to(device) - caption = model.generate(batch, min_length=11, repetition_penalty=1.6)[0] - print(f"{img_name} :", caption) - - orj_img = numpy.array(img0)[:, :, ::-1] - h, w, _ = orj_img.shape - new_h = 800 - new_w = int(new_h * (w / h)) - orj_img = cv2.resize(orj_img, (new_w, new_h)) - - cv2.imshow("image", orj_img) - cv2.waitKey(0) - - return - - -if __name__ == '__main__': - parser = argparse.ArgumentParser(description='Turkish-Image-Captioning!') - parser.add_argument('--model-ckpt', type=str, default='./checkpoints/TRCaptionNet_L14_berturk.pth') - parser.add_argument('--input-dir', type=str, default='./images/') - parser.add_argument('--device', type=str, default='cuda:0') - args = parser.parse_args() - demo(args) diff --git a/spaces/sh20raj/Test/style.css b/spaces/sh20raj/Test/style.css deleted file mode 100644 index 114adf441e9032febb46bc056b2a8bb651075f0d..0000000000000000000000000000000000000000 --- a/spaces/sh20raj/Test/style.css +++ /dev/null @@ -1,28 +0,0 @@ -body { - padding: 2rem; - font-family: -apple-system, BlinkMacSystemFont, "Arial", sans-serif; -} - -h1 { - font-size: 16px; - margin-top: 0; -} - -p { - color: rgb(107, 114, 128); - font-size: 15px; - margin-bottom: 10px; - margin-top: 5px; -} - -.card { - max-width: 620px; - margin: 0 auto; - padding: 16px; - border: 1px solid lightgray; - border-radius: 16px; -} - -.card p:last-child { - margin-bottom: 0; -} diff --git a/spaces/shi-labs/Versatile-Diffusion/lib/model_zoo/optimus_models/configuration_bert.py b/spaces/shi-labs/Versatile-Diffusion/lib/model_zoo/optimus_models/configuration_bert.py deleted file mode 100644 index 7fff3e5d058720900fb0388b3c54e31e86045a71..0000000000000000000000000000000000000000 --- a/spaces/shi-labs/Versatile-Diffusion/lib/model_zoo/optimus_models/configuration_bert.py +++ /dev/null @@ -1,113 +0,0 @@ -# coding=utf-8 -# Copyright 2018 The Google AI Language Team Authors and The HuggingFace Inc. team. -# Copyright (c) 2018, NVIDIA CORPORATION. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -""" BERT model configuration """ - -from __future__ import absolute_import, division, print_function, unicode_literals - -import json -import logging -import sys -from io import open - -from .configuration_utils import PretrainedConfig - -logger = logging.getLogger(__name__) - -BERT_PRETRAINED_CONFIG_ARCHIVE_MAP = { - 'bert-base-uncased': "https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-uncased-config.json", - 'bert-large-uncased': "https://s3.amazonaws.com/models.huggingface.co/bert/bert-large-uncased-config.json", - 'bert-base-cased': "https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-cased-config.json", - 'bert-large-cased': "https://s3.amazonaws.com/models.huggingface.co/bert/bert-large-cased-config.json", - 'bert-base-multilingual-uncased': "https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-multilingual-uncased-config.json", - 'bert-base-multilingual-cased': "https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-multilingual-cased-config.json", - 'bert-base-chinese': "https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-chinese-config.json", - 'bert-base-german-cased': "https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-german-cased-config.json", - 'bert-large-uncased-whole-word-masking': "https://s3.amazonaws.com/models.huggingface.co/bert/bert-large-uncased-whole-word-masking-config.json", - 'bert-large-cased-whole-word-masking': "https://s3.amazonaws.com/models.huggingface.co/bert/bert-large-cased-whole-word-masking-config.json", - 'bert-large-uncased-whole-word-masking-finetuned-squad': "https://s3.amazonaws.com/models.huggingface.co/bert/bert-large-uncased-whole-word-masking-finetuned-squad-config.json", - 'bert-large-cased-whole-word-masking-finetuned-squad': "https://s3.amazonaws.com/models.huggingface.co/bert/bert-large-cased-whole-word-masking-finetuned-squad-config.json", - 'bert-base-cased-finetuned-mrpc': "https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-cased-finetuned-mrpc-config.json", -} - - -class BertConfig(PretrainedConfig): - r""" - :class:`~pytorch_transformers.BertConfig` is the configuration class to store the configuration of a - `BertModel`. - - - Arguments: - vocab_size_or_config_json_file: Vocabulary size of `inputs_ids` in `BertModel`. - hidden_size: Size of the encoder layers and the pooler layer. - num_hidden_layers: Number of hidden layers in the Transformer encoder. - num_attention_heads: Number of attention heads for each attention layer in - the Transformer encoder. - intermediate_size: The size of the "intermediate" (i.e., feed-forward) - layer in the Transformer encoder. - hidden_act: The non-linear activation function (function or string) in the - encoder and pooler. If string, "gelu", "relu" and "swish" are supported. - hidden_dropout_prob: The dropout probabilitiy for all fully connected - layers in the embeddings, encoder, and pooler. - attention_probs_dropout_prob: The dropout ratio for the attention - probabilities. - max_position_embeddings: The maximum sequence length that this model might - ever be used with. Typically set this to something large just in case - (e.g., 512 or 1024 or 2048). - type_vocab_size: The vocabulary size of the `token_type_ids` passed into - `BertModel`. - initializer_range: The sttdev of the truncated_normal_initializer for - initializing all weight matrices. - layer_norm_eps: The epsilon used by LayerNorm. - """ - pretrained_config_archive_map = BERT_PRETRAINED_CONFIG_ARCHIVE_MAP - - def __init__(self, - vocab_size_or_config_json_file=30522, - hidden_size=768, - num_hidden_layers=12, - num_attention_heads=12, - intermediate_size=3072, - hidden_act="gelu", - hidden_dropout_prob=0.1, - attention_probs_dropout_prob=0.1, - max_position_embeddings=512, - type_vocab_size=2, - initializer_range=0.02, - layer_norm_eps=1e-12, - **kwargs): - super(BertConfig, self).__init__(**kwargs) - if isinstance(vocab_size_or_config_json_file, str) or (sys.version_info[0] == 2 - and isinstance(vocab_size_or_config_json_file, unicode)): - with open(vocab_size_or_config_json_file, "r", encoding='utf-8') as reader: - json_config = json.loads(reader.read()) - for key, value in json_config.items(): - self.__dict__[key] = value - elif isinstance(vocab_size_or_config_json_file, int): - self.vocab_size = vocab_size_or_config_json_file - self.hidden_size = hidden_size - self.num_hidden_layers = num_hidden_layers - self.num_attention_heads = num_attention_heads - self.hidden_act = hidden_act - self.intermediate_size = intermediate_size - self.hidden_dropout_prob = hidden_dropout_prob - self.attention_probs_dropout_prob = attention_probs_dropout_prob - self.max_position_embeddings = max_position_embeddings - self.type_vocab_size = type_vocab_size - self.initializer_range = initializer_range - self.layer_norm_eps = layer_norm_eps - else: - raise ValueError("First argument must be either a vocabulary size (int)" - " or the path to a pretrained model config file (str)") diff --git a/spaces/shikunl/prismer/prismer/experts/ocr_detection/charnet/modeling/backbone/hourglass.py b/spaces/shikunl/prismer/prismer/experts/ocr_detection/charnet/modeling/backbone/hourglass.py deleted file mode 100644 index a4785ae52105df938ca4527e3ac76b5752d69da7..0000000000000000000000000000000000000000 --- a/spaces/shikunl/prismer/prismer/experts/ocr_detection/charnet/modeling/backbone/hourglass.py +++ /dev/null @@ -1,103 +0,0 @@ -# Copyright (c) Malong Technologies Co., Ltd. -# All rights reserved. -# -# Contact: github@malong.com -# -# This source code is licensed under the LICENSE file in the root directory of this source tree. - -import torch -from torch import nn -import torch.nn.functional as F - - -_norm_func = lambda num_features: nn.BatchNorm2d(num_features, eps=1e-5) - - -def _make_layer(in_channels, out_channels, num_blocks, **kwargs): - blocks = [] - blocks.append(Residual(in_channels, out_channels)) - for _ in range(1, num_blocks): - blocks.append(Residual(out_channels, out_channels, **kwargs)) - return nn.Sequential(*blocks) - - -def _make_layer_revr(in_channels, out_channels, num_blocks, **kwargs): - blocks = [] - for _ in range(num_blocks - 1): - blocks.append(Residual(in_channels, in_channels, **kwargs)) - blocks.append(Residual(in_channels, out_channels, **kwargs)) - return nn.Sequential(*blocks) - - -class Residual(nn.Module): - def __init__(self, in_channels, out_channels, stride=1): - super(Residual, self).__init__() - - self.conv_1 = nn.Sequential( - nn.Conv2d(in_channels, out_channels, kernel_size=3, padding=1, stride=stride, bias=False), - _norm_func(out_channels), - nn.ReLU() - ) - self.conv_2 = nn.Sequential( - nn.Conv2d(out_channels, out_channels, kernel_size=3, padding=1, stride=1, bias=False), - _norm_func(out_channels) - ) - if stride != 1 or in_channels != out_channels: - self.skip = nn.Sequential( - nn.Conv2d(in_channels, out_channels, kernel_size=3, padding=1, stride=stride, bias=False), - _norm_func(out_channels) - ) - else: - self.skip = None - self.out_relu = nn.ReLU() - - def forward(self, x): - b1 = self.conv_2(self.conv_1(x)) - if self.skip is None: - return self.out_relu(b1 + x) - else: - return self.out_relu(b1 + self.skip(x)) - - -class HourGlassBlock(nn.Module): - def __init__(self, n, channels, blocks): - super(HourGlassBlock, self).__init__() - - self.up_1 = _make_layer(channels[0], channels[0], blocks[0]) - self.pool = nn.MaxPool2d(kernel_size=2, stride=2) - self.low_1 = _make_layer(channels[0], channels[1], blocks[0]) - if n <= 1: - self.low_2 = _make_layer(channels[1], channels[1], blocks[1]) - else: - self.low_2 = HourGlassBlock(n - 1, channels[1:], blocks[1:]) - self.low_3 = _make_layer_revr(channels[1], channels[0], blocks[0]) - - def forward(self, x): - upsample = lambda input: F.interpolate(input, scale_factor=2, mode='bilinear', align_corners=True) - up_1 = self.up_1(x) - low = self.low_3(self.low_2(self.low_1(self.pool(x)))) - return upsample(low) + up_1 - - -class HourGlassNet(nn.Module): - def __init__(self, n, channels, blocks): - super(HourGlassNet, self).__init__() - self.pre = nn.Sequential( - nn.Conv2d(3, 128, kernel_size=7, stride=2, padding=3, bias=False), - _norm_func(128), - nn.ReLU(), - Residual(128, 256, stride=2) - ) - hourglass_blocks = [] - for _ in range(2): - hourglass_blocks.append( - HourGlassBlock(n, channels, blocks) - ) - self.hourglass_blocks = nn.Sequential(*hourglass_blocks) - - def forward(self, x): - return self.hourglass_blocks(self.pre(x)) - - -def hourglass88(): - return HourGlassNet(3, [256, 256, 256, 512], [2, 2, 2, 2]) diff --git a/spaces/shireenchand/depth-map/app.py b/spaces/shireenchand/depth-map/app.py deleted file mode 100644 index caefee792ef900af25265a89711e111a6cb00787..0000000000000000000000000000000000000000 --- a/spaces/shireenchand/depth-map/app.py +++ /dev/null @@ -1,29 +0,0 @@ -import gradio as gr -import numpy as np -import cv2 -import os -from PIL import Image -import matplotlib.pyplot as plt - -def depthMap(imgL,imgR): - imgL = cv2.cvtColor(imgL, cv2.COLOR_RGB2GRAY) - imgR = cv2.cvtColor(imgR, cv2.COLOR_RGB2GRAY) - stereoMatcher = cv2.StereoBM_create() - stereoMatcher.setMinDisparity(4) - stereoMatcher.setNumDisparities(128) - stereoMatcher.setBlockSize(21) - stereoMatcher.setSpeckleRange(16) - stereoMatcher.setSpeckleWindowSize(45) - disparity = stereoMatcher.compute(imgL,imgR) - gray = plt.get_cmap('gray') - disparity = disparity - np.min(disparity) - disparity = disparity / np.max(disparity) - disparity = gray(disparity)[:, :, :3] - return disparity - -leftCam = gr.inputs.Image(type="numpy") -RightCam = gr.inputs.Image(type="numpy") - -map = gr.Interface(fn=depthMap, - inputs=[leftCam,RightCam], - outputs="image").launch(debug=True), \ No newline at end of file diff --git a/spaces/shiyi11/QQsign/devices/device_8958.js b/spaces/shiyi11/QQsign/devices/device_8958.js deleted file mode 100644 index 455ddb0108b70276949e6539926481590a98e0d9..0000000000000000000000000000000000000000 --- a/spaces/shiyi11/QQsign/devices/device_8958.js +++ /dev/null @@ -1,344 +0,0 @@ -"use strict"; -var __importDefault = (this && this.__importDefault) || function (mod) { - return (mod && mod.__esModule) ? mod : { "default": mod }; -}; -Object.defineProperty(exports, "__esModule", { value: true }); -exports.getApkInfo = exports.Platform = exports.Device = exports.generateFullDevice = exports.generateShortDevice = void 0; -const crypto_1 = require("crypto"); -const constants_1 = require("./constants"); -const axios_1 = __importDefault(require("axios")); -const algo_1 = require("./algo"); -function generateImei() { - let imei = `86${(0, constants_1.randomString)(12, '0123456789')}`; - function calcSP(imei) { - let sum = 0; - for (let i = 0; i < imei.length; ++i) { - if (i % 2) { - let j = parseInt(imei[i]) * 2; - sum += j % 10 + Math.floor(j / 10); - } - else { - sum += parseInt(imei[i]); - } - } - return (100 - sum) % 10; - } - return imei + calcSP(imei); -} -/** 生成短设备信息 */ -function generateShortDevice() { - const randstr = (length, num = false) => { - const map = num ? '0123456789' : '0123456789abcdef'; - return (0, constants_1.randomString)(length, map); - }; - return { - "--begin--": "该设备为随机生成,丢失后不能得到原先配置", - product: `ILPP-${randstr(5).toUpperCase()}`, - device: `${randstr(5).toUpperCase()}`, - board: `${randstr(5).toUpperCase()}`, - brand: `${randstr(4).toUpperCase()}`, - model: `ICQQ ${randstr(4).toUpperCase()}`, - wifi_ssid: `HUAWEI-${randstr(7)}`, - bootloader: `U-boot`, - android_id: `IL.${randstr(7, true)}.${randstr(4, true)}`, - boot_id: `${randstr(8)}-${randstr(4)}-${randstr(4)}-${randstr(4)}-${randstr(12)}`, - proc_version: `Linux version 5.10.101-android12-${randstr(8)}`, - mac_address: `2D:${randstr(2).toUpperCase()}:${randstr(2).toUpperCase()}:${randstr(2).toUpperCase()}:${randstr(2).toUpperCase()}:${randstr(2).toUpperCase()}`, - ip_address: `192.168.${randstr(2, true)}.${randstr(2, true)}`, - imei: `${generateImei()}`, - incremental: `${randstr(10, true).toUpperCase()}`, - "--end--": "修改后可能需要重新验证设备。" - }; -} -exports.generateShortDevice = generateShortDevice; -/** 生成完整设备信息 */ -function generateFullDevice(apk, d) { - if (!d) - d = generateShortDevice(); - return { - display: d.android_id, - product: d.product, - device: d.device, - board: d.board, - brand: d.brand, - model: d.model, - bootloader: d.bootloader, - fingerprint: `${d.brand}/${d.product}/${d.device}:10/${d.android_id}/${d.incremental}:user/release-keys`, - boot_id: d.boot_id, - proc_version: d.proc_version, - baseband: "", - sim: "T-Mobile", - os_type: "android", - mac_address: d.mac_address, - ip_address: d.ip_address, - wifi_bssid: d.mac_address, - wifi_ssid: d.wifi_ssid, - imei: d.imei, - android_id: (0, constants_1.md5)(d.android_id).toString("hex"), - apn: "wifi", - version: { - incremental: d.incremental, - release: "10", - codename: "REL", - sdk: 29, - }, - imsi: (0, crypto_1.randomBytes)(16), - guid: (0, constants_1.md5)(Buffer.concat([Buffer.from(d.imei), Buffer.from(d.mac_address)])), - }; -} -exports.generateFullDevice = generateFullDevice; -class Device { - constructor(apk, d) { - this.apk = apk; - this.secret = 'ZdJqM15EeO2zWc08'; - this.publicKey = `-----BEGIN PUBLIC KEY----- -MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQDEIxgwoutfwoJxcGQeedgP7FG9 -qaIuS0qzfR8gWkrkTZKM2iWHn2ajQpBRZjMSoSf6+KJGvar2ORhBfpDXyVtZCKpq -LQ+FLkpncClKVIrBwv6PHyUvuCb0rIarmgDnzkfQAqVufEtR64iazGDKatvJ9y6B -9NMbHddGSAUmRTCrHQIDAQAB ------END PUBLIC KEY-----`; - if (!d) - d = generateShortDevice(); - Object.assign(this, generateFullDevice(apk, d)); - } - async getQIMEI() { - if (this.apk.app_key === "") { - return; - } - const k = (0, constants_1.randomString)(16); - const key = (0, algo_1.encryptPKCS1)(this.publicKey, k); - const time = Date.now(); - const nonce = (0, constants_1.randomString)(16); - const payload = this.genRandomPayloadByDevice(); - const params = (0, algo_1.aesEncrypt)(JSON.stringify(payload), k).toString('base64'); - try { - const { data } = await axios_1.default.post("https://snowflake.qq.com/ola/android", { - key, - params, - time, nonce, - sign: (0, constants_1.md5)(key + params + time + nonce + this.secret).toString("hex"), - extra: '' - }, { - headers: { - 'User-Agent': `Dalvik/2.1.0 (Linux; U; Android ${this.version.release}; PCRT00 Build/N2G48H)`, - 'Content-Type': "application/json" - } - }); - if (data?.code !== 0) { - return; - } - const { q16, q36 } = JSON.parse((0, algo_1.aesDecrypt)(data.data, k)); - this.qImei16 = q16; - this.qImei36 = q36; - } - catch { - } - } - genRandomPayloadByDevice() { - const fixedRand = (max = 1, min = 0) => { - if (max < min) - [max, min] = [min, max]; - const diff = max - min; - return Math.floor(Math.random() * diff) + min; - }; - const reserved = { - "harmony": "0", - "clone": Math.random() > 0.5 ? "1" : "0", - "containe": "", - "oz": "", - "oo": "", - "kelong": Math.random() > 0.5 ? "1" : "0", - "uptimes": (0, constants_1.formatTime)(new Date()), - "multiUser": Math.random() > 0.5 ? "1" : "0", - "bod": this.board, - "brd": this.brand, - "dv": this.device, - "firstLevel": "", - "manufact": this.brand, - "name": this.model, - "host": "se.infra", - "kernel": this.fingerprint - }; - const timestamp = Date.now(); - this.mtime = this.mtime || Date.now(); - const mtime1 = new Date(this.mtime || Date.now()); - const dateFormat = (fmt, time = Date.now()) => (0, constants_1.formatTime)(time, fmt); - const mtimeStr1 = dateFormat("YYYY-mm-ddHHMMSS", mtime1) + "." + this.imei.slice(2, 11); - const mtime2 = new Date(this.mtime - parseInt(this.imei.slice(2, 4))); - const mtimeStr2 = dateFormat("YYYY-mm-ddHHMMSS", mtime2) + "." + this.imei.slice(5, 14); - let beaconIdArr = [ - `${(0, constants_1.formatTime)(new Date(timestamp + fixedRand(60, 0)))}.${String(fixedRand(99, 0)).padStart(2, '0')}0000000`, - mtimeStr1, - '0000000000000000', - (0, constants_1.md5)(this.android_id + this.imei).toString("hex").slice(0, 16), - ...new Array(4).fill(false).map((_) => fixedRand(10000000, 1000000)), - this.boot_id, - '1', - fixedRand(5, 0), - fixedRand(5, 0), - `${(0, constants_1.formatTime)(new Date(timestamp + fixedRand(60, 0)))}.${String(fixedRand(99, 0)).padStart(2, '0')}0000000`, - `${(0, constants_1.formatTime)(new Date(timestamp + fixedRand(60, 0)))}.${String(fixedRand(99, 0)).padStart(2, '0')}0000000`, - fixedRand(5, 0), - fixedRand(100, 10), - `${(0, constants_1.formatTime)(new Date(timestamp + fixedRand(60, 0)))}.${String(fixedRand(99, 0)).padStart(2, '0')}0000000`, - `${(0, constants_1.formatTime)(new Date(timestamp + fixedRand(60, 0)))}.${String(fixedRand(99, 0)).padStart(2, '0')}0000000`, - fixedRand(50000, 10000), - fixedRand(100, 10), - `${(0, constants_1.formatTime)(new Date(timestamp + fixedRand(60, 0)))}.${String(fixedRand(99, 0)).padStart(2, '0')}0000000`, - mtimeStr2, - fixedRand(10000, 1000), - fixedRand(5, 0), - `${dateFormat("YYYY-mm-ddHHMMSS")}.${String(((10 + parseInt(this.imei.slice(5, 7))) % 100)).padStart(2, "0")}0000000`, - `${dateFormat("YYYY-mm-ddHHMMSS")}.${String(((11 + parseInt(this.imei.slice(5, 7))) % 100)).padStart(2, "0")}0000000`, - fixedRand(10000, 1000), - fixedRand(100, 10), - `${dateFormat("YYYY-mm-ddHHMMSS")}.${String(((11 + parseInt(this.imei.slice(5, 7))) % 100)).padStart(2, "0")}0000000`, - `${dateFormat("YYYY-mm-ddHHMMSS")}.${String(((11 + parseInt(this.imei.slice(5, 7))) % 100)).padStart(2, "0")}0000000`, - fixedRand(10000, 1000), - fixedRand(5, 0), - `${(0, constants_1.formatTime)(new Date(timestamp + fixedRand(60, 0)))}.${String(fixedRand(99, 0)).padStart(2, '0')}0000000`, - `${(0, constants_1.formatTime)(new Date(timestamp + fixedRand(60, 0)))}.${String(fixedRand(99, 0)).padStart(2, '0')}0000000`, - fixedRand(5, 0), - fixedRand(100, 10), - `${(0, constants_1.formatTime)(new Date(timestamp + fixedRand(60, 0)))}.${String(fixedRand(99, 0)).padStart(2, '0')}0000000`, - `${(0, constants_1.formatTime)(new Date(timestamp + fixedRand(60, 0)))}.${String(fixedRand(99, 0)).padStart(2, '0')}0000000`, - fixedRand(5, 0), - fixedRand(5, 0), - ].map((str, idx) => `k${idx + 1}:${str}`); - return { - "androidId": this.android_id, - "platformId": 1, - "appKey": this.apk.app_key, - "appVersion": this.apk.version, - "beaconIdSrc": beaconIdArr.join(';'), - "brand": this.brand, - "channelId": "2017", - "cid": "", - "imei": this.imei, - "imsi": this.imsi.toString("hex"), - "mac": this.mac_address, - "model": this.model, - "networkType": "unknown", - "oaid": "", - "osVersion": `Android ${this.version.release},level ${this.version.sdk}`, - "qimei": "", - "qimei36": "", - "sdkVersion": "1.2.13.6", - "targetSdkVersion": "26", - "audit": "", - "userId": "{}", - "packageId": this.apk.id, - "deviceType": this.display, - "sdkName": "", - "reserved": JSON.stringify(reserved), - }; - } -} -exports.Device = Device; -/** 支持的登录设备平台 */ -var Platform; -(function (Platform) { - Platform[Platform["Android"] = 1] = "Android"; - Platform[Platform["aPad"] = 2] = "aPad"; - Platform[Platform["Watch"] = 3] = "Watch"; - Platform[Platform["iMac"] = 4] = "iMac"; - Platform[Platform["iPad"] = 5] = "iPad"; - Platform[Platform["Tim"] = 6] = "Tim"; -})(Platform = exports.Platform || (exports.Platform = {})); -const mobile = { - id: "com.tencent.mobileqq", - app_key: '0S200MNJT807V3GE', - name: "A8.9.58.11175", - version: "8.9.58.11175", - ver: "8.9.58", - sign: Buffer.from('A6 B7 45 BF 24 A2 C2 77 52 77 16 F6 F3 6E B6 8D'.split(' ').map(s => parseInt(s, 16))), - buildtime: 1684467300, - appid: 16, - subid: 537163194, - bitmap: 150470524, - main_sig_map: 16724722, - sub_sig_map: 0x10400, - sdkver: "6.0.0.2545", - display: "Android_8.9.58", - qua: 'V1_AND_SQ_8.9.58_4108_YYB_D', - ssover: 20, -}; -const tim = { - id: "com.tencent.tim", - app_key: '0S200MNJT807V3GE', - name: "A3.5.1.3168", - version: "3.5.1.3168", - ver: "3.5.1", - sign: Buffer.from('775e696d09856872fdd8ab4f3f06b1e0', 'hex'), - buildtime: 1630062176, - appid: 16, - subid: 537150355, - bitmap: 150470524, - main_sig_map: 16724722, - sub_sig_map: 0x10400, - sdkver: "6.0.0.2484", - display: "Tim", - qua: "V1_AND_SQ_8.3.9_351_TIM_D", - ssover: 18, -}; -const watch = { - id: "com.tencent.qqlite", - app_key: '0S200MNJT807V3GE', - name: "A2.0.8", - version: "2.0.8", - ver: "2.0.8", - sign: Buffer.from('A6 B7 45 BF 24 A2 C2 77 52 77 16 F6 F3 6E B6 8D'.split(' ').map(s => parseInt(s, 16))), - buildtime: 1559564731, - appid: 16, - subid: 537065138, - bitmap: 16252796, - main_sig_map: 16724722, - sub_sig_map: 0x10400, - sdkver: "6.0.0.2365", - display: "Watch", - qua: '', - ssover: 5 -}; -const hd = { - id: "com.tencent.minihd.qq", - app_key: '0S200MNJT807V3GE', - name: "A5.9.3.3468", - version: "5.9.3.3468", - ver: "5.9.3", - sign: Buffer.from('AA 39 78 F4 1F D9 6F F9 91 4A 66 9E 18 64 74 C7'.split(' ').map(s => parseInt(s, 16))), - buildtime: 1637427966, - appid: 16, - subid: 537128930, - bitmap: 150470524, - main_sig_map: 1970400, - sub_sig_map: 66560, - sdkver: "6.0.0.2433", - display: "iMac", - qua: '', - ssover: 12 -}; -const apklist = { - [Platform.Android]: mobile, - [Platform.Tim]: tim, - [Platform.aPad]: { - ...mobile, - subid: 537163242, - display: 'aPad_8.9.58' - }, - [Platform.Watch]: watch, - [Platform.iMac]: { ...hd }, - [Platform.iPad]: { - ...mobile, - subid: 537155074, - sign: hd.sign, - name: '8.9.50.611', - ver: '8.9.50', - sdkver: '6.0.0.2535', - qua: 'V1_AND_SQ_8.9.50_3898_YYB_D', - display: 'iPad' - }, -}; -function getApkInfo(p) { - return apklist[p] || apklist[Platform.Android]; -} -exports.getApkInfo = getApkInfo; diff --git a/spaces/sidharthism/fashion-eye/models/stylegan/stylegan_tf/README.md b/spaces/sidharthism/fashion-eye/models/stylegan/stylegan_tf/README.md deleted file mode 100644 index a86a64a60a14ccea6dc3c0a0048a243750fe98fe..0000000000000000000000000000000000000000 --- a/spaces/sidharthism/fashion-eye/models/stylegan/stylegan_tf/README.md +++ /dev/null @@ -1,232 +0,0 @@ -## StyleGAN — Official TensorFlow Implementation -![Python 3.6](https://img.shields.io/badge/python-3.6-green.svg?style=plastic) -![TensorFlow 1.10](https://img.shields.io/badge/tensorflow-1.10-green.svg?style=plastic) -![cuDNN 7.3.1](https://img.shields.io/badge/cudnn-7.3.1-green.svg?style=plastic) -![License CC BY-NC](https://img.shields.io/badge/license-CC_BY--NC-green.svg?style=plastic) - -![Teaser image](./stylegan-teaser.png) -**Picture:** *These people are not real – they were produced by our generator that allows control over different aspects of the image.* - -This repository contains the official TensorFlow implementation of the following paper: - -> **A Style-Based Generator Architecture for Generative Adversarial Networks**
            -> Tero Karras (NVIDIA), Samuli Laine (NVIDIA), Timo Aila (NVIDIA)
            -> https://arxiv.org/abs/1812.04948 -> -> **Abstract:** *We propose an alternative generator architecture for generative adversarial networks, borrowing from style transfer literature. The new architecture leads to an automatically learned, unsupervised separation of high-level attributes (e.g., pose and identity when trained on human faces) and stochastic variation in the generated images (e.g., freckles, hair), and it enables intuitive, scale-specific control of the synthesis. The new generator improves the state-of-the-art in terms of traditional distribution quality metrics, leads to demonstrably better interpolation properties, and also better disentangles the latent factors of variation. To quantify interpolation quality and disentanglement, we propose two new, automated methods that are applicable to any generator architecture. Finally, we introduce a new, highly varied and high-quality dataset of human faces.* - -For business inquiries, please contact [researchinquiries@nvidia.com](mailto:researchinquiries@nvidia.com)
            -For press and other inquiries, please contact Hector Marinez at [hmarinez@nvidia.com](mailto:hmarinez@nvidia.com)
            - -**★★★ NEW: StyleGAN2 is available at [https://github.com/NVlabs/stylegan2](https://github.com/NVlabs/stylegan2) ★★★** - -## Resources - -Material related to our paper is available via the following links: - -- Paper: https://arxiv.org/abs/1812.04948 -- Video: https://youtu.be/kSLJriaOumA -- Code: https://github.com/NVlabs/stylegan -- FFHQ: https://github.com/NVlabs/ffhq-dataset - -Additional material can be found on Google Drive: - -| Path | Description -| :--- | :---------- -| [StyleGAN](https://drive.google.com/open?id=1uka3a1noXHAydRPRbknqwKVGODvnmUBX) | Main folder. -| ├  [stylegan-paper.pdf](https://drive.google.com/open?id=1v-HkF3Ehrpon7wVIx4r5DLcko_U_V6Lt) | High-quality version of the paper PDF. -| ├  [stylegan-video.mp4](https://drive.google.com/open?id=1uzwkZHQX_9pYg1i0d1Nbe3D9xPO8-qBf) | High-quality version of the result video. -| ├  [images](https://drive.google.com/open?id=1-l46akONUWF6LCpDoeq63H53rD7MeiTd) | Example images produced using our generator. -| │  ├  [representative-images](https://drive.google.com/open?id=1ToY5P4Vvf5_c3TyUizQ8fckFFoFtBvD8) | High-quality images to be used in articles, blog posts, etc. -| │  └  [100k-generated-images](https://drive.google.com/open?id=100DJ0QXyG89HZzB4w2Cbyf4xjNK54cQ1) | 100,000 generated images for different amounts of truncation. -| │     ├  [ffhq-1024x1024](https://drive.google.com/open?id=14lm8VRN1pr4g_KVe6_LvyDX1PObst6d4) | Generated using Flickr-Faces-HQ dataset at 1024×1024. -| │     ├  [bedrooms-256x256](https://drive.google.com/open?id=1Vxz9fksw4kgjiHrvHkX4Hze4dyThFW6t) | Generated using LSUN Bedroom dataset at 256×256. -| │     ├  [cars-512x384](https://drive.google.com/open?id=1MFCvOMdLE2_mpeLPTiDw5dxc2CRuKkzS) | Generated using LSUN Car dataset at 512×384. -| │     └  [cats-256x256](https://drive.google.com/open?id=1gq-Gj3GRFiyghTPKhp8uDMA9HV_0ZFWQ) | Generated using LSUN Cat dataset at 256×256. -| ├  [videos](https://drive.google.com/open?id=1N8pOd_Bf8v89NGUaROdbD8-ayLPgyRRo) | Example videos produced using our generator. -| │  └  [high-quality-video-clips](https://drive.google.com/open?id=1NFO7_vH0t98J13ckJYFd7kuaTkyeRJ86) | Individual segments of the result video as high-quality MP4. -| ├  [ffhq-dataset](https://drive.google.com/open?id=1u2xu7bSrWxrbUxk-dT-UvEJq8IjdmNTP) | Raw data for the [Flickr-Faces-HQ dataset](https://github.com/NVlabs/ffhq-dataset). -| └  [networks](https://drive.google.com/open?id=1MASQyN5m0voPcx7-9K0r5gObhvvPups7) | Pre-trained networks as pickled instances of [dnnlib.tflib.Network](./dnnlib/tflib/network.py). -|    ├  [stylegan-ffhq-1024x1024.pkl](https://drive.google.com/uc?id=1MEGjdvVpUsu1jB4zrXZN7Y4kBBOzizDQ) | StyleGAN trained with Flickr-Faces-HQ dataset at 1024×1024. -|    ├  [stylegan-celebahq-1024x1024.pkl](https://drive.google.com/uc?id=1MGqJl28pN4t7SAtSrPdSRJSQJqahkzUf) | StyleGAN trained with CelebA-HQ dataset at 1024×1024. -|    ├  [stylegan-bedrooms-256x256.pkl](https://drive.google.com/uc?id=1MOSKeGF0FJcivpBI7s63V9YHloUTORiF) | StyleGAN trained with LSUN Bedroom dataset at 256×256. -|    ├  [stylegan-cars-512x384.pkl](https://drive.google.com/uc?id=1MJ6iCfNtMIRicihwRorsM3b7mmtmK9c3) | StyleGAN trained with LSUN Car dataset at 512×384. -|    ├  [stylegan-cats-256x256.pkl](https://drive.google.com/uc?id=1MQywl0FNt6lHu8E_EUqnRbviagS7fbiJ) | StyleGAN trained with LSUN Cat dataset at 256×256. -|    └  [metrics](https://drive.google.com/open?id=1MvYdWCBuMfnoYGptRH-AgKLbPTsIQLhl) | Auxiliary networks for the quality and disentanglement metrics. -|       ├  [inception_v3_features.pkl](https://drive.google.com/uc?id=1MzTY44rLToO5APn8TZmfR7_ENSe5aZUn) | Standard [Inception-v3](https://arxiv.org/abs/1512.00567) classifier that outputs a raw feature vector. -|       ├  [vgg16_zhang_perceptual.pkl](https://drive.google.com/uc?id=1N2-m9qszOeVC9Tq77WxsLnuWwOedQiD2) | Standard [LPIPS](https://arxiv.org/abs/1801.03924) metric to estimate perceptual similarity. -|       ├  [celebahq-classifier-00-male.pkl](https://drive.google.com/uc?id=1Q5-AI6TwWhCVM7Muu4tBM7rp5nG_gmCX) | Binary classifier trained to detect a single attribute of CelebA-HQ. -|       └ ⋯ | Please see the file listing for remaining networks. - -## Licenses - -All material, excluding the Flickr-Faces-HQ dataset, is made available under [Creative Commons BY-NC 4.0](https://creativecommons.org/licenses/by-nc/4.0/) license by NVIDIA Corporation. You can **use, redistribute, and adapt** the material for **non-commercial purposes**, as long as you give appropriate credit by **citing our paper** and **indicating any changes** that you've made. - -For license information regarding the FFHQ dataset, please refer to the [Flickr-Faces-HQ repository](https://github.com/NVlabs/ffhq-dataset). - -`inception_v3_features.pkl` and `inception_v3_softmax.pkl` are derived from the pre-trained [Inception-v3](https://arxiv.org/abs/1512.00567) network by Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jonathon Shlens, and Zbigniew Wojna. The network was originally shared under [Apache 2.0](https://github.com/tensorflow/models/blob/master/LICENSE) license on the [TensorFlow Models](https://github.com/tensorflow/models) repository. - -`vgg16.pkl` and `vgg16_zhang_perceptual.pkl` are derived from the pre-trained [VGG-16](https://arxiv.org/abs/1409.1556) network by Karen Simonyan and Andrew Zisserman. The network was originally shared under [Creative Commons BY 4.0](https://creativecommons.org/licenses/by/4.0/) license on the [Very Deep Convolutional Networks for Large-Scale Visual Recognition](http://www.robots.ox.ac.uk/~vgg/research/very_deep/) project page. - -`vgg16_zhang_perceptual.pkl` is further derived from the pre-trained [LPIPS](https://arxiv.org/abs/1801.03924) weights by Richard Zhang, Phillip Isola, Alexei A. Efros, Eli Shechtman, and Oliver Wang. The weights were originally shared under [BSD 2-Clause "Simplified" License](https://github.com/richzhang/PerceptualSimilarity/blob/master/LICENSE) on the [PerceptualSimilarity](https://github.com/richzhang/PerceptualSimilarity) repository. - -## System requirements - -* Both Linux and Windows are supported, but we strongly recommend Linux for performance and compatibility reasons. -* 64-bit Python 3.6 installation. We recommend Anaconda3 with numpy 1.14.3 or newer. -* TensorFlow 1.10.0 or newer with GPU support. -* One or more high-end NVIDIA GPUs with at least 11GB of DRAM. We recommend NVIDIA DGX-1 with 8 Tesla V100 GPUs. -* NVIDIA driver 391.35 or newer, CUDA toolkit 9.0 or newer, cuDNN 7.3.1 or newer. - -## Using pre-trained networks - -A minimal example of using a pre-trained StyleGAN generator is given in [pretrained_example.py](./pretrained_example.py). When executed, the script downloads a pre-trained StyleGAN generator from Google Drive and uses it to generate an image: - -``` -> python pretrained_example.py -Downloading https://drive.google.com/uc?id=1MEGjdvVpUsu1jB4zrXZN7Y4kBBOzizDQ .... done - -Gs Params OutputShape WeightShape ---- --- --- --- -latents_in - (?, 512) - -... -images_out - (?, 3, 1024, 1024) - ---- --- --- --- -Total 26219627 - -> ls results -example.png # https://drive.google.com/uc?id=1UDLT_zb-rof9kKH0GwiJW_bS9MoZi8oP -``` - -A more advanced example is given in [generate_figures.py](./generate_figures.py). The script reproduces the figures from our paper in order to illustrate style mixing, noise inputs, and truncation: -``` -> python generate_figures.py -results/figure02-uncurated-ffhq.png # https://drive.google.com/uc?id=1U3r1xgcD7o-Fd0SBRpq8PXYajm7_30cu -results/figure03-style-mixing.png # https://drive.google.com/uc?id=1U-nlMDtpnf1RcYkaFQtbh5oxnhA97hy6 -results/figure04-noise-detail.png # https://drive.google.com/uc?id=1UX3m39u_DTU6eLnEW6MqGzbwPFt2R9cG -results/figure05-noise-components.png # https://drive.google.com/uc?id=1UQKPcvYVeWMRccGMbs2pPD9PVv1QDyp_ -results/figure08-truncation-trick.png # https://drive.google.com/uc?id=1ULea0C12zGlxdDQFNLXOWZCHi3QNfk_v -results/figure10-uncurated-bedrooms.png # https://drive.google.com/uc?id=1UEBnms1XMfj78OHj3_cx80mUf_m9DUJr -results/figure11-uncurated-cars.png # https://drive.google.com/uc?id=1UO-4JtAs64Kun5vIj10UXqAJ1d5Ir1Ke -results/figure12-uncurated-cats.png # https://drive.google.com/uc?id=1USnJc14prlu3QAYxstrtlfXC9sDWPA-W -``` - -The pre-trained networks are stored as standard pickle files on Google Drive: - -``` -# Load pre-trained network. -url = 'https://drive.google.com/uc?id=1MEGjdvVpUsu1jB4zrXZN7Y4kBBOzizDQ' # karras2019stylegan-ffhq-1024x1024.pkl -with dnnlib.util.open_url(url, cache_dir=config.cache_dir) as f: - _G, _D, Gs = pickle.load(f) - # _G = Instantaneous snapshot of the generator. Mainly useful for resuming a previous training run. - # _D = Instantaneous snapshot of the discriminator. Mainly useful for resuming a previous training run. - # Gs = Long-term average of the generator. Yields higher-quality results than the instantaneous snapshot. -``` - -The above code downloads the file and unpickles it to yield 3 instances of [dnnlib.tflib.Network](./dnnlib/tflib/network.py). To generate images, you will typically want to use `Gs` – the other two networks are provided for completeness. In order for `pickle.load()` to work, you will need to have the `dnnlib` source directory in your PYTHONPATH and a `tf.Session` set as default. The session can initialized by calling `dnnlib.tflib.init_tf()`. - -There are three ways to use the pre-trained generator: - -1. Use `Gs.run()` for immediate-mode operation where the inputs and outputs are numpy arrays: - ``` - # Pick latent vector. - rnd = np.random.RandomState(5) - latents = rnd.randn(1, Gs.input_shape[1]) - - # Generate image. - fmt = dict(func=tflib.convert_images_to_uint8, nchw_to_nhwc=True) - images = Gs.run(latents, None, truncation_psi=0.7, randomize_noise=True, output_transform=fmt) - ``` - The first argument is a batch of latent vectors of shape `[num, 512]`. The second argument is reserved for class labels (not used by StyleGAN). The remaining keyword arguments are optional and can be used to further modify the operation (see below). The output is a batch of images, whose format is dictated by the `output_transform` argument. - -2. Use `Gs.get_output_for()` to incorporate the generator as a part of a larger TensorFlow expression: - ``` - latents = tf.random_normal([self.minibatch_per_gpu] + Gs_clone.input_shape[1:]) - images = Gs_clone.get_output_for(latents, None, is_validation=True, randomize_noise=True) - images = tflib.convert_images_to_uint8(images) - result_expr.append(inception_clone.get_output_for(images)) - ``` - The above code is from [metrics/frechet_inception_distance.py](./metrics/frechet_inception_distance.py). It generates a batch of random images and feeds them directly to the [Inception-v3](https://arxiv.org/abs/1512.00567) network without having to convert the data to numpy arrays in between. - -3. Look up `Gs.components.mapping` and `Gs.components.synthesis` to access individual sub-networks of the generator. Similar to `Gs`, the sub-networks are represented as independent instances of [dnnlib.tflib.Network](./dnnlib/tflib/network.py): - ``` - src_latents = np.stack(np.random.RandomState(seed).randn(Gs.input_shape[1]) for seed in src_seeds) - src_dlatents = Gs.components.mapping.run(src_latents, None) # [seed, layer, component] - src_images = Gs.components.synthesis.run(src_dlatents, randomize_noise=False, **synthesis_kwargs) - ``` - The above code is from [generate_figures.py](./generate_figures.py). It first transforms a batch of latent vectors into the intermediate *W* space using the mapping network and then turns these vectors into a batch of images using the synthesis network. The `dlatents` array stores a separate copy of the same *w* vector for each layer of the synthesis network to facilitate style mixing. - -The exact details of the generator are defined in [training/networks_stylegan.py](./training/networks_stylegan.py) (see `G_style`, `G_mapping`, and `G_synthesis`). The following keyword arguments can be specified to modify the behavior when calling `run()` and `get_output_for()`: - -* `truncation_psi` and `truncation_cutoff` control the truncation trick that that is performed by default when using `Gs` (ψ=0.7, cutoff=8). It can be disabled by setting `truncation_psi=1` or `is_validation=True`, and the image quality can be further improved at the cost of variation by setting e.g. `truncation_psi=0.5`. Note that truncation is always disabled when using the sub-networks directly. The average *w* needed to manually perform the truncation trick can be looked up using `Gs.get_var('dlatent_avg')`. - -* `randomize_noise` determines whether to use re-randomize the noise inputs for each generated image (`True`, default) or whether to use specific noise values for the entire minibatch (`False`). The specific values can be accessed via the `tf.Variable` instances that are found using `[var for name, var in Gs.components.synthesis.vars.items() if name.startswith('noise')]`. - -* When using the mapping network directly, you can specify `dlatent_broadcast=None` to disable the automatic duplication of `dlatents` over the layers of the synthesis network. - -* Runtime performance can be fine-tuned via `structure='fixed'` and `dtype='float16'`. The former disables support for progressive growing, which is not needed for a fully-trained generator, and the latter performs all computation using half-precision floating point arithmetic. - -## Preparing datasets for training - -The training and evaluation scripts operate on datasets stored as multi-resolution TFRecords. Each dataset is represented by a directory containing the same image data in several resolutions to enable efficient streaming. There is a separate *.tfrecords file for each resolution, and if the dataset contains labels, they are stored in a separate file as well. By default, the scripts expect to find the datasets at `datasets//-.tfrecords`. The directory can be changed by editing [config.py](./config.py): - -``` -result_dir = 'results' -data_dir = 'datasets' -cache_dir = 'cache' -``` - -To obtain the FFHQ dataset (`datasets/ffhq`), please refer to the [Flickr-Faces-HQ repository](https://github.com/NVlabs/ffhq-dataset). - -To obtain the CelebA-HQ dataset (`datasets/celebahq`), please refer to the [Progressive GAN repository](https://github.com/tkarras/progressive_growing_of_gans). - -To obtain other datasets, including LSUN, please consult their corresponding project pages. The datasets can be converted to multi-resolution TFRecords using the provided [dataset_tool.py](./dataset_tool.py): - -``` -> python dataset_tool.py create_lsun datasets/lsun-bedroom-full ~/lsun/bedroom_lmdb --resolution 256 -> python dataset_tool.py create_lsun_wide datasets/lsun-car-512x384 ~/lsun/car_lmdb --width 512 --height 384 -> python dataset_tool.py create_lsun datasets/lsun-cat-full ~/lsun/cat_lmdb --resolution 256 -> python dataset_tool.py create_cifar10 datasets/cifar10 ~/cifar10 -> python dataset_tool.py create_from_images datasets/custom-dataset ~/custom-images -``` - -## Training networks - -Once the datasets are set up, you can train your own StyleGAN networks as follows: - -1. Edit [train.py](./train.py) to specify the dataset and training configuration by uncommenting or editing specific lines. -2. Run the training script with `python train.py`. -3. The results are written to a newly created directory `results/-`. -4. The training may take several days (or weeks) to complete, depending on the configuration. - -By default, `train.py` is configured to train the highest-quality StyleGAN (configuration F in Table 1) for the FFHQ dataset at 1024×1024 resolution using 8 GPUs. Please note that we have used 8 GPUs in all of our experiments. Training with fewer GPUs may not produce identical results – if you wish to compare against our technique, we strongly recommend using the same number of GPUs. - -Expected training times for the default configuration using Tesla V100 GPUs: - -| GPUs | 1024×1024 | 512×512 | 256×256 | -| :--- | :-------------- | :------------ | :------------ | -| 1 | 41 days 4 hours | 24 days 21 hours | 14 days 22 hours | -| 2 | 21 days 22 hours | 13 days 7 hours | 9 days 5 hours | -| 4 | 11 days 8 hours | 7 days 0 hours | 4 days 21 hours | -| 8 | 6 days 14 hours | 4 days 10 hours | 3 days 8 hours | - -## Evaluating quality and disentanglement - -The quality and disentanglement metrics used in our paper can be evaluated using [run_metrics.py](./run_metrics.py). By default, the script will evaluate the Fréchet Inception Distance (`fid50k`) for the pre-trained FFHQ generator and write the results into a newly created directory under `results`. The exact behavior can be changed by uncommenting or editing specific lines in [run_metrics.py](./run_metrics.py). - -Expected evaluation time and results for the pre-trained FFHQ generator using one Tesla V100 GPU: - -| Metric | Time | Result | Description -| :----- | :--- | :----- | :---------- -| fid50k | 16 min | 4.4159 | Fréchet Inception Distance using 50,000 images. -| ppl_zfull | 55 min | 664.8854 | Perceptual Path Length for full paths in *Z*. -| ppl_wfull | 55 min | 233.3059 | Perceptual Path Length for full paths in *W*. -| ppl_zend | 55 min | 666.1057 | Perceptual Path Length for path endpoints in *Z*. -| ppl_wend | 55 min | 197.2266 | Perceptual Path Length for path endpoints in *W*. -| ls | 10 hours | z: 165.0106
            w: 3.7447 | Linear Separability in *Z* and *W*. - -Please note that the exact results may vary from run to run due to the non-deterministic nature of TensorFlow. - -## Acknowledgements - -We thank Jaakko Lehtinen, David Luebke, and Tuomas Kynkäänniemi for in-depth discussions and helpful comments; Janne Hellsten, Tero Kuosmanen, and Pekka Jänis for compute infrastructure and help with the code release. diff --git a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Bike Rider Mod APK Download and Enjoy Unlimited Money and Fun.md b/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Bike Rider Mod APK Download and Enjoy Unlimited Money and Fun.md deleted file mode 100644 index 08789578c23baa99f1b69e1e7f82a2b6555918d8..0000000000000000000000000000000000000000 --- a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Bike Rider Mod APK Download and Enjoy Unlimited Money and Fun.md +++ /dev/null @@ -1,114 +0,0 @@ - - - - - - - - -
            Article with HTML formatting
            -

            Mod APK of Bike Rider: What You Need to Know

            -

            If you love playing bike rider games on your Android device, you might have heard of mod apks. These are modified versions of original applications that offer some advantages over the official ones. For example, you might get unlimited money, unlocked features, or enhanced graphics in a mod apk. But what exactly are mod ap ks and why do people use them? And what are bike rider games and what are their features? In this article, we will answer these questions and more. We will also give you some examples of popular bike rider games and their mod apks, as well as the benefits and risks of using them. Finally, we will show you how to download and install mod apks for bike rider games, and share some tips and tricks for playing them. So, let's get started!

            -

            Introduction

            -

            A mod apk is a modified version of an Android application that usually offers some advantages over the original version, such as unlimited money, unlocked features, or enhanced graphics. Mod apks are created by third-party developers or hackers who modify the original code of the app to change its behavior or appearance. Mod apks are not available on the official Google Play Store, but can be downloaded from various websites or forums that host them.

            -

            mod apk of bike rider


            Download ☆☆☆☆☆ https://ssurll.com/2uNURD



            -

            A bike rider game is a game genre that involves racing or performing stunts with a motorcycle on various tracks or environments. Bike rider games are popular among gamers who enjoy speed, adrenaline, and challenge. Bike rider games usually have realistic physics, graphics, and sound effects that create an immersive biking experience. Bike rider games also have different modes, levels, and features that add variety and fun to the gameplay.

            -

            Some examples of popular bike rider games and their mod apks are:

            -
              -
            • Bike Race Free: This is a simple but addictive game that lets you race against millions of players online or offline. You can also create your own levels and share them with others. The mod apk of this game gives you unlimited money and unlocks all bikes and tracks.
            • -
            • Traffic Rider: This is a realistic game that puts you behind the handlebars of a motorcycle and lets you ride through endless highway traffic. You can choose from 29 different bikes and upgrade them as you progress. The mod apk of this game gives you unlimited money and gold, and unlocks all bikes and modes.
            • -
            • Mad Skills Motocross 2: This is a challenging game that tests your skills and reflexes on various tracks and terrains. You can compete with other players online or offline, and customize your bike and rider. The mod apk of this game gives you unlimited rockets and unlocks all bikes and tracks.
            • -
            -

            Benefits of Using Mod APKs for Bike Rider Games

            -

            Using mod apks for bike rider games can have some benefits that make the gameplay more enjoyable and rewarding. Some of these benefits are:

            -
              -
            • Unlimited money and resources: With mod apks, you can get unlimited money and resources to upgrade your bike and unlock new tracks. This way, you can enjoy the full potential of the game without spending real money or waiting for long hours.
            • -
            • Enhanced graphics and sound effects: Some mod apks offer enhanced graphics and sound effects that improve the quality of the game. For example, you might get better lighting, shadows, textures, or animations that make the game more realistic and immersive.
            • -
            • Access to exclusive features and modes: Some mod apks give you access to exclusive features and modes that are not available in the original version of the game. For example, you might get extra power-ups, weapons, vehicles, or challenges that add more fun and variety to the gameplay.
            • -
            • Ability to customize your bike and rider: Some mod apks allow you to customize your bike and rider according to your preferences. For example, you might be able to change the color, design, or performance of your bike, or the appearance, outfit, or accessories of your rider.
            • -
            -

            Risks of Using Mod APKs for Bike Rider Games

            -

            However, using mod apks for bike rider games also comes with some risks that you should be aware of before downloading them. Some of these risks are:

            -
              -
            • Potential malware and viruses: Since mod apks are not verified by Google Play Store, they might contain malware or viruses that can harm your device or steal your data. Therefore, you should always download mod apks from reliable sources and scan them with antivirus software before installing them.
            • -
            • Legal issues and ethical concerns: Using mod apks might violate the terms of service of the original developers of the game. This might result in legal actions or bans from the official servers or platforms. Moreover, using mod apks might be unfair to other players who play by the rules and respect the work of the developers.
            • -
            • Compatibility and stability issues: Mod apks might not be compatible with your device model or operating system, or might not be updated to the latest version of the game. This might cause crashes or glitches in the game, or prevent you from playing it at all. Therefore, you should always check the compatibility and update status of the mod apk before installing it.
            • -
            • Loss of progress and achievements: Mod apks might not sync with the official servers or cloud storage of the game, or might overwrite your existing data. This might cause you to lose your progress and achievements in the game, or make them invalid or inaccessible. Therefore, you should always backup your data before using mod apks, and use them at your own risk.
            • -
            -

            How to Download and Install Mod APKs for Bike Rider Games

            -

            If you want to try out mod apks for bike rider games, you need to follow some steps to download and install them on your device. Here are the steps:

            -
              -
            1. Find a reliable source that offers mod apks for bike rider games: You can search online for websites or forums that host mod apks for bike rider games. You can also check the reviews and ratings of the mod apks from other users to see if they are safe and working.
            2. -
            3. Check the reviews and ratings of the mod apk before downloading it: Before you download a mod apk, you should read the description and details of the mod apk to see what features and changes it offers. You should also check the reviews and ratings of the mod apk from other users to see if they are satisfied with it and if they encountered any problems or issues.
            4. -
            5. Enable unknown sources in your device settings to allow installation of third-party apps: Since mod apks are not available on the Google Play Store, you need to enable unknown sources in your device settings to allow installation of third-party apps. To do this, go to Settings > Security > Unknown Sources and toggle it on.
            6. -
            7. Download and install the mod apk file on your device: After you find a reliable source and check the reviews and ratings of the mod apk, you can download it on your device. You might need to grant some permissions to the app to access your device storage or other features. After the download is complete, you can install the mod apk file by tapping on it.
            8. -
            9. Launch the game and enjoy the modded features: After you install the mod apk file, you can launch the game and enjoy the modded features. You might need to disable your internet connection or use a VPN to avoid detection by the official servers or platforms.
            10. -
            -

            Tips and Tricks for Playing Bike Rider Games with Mod APKs

            -

            Playing bike rider games with mod apks can be fun and exciting, but it can also be challenging and tricky. Here are some tips and tricks for playing bike rider games with mod apks:

            -
              -
            • Use the brake button wisely to make sharp turns and avoid obstacles: In bike rider games, you need to use the brake button to slow down your speed and make sharp turns on tight corners. You also need to use it to avoid crashing into obstacles or falling off cliffs. However, don't use it too much or too little, as it might affect your momentum and balance.
            • -
            • Master the double jump technique to perform amazing stunts and earn extra points: In bike rider games, you can perform amazing stunts by using the double jump technique. This technique involves tapping the jump button twice in quick succession to make your bike fly higher and longer in the air. You can then tilt your device or use the arrow buttons to rotate your bike and perform flips, spins, or twists. This will earn you extra points and boost your score.
            • -
            • Choose the right bike and gear for each track and environment: In bike rider games, you can choose from different bikes and gear that have different attributes and performance. For example, some bikes are faster, lighter, or more agile than others. Some gear can improve your speed, acceleration, or handling. You should choose the right bike and gear for each track and environment that suit your style and strategy.
            • -
            • Experiment with different camera angles and styles to find your optimal view: In bike rider games, you can change the camera angle and style to find your optimal view of the game. For example, some camera angles are closer or farther from your bike, while some camera styles are fixed or dynamic. You should experiment with different camera angles and styles to find the one that gives you the best visibility and control of your bike.
            • -
            • Collect power-ups and coins to boost your speed and score: In bike rider games, you can collect power-ups and coins that can boost your speed and score. For example, some power-ups can give you a turbo boost, a shield, or a magnet that attracts coins. Some coins can increase your score, unlock new bikes or tracks, or activate special features. You should collect as many power-ups and coins as you can to enhance your gameplay.
            • -
            -

            Conclusion

            -

            Bike rider games are fun and exciting games that let you race or perform stunts with a motorcycle on various tracks or environments. Mod apks are modified versions of original applications that offer some advantages over the official ones, such as unlimited money, unlocked features, or enhanced graphics. Using mod apks for bike rider games can have some benefits, such as enjoying the full potential of the game, but also some risks, such as potential malware or legal issues. Therefore, you should always be careful and responsible when using mod apks for bike rider games. You should also follow some steps to download and install mod apks for bike rider games, and some tips and tricks to play them.

            -

            bike rider mod apk unlimited money
            -bike rider mod apk download for android
            -bike rider mod apk latest version
            -bike rider mod apk free download
            -bike rider mod apk hack
            -bike rider mod apk revdl
            -bike rider mod apk rexdl
            -bike rider mod apk happymod
            -bike rider mod apk android 1
            -bike rider mod apk offline
            -bike rider mod apk no ads
            -bike rider mod apk all bikes unlocked
            -bike rider mod apk unlimited coins and gems
            -bike rider mod apk unlimited everything
            -bike rider mod apk 2023
            -bike rider mod apk 5.7.5
            -bike rider mod apk 5.7.4
            -bike rider mod apk 5.7.3
            -bike rider mod apk 5.7.2
            -bike rider mod apk 5.7.1
            -bike rider mod apk 5.7.0
            -bike rider mod apk 5.6.9
            -bike rider mod apk 5.6.8
            -bike rider mod apk 5.6.7
            -bike rider mod apk 5.6.6
            -bike rider mod apk for ios
            -bike rider mod apk for pc
            -bike rider mod apk for windows 10
            -bike rider mod apk for laptop
            -bike rider mod apk for macbook
            -bike rider mod apk online play
            -bike rider mod apk multiplayer
            -bike rider mod apk new update
            -bike rider mod apk old version
            -bike rider mod apk original
            -bike rider mod apk pure
            -bike rider mod apk premium
            -bike rider mod apk pro
            -bike rider mod apk vip
            -bike rider mod apk full version
            -bike rider hack mod apk download
            -download game bike rider mod apk
            -real moto: traffic racer - motorbike racing game - free games - offline games - motorbike games - motorcycle games - moto games - moto racing games - moto traffic racer - moto traffic games - moto traffic race - moto traffic race 2 - moto traffic race 3d - moto traffic simulator - moto traffic highway - moto traffic highway racer - moto traffic highway racing - moto traffic highway racing game - moto traffic highway racing game 2023 - moto traffic highway racing game download - moto traffic highway racing game free download - moto traffic highway racing game hack - moto traffic highway racing game latest version - moto traffic highway racing game offline - moto traffic highway racing game online play - moto traffic highway racing game unlimited money

            -

            If you want to try out mod apks for bike rider games, you can search online for reliable sources that offer them. You can also check out some examples of popular bike rider games and their mod apks, such as Bike Race Free, Traffic Rider, and Mad Skills Motocross 2. However, you should always respect the work of the original developers and play by the rules. Mod apks are meant to enhance your gaming experience, not to ruin it.

            -

            We hope you enjoyed this article and learned something new about mod apks for bike rider games. If you have any questions, comments, or feedback, please feel free to share them with us. We would love to hear from you!

            -

            FAQs

            -

            Here are some frequently asked questions about mod apks for bike rider games:

            -
              -
            1. What is a mod apk?: A mod apk is a modified version of an Android application that usually offers some advantages over the original version, such as unlimited money, unlocked features, or enhanced graphics.
            2. -
            3. What is a bike rider game?: A bike rider game is a game genre that involves racing or performing stunts with a motorcycle on various tracks or environments.
            4. -
            5. What are some popular bike rider games and their mod apks?: Some examples of popular bike rider games and their mod apks are Bike Race Free, Traffic Rider, and Mad Skills Motocross 2.
            6. -
            7. What are the benefits and risks of using mod apks for bike rider games?: Some benefits of using mod apks for bike rider games are unlimited money and resources, enhanced graphics and sound effects, access to exclusive features and modes, and ability to customize your bike and rider. Some risks of using mod apks for bike rider games are potential malware and viruses, legal issues and ethical concerns, compatibility and stability issues, and loss of progress and achievements.
            8. -
            9. How can I download and install mod apks for bike rider games?: To download and install mod apks for bike rider games, you need to find a reliable source that offers them, check the reviews and ratings of the mod apk before downloading it, enable unknown sources in your device settings to allow installation of third-party apps, download and install the mod apk file on your device, and launch the game and enjoy the modded features.
            10. -
            -

            401be4b1e0
            -
            -
            \ No newline at end of file diff --git a/spaces/sinian/nihao/index.html b/spaces/sinian/nihao/index.html deleted file mode 100644 index 918e851d9dd1baf9e4fb4f067fd979d432472161..0000000000000000000000000000000000000000 --- a/spaces/sinian/nihao/index.html +++ /dev/null @@ -1,24 +0,0 @@ - - - - - - My static Space - - - -
            -

            Welcome to your static Space!

            -

            - You can modify this app directly by editing index.html in the - Files and versions tab. -

            -

            - Also don't forget to check the - Spaces documentation. -

            -
            - - diff --git a/spaces/skf15963/summary/fengshen/examples/zen2_finetune/ner_zen2_large_weibo.sh b/spaces/skf15963/summary/fengshen/examples/zen2_finetune/ner_zen2_large_weibo.sh deleted file mode 100644 index 7fab2998437ef8c12dcd93466371d0324eec4c79..0000000000000000000000000000000000000000 --- a/spaces/skf15963/summary/fengshen/examples/zen2_finetune/ner_zen2_large_weibo.sh +++ /dev/null @@ -1,91 +0,0 @@ -#!/bin/bash -#SBATCH --job-name=zen2_large_weibo # create a short name for your job -#SBATCH --nodes=1 # node count -#SBATCH --ntasks=1 # total number of tasks across all nodes -#SBATCH --cpus-per-task=30 # cpu-cores per task (>1 if multi-threaded tasks) -#SBATCH --gres=gpu:1 # number of gpus per node -#SBATCH --mail-type=ALL # send email when job begins, ends or failed etc. -#SBATCH -o %x-%j.log # output and error file name (%x=job name, %j=job id) - - -# export CUDA_VISIBLE_DEVICES='2' -export TORCH_EXTENSIONS_DIR=/cognitive_comp/ganruyi/tmp/torch_extendsions - -MODEL_NAME=zen2_large - -TASK=weibo - -ZERO_STAGE=1 -STRATEGY=deepspeed_stage_${ZERO_STAGE} - -ROOT_DIR=/cognitive_comp/ganruyi/experiments/ner_finetune/${MODEL_NAME}_${TASK} -if [ ! -d ${ROOT_DIR} ];then - mkdir -p ${ROOT_DIR} - echo ${ROOT_DIR} created!!!!!!!!!!!!!! -else - echo ${ROOT_DIR} exist!!!!!!!!!!!!!!! -fi - -DATA_DIR=/cognitive_comp/lujunyu/data_zh/NER_Aligned/weibo/ -PRETRAINED_MODEL_PATH=/cognitive_comp/ganruyi/hf_models/zen/zh_zen_large_2.0 - -CHECKPOINT_PATH=${ROOT_DIR}/ckpt/ -OUTPUT_PATH=${ROOT_DIR}/predict.json - -DATA_ARGS="\ - --data_dir $DATA_DIR \ - --train_data train.all.bmes \ - --valid_data test.all.bmes \ - --test_data test.all.bmes \ - --train_batchsize 16 \ - --valid_batchsize 16 \ - --max_seq_length 256 \ - --task_name weibo \ - " - -MODEL_ARGS="\ - --learning_rate 3e-5 \ - --weight_decay 0.1 \ - --warmup_ratio 0.01 \ - --markup bioes \ - --middle_prefix M- \ - " - -MODEL_CHECKPOINT_ARGS="\ - --monitor val_f1 \ - --save_top_k 3 \ - --mode max \ - --every_n_train_steps 100 \ - --save_weights_only True \ - --dirpath $CHECKPOINT_PATH \ - --filename model-{epoch:02d}-{val_f1:.4f} \ - " - -TRAINER_ARGS="\ - --max_epochs 30 \ - --gpus 1 \ - --check_val_every_n_epoch 1 \ - --val_check_interval 20 \ - --default_root_dir $ROOT_DIR \ - " - - -options=" \ - --pretrained_model_path $PRETRAINED_MODEL_PATH \ - --vocab_file $PRETRAINED_MODEL_PATH/vocab.txt \ - --do_lower_case \ - --output_save_path $OUTPUT_PATH \ - $DATA_ARGS \ - $MODEL_ARGS \ - $MODEL_CHECKPOINT_ARGS \ - $TRAINER_ARGS \ -" -SCRIPT_PATH=/cognitive_comp/ganruyi/Fengshenbang-LM/fengshen/examples/zen2_finetune/fengshen_token_level_ft_task.py -/home/ganruyi/anaconda3/bin/python $SCRIPT_PATH $options - -# SINGULARITY_PATH=/cognitive_comp/ganruyi/pytorch21_06_py3_docker_image_v2.sif -# python3 $SCRIPT_PATH $options -# source activate base -# singularity exec --nv -B /cognitive_comp/:/cognitive_comp/ $SINGULARITY_PATH /home/ganruyi/anaconda3/bin/python $SCRIPT_PATH $options -# /home/ganruyi/anaconda3/bin/python $SCRIPT_PATH $options - diff --git a/spaces/sklkd93/CodeFormer/CodeFormer/basicsr/utils/registry.py b/spaces/sklkd93/CodeFormer/CodeFormer/basicsr/utils/registry.py deleted file mode 100644 index 655753b3b9cbd0cfe73fe93a77cf1fcc3db6d827..0000000000000000000000000000000000000000 --- a/spaces/sklkd93/CodeFormer/CodeFormer/basicsr/utils/registry.py +++ /dev/null @@ -1,82 +0,0 @@ -# Modified from: https://github.com/facebookresearch/fvcore/blob/master/fvcore/common/registry.py # noqa: E501 - - -class Registry(): - """ - The registry that provides name -> object mapping, to support third-party - users' custom modules. - - To create a registry (e.g. a backbone registry): - - .. code-block:: python - - BACKBONE_REGISTRY = Registry('BACKBONE') - - To register an object: - - .. code-block:: python - - @BACKBONE_REGISTRY.register() - class MyBackbone(): - ... - - Or: - - .. code-block:: python - - BACKBONE_REGISTRY.register(MyBackbone) - """ - - def __init__(self, name): - """ - Args: - name (str): the name of this registry - """ - self._name = name - self._obj_map = {} - - def _do_register(self, name, obj): - assert (name not in self._obj_map), (f"An object named '{name}' was already registered " - f"in '{self._name}' registry!") - self._obj_map[name] = obj - - def register(self, obj=None): - """ - Register the given object under the the name `obj.__name__`. - Can be used as either a decorator or not. - See docstring of this class for usage. - """ - if obj is None: - # used as a decorator - def deco(func_or_class): - name = func_or_class.__name__ - self._do_register(name, func_or_class) - return func_or_class - - return deco - - # used as a function call - name = obj.__name__ - self._do_register(name, obj) - - def get(self, name): - ret = self._obj_map.get(name) - if ret is None: - raise KeyError(f"No object named '{name}' found in '{self._name}' registry!") - return ret - - def __contains__(self, name): - return name in self._obj_map - - def __iter__(self): - return iter(self._obj_map.items()) - - def keys(self): - return self._obj_map.keys() - - -DATASET_REGISTRY = Registry('dataset') -ARCH_REGISTRY = Registry('arch') -MODEL_REGISTRY = Registry('model') -LOSS_REGISTRY = Registry('loss') -METRIC_REGISTRY = Registry('metric') diff --git a/spaces/smartinezbragado/reddit-topic-modelling/views.py b/spaces/smartinezbragado/reddit-topic-modelling/views.py deleted file mode 100644 index 5bef08c3579d680dc9d900ce5eabded156ec8dbc..0000000000000000000000000000000000000000 --- a/spaces/smartinezbragado/reddit-topic-modelling/views.py +++ /dev/null @@ -1,74 +0,0 @@ -import os -import pandas as pd -import tempfile -from bertopic import BERTopic -from src.reddit import RedditBot -from pretty_html_table import build_table -from flask import Blueprint, render_template, request, send_file, redirect, url_for, send_from_directory - -# DOWNLOADS_PATH = os.path.join(os.getcwd(), 'downloads') - -views = Blueprint(__name__, 'views') -reddit = RedditBot() -topic_model = BERTopic() - - -def retrieve_subreddits(data: dict) -> pd.DataFrame: - # Retrieve subreddits through its API - posts = reddit.get_subreddits_posts( - name=data.get('subreddit'), - type=data.get('type'), - amount=int(data.get('amount')) - ) - df = reddit.convert_posts_to_df(posts=posts) - df['Text'] = df.apply(lambda row: row.Title + ': ' + row.Content, axis=1) - return df - -@views.route('/', methods=['POST', 'GET']) -def home(): - data = request.form - if request.method == 'POST': - if (int(data.get('amount')) < 0 or int(data.get('amount')) > 1000): - return redirect(url_for('views.error', type_of_error='amount')) - elif data.get('type') not in ['hot', 'new', 'rising', 'top']: - print(data.get('type')) - return redirect(url_for('views.error', type_of_error='type')) - elif not reddit.subreddit_exists(data.get('subreddit')): - return redirect(url_for('views.error', type_of_error='subreddit')) - else: - # Retrieve subreddits - subreddits_df = retrieve_subreddits(data=data) - # Topic modelling using BERTtopic - _, _ = topic_model.fit_transform(subreddits_df.Text) - topics_df = topic_model.get_topic_info() - for t in topics_df.Topic: - topics_df.loc[topics_df.Topic == t, 'Top words'] = str([w for w, p in topic_model.get_topic(t)]) - # Donwload topics - # topics_df.to_csv(os.path.join(DOWNLOADS_PATH, 'topics.csv'), index=False) - topics_df.to_csv('topics.csv', index=False) - send_file('topics.csv', as_attachment=True) - # Download docs info - docs_df = topic_model.get_document_info(subreddits_df.Text) - docs_df.to_csv('docs_with_topics_info.csv', index=False) - send_file('docs_with_topics_info.csv', as_attachment=True) - return render_template('success.html', - topics = [build_table(topics_df, 'blue_light')], - titles=topics_df.columns.values, - docs = [build_table(docs_df, 'blue_light')], - docs_titles=docs_df.columns.values - ) - - return render_template('index.html') - -@views.route('/succes', methods=['GET']) -def success(): - return render_template('success.html') - -@views.route('/error/', methods=['GET']) -def error(type_of_error: str): - if type_of_error == 'amount': - return render_template('error.html', type_of_error='The amount is higher than 1000 or lower than 0') - elif type_of_error == 'type': - return render_template('error.html', type_of_error='The ordering is not within hot, rising, new, top') - elif type_of_error == 'subreddit': - return render_template('error.html', type_of_error='The subreddit does not exist') diff --git a/spaces/spacerini/code-search/app.py b/spaces/spacerini/code-search/app.py deleted file mode 100644 index b99647d6d9e56920bb44a4c368261b2ae35bf8d1..0000000000000000000000000000000000000000 --- a/spaces/spacerini/code-search/app.py +++ /dev/null @@ -1,109 +0,0 @@ -import gradio as gr -from datasets import load_from_disk -from pyserini.search.lucene import LuceneSearcher -from pyserini.analysis import JWhiteSpaceAnalyzer -from itertools import chain -from nltk.util import everygrams - -searcher = LuceneSearcher("index") -searcher.set_analyzer(JWhiteSpaceAnalyzer()) - -def tokenize_word(word, min_len=2, max_len=4): - return [''.join(ngram) for ngram in list(everygrams(word, min_len=min_len, max_len=max_len))] - -def tokenize_sentence(sentence, min_len=2, max_len=4): - return " ".join(chain(*[tokenize_word(word, min_len=min_len, max_len=max_len) for word in sentence.split()])) - -ds = load_from_disk("data") -NUM_PAGES = 10 # STATIC. THIS CAN'T CHANGE BECAUSE GRADIO CAN'T DYNAMICALLY CREATE COMPONENTS. -RESULTS_PER_PAGE = 5 - -TEXT_FIELD = "content" -METADATA_FIELD = "docid" - -def result_html(result, meta): - return ( - f"
            docid: {meta}

            " - f"
            {result[:250]}...

            {result[250:]}




            " - ) - -def format_results(results, query): - text_content = results[TEXT_FIELD] - query_words = query.split() - for word in query_words: - text_content = [text.replace(word, f"{word}") for text in text_content] - return "\n".join([result_html(result, meta) for result,meta in zip(text_content, results[METADATA_FIELD])]) - -def page_0(query): - untokenized_query = query - query = tokenize_sentence(query) - hits = searcher.search(query, k=NUM_PAGES*RESULTS_PER_PAGE) - ix = [int(hit.docid) for hit in hits] - results = ds.select(ix).shard(num_shards=NUM_PAGES, index=0, contiguous=True) - results = format_results(results, untokenized_query) - return results, [ix], gr.update(visible=True), untokenized_query - -def page_i(i, ix, query): - ix = ix[0] - results = ds.select(ix).shard(num_shards=NUM_PAGES, index=i, contiguous=True) - results = format_results(results, query) - return results, [ix], query - -with gr.Blocks(css="#b {min-width:15px;background:transparent;}") as demo: #border:white;box-shadow:none; - with gr.Row(): - gr.Markdown(value="""##

            Code search

            """) - with gr.Row(): - with gr.Column(scale=1): - pass - with gr.Column(scale=15): - gr.Markdown("""
            This search tool was used to validate tokenization scheme for code retrieval for the BigCode project. We indexed the 🎅 Santacoder training dataset (Python, Java, and JavaScript) and use a (2,4)-gram tokenizer to build the index. This is the same tokenization scheme that ended up being used to power the ⭐ StarCoder search tool.
            """) - with gr.Column(scale=1): - pass - with gr.Row(): - with gr.Column(scale=1): - result_list = gr.Dataframe(type="array", visible=False, col_count=1) - with gr.Column(scale=15): - query = gr.Textbox(lines=1, max_lines=1, placeholder="Search…", label="Query") - with gr.Column(scale=1): - with gr.Row(scale=1): - pass - with gr.Row(scale=1): - submit_btn = gr.Button("🔍", elem_id="b").style(full_width=False) - with gr.Row(scale=1): - pass - - with gr.Row(): - with gr.Column(scale=1): - pass - with gr.Column(scale=13): - c = gr.HTML(label="Results") - with gr.Row(visible=False) as pagination: - # left = gr.Button(value="◀", elem_id="b", visible=False).style(full_width=True) - page_1 = gr.Button(value="1", elem_id="b").style(full_width=True) - page_2 = gr.Button(value="2", elem_id="b").style(full_width=True) - page_3 = gr.Button(value="3", elem_id="b").style(full_width=True) - page_4 = gr.Button(value="4", elem_id="b").style(full_width=True) - page_5 = gr.Button(value="5", elem_id="b").style(full_width=True) - page_6 = gr.Button(value="6", elem_id="b").style(full_width=True) - page_7 = gr.Button(value="7", elem_id="b").style(full_width=True) - page_8 = gr.Button(value="8", elem_id="b").style(full_width=True) - page_9 = gr.Button(value="9", elem_id="b").style(full_width=True) - page_10 = gr.Button(value="10", elem_id="b").style(full_width=True) - # right = gr.Button(value="▶", elem_id="b", visible=False).style(full_width=True) - with gr.Column(scale=1): - pass - query.submit(fn=page_0, inputs=[query], outputs=[c, result_list, pagination, query]) - submit_btn.click(page_0, inputs=[query], outputs=[c, result_list, pagination, query]) - with gr.Box(visible=False): - nums = [gr.Number(i, visible=False, precision=0) for i in range(NUM_PAGES)] - page_1.click(fn=page_i, inputs=[nums[0], result_list, query], outputs=[c, result_list, query]) - page_2.click(fn=page_i, inputs=[nums[1], result_list, query], outputs=[c, result_list, query]) - page_3.click(fn=page_i, inputs=[nums[2], result_list, query], outputs=[c, result_list, query]) - page_4.click(fn=page_i, inputs=[nums[3], result_list, query], outputs=[c, result_list, query]) - page_5.click(fn=page_i, inputs=[nums[4], result_list, query], outputs=[c, result_list, query]) - page_6.click(fn=page_i, inputs=[nums[5], result_list, query], outputs=[c, result_list, query]) - page_7.click(fn=page_i, inputs=[nums[6], result_list, query], outputs=[c, result_list, query]) - page_8.click(fn=page_i, inputs=[nums[7], result_list, query], outputs=[c, result_list, query]) - page_9.click(fn=page_i, inputs=[nums[8], result_list, query], outputs=[c, result_list, query]) - page_10.click(fn=page_i, inputs=[nums[9], result_list, query], outputs=[c, result_list, query]) -demo.launch(enable_queue=True, debug=True) \ No newline at end of file diff --git a/spaces/stomexserde/gpt4-ui/Examples/((NEW)) Download Ebook Aku Melawan Teroris.md b/spaces/stomexserde/gpt4-ui/Examples/((NEW)) Download Ebook Aku Melawan Teroris.md deleted file mode 100644 index 4293000aefe758f9041e23f99a180554558d8342..0000000000000000000000000000000000000000 --- a/spaces/stomexserde/gpt4-ui/Examples/((NEW)) Download Ebook Aku Melawan Teroris.md +++ /dev/null @@ -1,24 +0,0 @@ - -I can try to write a title and an article with SEO optimization and HTML formatting for the keyword "Download Ebook Aku Melawan Teroris". Here is what I came up with: - -

            Download Ebook Aku Melawan Teroris: The Inspiring Story of a Former Terrorist Who Became a Peace Activist

            - -

            If you are looking for a book that will challenge your perspective on terrorism and inspire you to make a positive change in the world, you should download ebook Aku Melawan Teroris. This is the autobiography of Nasir Abas, a former leader of Jemaah Islamiyah, a terrorist group responsible for several bombings in Indonesia and Southeast Asia.

            - -

            Nasir Abas was born into a devout Muslim family in Malaysia and joined Jemaah Islamiyah when he was 18 years old. He was trained in Afghanistan and Pakistan and became an expert in explosives and military tactics. He rose through the ranks and became one of the most trusted lieutenants of the group's founder, Abdullah Sungkar.

            -

            Download Ebook Aku Melawan Teroris


            Download Zip ☆☆☆☆☆ https://urlgoal.com/2uI6cF



            - -

            However, everything changed when Nasir Abas was arrested by the Indonesian police in 2003. He decided to cooperate with the authorities and reveal the secrets of Jemaah Islamiyah. He also renounced violence and extremism and dedicated his life to promoting peace and tolerance. He wrote his memoir, Aku Melawan Teroris (I Fight Terrorists), to share his journey from terrorism to peace activism.

            - -

            In this ebook, you will learn about Nasir Abas's personal experiences as a terrorist and a peace activist. You will also gain insights into the history, ideology, and operations of Jemaah Islamiyah and other terrorist groups in Southeast Asia. You will discover how Nasir Abas transformed himself from a radical militant to a moderate Muslim who respects diversity and dialogue.

            - -

            Download ebook Aku Melawan Teroris today and get ready to be inspired by this remarkable story of redemption and courage. You will not regret reading this ebook that will open your eyes to the realities of terrorism and the possibilities of peace.

            -

            Since his release from prison in 2006, Nasir Abas has been actively involved in various peace initiatives and counterterrorism efforts. He has collaborated with the Indonesian police, the National Counterterrorism Agency (BNPT), and several civil society organizations to spread his anti-violence message and to prevent radicalization among vulnerable groups.

            - -

            Some of his activities include giving lectures and seminars at schools, universities, mosques, and prisons; producing books, comics, and videos that expose the fallacies of extremist ideology; conducting deradicalization programs for former terrorists and their families; and facilitating dialogue and reconciliation between victims and perpetrators of terrorism.

            - -

            Nasir Abas has also participated in regional and international forums to share his experiences and insights on countering violent extremism. He has visited countries such as Australia, Singapore, Malaysia, Thailand, the Philippines, and the United States to engage with various stakeholders and audiences. He has received recognition and appreciation from many governments and institutions for his contributions to peace and security.

            - -

            Nasir Abas believes that his work is a form of repentance and redemption for his past mistakes. He hopes that by telling his story, he can inspire others to reject violence and embrace tolerance. He also hopes that by working with the authorities and the society, he can help prevent further bloodshed and suffering caused by terrorism.

            7196e7f11a
            -
            -
            \ No newline at end of file diff --git a/spaces/stomexserde/gpt4-ui/Examples/Authentication For T6m Crack 47 UPD.md b/spaces/stomexserde/gpt4-ui/Examples/Authentication For T6m Crack 47 UPD.md deleted file mode 100644 index 9bab3d52bbc9c00276925a57e9bdf1bdf3b4416e..0000000000000000000000000000000000000000 --- a/spaces/stomexserde/gpt4-ui/Examples/Authentication For T6m Crack 47 UPD.md +++ /dev/null @@ -1,27 +0,0 @@ - -

            How to Bypass Authentication for T6M Crack 47

            -

            T6M is a modified version of Call of Duty Black Ops II that allows you to play multiplayer mode without buying the game. However, some users may encounter a problem when they try to launch the game through t6mp.exe. They may see a window asking for a username and password, which they do not have.

            -

            authentication for t6m crack 47


            DOWNLOAD · https://urlgoal.com/2uIanp



            -

            This article will show you how to bypass this authentication and play T6M Crack 47 without any hassle. All you need is a simple tool called Volta Sensor Decoding, which you can download from here[^1^]. Follow these steps to use it:

            -
              -
            1. Extract the Volta Sensor Decoding.rar file to a folder of your choice.
            2. -
            3. Run Volta Sensor Decoding.exe as administrator.
            4. -
            5. Select t6mp.exe from the T6M folder and click Open.
            6. -
            7. Click Decode and wait for the process to finish.
            8. -
            9. Close Volta Sensor Decoding and run t6mp.exe again.
            10. -
            -

            You should now be able to play T6M Crack 47 without any authentication. Enjoy!

            -

            Note: This method may not work for all versions of T6M or Windows. If you encounter any errors or problems, please refer to the original thread on MPGH[^1^] or other online sources[^2^] [^3^] [^4^] [^5^] for more information and solutions.

            - -

            But why would you want to play T6M Crack 47 in the first place? Well, there are many benefits of playing video games, especially multiplayer ones like T6M. Here are some of them:

            -
              -
            • Playing video games can improve your cognitive skills, such as attention, memory, problem-solving, and spatial reasoning. Studies have shown that gamers have better performance on tasks that require these skills than non-gamers[^2^].
            • -
            • Playing video games can also enhance your emotional well-being, as they can provide a source of entertainment, relaxation, socialization, and achievement. Video games can help you cope with stress, anxiety, depression, and boredom[^2^].
            • -
            • Playing video games can also foster your creativity and imagination, as they can expose you to different worlds, characters, stories, and challenges. Video games can stimulate your curiosity and exploration, and encourage you to express yourself in various ways[^3^].
            • -
            -

            Of course, playing video games also has some drawbacks, such as potential addiction, violence, aggression, and isolation. Therefore, it is important to play video games in moderation and balance them with other activities and responsibilities. You should also be aware of the legal and ethical issues of playing cracked games like T6M Crack 47.

            -

            -

            T6M Crack 47 is a modified version of Call of Duty Black Ops II that allows you to play multiplayer mode without buying the game. However, this also means that you are violating the intellectual property rights of the game developers and publishers. You may also be exposed to malware, viruses, hackers, and scammers when you download or play cracked games. Furthermore, you may miss out on some features and updates that are only available for the official version of the game.

            -

            Therefore, you should consider buying the game if you enjoy playing it and want to support the creators. You can find Call of Duty Black Ops II on Steam or other platforms for a reasonable price. You can also look for discounts or sales that may lower the cost. Buying the game will not only give you access to the full content and quality of the game, but also show your appreciation and respect for the people who made it.

            e93f5a0c3f
            -
            -
            \ No newline at end of file diff --git a/spaces/stomexserde/gpt4-ui/Examples/DOA Death Of Amar Movie Online 720p.md b/spaces/stomexserde/gpt4-ui/Examples/DOA Death Of Amar Movie Online 720p.md deleted file mode 100644 index 122b1ae424239d9cc79d047b373f3aaed0242eb4..0000000000000000000000000000000000000000 --- a/spaces/stomexserde/gpt4-ui/Examples/DOA Death Of Amar Movie Online 720p.md +++ /dev/null @@ -1,20 +0,0 @@ - -

            How to Watch DOA: Death of Amar Movie Online 720p

            -

            DOA: Death of Amar is a 2014 Hindi-language Bollywood film that stars Rajeev Khandelwal, Zareen Khan and Prashant Narayanan. The film is a thriller that revolves around a struggling actor who is poisoned and has only a few hours to live. He must find out who killed him and why before he dies.

            -

            If you are looking for a way to watch DOA: Death of Amar movie online 720p, you have come to the right place. In this article, we will show you how to stream or download the movie legally and safely. We will also give you some information about the movie, such as its plot, cast, reviews and awards.

            -

            DOA: Death of Amar movie online 720p


            Download Zip ►►► https://urlgoal.com/2uI9P0



            -

            Where to Watch DOA: Death of Amar Movie Online 720p

            -

            There are several options to watch DOA: Death of Amar movie online 720p, depending on your preference and budget. Here are some of the best ones:

            -
              -
            • Streaming Services: You can watch DOA: Death of Amar movie online 720p on various streaming platforms, such as Netflix, Amazon Prime Video, Hotstar, Zee5 and Eros Now. These services offer high-quality video and audio, as well as subtitles and other features. However, you will need to pay a monthly or annual subscription fee to access their content. You can also check if they offer a free trial or a discount for new users.
            • -
            • Download Sites: You can also download DOA: Death of Amar movie online 720p from various websites, such as Filmywap, Moviescounter, Worldfree4u and Pagalworld. These sites offer free downloads of the movie in different formats and sizes. However, you should be careful when using these sites, as they may contain viruses, malware or illegal content. You should also respect the copyright laws and avoid piracy.
            • -
            • Torrent Sites: Another option to watch DOA: Death of Amar movie online 720p is to use torrent sites, such as The Pirate Bay, Kickass Torrents, 1337x and RARBG. These sites allow you to download the movie using peer-to-peer technology, which means you share files with other users. However, this method is also risky and illegal, as you may expose your device to cyberattacks or face legal consequences.
            • -
            -

            What is DOA: Death of Amar Movie About

            -

            DOA: Death of Amar is a 2014 Hindi-language Bollywood film that was directed by Param Gill and written by Param Gill and Jitendra Tiwari. The film is a thriller that follows the story of Amar (Rajeev Khandelwal), a struggling actor who is poisoned by an unknown assailant and has only a few hours to live. He must find out who killed him and why before he succumbs to the poison.

            -

            The film also stars Zareen Khan as Jia, a journalist who helps Amar in his quest; Prashant Narayanan as Prem Chopra, a film producer who has a grudge against Amar; Murli Sharma as Inspector Rathore, a corrupt cop who is after Amar; Ravi Kishan as Ravi Khanna, a superstar who is involved in Amar's murder; Manoj Pahwa as Dr. Chawla, a doctor who treats Amar; and Mona Singh as Mona Singh, an actress who is Amar's ex-girlfriend.

            -

            What are the Reviews and Awards of DOA: Death of Amar Movie

            -

            DOA: Death of Amar received mixed reviews from critics and audiences. The film was praised for its performances, especially by Rajeev Khandelwal and Prashant Narayanan; its suspenseful plot; its cinematography; and its music. However, the film was also criticized for its weak script; its lack of originality; its poor editing; its low production value; and its unrealistic ending.

            -

            The film received its world premiere on 16 August 2014 as an official selection at the 22nd San Francisco Global Movie Fest and won the Audience Choice Award[^1^]. The film was also awarded

            cec2833e83
            -
            -
            \ No newline at end of file diff --git a/spaces/stomexserde/gpt4-ui/Examples/Dil Se Movie Watch Online.md b/spaces/stomexserde/gpt4-ui/Examples/Dil Se Movie Watch Online.md deleted file mode 100644 index a15c23b394b3f8242c45d7e2aaac510b30cd693d..0000000000000000000000000000000000000000 --- a/spaces/stomexserde/gpt4-ui/Examples/Dil Se Movie Watch Online.md +++ /dev/null @@ -1,23 +0,0 @@ -
            -

            How to Watch Dil Se, a Romantic Thriller Starring Shah Rukh Khan

            -

            Dil Se is a 1998 Hindi-language movie directed by Mani Ratnam and starring Shah Rukh Khan, Manisha Koirala and Preity Zinta. The movie tells the story of a radio journalist who falls in love with a mysterious woman who is involved in a terrorist plot. The movie is known for its stunning cinematography, haunting music and intense performances.

            -

            If you are looking for a way to watch Dil Se online, you have several options. Here are some of the platforms where you can stream or download the movie legally:

            -

            Dil Se Movie Watch Online


            Download File ►►►►► https://urlgoal.com/2uI5Ws



            -
              -
            • Netflix: Netflix is a popular streaming service that offers a wide range of movies and shows from different genres and countries. You can watch Dil Se on Netflix with a subscription plan that starts from $8.99 per month. You can also download the movie on your device and watch it offline.
            • -
            • Disney+ Hotstar: Disney+ Hotstar is another streaming service that features movies and shows from Disney, Marvel, Star Wars, National Geographic and more. You can watch Dil Se on Disney+ Hotstar with a subscription plan that starts from $9.99 per month. You can also download the movie on your device and watch it offline.
            • -
            • Google Play Movies, YouTube, Apple TV: These are platforms where you can buy or rent movies online. You can buy Dil Se for $3.99 or rent it for $2.99 on Google Play Movies or YouTube. You can also buy or rent it on Apple TV for the same price.
            • -
            -

            Dil Se is a movie that will keep you on the edge of your seat with its thrilling plot and romantic chemistry. If you are a fan of Shah Rukh Khan or Mani Ratnam, you should definitely watch this movie online.

            - -

            If you are wondering what makes Dil Se a must-watch movie, here are some of the reasons:

            -
              -
            1. The story: Dil Se is not a typical Bollywood romance. It explores the complex and tragic relationship between two people who are drawn to each other but belong to different worlds. The movie also touches upon the sensitive issues of terrorism, insurgency and nationalism in India's northeast region. The movie does not shy away from showing the harsh realities and the human cost of violence and conflict.
            2. -
            3. The direction: Mani Ratnam is one of the most acclaimed and influential filmmakers in India. He is known for his realistic and poetic style of storytelling. He uses cinematic techniques such as symbolism, imagery, editing and sound to create a powerful and immersive experience for the viewers. He also extracts brilliant performances from his actors and makes them portray their characters with depth and nuance.
            4. -
            5. The music: A.R. Rahman is a musical genius who has composed some of the most memorable and iconic songs in Indian cinema. Dil Se is one of his finest works, where he blends traditional and modern elements to create a diverse and dynamic soundtrack. The songs are not only catchy and melodious, but also convey the mood and emotions of the scenes. The songs are also beautifully choreographed and picturized, especially the famous "Chaiyya Chaiyya" song that was shot on a moving train.
            6. -
            7. The acting: Shah Rukh Khan, Manisha Koirala and Preity Zinta deliver stellar performances in Dil Se. Shah Rukh Khan plays Amar, a passionate and impulsive journalist who falls madly in love with Meghna, a mysterious and aloof woman who has a dark past. Manisha Koirala plays Meghna, a conflicted and tormented soul who is torn between her love for Amar and her loyalty to her cause. Preity Zinta plays Preeti, a bubbly and cheerful girl who is engaged to Amar but realizes that he loves someone else.
            8. -
            -

            Dil Se is a movie that will make you feel a range of emotions, from joy to sorrow, from hope to despair, from love to hate. It is a movie that will make you think about the meaning of love, life and sacrifice. It is a movie that will stay with you long after it ends.

            -

            e93f5a0c3f
            -
            -
            \ No newline at end of file diff --git a/spaces/stomexserde/gpt4-ui/Examples/Google Book Downloader Userscript Di Firefox For Mac PATCHED.md b/spaces/stomexserde/gpt4-ui/Examples/Google Book Downloader Userscript Di Firefox For Mac PATCHED.md deleted file mode 100644 index 0d05446c4767ab82299dca4e883db42ab0609f70..0000000000000000000000000000000000000000 --- a/spaces/stomexserde/gpt4-ui/Examples/Google Book Downloader Userscript Di Firefox For Mac PATCHED.md +++ /dev/null @@ -1,22 +0,0 @@ -
            -

            How to Download Google Books as PDFs Using Firefox for Mac

            -

            Google Books is a great resource for finding and reading books online, but sometimes you may want to download them as PDF files for offline access or printing. Unfortunately, not all books on Google Books are available for download, and some of them are only partially viewable. However, there is a way to bypass these limitations using a userscript and a browser extension for Firefox on Mac.

            -

            A userscript is a piece of code that runs on certain web pages to modify their appearance or functionality. A browser extension is a software that adds new features or capabilities to your browser. In this case, we will use a userscript called Google Book Downloader[^1^] and a browser extension called Greasemonkey[^2^] to download Google Books as PDFs.

            -

            Google Book Downloader Userscript Di Firefox For Mac


            Download Ziphttps://urlgoal.com/2uI8sp



            -

            Here are the steps to follow:

            -
              -
            1. Download and install Firefox for Mac from here if you don't have it already.
            2. -
            3. Download and install Greasemonkey from here. This will allow you to run userscripts on Firefox.
            4. -
            5. Download the Google Book Downloader userscript from here. This will enable you to download Google Books as PDFs.
            6. -
            7. Open Firefox and go to the Google Book Downloader userscript page. Click on the "Install" button and confirm the installation.
            8. -
            9. Go to the Google Books website and find the book you want to download. You should see a new button on the top right corner of the book preview that says "Download this book". Click on it and wait for the download to start.
            10. -
            11. You will get a PDF file of the book in your Downloads folder. You can open it with any PDF reader or print it as you wish.
            12. -
            -

            Note: This method may not work for all books on Google Books, especially those that are very large or have complex formatting. Also, some books may have missing pages or images due to the way they are scanned by Google. Use this method at your own risk and respect the copyright of the authors and publishers.

            - -

            Google Books is one of the largest online libraries in the world, with millions of books in various languages and genres. You can browse, search, and read books on any topic you are interested in, from fiction to non-fiction, from history to science, from classics to contemporary. You can also discover new books based on your preferences and recommendations.

            -

            However, Google Books also has some limitations that may prevent you from fully enjoying the books you find. For example, some books are only available for preview, which means you can only see a few pages or chapters of the book. Some books are not available for download at all, which means you cannot save them on your device or print them out. Some books have poor quality scans or missing content due to the way they are digitized by Google.

            -

            That's why some people may want to use a userscript and a browser extension to download Google Books as PDFs. A userscript is a piece of code that runs on certain web pages to modify their appearance or functionality. A browser extension is a software that adds new features or capabilities to your browser. In this case, we will use a userscript called Google Book Downloader and a browser extension called Greasemonkey to download Google Books as PDFs.

            -

            7b8c122e87
            -
            -
            \ No newline at end of file diff --git a/spaces/sub314xxl/MetaGPT/tests/metagpt/actions/test_design_api.py b/spaces/sub314xxl/MetaGPT/tests/metagpt/actions/test_design_api.py deleted file mode 100644 index e6a396ad008c0b890afeb196456c027a013e76de..0000000000000000000000000000000000000000 --- a/spaces/sub314xxl/MetaGPT/tests/metagpt/actions/test_design_api.py +++ /dev/null @@ -1,34 +0,0 @@ -#!/usr/bin/env python -# -*- coding: utf-8 -*- -""" -@Time : 2023/5/11 19:26 -@Author : alexanderwu -@File : test_design_api.py -""" -import pytest - -from metagpt.actions.design_api import WriteDesign -from metagpt.logs import logger -from tests.metagpt.actions.mock import PRD_SAMPLE - - -@pytest.mark.asyncio -async def test_design_api(): - prd = "我们需要一个音乐播放器,它应该有播放、暂停、上一曲、下一曲等功能。" - - design_api = WriteDesign("design_api") - - result = await design_api.run(prd) - logger.info(result) - assert len(result) > 0 - - -@pytest.mark.asyncio -async def test_design_api_calculator(): - prd = PRD_SAMPLE - - design_api = WriteDesign("design_api") - result = await design_api.run(prd) - logger.info(result) - - assert len(result) > 10 diff --git a/spaces/sub314xxl/MetaGPT/tests/metagpt/tools/test_web_browser_engine.py b/spaces/sub314xxl/MetaGPT/tests/metagpt/tools/test_web_browser_engine.py deleted file mode 100644 index 283633bd6adeb362c5e9cb2938bc4fd7121050b9..0000000000000000000000000000000000000000 --- a/spaces/sub314xxl/MetaGPT/tests/metagpt/tools/test_web_browser_engine.py +++ /dev/null @@ -1,31 +0,0 @@ -""" -@Modified By: mashenquan, 2023/8/20. Remove global configuration `CONFIG`, enable configuration support for business isolation. -""" - -import pytest - -from metagpt.config import Config -from metagpt.tools import WebBrowserEngineType, web_browser_engine - - -@pytest.mark.asyncio -@pytest.mark.parametrize( - "browser_type, url, urls", - [ - (WebBrowserEngineType.PLAYWRIGHT, "https://fuzhi.ai", ("https://fuzhi.ai",)), - (WebBrowserEngineType.SELENIUM, "https://fuzhi.ai", ("https://fuzhi.ai",)), - ], - ids=["playwright", "selenium"], -) -async def test_scrape_web_page(browser_type, url, urls): - conf = Config() - browser = web_browser_engine.WebBrowserEngine(options=conf.runtime_options, engine=browser_type) - result = await browser.run(url) - assert isinstance(result, str) - assert "深度赋智" in result - - if urls: - results = await browser.run(url, *urls) - assert isinstance(results, list) - assert len(results) == len(urls) + 1 - assert all(("深度赋智" in i) for i in results) diff --git a/spaces/supertori/files/stable-diffusion-webui/extensions/openpose-editor/README.md b/spaces/supertori/files/stable-diffusion-webui/extensions/openpose-editor/README.md deleted file mode 100644 index 3bff31bd15749ce4ccfafa118219b0cf2bedb0ec..0000000000000000000000000000000000000000 --- a/spaces/supertori/files/stable-diffusion-webui/extensions/openpose-editor/README.md +++ /dev/null @@ -1,41 +0,0 @@ -## Openpose Editor - -日本語 | [English](README.en.md)|[简体中文](README.zh-cn.md) - -![image](https://user-images.githubusercontent.com/92153597/219921945-468b2e4f-a3a0-4d44-a923-13ceb0258ddc.png) - -Automatic1111/stable-diffusion-webui用のOpenpose Editor - -- ポーズの編集 -- ポーズの検出 - -ができます - -- 「Add」: 人を追加する -- 「Detect from image」: 画像からポーズを検出する -- 「Add Background image」: 背景を追加する - -- 「Save PNG」: PNGで保存する -- 「Send to ControlNet」: Controlnet拡張機能がインストールされている場合、画像をそこに送る - -## インストール方法 - -1. "Extension" タブを開く -2. "Install from URL" タブを開く -3. "URL for extension's git repository" 欄にこのリポジトリの URL (https://github.com/fkunn1326/openpose-editor.git) を入れます。 -4. "Install" ボタンを押す -5. WebUIを再起動する - -## 注意 - -ConrtolNetの "Preprocessor" には、何も指定しないようにしてください。 - -## エラーの対策 - -> urllib.error.URLError: - - -以下のファイルを開いてださい -``` -/Applications/Python\ $version /Install\ Certificates.command -``` diff --git a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/Grundlagen-Der-Elektrotechnik-1-Albachpdf.md b/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/Grundlagen-Der-Elektrotechnik-1-Albachpdf.md deleted file mode 100644 index 336cc965d7bbcff316b3f8e9da9bad7530b356d7..0000000000000000000000000000000000000000 --- a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/Grundlagen-Der-Elektrotechnik-1-Albachpdf.md +++ /dev/null @@ -1,102 +0,0 @@ -## Grundlagen Der Elektrotechnik 1 Albach.pdf - - - - - - ![Grundlagen Der Elektrotechnik 1 Albach.pdf](https://assets.wakelet.com/monomer/thumbnail/wakelet-socail-thumbnail.png) - - - - - -**Grundlagen Der Elektrotechnik 1 Albach.pdf ->>->>->> [https://urlgoal.com/2txw91](https://urlgoal.com/2txw91)** - - - - - - - - - - - - - -# Grundlagen der Elektrotechnik 1 Albach.pdf: Ein hervorragendes Lehrbuch für Einführungskurse - - - -Wenn Sie auf der Suche nach einem Lehrbuch sind, das Ihnen die physikalischen Grundlagen der Elektrotechnik und Elektronik vermittelt, dann sollten Sie sich das Buch **Grundlagen der Elektrotechnik 1** von Manfred Albach ansehen. Dieses Buch bietet Ihnen in der dritten und aktualisierten Auflage einen hervorragenden Einstieg in das Fachgebiet. - - - -In diesem Buch lernen Sie die wichtigsten Konzepte und Methoden der Elektrotechnik kennen, wie zum Beispiel: - - - -- Kraftwirkungen zwischen Ladungen und Strömen - -- Elektrisches und magnetisches Feld - -- Spannung, Strom, Widerstand, Kapazität und Induktivität - -- Passive Bauelemente und Gleichstromschaltungen - -- Netzwerkanalyse und Wirkungsgrad - -- Stromleitungsmechanismen in verschiedenen Medien - -- Faraday'sches Induktionsgesetz und seine Anwendungen - -- Drehstromgeneratoren, Übertrager und Transformatoren - - - -Dieses Buch ist Teil 1 des Buches **Elektrotechnik** vom gleichen Autor. Es enthält viele praktische Beispiele, Aufgaben und einen mathematischen Anhang, der Ihnen als wertvolles Nachschlagewerk dient. Außerdem können Sie auf der Website von Studocu die Lösungen zu den Übungsaufgaben finden, die auf diesem Buch basieren. - - - -Sie können das Buch **Grundlagen der Elektrotechnik 1 Albach.pdf** online bei bücher.de als eBook herunterladen oder als gedruckte Ausgabe bestellen. Es ist ein empfehlenswertes Lehrbuch für alle Studierenden, die sich mit den Grundlagen der Elektrotechnik beschäftigen wollen. - - - -Wenn Sie das Buch **Grundlagen der Elektrotechnik 1 Albach.pdf** gelesen haben, können Sie Ihr Wissen vertiefen und erweitern, indem Sie sich mit dem Buch **Grundlagen der Elektrotechnik 2** von Manfred Albach beschäftigen. Dieses Buch behandelt die Themen Wechselstromschaltungen, Schwingkreise, Fourier-Analyse, Laplace-Transformation, Filter und Verstärker. Es ist die Fortsetzung des Buches **Elektrotechnik** vom gleichen Autor. - - - -In diesem Buch lernen Sie die wichtigsten Konzepte und Methoden der Wechselstromtechnik kennen, wie zum Beispiel: - - - -- Wechselstromwiderstand und Impedanz - -- Komplexe Rechnung und Zeigerdiagramme - -- Sinusförmige und nichtsinusförmige Wechselströme - -- Schwingkreise und Resonanzphänomene - -- Fourier-Analyse und Spektraldarstellung - -- Laplace-Transformation und Übertragungsfunktionen - -- Filter und Frequenzgang - -- Verstärker und Rückkopplung - - - -Dieses Buch ist Teil 2 des Buches **Elektrotechnik** vom gleichen Autor. Es enthält viele praktische Beispiele, Aufgaben und einen mathematischen Anhang, der Ihnen als wertvolles Nachschlagewerk dient. Außerdem können Sie auf der Website von Studocu die Lösungen zu den Übungsaufgaben finden, die auf diesem Buch basieren. - - - -Sie können das Buch **Grundlagen der Elektrotechnik 2 Albach.pdf** online bei bücher.de als eBook herunterladen oder als gedruckte Ausgabe bestellen. Es ist ein empfehlenswertes Lehrbuch für alle Studierenden, die sich mit den Grundlagen der Wechselstromtechnik beschäftigen wollen. - - dfd1c89656 - - - - - diff --git a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/The Great Escape Movie Hindi Dubbed Free Mp4 Download.md b/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/The Great Escape Movie Hindi Dubbed Free Mp4 Download.md deleted file mode 100644 index 4715925d737d178e359fcf17c1285a6f8eaecd55..0000000000000000000000000000000000000000 --- a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/The Great Escape Movie Hindi Dubbed Free Mp4 Download.md +++ /dev/null @@ -1,70 +0,0 @@ -## The Great Escape Movie Hindi Dubbed Free Mp4 Download - - - - - - - - - -**CLICK HERE - [https://urlca.com/2tyrvi](https://urlca.com/2tyrvi)** - - - - - - - - - - - - - -# The Great Escape: A Classic War Film in Hindi - - - -The Great Escape is a 1963 American World War II epic film based on an escape by British Commonwealth prisoners of war from a German POW camp during the war. The film stars Steve McQueen, James Garner, Richard Attenborough, Charles Bronson, Donald Pleasence, James Coburn and many others. It was directed by John Sturges and based on the 1950 book of the same name by Paul Brickhill. - - - -The film is widely regarded as one of the greatest war films of all time, and has been praised for its realism, suspense, humor and action sequences. The film also features one of the most iconic scenes in cinema history, where Steve McQueen's character attempts to jump over a barbed wire fence on a motorcycle while being chased by German soldiers. - - - -If you are a fan of war films or classic Hollywood cinema, you might be interested in watching The Great Escape in Hindi. You can download the movie for free in mp4 format from various online sources. Here are some of the links where you can find the movie: - - - -- [https://urlcod.com/2syUhd](https://urlcod.com/2syUhd) [^1^] - -- [https://forsatudif.mystrikingly.com/blog/the-great-escape-movie-hindi-dubbed-free-mp4-downloadl](https://forsatudif.mystrikingly.com/blog/the-great-escape-movie-hindi-dubbed-free-mp4-downloadl) [^2^] - -- [http://praxisbenefits.net/2022/06/10/the-great-escape-movie-hindi-dubbed-free-free-mp4-download/](http://praxisbenefits.net/2022/06/10/the-great-escape-movie-hindi-dubbed-free-free-mp4-download/) [^3^] - -- [https://soundcloud.com/zardtelunog1984/the-great-escape-movie-hindi-dubbed-free-mp4-download](https://soundcloud.com/zardtelunog1984/the-great-escape-movie-hindi-dubbed-free-mp4-download) [^4^] - - - -Enjoy watching The Great Escape in Hindi and let us know what you think of the movie in the comments below. - - - -The Great Escape is based on a true story of a mass escape attempt by Allied POWs from Stalag Luft III, a German air force prison camp in Sagan (now Zagan, Poland), in March 1944. The escape plan involved digging three tunnels, code-named Tom, Dick and Harry, under the camp's perimeter fence. The tunnels were equipped with ventilation, lighting, railways and forged documents. The escapees also had to make civilian clothes and disguises to blend in with the German population. - - - -Out of the 76 men who escaped through Harry, only three managed to reach freedom: a Norwegian and a Dutchman who boarded a ship to Sweden, and a British officer who flew to Spain. The rest were recaptured by the Germans, and 50 of them were executed by order of Hitler. The film depicts the escape attempt and its aftermath with some fictional elements and dramatization. - - - -The film was a huge commercial and critical success, earning seven Academy Award nominations, including Best Picture. It also won three Golden Globe Awards, including Best Motion Picture - Drama. The film's score by Elmer Bernstein is considered one of the best in film history, especially the main theme, which became synonymous with the film and its spirit of adventure. The film also inspired several sequels, remakes, parodies and video games. - - 145887f19f - - - - - diff --git a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/2021 Cracked Dc Unlocker Unlimited Credits New Versionl.md b/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/2021 Cracked Dc Unlocker Unlimited Credits New Versionl.md deleted file mode 100644 index 14dd8f40ba1271ae5aa48e1e20b61fd3d600aab4..0000000000000000000000000000000000000000 --- a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/2021 Cracked Dc Unlocker Unlimited Credits New Versionl.md +++ /dev/null @@ -1,6 +0,0 @@ -

            Cracked Dc Unlocker Unlimited Credits New Versionl


            Download File ✯✯✯ https://cinurl.com/2uEX6u



            -
            -Dc Unlocker Cracked with Unlimited Credits unlocker dc. ... dc password username version crack pc jbd windows enterprise wt dl dnt jan works think mixs latest. 4d29de3e1b
            -
            -
            -

            diff --git a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Adobe Premiere Pro CC 2018 12.1.2.69 (x64) Patch [CracksMind] Download.md b/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Adobe Premiere Pro CC 2018 12.1.2.69 (x64) Patch [CracksMind] Download.md deleted file mode 100644 index 6297b6c298c39564198d9a59a38dbe393cf4acb9..0000000000000000000000000000000000000000 --- a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Adobe Premiere Pro CC 2018 12.1.2.69 (x64) Patch [CracksMind] Download.md +++ /dev/null @@ -1,240 +0,0 @@ - -

            Adobe Premiere Pro CC 2018 12.1.2.69 (x64) Patch [CracksMind] Download: How to Get the Latest Version of the Best Video Editing Software

            - -

            Adobe Premiere Pro CC 2018 is one of the most popular and powerful video editing software in the market. It offers a comprehensive and professional solution for creating, editing, and delivering high-quality videos for any platform. Whether you are a beginner or a seasoned pro, you can use Adobe Premiere Pro CC 2018 to turn your raw footage into stunning productions with ease and efficiency.

            - -

            However, Adobe Premiere Pro CC 2018 is not a cheap software and you might not be able to afford it. That's why some people are looking for a way to get Adobe Premiere Pro CC 2018 12.1.2.69 (x64) patch [CracksMind] download. This patch will allow you to use the latest version of Adobe Premiere Pro CC 2018 without paying anything. However, we do not condone piracy and we recommend that you buy the original software if you can. Using a patched software may cause some issues with your system and may violate the terms of service of Adobe.

            -

            Adobe Premiere Pro CC 2018 12.1.2.69 (x64) Patch [CracksMind] download


            Download >>> https://cinurl.com/2uEYzF



            - -

            What is Adobe Premiere Pro CC 2018 12.1.2.69 (x64) Patch [CracksMind]?

            - -

            Adobe Premiere Pro CC 2018 12.1.2.69 (x64) patch [CracksMind] is a file that modifies the original software to bypass its activation and registration process. It also updates the software to the latest version, which is 12.1.2.69 for Windows 64-bit systems. This patch is created by CracksMind, a website that provides cracks, keygens, patches, serial keys, and other tools for various software and games.

            - -

            By using this patch, you can enjoy all the features and functions of Adobe Premiere Pro CC 2018 without any limitations or restrictions. You can also access all the updates, support, and customer service from Adobe.

            - -

            How to Download Adobe Premiere Pro CC 2018 12.1.2.69 (x64) Patch [CracksMind]?

            - -

            There are many websites that claim to offer Adobe Premiere Pro CC 2018 12.1.2.69 (x64) patch [CracksMind] download, but not all of them are reliable and safe. Some of them may contain viruses, malware, or spyware that can harm your computer. Some of them may also have fake or outdated links that will not work. Therefore, you need to be careful when choosing a website to download Adobe Premiere Pro CC 2018 12.1.2.69 (x64) patch [CracksMind].

            - -

            One of the websites that we found to be trustworthy and working is kickasstorrent.cr. This website has a lot of software and games that you can download for free using torrents. It also has a detailed description and installation guide for each software. Here are the steps to download Adobe Premiere Pro CC 2018 12.1.2.69 (x64) patch [CracksMind] from kickasstorrent.cr:

            - -
              -
            1. Go to kickasstorrent.cr and search for "Adobe Premiere Pro CC 2018 12.1.2.69 (x64) + Patch [CracksMind]".
            2. -
            3. Click on the link that says "Download Adobe Premiere Pro CC 2018 12.1.2.69 (x64) + Patch [CracksMind] Torrent - KickassTorrent".
            4. -
            5. You will be redirected to another page where you will see some information about the software and a download button.
            6. -
            7. Click on the download button and wait for the torrent file to be downloaded.
            8. -
            9. Open the torrent file with your preferred torrent client and start downloading the software.
            10. -
            11. The file size is about 1.4 GB.
            12. -
            13. After the download is complete, extract the file using WinRAR or any other software that can handle RAR files.
            14. -
            15. You will get a folder named "Adobe Premiere Pro CC 2018 12.1.2.69 (x64) + Patch [CracksMind]" that contains the setup file and the patch file.
            16. -
            - -

            How to Install Adobe Premiere Pro CC 2018 12.1.2.69 (x64) Patch [CracksMind]?

            - -

            After you have downloaded Adobe Premiere Pro CC 2018 12.1.2.69 (x64) patch [CracksMind], you need to install it on your computer -. -Here are -the steps -to install -Adobe Premiere Pro CC -2018 -12 -. -1 -. -2 -. -69 -(x64) -patch -[CracksMind]:

            - -
              -
            1. Run the setup file named "AdobePremiereProCC(win64).rar" as administrator.
            2. -
            3. Follow the instructions on the screen and choose the destination folder where you want to install the software.
            4. -
            5. Wait for the installation to finish.
            6. -
            7. Do not run the software yet.
            8. -
            9. Copy the patch file named "Patch.rar" from the folder "Adobe Premiere Pro CC 2018 12.1.2.69 (x64) + Patch [CracksMind]" and paste it into the folder where you installed the software.
            10. -
            11. Usually, it is located at "C:\Program Files\Adobe\Adobe Premiere Pro CC".
            12. -
            13. Run the patch file as administrator and click on "Patch".
            14. -
            15. You have successfully installed Adobe Premiere Pro CC 2018 12.1.2.69 (x64) patch [CracksMind].
            16. -
            - -

            How to Use Adobe Premiere Pro CC 2018 12.1.2.69 (x64) Patch [CracksMind]?

            - -

            Now that you have installed Adobe Premiere Pro CC 2018 12 -. -1 -. -2 -. -69 -(x64) -patch -[CracksMind] -, -you can use it as a standalone application or as a plugin in your DAW -. -Here are some tips on how to use Adobe Premiere Pro CC -2018 -12 -. -1 -. -2 -. -69 -(x64) -patch -[CracksMind]:

            - -
              -
            • To use it as a standalone application, run the file named "AdobePremierePro.exe" from the folder where you installed the software.
            • -
            • To use it as a plugin in your DAW, open your DAW and scan for new plugins.
            • -
            • You should see Adobe Premiere Pro CC in your plugin list.
            • -
            • To import a video in Adobe Premiere Pro CC, click on "File" > "Import" and choose a video file from your computer.
            • -
            • To edit a video in Adobe Premiere Pro CC, drag and drop it onto the timeline and use the tools and effects available in the interface.
            • -
            • To export a video in Adobe Premiere Pro CC, click on "File" > "Export" > "Media" and choose your desired format and settings.
            • -
            - -

            Adobe Premiere Pro CC is one of the best video editing software that can help you create amazing videos for any platform.
            However, -it is also very expensive -and not everyone can afford it -. -That's why some people resort to using Adobe Premiere Pro CC -2018 -12 -. -1 -. -2 -. -69 -(x64) -patch -[CracksMind] -download -to get it for free -. -However -, -we do not recommend using a patched software as it may cause some issues with your system or with Adobe.
            If you like -the software -and want to support -the developers -, -you should buy -the original version -from their website -. -Alternatively -, -you can look for some other software -that can offer similar features -and functions -or try to get Adobe Premiere Pro CC for free without using a patch -. -We hope this article was helpful -and informative -for you -. -Happy video editing!

            -

            -

            What are the Features of Adobe Premiere Pro CC 2018 12.1.2.69 (x64) Patch [CracksMind]?

            - -

            Adobe Premiere Pro CC 2018 12.1.2.69 (x64) patch [CracksMind] has many features that make it a powerful and versatile video editing software. Here are some of them:

            - -
              -
            • Link and Locate: This feature helps you track down your clips quickly and easily, no matter where they are stored. You can also relink media files that have been moved or renamed.
            • -
            • Lumetri Deep Color Engine: This feature allows you to apply rich and beautiful color grades to your videos. You can also preview and add looks from Adobe SpeedGrade or import LUTs from other systems.
            • -
            • Precise Audio Control: This feature gives you the ability to control the sound of your videos with the Audio Clip Mixer, which lets you adjust the volume and pan of each clip independently. You can also use the TC Electronic Radar Loudness meter to monitor your audio levels and access various effects plugins like VST3 and Audio Units.
            • -
            • Adobe Anywhere Integration: This feature enables you to collaborate with other editors and producers from anywhere in the world. You can also access your projects and media files from any device using the cloud.
            • -
            • Mezzanine Codecs and Native Formats: This feature supports a wide range of formats and codecs, including industry-standard mezzanine codecs like Apple ProRes and Avid DNxHD. You can also work natively with the latest mobile, DSLR, HD, and RAW formats without any transcoding or rewrapping.
            • -
            • Editing Tools and Effects: This feature provides you with a variety of tools and effects to enhance your videos, such as trimming, cropping, scaling, rotating, stabilizing, keying, masking, transitions, filters, titles, graphics, animations, and more.
            • -
            • Export Options: This feature allows you to export your videos in various formats and settings for different platforms and devices. You can also use Adobe Media Encoder to encode your videos in the background while you continue editing.
            • -
            - -

            What are the Advantages of Adobe Premiere Pro CC 2018 12.1.2.69 (x64) Patch [CracksMind]?

            - -

            Using Adobe Premiere Pro CC 2018 12.1.2.69 (x64) patch [CracksMind] has some advantages that you might want to consider. Here are some of them:

            - -
              -
            • You can save money by not buying the original software, which costs around $240 per year for a single user subscription.
            • -
            • You can use the latest version of Adobe Premiere Pro CC 2018 without any limitations or restrictions.
            • -
            • You can use Adobe Premiere Pro CC 2018 on multiple computers without any activation or registration issues.
            • -
            • You can enjoy the high-quality and performance of Adobe Premiere Pro CC 2018 without compromising your system resources.
            • -
            - -

            What are the Disadvantages of Adobe Premiere Pro CC 2018 12.1.2.69 (x64) Patch [CracksMind]?

            - -

            However, using Adobe Premiere Pro CC 2018 12.1.2.69 (x64) patch [CracksMind] also has some disadvantages that you should be aware of. Here are some of them:

            - -
              -
            • You may violate the intellectual property rights of Adobe and face legal consequences.
            • -
            • You may expose your computer to viruses, malware, or spyware that can damage your system or steal your personal information.
            • -
            • You may encounter some errors, bugs, or glitches that can affect your video editing or cause data loss.
            • -
            • You may not receive any updates, support, or customer service from Adobe.
            • -
            - -

            Conclusion

            - -

            In this article, we have discussed how to download and install Adobe Premiere Pro CC 2018 12 -. -1 -. -2 -. -69 -(x64) -patch -[CracksMind] -. -We have also shown you some of the features -, -advantages -, -and disadvantages -of using a patched software -. -We hope that you have found this article useful -and informative -, -and that you have learned something new today -.

            - -

            However -, -we would like to remind you that using a patched software is not legal or ethical -, -and that it may cause some issues with your system or with Adobe -. -We strongly advise you to buy -the original software -from their website -if you can afford it -, -or to look for some other software -that can suit your needs and budget -. -This way -, -you can support -the developers -and enjoy -the full features -and functions -of Adobe Premiere Pro CC -2018 -without any problems -.

            - -

            Thank you for reading this article and we hope to see you again soon. Have a great day and happy video editing!

            -

            Conclusion

            - -

            In this article, we have discussed how to download and install Adobe Premiere Pro CC 2018 12.1.2.69 (x64) patch [CracksMind]. We have also shown you some of the features, advantages, and disadvantages of using a patched software. We hope that you have found this article useful and informative, and that you have learned something new today.

            - -

            However, we would like to remind you that using a patched software is not legal or ethical, and that it may cause some issues with your system or with Adobe. We strongly advise you to buy the original software from their website if you can afford it, or to look for some other software that can suit your needs and budget. This way, you can support the developers and enjoy the full features and functions of Adobe Premiere Pro CC 2018 without any problems.

            - -

            Thank you for reading this article and we hope to see you again soon. Have a great day and happy video editing!

            3cee63e6c2
            -
            -
            \ No newline at end of file diff --git a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Burial - Untrue (2007).zip.md b/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Burial - Untrue (2007).zip.md deleted file mode 100644 index 9137f5a06ff558b6ca066039921a52ecccf0d5df..0000000000000000000000000000000000000000 --- a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Burial - Untrue (2007).zip.md +++ /dev/null @@ -1,6 +0,0 @@ -

            Burial - Untrue (2007).zip


            DOWNLOAD 🗸 https://cinurl.com/2uEYGy



            -
            -Burial will be in Roselawn Memorial Park. ... On 2/20/07, infoweb@newsbank.com wrote: ... It is obviously untrue. ... carried a hidden camera that peeked out through a discreet hole she'd cut just beneath the zipper. 1fdad05405
            -
            -
            -

            diff --git a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Goodgame Empire Hack V2.4.rar.md b/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Goodgame Empire Hack V2.4.rar.md deleted file mode 100644 index 67c104716802d369e95b738c7ff9b51da1b24993..0000000000000000000000000000000000000000 --- a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Goodgame Empire Hack V2.4.rar.md +++ /dev/null @@ -1,135 +0,0 @@ -
            -

            Goodgame Empire Hack V2.4.rar: The Ultimate Cheat Tool for Your Browser Game

            - -

            If you are a fan of Goodgame Empire, the popular medieval strategy game, you might have wondered how to get more resources, coins, rubies, and other advantages without spending real money or hours of gameplay. Well, wonder no more, because we have the perfect solution for you: Goodgame Empire Hack V2.4.rar.

            -

            Goodgame Empire Hack V2.4.rar


            Download Zip ››› https://cinurl.com/2uEXrT



            - -

            Goodgame Empire Hack V2.4.rar is a powerful cheat tool that can generate unlimited resources, coins, rubies, and more for your account in just a few minutes. You don't need to download anything, just run the tool from your browser and enter your username and the amount of resources you want. The tool will do the rest and add them to your account instantly.

            - -

            How to Use Goodgame Empire Hack V2.4.rar

            - -

            Using Goodgame Empire Hack V2.4.rar is very easy and safe. You don't need to worry about viruses, malware, or bans, because the tool is undetectable and updated regularly. Here are the steps to follow:

            - -
              -
            1. Visit the official website of Goodgame Empire Hack V2.4.rar and click on the "Start Hack" button.
            2. -
            3. Enter your username and select your server.
            4. -
            5. Choose the amount of resources, coins, rubies, and other features you want to add to your account.
            6. -
            7. Click on the "Generate" button and wait for the process to complete.
            8. -
            9. Verify that you are not a robot by completing a short survey or offer.
            10. -
            11. Enjoy your free resources and dominate the game!
            12. -
            - -

            Why You Should Use Goodgame Empire Hack V2.4.rar

            - -

            Goodgame Empire Hack V2.4.rar is the best cheat tool for Goodgame Empire because it offers many benefits and features that other tools don't have. Here are some of them:

            -

            - -
              -
            • It is free and easy to use.
            • -
            • It works on any browser and device.
            • -
            • It is 100% safe and secure.
            • -
            • It is updated regularly and tested by thousands of users.
            • -
            • It can generate unlimited resources, coins, rubies, and more.
            • -
            • It can unlock premium features and items.
            • -
            • It can speed up your progress and level up faster.
            • -
            • It can help you win battles and conquer territories.
            • -
            - -

            With Goodgame Empire Hack V2.4.rar, you can enjoy Goodgame Empire like never before. You can build your empire, expand your army, forge alliances, and crush your enemies with ease. You can also customize your castle, explore new lands, and complete quests without any limitations. You can have fun and save time and money at the same time.

            - -

            Conclusion

            - -

            If you are looking for a way to enhance your gaming experience in Goodgame Empire, you should definitely try Goodgame Empire Hack V2.4.rar. It is the ultimate cheat tool that can give you everything you need to become the most powerful ruler in the game. You don't need to download anything or pay anything, just visit the website and start hacking. You will be amazed by the results and you will never want to play without it again.

            - -

            So what are you waiting for? Go ahead and try Goodgame Empire Hack V2.4.rar today and see for yourself how awesome it is. You won't regret it!

            -
            What is Goodgame Empire?
            - -

            Goodgame Empire is a free-to-play browser game that lets you build your own medieval empire and compete with other players around the world. You can create your own castle, recruit an army, forge alliances, and fight for glory and resources. You can also explore a vast world map, complete quests, and join events and tournaments.

            - -

            Goodgame Empire is a game that requires strategy, skill, and patience. You need to manage your resources, plan your attacks, defend your castle, and expand your territory. You also need to deal with other players who might be your friends or foes. You can chat with them, trade with them, or wage war against them.

            - -
            How to Download Goodgame Empire Hack V2.4.rar
            - -

            If you want to download Goodgame Empire Hack V2.4.rar, you don't need to look any further. We have the best and most reliable source for you. Just follow these simple steps:

            - -
              -
            1. Click on the link below to go to the download page of Goodgame Empire Hack V2.4.rar.
            2. -
            3. Choose one of the available download options and complete a quick verification process.
            4. -
            5. Save the file to your device and extract it using WinRAR or any other software.
            6. -
            7. Open the file and run the tool from your browser.
            8. -
            9. Enjoy your free resources and features!
            10. -
            - -

            The link to download Goodgame Empire Hack V2.4.rar is: https://goodgame-empire-hack-v2-4-rar.com

            - -

            This is the only official and working link for Goodgame Empire Hack V2.4.rar. Don't trust any other sources that might scam you or infect your device with malware. Always use our link and stay safe.

            -What are the Features of Goodgame Empire Hack V2.4.rar - -

            Goodgame Empire Hack V2.4.rar is not just a simple cheat tool that can generate resources for your account. It also has many other features that can make your gaming experience more enjoyable and rewarding. Here are some of them:

            - -
              -
            • It can unlock all buildings, units, and decorations in the game.
            • -
            • It can give you access to premium items and features that normally require rubies or real money.
            • -
            • It can boost your reputation, rank, and honor in the game.
            • -
            • It can protect your account from detection and ban by using advanced encryption and proxy systems.
            • -
            • It can work on any browser and device, including Windows, Mac, iOS, and Android.
            • -
            • It can support multiple languages, including English, French, German, Spanish, Italian, and more.
            • -
            - -

            With Goodgame Empire Hack V2.4.rar, you can enjoy all the benefits of the game without any limitations or restrictions. You can have everything you want and need in the game without spending a dime or wasting your time.

            - -How to Get Goodgame Empire Hack V2.4.rar - -

            If you are interested in getting Goodgame Empire Hack V2.4.rar, you don't need to look any further. We have the best and most reliable source for you. Just follow these simple steps:

            - -
              -
            1. Click on the link below to go to the download page of Goodgame Empire Hack V2.4.rar.
            2. -
            3. Choose one of the available download options and complete a quick verification process.
            4. -
            5. Save the file to your device and extract it using WinRAR or any other software.
            6. -
            7. Open the file and run the tool from your browser.
            8. -
            9. Enjoy your free resources and features!
            10. -
            - -

            The link to download Goodgame Empire Hack V2.4.rar is: https://goodgame-empire-hack-v2-4-rar.com

            - -

            This is the only official and working link for Goodgame Empire Hack V2.4.rar. Don't trust any other sources that might scam you or infect your device with malware. Always use our link and stay safe.

            -What are the Advantages of Goodgame Empire Hack V2.4.rar - -

            Goodgame Empire Hack V2.4.rar is not just a cheat tool that can make your game easier and more fun. It also has many advantages that can improve your gaming experience and satisfaction. Here are some of them:

            - -
              -
            • It can save you time and money. You don't need to spend hours of gameplay or real money to get the resources and features you want in the game. You can get them for free and in minutes with Goodgame Empire Hack V2.4.rar.
            • -
            • It can enhance your skills and creativity. You don't need to follow the same strategies and tactics as everyone else in the game. You can create your own unique style and approach with Goodgame Empire Hack V2.4.rar.
            • -
            • It can increase your enjoyment and entertainment. You don't need to get bored or frustrated with the game. You can have more fun and excitement with Goodgame Empire Hack V2.4.rar.
            • -
            • It can make you more popular and respected. You don't need to be a nobody or a loser in the game. You can be a leader and a winner with Goodgame Empire Hack V2.4.rar.
            • -
            - -

            With Goodgame Empire Hack V2.4.rar, you can get the most out of Goodgame Empire and have a blast playing it. You can have more advantages than disadvantages and more pros than cons with Goodgame Empire Hack V2.4.rar.

            - -What are the Testimonials of Goodgame Empire Hack V2.4.rar - -

            Goodgame Empire Hack V2.4.rar is not just a cheat tool that we claim to be the best and the most reliable. It is also a cheat tool that thousands of users have tried and tested and have given their positive feedback and testimonials. Here are some of them:

            - -
            -

            "I love Goodgame Empire Hack V2.4.rar! It is the best cheat tool ever! It works perfectly and it gives me everything I need in the game. I can build my empire, expand my army, forge alliances, and crush my enemies with ease. I can also customize my castle, explore new lands, and complete quests without any limitations. I can have fun and save time and money at the same time."

            -- John, USA -
            - -
            -

            "Goodgame Empire Hack V2.4.rar is amazing! It is the ultimate cheat tool for Goodgame Empire! It is easy and safe to use and it has many features and benefits that other tools don't have. It can generate unlimited resources, coins, rubies, and more for my account in just a few minutes. It can also unlock premium features and items that normally require rubies or real money. It can also speed up my progress and level up faster."

            -- Lisa, UK -
            - -
            -

            "Goodgame Empire Hack V2.4.rar is awesome! It is the only cheat tool that works for Goodgame Empire! It is 100% safe and secure and it is updated regularly and tested by thousands of users. It can give me access to everything I want and need in the game without spending a dime or wasting my time. It can also help me win battles and conquer territories with ease."

            -- Mike, Canada -
            - -

            These are just some of the testimonials of Goodgame Empire Hack V2.4.rar users who have shared their experience and opinion about the cheat tool. You can read more testimonials on our website or on our social media pages.

            -Conclusion - -

            If you are looking for a way to enhance your gaming experience in Goodgame Empire, you should definitely try Goodgame Empire Hack V2.4.rar. It is the best and most reliable cheat tool for Goodgame Empire that can give you everything you need to become the most powerful ruler in the game. You don't need to download anything or pay anything, just visit the website and start hacking. You will be amazed by the results and you will never want to play without it again.

            - -

            So what are you waiting for? Go ahead and try Goodgame Empire Hack V2.4.rar today and see for yourself how awesome it is. You won't regret it!

            3cee63e6c2
            -
            -
            \ No newline at end of file diff --git a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Timework Reloj Checador V1.7.1.2 LINK Full Crack.md b/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Timework Reloj Checador V1.7.1.2 LINK Full Crack.md deleted file mode 100644 index bce5cab0b408c20d274b8a5dc5f68f85f0fd3129..0000000000000000000000000000000000000000 --- a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Timework Reloj Checador V1.7.1.2 LINK Full Crack.md +++ /dev/null @@ -1,14 +0,0 @@ -

            timework reloj checador v1.7.1.2 full crack


            Download Zip ✏ ✏ ✏ https://cinurl.com/2uEXZI



            -
            -Timework Reloj Checador V1.7.1.2 Full Crack Engview Package Designer Suite ... RobotSoft Automatic Mouse + Keyboard Crack Serial Key this app can ... Instructions: Timework Reloj Checador Crack -Free Download Timework Reloj Checador - Program ... -Jan 9, 2014 ... ... -Checador - Program Timework Reloj Checador - Program Timework Reloj Checador - Program Timework Reloj Checador ... -crack -Timework Reloj Checador Crack -It's easy to use and completely Free! -It's easy to use and completely free. -Timework Reloj Checador Crack Download 8a78ff9644
            -
            -
            -

            diff --git a/spaces/surmensipa/VITS-Umamusume-voice-synthesizer/logs/Aprender A Leer Con Pipo Vol 1.md b/spaces/surmensipa/VITS-Umamusume-voice-synthesizer/logs/Aprender A Leer Con Pipo Vol 1.md deleted file mode 100644 index 7c0cc135ebaaa162617d8d92edb9967d42bf0c11..0000000000000000000000000000000000000000 --- a/spaces/surmensipa/VITS-Umamusume-voice-synthesizer/logs/Aprender A Leer Con Pipo Vol 1.md +++ /dev/null @@ -1,6 +0,0 @@ -

            Aprender a leer con pipo vol 1


            Download 🗸🗸🗸 https://urluss.com/2uCGQM



            - -Aprendo A Leer Y A Escribir - Vol. 3 - G. De .... APRENDE A LEER CON PIPO Está dirigido a niños de 3 a 6 años en edad preescolar. ... 1. Instala el CD-ROM o ... 4d29de3e1b
            -
            -
            -

            diff --git a/spaces/suryabbrj/vit-gpt-caption-model-CMX/vit_gpt2/modeling_flax_gpt2.py b/spaces/suryabbrj/vit-gpt-caption-model-CMX/vit_gpt2/modeling_flax_gpt2.py deleted file mode 100644 index 3bc9cedc219ac2d24d5d89f0ea29b095364eae5a..0000000000000000000000000000000000000000 --- a/spaces/suryabbrj/vit-gpt-caption-model-CMX/vit_gpt2/modeling_flax_gpt2.py +++ /dev/null @@ -1,752 +0,0 @@ -# coding=utf-8 -# Copyright 2021 The Google Flax Team Authors and The HuggingFace Inc. team. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -from typing import Any, Optional, Tuple - -import flax.linen as nn -import jax -import jax.numpy as jnp -from flax.core.frozen_dict import FrozenDict, unfreeze -from flax.linen import combine_masks, make_causal_mask -from flax.linen.attention import dot_product_attention_weights -from jax import lax - -from transformers.file_utils import add_start_docstrings, add_start_docstrings_to_model_forward -from transformers.modeling_flax_outputs import FlaxBaseModelOutput, FlaxBaseModelOutputWithPast, FlaxCausalLMOutput, FlaxBaseModelOutputWithPastAndCrossAttentions, FlaxSeq2SeqLMOutput -from transformers.modeling_flax_utils import ACT2FN, FlaxPreTrainedModel, append_call_sample_docstring -from transformers.utils import logging -from transformers.models.gpt2.configuration_gpt2 import GPT2Config - - -logger = logging.get_logger(__name__) - -_CHECKPOINT_FOR_DOC = "gpt2" -_CONFIG_FOR_DOC = "GPT2Config" -_TOKENIZER_FOR_DOC = "GPT2Tokenizer" - - -GPT2_START_DOCSTRING = r""" - - This model inherits from :class:`~transformers.FlaxPreTrainedModel`. Check the superclass documentation for the - generic methods the library implements for all its model (such as downloading or saving, resizing the input - embeddings, pruning heads etc.) - - This model is also a Flax Linen `flax.nn.Module - `__ subclass. Use it as a regular Flax - Module and refer to the Flax documentation for all matter related to general usage and behavior. - - Finally, this model supports inherent JAX features such as: - - - `Just-In-Time (JIT) compilation `__ - - `Automatic Differentiation `__ - - `Vectorization `__ - - `Parallelization `__ - - Parameters: - config (:class:`~transformers.GPT2Config`): Model configuration class with all the parameters of the model. - Initializing with a config file does not load the weights associated with the model, only the - configuration. Check out the :meth:`~transformers.FlaxPreTrainedModel.from_pretrained` method to load the - model weights. -""" - -GPT2_INPUTS_DOCSTRING = r""" - Args: - input_ids (:obj:`numpy.ndarray` of shape :obj:`(batch_size, input_ids_length)`): - :obj:`input_ids_length` = ``sequence_length``. Indices of input sequence tokens in the vocabulary. - - Indices can be obtained using :class:`~transformers.GPT2Tokenizer`. See - :meth:`transformers.PreTrainedTokenizer.encode` and :meth:`transformers.PreTrainedTokenizer.__call__` for - details. - - `What are input IDs? <../glossary.html#input-ids>`__ - attention_mask (:obj:`numpy.ndarray` of shape :obj:`(batch_size, sequence_length)`, `optional`): - Mask to avoid performing attention on padding token indices. Mask values selected in ``[0, 1]``: - - - 1 for tokens that are **not masked**, - - 0 for tokens that are **masked**. - - `What are attention masks? <../glossary.html#attention-mask>`__ - position_ids (:obj:`numpy.ndarray` of shape :obj:`(batch_size, sequence_length)`, `optional`): - Indices of positions of each input sequence tokens in the position embeddings. Selected in the range ``[0, - config.max_position_embeddings - 1]``. - past_key_values (:obj:`Dict[str, np.ndarray]`, `optional`, returned by ``init_cache`` or when passing previous ``past_key_values``): - Dictionary of pre-computed hidden-states (key and values in the attention blocks) that can be used for fast - auto-regressive decoding. Pre-computed key and value hidden-states are of shape `[batch_size, max_length]`. - output_attentions (:obj:`bool`, `optional`): - Whether or not to return the attentions tensors of all attention layers. See ``attentions`` under returned - tensors for more detail. - output_hidden_states (:obj:`bool`, `optional`): - Whether or not to return the hidden states of all layers. See ``hidden_states`` under returned tensors for - more detail. - return_dict (:obj:`bool`, `optional`): - Whether or not to return a :class:`~transformers.file_utils.ModelOutput` instead of a plain tuple. -""" - - -class FlaxConv1D(nn.Module): - features: int - use_bias: bool = True - dtype: Any = jnp.float32 - precision: Any = None - - @nn.compact - def __call__(self, inputs): - inputs = jnp.asarray(inputs, self.dtype) - kernel = self.param("kernel", jax.nn.initializers.normal(stddev=0.02), (self.features, inputs.shape[-1])) - kernel = jnp.asarray(kernel.transpose(), self.dtype) - y = lax.dot_general(inputs, kernel, (((inputs.ndim - 1,), (0,)), ((), ())), precision=self.precision) - if self.use_bias: - bias = self.param("bias", jax.nn.initializers.zeros, (self.features,)) - bias = jnp.asarray(bias, self.dtype) - y = y + bias - return y - - -class FlaxGPT2Attention(nn.Module): - config: GPT2Config - dtype: jnp.dtype = jnp.float32 - causal: bool = True - - def setup(self): - config = self.config - self.embed_dim = config.hidden_size - self.num_heads = config.num_attention_heads - self.head_dim = self.embed_dim // self.num_heads - - self.c_attn = FlaxConv1D(features=3 * self.embed_dim, dtype=self.dtype) - self.c_proj = FlaxConv1D(self.embed_dim, dtype=self.dtype) - - self.c_attn_for_k_v = FlaxConv1D(features=2 * self.embed_dim, dtype=self.dtype) - - self.resid_dropout = nn.Dropout(rate=config.resid_pdrop) - - if self.causal: - self.causal_mask = make_causal_mask(jnp.ones((1, config.max_position_embeddings), dtype="bool"), dtype="bool") - - def _split_heads(self, hidden_states): - return hidden_states.reshape(hidden_states.shape[:2] + (self.num_heads, self.head_dim)) - - def _merge_heads(self, hidden_states): - return hidden_states.reshape(hidden_states.shape[:2] + (self.embed_dim,)) - - @nn.compact - def _concatenate_to_cache(self, key, value, query, attention_mask): - """ - This function takes projected key, value states from a single input token and concatenates the states to cached - states from previous steps. This function is slighly adapted from the official Flax repository: - https://github.com/google/flax/blob/491ce18759622506588784b4fca0e4bf05f8c8cd/flax/linen/attention.py#L252 - """ - # detect if we're initializing by absence of existing cache data. - is_initialized = self.has_variable("cache", "cached_key") - cached_key = self.variable("cache", "cached_key", jnp.zeros, key.shape, key.dtype) - cached_value = self.variable("cache", "cached_value", jnp.zeros, value.shape, value.dtype) - cache_index = self.variable("cache", "cache_index", lambda: jnp.array(0, dtype=jnp.int32)) - - if is_initialized: - *batch_dims, max_length, num_heads, depth_per_head = cached_key.value.shape - # update key, value caches with our new 1d spatial slices - cur_index = cache_index.value - indices = (0,) * len(batch_dims) + (cur_index, 0, 0) - key = lax.dynamic_update_slice(cached_key.value, key, indices) - value = lax.dynamic_update_slice(cached_value.value, value, indices) - cached_key.value = key - cached_value.value = value - num_updated_cache_vectors = query.shape[1] - cache_index.value = cache_index.value + num_updated_cache_vectors - # causal mask for cached decoder self-attention: our single query position should only attend to those key positions that have already been generated and cached, not the remaining zero elements. - pad_mask = jnp.broadcast_to( - jnp.arange(max_length) < cur_index + num_updated_cache_vectors, - tuple(batch_dims) + (1, num_updated_cache_vectors, max_length), - ) - attention_mask = combine_masks(pad_mask, attention_mask) - return key, value, attention_mask - - def __call__( - self, - hidden_states, - key_value_states: Optional[jnp.ndarray] = None, - attention_mask=None, - deterministic: bool = True, - init_cache: bool = False, - output_attentions: bool = False, - ): - - # if key_value_states are provided this layer is used as a cross-attention layer - # for the decoder - is_cross_attention = key_value_states is not None - - qkv_out = self.c_attn(hidden_states) - query, key, value = jnp.split(qkv_out, 3, axis=2) - - if is_cross_attention: - _qkv_out = self.c_attn_for_k_v(key_value_states) - key, value = jnp.split(_qkv_out, 2, axis=2) - - query = self._split_heads(query) - key = self._split_heads(key) - value = self._split_heads(value) - - query_length, key_length = query.shape[1], key.shape[1] - - if self.causal: - if self.has_variable("cache", "cached_key"): - mask_shift = self.variables["cache"]["cache_index"] - max_decoder_length = self.variables["cache"]["cached_key"].shape[1] - causal_mask = lax.dynamic_slice( - self.causal_mask, (0, 0, mask_shift, 0), (1, 1, query_length, max_decoder_length) - ) - else: - causal_mask = self.causal_mask[:, :, :query_length, :key_length] - - batch_size = hidden_states.shape[0] - causal_mask = jnp.broadcast_to(causal_mask, (batch_size,) + causal_mask.shape[1:]) - - # combine masks if needed - if attention_mask is not None and self.causal: - attention_mask = jnp.broadcast_to(jnp.expand_dims(attention_mask, axis=(-3, -2)), causal_mask.shape) - attention_mask = combine_masks(attention_mask, causal_mask) - elif self.causal: - attention_mask = causal_mask - elif attention_mask is not None: - attention_mask = jnp.expand_dims(attention_mask, axis=(-3, -2)) - - dropout_rng = None - if not deterministic and self.config.attn_pdrop > 0.0: - dropout_rng = self.make_rng("dropout") - - # During fast autoregressive decoding, we feed one position at a time, - # and cache the keys and values step by step. - if self.causal and (self.has_variable("cache", "cached_key") or init_cache): - key, value, attention_mask = self._concatenate_to_cache(key, value, query, attention_mask) - - # transform boolean mask into float mask - if attention_mask is not None: - attention_bias = lax.select( - attention_mask > 0, - jnp.full(attention_mask.shape, 0.0).astype(self.dtype), - jnp.full(attention_mask.shape, -1e4).astype(self.dtype), - ) - else: - attention_bias = None - - # usual dot product attention - attn_weights = dot_product_attention_weights( - query, - key, - bias=attention_bias, - dropout_rng=dropout_rng, - dropout_rate=self.config.attn_pdrop, - deterministic=deterministic, - dtype=self.dtype, - precision=None, - ) - - attn_output = jnp.einsum("...hqk,...khd->...qhd", attn_weights, value) - attn_output = self._merge_heads(attn_output) - attn_output = self.c_proj(attn_output) - attn_output = self.resid_dropout(attn_output, deterministic=deterministic) - - outputs = (attn_output, attn_weights) if output_attentions else (attn_output,) - return outputs - - -class FlaxGPT2MLP(nn.Module): - config: GPT2Config - intermediate_size: int - dtype: jnp.dtype = jnp.float32 - - def setup(self): - embed_dim = self.config.hidden_size - self.c_fc = FlaxConv1D(self.intermediate_size, dtype=self.dtype) - self.c_proj = FlaxConv1D(embed_dim, dtype=self.dtype) - self.act = ACT2FN[self.config.activation_function] - self.dropout = nn.Dropout(rate=self.config.resid_pdrop) - - def __call__(self, hidden_states, deterministic: bool = True): - hidden_states = self.c_fc(hidden_states) - hidden_states = self.act(hidden_states) - hidden_states = self.c_proj(hidden_states) - hidden_states = self.dropout(hidden_states, deterministic=deterministic) - return hidden_states - - -class FlaxGPT2Block(nn.Module): - config: GPT2Config - dtype: jnp.dtype = jnp.float32 - - def setup(self): - hidden_size = self.config.hidden_size - inner_dim = self.config.n_inner if self.config.n_inner is not None else 4 * hidden_size - - self.ln_1 = nn.LayerNorm(epsilon=self.config.layer_norm_epsilon, dtype=self.dtype) - self.attn = FlaxGPT2Attention(self.config, dtype=self.dtype) - self.ln_3 = nn.LayerNorm(epsilon=self.config.layer_norm_epsilon, dtype=self.dtype) - self.encoder_attn = FlaxGPT2Attention(config=self.config, dtype=self.dtype) - self.ln_2 = nn.LayerNorm(epsilon=self.config.layer_norm_epsilon, dtype=self.dtype) - self.mlp = FlaxGPT2MLP(self.config, inner_dim, dtype=self.dtype) - - def __call__( - self, - hidden_states, - attention_mask=None, - encoder_hidden_states: Optional[jnp.ndarray] = None, - encoder_attention_mask: Optional[jnp.ndarray] = None, - deterministic: bool = True, - init_cache: bool = False, - output_attentions: bool = False, - ): - residual = hidden_states - hidden_states = self.ln_1(hidden_states) - outputs = self.attn( - hidden_states, - attention_mask=attention_mask, - deterministic=deterministic, - init_cache=init_cache, - output_attentions=output_attentions, - ) - # residual connection - attn_output = outputs[0] - hidden_states = attn_output + residual - - # Cross-Attention Block - if encoder_hidden_states is not None: - - residual = hidden_states - hidden_states = self.ln_3(hidden_states) - - cross_attn_outputs = self.encoder_attn( - hidden_states=hidden_states, - key_value_states=encoder_hidden_states, - attention_mask=encoder_attention_mask, - deterministic=deterministic, - output_attentions=output_attentions, - ) - - # residual connection - cross_attn_output = cross_attn_outputs[0] - hidden_states = cross_attn_output + residual - - residual = hidden_states - hidden_states = self.ln_2(hidden_states) - feed_forward_hidden_states = self.mlp(hidden_states, deterministic=deterministic) - # residual connection - hidden_states = residual + feed_forward_hidden_states - - output = (hidden_states,) + outputs[1:] - if encoder_hidden_states is not None: - output = output + cross_attn_outputs[1:] - - return output - - -class FlaxGPT2PreTrainedModel(FlaxPreTrainedModel): - """ - An abstract class to handle weights initialization and a simple interface for downloading and loading pretrained - models. - """ - - config_class = GPT2Config - base_model_prefix = "transformer" - module_class: nn.Module = None - - def __init__( - self, - config: GPT2Config, - input_shape: Tuple = (1, 1), - seed: int = 0, - dtype: jnp.dtype = jnp.float32, - **kwargs, - ): - module = self.module_class(config=config, dtype=dtype, **kwargs) - super().__init__(config, module, input_shape=input_shape, seed=seed, dtype=dtype) - - def init_weights(self, rng: jax.random.PRNGKey, input_shape: Tuple) -> FrozenDict: - # init input tensors - input_ids = jnp.zeros(input_shape, dtype="i4") - attention_mask = jnp.ones_like(input_ids) - position_ids = jnp.broadcast_to(jnp.arange(jnp.atleast_2d(input_ids).shape[-1]), input_shape) - params_rng, dropout_rng = jax.random.split(rng) - rngs = {"params": params_rng, "dropout": dropout_rng} - - if self.config.add_cross_attention: - encoder_hidden_states = jnp.zeros(input_shape + (self.config.n_embd,)) - encoder_attention_mask = attention_mask - module_init_outputs = self.module.init(rngs, input_ids, attention_mask, position_ids, encoder_hidden_states, encoder_attention_mask, return_dict=False) - else: - module_init_outputs = self.module.init(rngs, input_ids, attention_mask, position_ids, return_dict=False) - - return module_init_outputs["params"] - - @classmethod - def _from_config(cls, config, **kwargs): - return super()._from_config(config, **kwargs) - - def init_cache(self, batch_size, max_length): - r""" - Args: - batch_size (:obj:`int`): - batch_size used for fast auto-regressive decoding. Defines the batch size of the initialized cache. - max_length (:obj:`int`): - maximum possible length for auto-regressive decoding. Defines the sequence length of the initialized - cache. - """ - # init input variables to retrieve cache - input_ids = jnp.ones((batch_size, max_length)) - attention_mask = jnp.ones_like(input_ids) - position_ids = jnp.broadcast_to(jnp.arange(jnp.atleast_2d(input_ids).shape[-1]), input_ids.shape) - - init_variables = self.module.init( - jax.random.PRNGKey(0), input_ids, attention_mask, position_ids, return_dict=False, init_cache=True - ) - return init_variables["cache"] - - @add_start_docstrings_to_model_forward(GPT2_INPUTS_DOCSTRING) - def __call__( - self, - input_ids, - attention_mask=None, - position_ids=None, - encoder_hidden_states: Optional[jnp.ndarray] = None, - encoder_attention_mask: Optional[jnp.ndarray] = None, - params: dict = None, - past_key_values: dict = None, - dropout_rng: jax.random.PRNGKey = None, - train: bool = False, - output_attentions: Optional[bool] = None, - output_hidden_states: Optional[bool] = None, - return_dict: Optional[bool] = None, - ): - output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions - output_hidden_states = ( - output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states - ) - return_dict = return_dict if return_dict is not None else self.config.return_dict - - if encoder_hidden_states is not None and encoder_attention_mask is None: - batch_size, sequence_length = encoder_hidden_states.shape[:2] - encoder_attention_mask = jnp.ones((batch_size, sequence_length)) - - batch_size, sequence_length = input_ids.shape - - if position_ids is None: - if past_key_values is not None: - raise ValueError("Make sure to provide `position_ids` when passing `past_key_values`.") - - position_ids = jnp.broadcast_to(jnp.arange(sequence_length)[None, :], (batch_size, sequence_length)) - - if attention_mask is None: - attention_mask = jnp.ones((batch_size, sequence_length)) - - # Handle any PRNG if needed - rngs = {} - if dropout_rng is not None: - rngs["dropout"] = dropout_rng - - inputs = {"params": params or self.params} - - # if past_key_values are passed then cache is already initialized a private flag init_cache has to be passed down to ensure cache is used. It has to be made sure that cache is marked as mutable so that it can be changed by FlaxGPT2Attention module - if past_key_values: - inputs["cache"] = past_key_values - mutable = ["cache"] - else: - mutable = False - - outputs = self.module.apply( - inputs, - jnp.array(input_ids, dtype="i4"), - jnp.array(attention_mask, dtype="i4"), - jnp.array(position_ids, dtype="i4"), - encoder_hidden_states, - encoder_attention_mask, - not train, - False, - output_attentions, - output_hidden_states, - return_dict, - rngs=rngs, - mutable=mutable, - ) - - # add updated cache to model output - if past_key_values is not None and return_dict: - outputs, past_key_values = outputs - outputs["past_key_values"] = unfreeze(past_key_values["cache"]) - return outputs - elif past_key_values is not None and not return_dict: - outputs, past_key_values = outputs - outputs = outputs[:1] + (unfreeze(past_key_values["cache"]),) + outputs[1:] - - return outputs - - -class FlaxGPT2BlockCollection(nn.Module): - config: GPT2Config - dtype: jnp.dtype = jnp.float32 - - def setup(self): - self.blocks = [ - FlaxGPT2Block(self.config, name=str(i), dtype=self.dtype) for i in range(self.config.num_hidden_layers) - ] - - def __call__( - self, - hidden_states, - attention_mask=None, - encoder_hidden_states: Optional[jnp.ndarray] = None, - encoder_attention_mask: Optional[jnp.ndarray] = None, - deterministic: bool = True, - init_cache: bool = False, - output_attentions: bool = False, - output_hidden_states: bool = False, - return_dict: bool = True, - ): - all_attentions = () if output_attentions else None - all_hidden_states = () if output_hidden_states else None - all_cross_attentions = () if (output_attentions and encoder_hidden_states is not None) else None - - for block in self.blocks: - if output_hidden_states: - all_hidden_states += (hidden_states,) - - layer_outputs = block( - hidden_states, - attention_mask, - encoder_hidden_states=encoder_hidden_states, - encoder_attention_mask=encoder_attention_mask, - deterministic=deterministic, - init_cache=init_cache, - output_attentions=output_attentions, - ) - hidden_states = layer_outputs[0] - - if output_attentions: - all_attentions += (layer_outputs[1],) - if encoder_hidden_states is not None: - all_cross_attentions += (layer_outputs[2],) - - if output_hidden_states: - all_hidden_states += (hidden_states,) - - outputs = [hidden_states, all_hidden_states, all_attentions, all_cross_attentions] - - if not return_dict: - return tuple(v for v in outputs if v is not None) - - if encoder_hidden_states is None: - return FlaxBaseModelOutputWithPast( - last_hidden_state=hidden_states, - past_key_values=None, - hidden_states=all_hidden_states, - attentions=all_attentions, - ) - else: - return FlaxBaseModelOutputWithPastAndCrossAttentions( - last_hidden_state=hidden_states, - past_key_values=None, - hidden_states=all_hidden_states, - attentions=all_attentions, - cross_attentions=all_cross_attentions, - ) - -class FlaxGPT2Module(nn.Module): - config: GPT2Config - dtype: jnp.dtype = jnp.float32 - - def setup(self): - self.embed_dim = self.config.hidden_size - - self.wte = nn.Embed( - self.config.vocab_size, - self.embed_dim, - embedding_init=jax.nn.initializers.normal(stddev=self.config.initializer_range), - dtype=self.dtype, - ) - self.wpe = nn.Embed( - self.config.max_position_embeddings, - self.embed_dim, - embedding_init=jax.nn.initializers.normal(stddev=self.config.initializer_range), - dtype=self.dtype, - ) - self.dropout = nn.Dropout(rate=self.config.embd_pdrop) - self.h = FlaxGPT2BlockCollection(self.config, dtype=self.dtype) - self.ln_f = nn.LayerNorm(epsilon=self.config.layer_norm_epsilon, dtype=self.dtype) - - def __call__( - self, - input_ids, - attention_mask, - position_ids, - encoder_hidden_states: Optional[jnp.ndarray] = None, - encoder_attention_mask: Optional[jnp.ndarray] = None, - deterministic=True, - init_cache: bool = False, - output_attentions: bool = False, - output_hidden_states: bool = False, - return_dict: bool = True, - ): - input_embeds = self.wte(input_ids.astype("i4")) - position_embeds = self.wpe(position_ids.astype("i4")) - - hidden_states = input_embeds + position_embeds - hidden_states = self.dropout(hidden_states, deterministic=deterministic) - - outputs = self.h( - hidden_states, - attention_mask, - encoder_hidden_states, - encoder_attention_mask, - deterministic=deterministic, - init_cache=init_cache, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - ) - - hidden_states = outputs[0] - hidden_states = self.ln_f(hidden_states) - - if not return_dict: - return (hidden_states,) + outputs[1:] - - if encoder_hidden_states is None: - return FlaxBaseModelOutput( - last_hidden_state=hidden_states, - hidden_states=outputs.hidden_states, - attentions=outputs.attentions, - ) - else: - return FlaxBaseModelOutputWithPastAndCrossAttentions( - last_hidden_state=hidden_states, - hidden_states=outputs.hidden_states, - attentions=outputs.attentions, - cross_attentions=outputs.cross_attentions, - ) - -@add_start_docstrings( - "The bare GPT2 Model transformer outputting raw hidden-states without any specific head on top.", - GPT2_START_DOCSTRING, -) -class FlaxGPT2Model(FlaxGPT2PreTrainedModel): - module_class = FlaxGPT2Module - - -append_call_sample_docstring( - FlaxGPT2Model, _TOKENIZER_FOR_DOC, _CHECKPOINT_FOR_DOC, FlaxBaseModelOutput, _CONFIG_FOR_DOC -) - - -class FlaxGPT2LMHeadModule(nn.Module): - config: GPT2Config - dtype: jnp.dtype = jnp.float32 - - def setup(self): - self.transformer = FlaxGPT2Module(self.config, dtype=self.dtype) - self.lm_head = nn.Dense( - self.config.vocab_size, - use_bias=False, - dtype=self.dtype, - kernel_init=jax.nn.initializers.normal(stddev=self.config.initializer_range, dtype=self.dtype), - ) - - def __call__( - self, - input_ids, - attention_mask, - position_ids, - encoder_hidden_states: Optional[jnp.ndarray] = None, - encoder_attention_mask: Optional[jnp.ndarray] = None, - deterministic: bool = True, - init_cache: bool = False, - output_attentions: bool = False, - output_hidden_states: bool = False, - return_dict: bool = True, - ): - outputs = self.transformer( - input_ids, - attention_mask, - position_ids, - encoder_hidden_states, - encoder_attention_mask, - deterministic=deterministic, - init_cache=init_cache, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - ) - - hidden_states = outputs[0] - - if self.config.tie_word_embeddings: - shared_kernel = self.transformer.variables["params"]["wte"]["embedding"].T - lm_logits = self.lm_head.apply({"params": {"kernel": shared_kernel}}, hidden_states) - else: - lm_logits = self.lm_head(hidden_states) - - if not return_dict: - return (lm_logits,) + outputs[1:] - - if encoder_hidden_states is None: - return FlaxCausalLMOutput(logits=lm_logits, hidden_states=outputs.hidden_states, attentions=outputs.attentions) - else: - return FlaxSeq2SeqLMOutput( - logits=lm_logits, - decoder_hidden_states=outputs.hidden_states, - decoder_attentions=outputs.attentions, - cross_attentions=outputs.cross_attentions, - encoder_last_hidden_state=encoder_hidden_states, - encoder_hidden_states=None, - encoder_attentions=None, - ) - -@add_start_docstrings( - """ - The GPT2 Model transformer with a language modeling head on top (linear layer with weights tied to the input - embeddings). - """, - GPT2_START_DOCSTRING, -) -class FlaxGPT2LMHeadModel(FlaxGPT2PreTrainedModel): - module_class = FlaxGPT2LMHeadModule - - def prepare_inputs_for_generation(self, input_ids, max_length, attention_mask: Optional[jnp.DeviceArray] = None): - # initializing the cache - batch_size, seq_length = input_ids.shape - - past_key_values = self.init_cache(batch_size, max_length) - # Note that usually one would have to put 0's in the attention_mask for x > input_ids.shape[-1] and x < cache_length. - # But since GPT2 uses a causal mask, those positions are masked anyways. - # Thus we can create a single static attention_mask here, which is more efficient for compilation - extended_attention_mask = jnp.ones((batch_size, max_length), dtype="i4") - if attention_mask is not None: - position_ids = attention_mask.cumsum(axis=-1) - 1 - extended_attention_mask = lax.dynamic_update_slice(extended_attention_mask, attention_mask, (0, 0)) - else: - position_ids = jnp.broadcast_to(jnp.arange(seq_length, dtype="i4")[None, :], (batch_size, seq_length)) - - return { - "past_key_values": past_key_values, - "attention_mask": extended_attention_mask, - "position_ids": position_ids, - } - - def update_inputs_for_generation(self, model_outputs, model_kwargs): - model_kwargs["past_key_values"] = model_outputs.past_key_values - model_kwargs["position_ids"] = model_kwargs["position_ids"][:, -1:] + 1 - return model_kwargs - - -append_call_sample_docstring( - FlaxGPT2LMHeadModel, _TOKENIZER_FOR_DOC, _CHECKPOINT_FOR_DOC, FlaxCausalLMOutput, _CONFIG_FOR_DOC -) diff --git a/spaces/svjack/ControlNet-Pose-Chinese/annotator/uniformer/mmseg/datasets/__init__.py b/spaces/svjack/ControlNet-Pose-Chinese/annotator/uniformer/mmseg/datasets/__init__.py deleted file mode 100644 index ebeaef4a28ef655e43578552a8aef6b77f13a636..0000000000000000000000000000000000000000 --- a/spaces/svjack/ControlNet-Pose-Chinese/annotator/uniformer/mmseg/datasets/__init__.py +++ /dev/null @@ -1,19 +0,0 @@ -from .ade import ADE20KDataset -from .builder import DATASETS, PIPELINES, build_dataloader, build_dataset -from .chase_db1 import ChaseDB1Dataset -from .cityscapes import CityscapesDataset -from .custom import CustomDataset -from .dataset_wrappers import ConcatDataset, RepeatDataset -from .drive import DRIVEDataset -from .hrf import HRFDataset -from .pascal_context import PascalContextDataset, PascalContextDataset59 -from .stare import STAREDataset -from .voc import PascalVOCDataset - -__all__ = [ - 'CustomDataset', 'build_dataloader', 'ConcatDataset', 'RepeatDataset', - 'DATASETS', 'build_dataset', 'PIPELINES', 'CityscapesDataset', - 'PascalVOCDataset', 'ADE20KDataset', 'PascalContextDataset', - 'PascalContextDataset59', 'ChaseDB1Dataset', 'DRIVEDataset', 'HRFDataset', - 'STAREDataset' -] diff --git a/spaces/swcrazyfan/ppt-generator/README.md b/spaces/swcrazyfan/ppt-generator/README.md deleted file mode 100644 index a966d95547e4b489c522084f64c8ed105e943e24..0000000000000000000000000000000000000000 --- a/spaces/swcrazyfan/ppt-generator/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Ppt Generator -emoji: 🏢 -colorFrom: purple -colorTo: purple -sdk: gradio -sdk_version: 3.23.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/szukevin/VISOR-GPT/train/scripts/convert_t5_from_huggingface_to_tencentpretrain.py b/spaces/szukevin/VISOR-GPT/train/scripts/convert_t5_from_huggingface_to_tencentpretrain.py deleted file mode 100644 index 2ffe4544303576b269d4994755615ab580422b8f..0000000000000000000000000000000000000000 --- a/spaces/szukevin/VISOR-GPT/train/scripts/convert_t5_from_huggingface_to_tencentpretrain.py +++ /dev/null @@ -1,104 +0,0 @@ -import argparse -import collections -import torch - - -parser = argparse.ArgumentParser(formatter_class=argparse.ArgumentDefaultsHelpFormatter) -parser.add_argument("--input_model_path", type=str, default="models/input_model.bin", - help=".") -parser.add_argument("--output_model_path", type=str, default="models/output_model.bin", - help=".") -parser.add_argument("--layers_num", type=int, default=12, help=".") -parser.add_argument("--decoder_layers_num", type=int, default=12, help=".") -parser.add_argument("--type", choices=["t5", "t5-v1_1"], default="t5", - help="The version of the t5 model.") - -args = parser.parse_args() - -input_model = torch.load(args.input_model_path, map_location="cpu") - -output_model = collections.OrderedDict() - -output_model["embedding.word.embedding.weight"] = \ - input_model["encoder.embed_tokens.weight"] -output_model["tgt_embedding.word.embedding.weight"] = \ - input_model["decoder.embed_tokens.weight"] - -output_model["encoder.relative_pos_emb.relative_attention_bias.weight"] = \ - input_model["encoder.block.0.layer.0.SelfAttention.relative_attention_bias.weight"] -output_model["decoder.self_pos_emb.relative_attention_bias.weight"] = \ - input_model["decoder.block.0.layer.0.SelfAttention.relative_attention_bias.weight"] -output_model["target.lm.output_layer.weight"] = \ - input_model["lm_head.weight"] - -for i in range(args.layers_num): - output_model["encoder.transformer." + str(i) + ".self_attn.linear_layers.0.weight"] = \ - input_model["encoder.block." + str(i) + ".layer.0.SelfAttention.q.weight"] - output_model["encoder.transformer." + str(i) + ".self_attn.linear_layers.1.weight"] = \ - input_model["encoder.block." + str(i) + ".layer.0.SelfAttention.k.weight"] - output_model["encoder.transformer." + str(i) + ".self_attn.linear_layers.2.weight"] = \ - input_model["encoder.block." + str(i) + ".layer.0.SelfAttention.v.weight"] - output_model["encoder.transformer." + str(i) + ".self_attn.final_linear.weight"] = \ - input_model["encoder.block." + str(i) + ".layer.0.SelfAttention.o.weight"] - output_model["encoder.transformer." + str(i) + ".layer_norm_1.weight"] = \ - input_model["encoder.block." + str(i) + ".layer.0.layer_norm.weight"] - - if args.type == "t5-v1_1": - output_model["encoder.transformer." + str(i) + ".feed_forward.linear_gate.weight"] = \ - input_model["encoder.block." + str(i) + ".layer.1.DenseReluDense.wi_0.weight"] - output_model["encoder.transformer." + str(i) + ".feed_forward.linear_1.weight"] = \ - input_model["encoder.block." + str(i) + ".layer.1.DenseReluDense.wi_1.weight"] - output_model["encoder.transformer." + str(i) + ".feed_forward.linear_2.weight"] = \ - input_model["encoder.block." + str(i) + ".layer.1.DenseReluDense.wo.weight"] - else: - output_model["encoder.transformer." + str(i) + ".feed_forward.linear_1.weight"] = \ - input_model["encoder.block." + str(i) + ".layer.1.DenseReluDense.wi.weight"] - output_model["encoder.transformer." + str(i) + ".feed_forward.linear_2.weight"] = \ - input_model["encoder.block." + str(i) + ".layer.1.DenseReluDense.wo.weight"] - output_model["encoder.transformer." + str(i) + ".layer_norm_2.weight"] = \ - input_model["encoder.block." + str(i) + ".layer.1.layer_norm.weight"] - -for i in range(args.decoder_layers_num): - output_model["decoder.transformer_decoder." + str(i) + ".self_attn.linear_layers.0.weight"] = \ - input_model["decoder.block." + str(i) + ".layer.0.SelfAttention.q.weight"] - output_model["decoder.transformer_decoder." + str(i) + ".self_attn.linear_layers.1.weight"] = \ - input_model["decoder.block." + str(i) + ".layer.0.SelfAttention.k.weight"] - output_model["decoder.transformer_decoder." + str(i) + ".self_attn.linear_layers.2.weight"] = \ - input_model["decoder.block." + str(i) + ".layer.0.SelfAttention.v.weight"] - output_model["decoder.transformer_decoder." + str(i) + ".self_attn.final_linear.weight"] = \ - input_model["decoder.block." + str(i) + ".layer.0.SelfAttention.o.weight"] - output_model["decoder.transformer_decoder." + str(i) + ".layer_norm_1.weight"] = \ - input_model["decoder.block." + str(i) + ".layer.0.layer_norm.weight"] - - output_model["decoder.transformer_decoder." + str(i) + ".context_attn.linear_layers.0.weight"] = \ - input_model["decoder.block." + str(i) + ".layer.1.EncDecAttention.q.weight"] - output_model["decoder.transformer_decoder." + str(i) + ".context_attn.linear_layers.1.weight"] = \ - input_model["decoder.block." + str(i) + ".layer.1.EncDecAttention.k.weight"] - output_model["decoder.transformer_decoder." + str(i) + ".context_attn.linear_layers.2.weight"] = \ - input_model["decoder.block." + str(i) + ".layer.1.EncDecAttention.v.weight"] - output_model["decoder.transformer_decoder." + str(i) + ".context_attn.final_linear.weight"] = \ - input_model["decoder.block." + str(i) + ".layer.1.EncDecAttention.o.weight"] - output_model["decoder.transformer_decoder." + str(i) + ".layer_norm_2.weight"] = \ - input_model["decoder.block." + str(i) + ".layer.1.layer_norm.weight"] - - if args.type == "t5-v1_1": - output_model["decoder.transformer_decoder." + str(i) + ".feed_forward.linear_gate.weight"] = \ - input_model["decoder.block." + str(i) + ".layer.2.DenseReluDense.wi_0.weight"] - output_model["decoder.transformer_decoder." + str(i) + ".feed_forward.linear_1.weight"] = \ - input_model["decoder.block." + str(i) + ".layer.2.DenseReluDense.wi_1.weight"] - output_model["decoder.transformer_decoder." + str(i) + ".feed_forward.linear_2.weight"] = \ - input_model["decoder.block." + str(i) + ".layer.2.DenseReluDense.wo.weight"] - else: - output_model["decoder.transformer_decoder." + str(i) + ".feed_forward.linear_1.weight"] = \ - input_model["decoder.block." + str(i) + ".layer.2.DenseReluDense.wi.weight"] - output_model["decoder.transformer_decoder." + str(i) + ".feed_forward.linear_2.weight"] = \ - input_model["decoder.block." + str(i) + ".layer.2.DenseReluDense.wo.weight"] - output_model["decoder.transformer_decoder." + str(i) + ".layer_norm_3.weight"] = \ - input_model["decoder.block." + str(i) + ".layer.2.layer_norm.weight"] - -output_model["encoder.layer_norm.weight"] = \ - input_model["encoder.final_layer_norm.weight"] -output_model["decoder.layer_norm.weight"] = \ - input_model["decoder.final_layer_norm.weight"] - -torch.save(output_model, args.output_model_path) diff --git a/spaces/t13718236382/web-ui/index.html b/spaces/t13718236382/web-ui/index.html deleted file mode 100644 index 6fb24f3e9bc4fe4349f8725ec013be091d01bea3..0000000000000000000000000000000000000000 --- a/spaces/t13718236382/web-ui/index.html +++ /dev/null @@ -1 +0,0 @@ -Gradiobot UI \ No newline at end of file diff --git a/spaces/terfces0erbo/CollegeProjectV2/Activation Robot Structural Analysis Professional 2019 Crack.md b/spaces/terfces0erbo/CollegeProjectV2/Activation Robot Structural Analysis Professional 2019 Crack.md deleted file mode 100644 index abdf47a60af541bdcd422c4e6b057f64b223e236..0000000000000000000000000000000000000000 --- a/spaces/terfces0erbo/CollegeProjectV2/Activation Robot Structural Analysis Professional 2019 Crack.md +++ /dev/null @@ -1,10 +0,0 @@ -

            Activation Robot Structural Analysis Professional 2019 Crack


            Download Ziphttps://bytlly.com/2uGk2A



            - -Create structural models and perform structural analysis in Robot Structural Analysis Professional 2019, and easily transfer the model and results ... Robot Structural Analysis Professional 2019 in Russian for Windows / Mac / Linux / Unix -Robot Structural Analysis Professional 2019 in English for Windows / Mac / Linux / Unix -Robot Structural Analysis Professional 2019 -Robot Structural Analysis Professional allows you to analyze and visualize steel, aluminum, wood, reinforced concrete, concrete and metal structures. -It can determine strength properties, stiffness, internal stresses. 8a78ff9644
            -
            -
            -

            diff --git a/spaces/terfces0erbo/CollegeProjectV2/Fm 2008 Modifier 2.2 Turkce Indir REPACK.md b/spaces/terfces0erbo/CollegeProjectV2/Fm 2008 Modifier 2.2 Turkce Indir REPACK.md deleted file mode 100644 index 3a4028ea75cad32a0a11de3187fe38b64215e569..0000000000000000000000000000000000000000 --- a/spaces/terfces0erbo/CollegeProjectV2/Fm 2008 Modifier 2.2 Turkce Indir REPACK.md +++ /dev/null @@ -1,6 +0,0 @@ -

            fm 2008 modifier 2.2 turkce indir


            DOWNLOADhttps://bytlly.com/2uGiZU



            -
            -kpss a grubu kitap seti indir.total video converter türkçe tam indir.flv dosyası ... cem adrian emir full albüm indir.fm 2008 modifier 2.2 oyun indir.araba sürme ... 1fdad05405
            -
            -
            -

            diff --git a/spaces/terfces0erbo/CollegeProjectV2/Free Ebook Pdf Electronic Communication By Dennis Roddy And John 93.md b/spaces/terfces0erbo/CollegeProjectV2/Free Ebook Pdf Electronic Communication By Dennis Roddy And John 93.md deleted file mode 100644 index 9d6eee5ccc3bb70764810b7f52c00bc52c745dd1..0000000000000000000000000000000000000000 --- a/spaces/terfces0erbo/CollegeProjectV2/Free Ebook Pdf Electronic Communication By Dennis Roddy And John 93.md +++ /dev/null @@ -1,20 +0,0 @@ -
            -

            How to Download a Free Ebook of Electronic Communication by Dennis Roddy and John Coolen

            -

            Electronic Communication by Dennis Roddy and John Coolen is a classic textbook on telecommunication and electronics that covers topics such as modulation, transmission lines, antennas, satellite communication, fiber optics, and more. The book was first published in 1977 and has been updated several times since then. If you are looking for a free ebook of this book, you may have some difficulty finding it online. However, there are some ways to get a copy of this book without paying anything.

            -

            One way is to use the Internet Archive, a non-profit digital library that offers free access to millions of books, movies, music, and other media. The Internet Archive has a scanned version of the 1984 edition of Electronic Communication by Roddy and Coolen that you can borrow for 14 days. To do this, you need to create a free account on the Internet Archive website and then go to this link: https://archive.org/details/electroniccommun0000rodd. You can either read the book online or download it as a PDF or EPUB file.

            -

            Free Ebook Pdf Electronic Communication By Dennis Roddy And John 93


            Download Ziphttps://bytlly.com/2uGjYq



            -

            Another way is to use Scribd, a subscription-based service that offers unlimited access to ebooks, audiobooks, magazines, podcasts, and more. Scribd has a 30-day free trial that you can use to read Electronic Communication by Roddy and Coolen online or download it as a PDF file. To do this, you need to create a free account on Scribd website and then go to this link: https://www.scribd.com/document/352992755/Electronic-Communication-by-Roddy-and-Coolen-Free. You can also browse other books related to electronic communication on Scribd.

            -

            These are some of the ways to get a free ebook of Electronic Communication by Dennis Roddy and John Coolen. However, please note that these methods may not be legal in some countries or regions. Therefore, you should always check the copyright laws and regulations before downloading any ebook from the internet. Also, if you find this book useful and informative, you should consider buying a copy from a reputable source to support the authors and publishers.

            - -

            Electronic communication is the process of transmitting information and messages using electronic devices such as computers, phones, radios, and satellites. Electronic communication has many advantages over traditional forms of communication, such as letters, telegrams, and face-to-face meetings. Some of the benefits of electronic communication are:

            -
              -
            • Speed: Electronic communication allows you to send and receive information instantly, regardless of the distance or time zone. This can save you time and money, as well as improve your efficiency and productivity.
            • -
            • Convenience: Electronic communication enables you to communicate with anyone, anywhere, anytime. You can use different devices and platforms to suit your needs and preferences. You can also store and retrieve information easily with electronic devices.
            • -
            • Accuracy: Electronic communication reduces the risk of errors and misunderstandings that may occur in verbal or written communication. You can use various tools and features to check and edit your messages before sending them. You can also use encryption and authentication to ensure the security and privacy of your information.
            • -
            • Feedback: Electronic communication allows you to get immediate feedback from your recipients. You can use various methods to measure and monitor the effectiveness of your communication, such as surveys, polls, analytics, and reports. You can also use interactive features to engage your audience and encourage participation.
            • -
            • Collaboration: Electronic communication facilitates teamwork and collaboration among people who are working on a common project or goal. You can use various applications and software to share information, documents, files, and resources. You can also use video conferencing and online meetings to communicate with your team members in real time.
            • -
            • Innovation: Electronic communication stimulates creativity and innovation by providing access to a vast amount of information and resources. You can use electronic communication to learn new skills, discover new ideas, and explore new opportunities. You can also use electronic communication to showcase your work and achievements to a wider audience.
            • -
            -

            These are some of the benefits of electronic communication that you can enjoy in your personal and professional life. However, electronic communication also has some drawbacks that you should be aware of, such as technical issues, cyberattacks, information overload, and social isolation. Therefore, you should use electronic communication wisely and responsibly to make the most of it.

            d5da3c52bf
            -
            -
            \ No newline at end of file diff --git a/spaces/terfces0erbo/CollegeProjectV2/Im Pandey Financial Management Ebook Pdf Free 115 [PORTABLE].md b/spaces/terfces0erbo/CollegeProjectV2/Im Pandey Financial Management Ebook Pdf Free 115 [PORTABLE].md deleted file mode 100644 index 82129710baf8d120f9ba5c3f73800e953fe14e7d..0000000000000000000000000000000000000000 --- a/spaces/terfces0erbo/CollegeProjectV2/Im Pandey Financial Management Ebook Pdf Free 115 [PORTABLE].md +++ /dev/null @@ -1,6 +0,0 @@ - -

            building capacity to support decision-making (high confidence) is supported by four key characteristics: the selection of decision-support tools is based on rigorous analysis of which particular tools best fit a decision-making context; a supporting infrastructure facilitates learning, feedback, and learning-by-doing; a communication infrastructure promotes decision-making and knowledge dissemination; and stakeholder engagement helps to build critical alliances and solidify outcomes (ran et al. 2018 1035 ; ng et al. 2009 1036 ). examples of capacity-building activities include building the capacity of environmental institutions and training in capacity-building, team building, and decision-making (ciais et al. 2012 1107 ). in addition, this section presents a case study where capacity for sustainable use (sus) of forest for improved livelihoods is increased through participatory processes (section 7.5.6).

            -

            the transitions towards a regenerative society operate at different spatial scales. at the smallest scale, there is ongoing work on natural regeneration, understanding when and where regeneration should occur, what particular habitats need to regenerate, how regeneration can be accelerated, and who should be involved in the process. this is especially true of small-scale infrastructure projects, such as new buildings, where planners often design around existing natural vegetation. at a much larger scale, there is the effort to organize cities into sustainable, low-pollution zones, where the goal is to reduce pollution as much as possible. as cities grow, this grows along with their associated pollution, even if transportation-related emissions are reduced. for some areas of the world this level of planning is already taking place. designing new cities to produce less pollution has huge implications, as the environmental costs of city infrastructure are often felt at a larger scale than that of the infrastructure itself (patel 2018). they can require more effort to master and can tend to have more impacts. actions include more efficient urban energy systems; resource-efficient housing, cars, and roads; measures for preserving biodiversity; and steps that improve cycling and walking. governments have the power to regulate the production of pollution (ben-ari et al. 2017). one impact of urbanization is that it requires very large amounts of energy to build and maintain the infrastructures needed to support cities (bretschneider and kothari 2010). an issue of management is that the population of cities is growing faster than the population of rural areas, in many places driving up demands and increasing pressure on the environment (preston 2005). as cities grow, they become more urbanised (bretschneider and kothari 2010). these processes are all occurring within a context of growing political diversity, democratization, and globalization. they require higher levels of planning, and outcomes such as peace and prosperity (bretschneider and kothari 2010). there are environmental impacts linked to these processes, the study of which requires new scientific approaches that have emerged in the past decade.

            -

            Im Pandey Financial Management Ebook Pdf Free 115


            Downloadhttps://bytlly.com/2uGiNS



            899543212b
            -
            -
            \ No newline at end of file diff --git a/spaces/tialenAdioni/chat-gpt-api/logs/Bengali Movie Titoo MBA Download Movies High Quality.md b/spaces/tialenAdioni/chat-gpt-api/logs/Bengali Movie Titoo MBA Download Movies High Quality.md deleted file mode 100644 index 81164dcaa1c9f58bacf4a82a4bc7333a31f48b17..0000000000000000000000000000000000000000 --- a/spaces/tialenAdioni/chat-gpt-api/logs/Bengali Movie Titoo MBA Download Movies High Quality.md +++ /dev/null @@ -1,23 +0,0 @@ - -

            How to Download Titoo MBA, a Comedy Romance Movie from India

            -

            Titoo MBA is a 2014 Indian movie that tells the story of a couple on the brink of divorce who explain their separate viewpoints and reasons to their respective lawyers. The movie is a light-hearted comedy with a brief emotional roller coaster ride. It stars Nishant Dahiya, Pragya Jaiswal and Nandini Singh, and is directed by Amit Singh, Amitendra Vats and Vats Amit[^1^].

            -

            bengali movie Titoo MBA download movies


            Download Filehttps://urlcod.com/2uKbd4



            -

            If you are looking for a way to download Titoo MBA and watch it offline, you have come to the right place. In this article, we will show you how to download Titoo MBA from YouTube, where it is available for free with ads[^2^]. You will need a YouTube downloader software that can save videos in HD quality and convert them to various formats. Here are the steps to follow:

            -
              -
            1. Go to YouTube and search for "Titoo MBA - Married But Available | Full Movie HD | Latest Punjabi Movies 2017 | Yellow Movies". This is the official channel of Yellow Music, the production company of Titoo MBA.
            2. -
            3. Copy the URL of the video from the address bar.
            4. -
            5. Open your YouTube downloader software and paste the URL in the input box.
            6. -
            7. Select the output format and quality that you prefer. You can choose MP4, AVI, MKV, etc. and 720p, 1080p, etc.
            8. -
            9. Click on the download button and wait for the process to finish.
            10. -
            11. Once the download is complete, you can find the video file in your designated folder.
            12. -
            13. Enjoy watching Titoo MBA offline on your device.
            14. -
            -

            Alternatively, you can also stream Titoo MBA online on MX Player, a free video streaming platform that offers a variety of Indian movies and shows[^3^]. You can access MX Player from your browser or download its app on your smartphone or tablet. You will need an internet connection to watch Titoo MBA on MX Player.

            -

            We hope this article has helped you find a way to download or stream Titoo MBA, a comedy romance movie from India. If you liked this movie, you might also enjoy other movies from Yellow Music, such as Akhanda, Storyline and Doraemon. You can find them on YouTube or MX Player as well. Happy watching!

            -

            - -

            Titoo MBA is a movie that has received mixed reviews from critics and audiences. Some have praised it for its humor, music and performances, while others have criticized it for its weak plot, poor direction and vulgar dialogues. The movie has a rating of 4.4 out of 10 on IMDb[^2^] and no score on Rotten Tomatoes[^1^]. The Times of India gave it 2.5 stars out of 5 and wrote, "Titoo MBA is a film that tries to be funny but ends up being crass. It has a few moments of genuine laughter but they are too few and far between to salvage this mess."[^3^]

            -

            Titoo MBA is a movie that may appeal to some viewers who enjoy adult comedy and romance, but may disappoint others who expect a more refined and engaging story. The movie has some catchy songs composed by Arjuna Harjai and sung by Arijit Singh, Surabhi Dashputra and others. The movie also features some scenic locations in Chandigarh and Jalandhar, where it was shot. The movie has a runtime of 111 minutes and is rated Not Rated by the MPAA.

            -

            Titoo MBA is a movie that is not for everyone. It is a low-budget film that tries to cash in on the popularity of Punjabi cinema and culture, but fails to deliver a satisfying experience. It is a movie that you can watch at your own risk, or skip altogether. There are many other better options available for comedy romance lovers, both in Indian and foreign cinema. You can find them on YouTube, MX Player or other streaming platforms.

            e93f5a0c3f
            -
            -
            \ No newline at end of file diff --git a/spaces/tialenAdioni/chat-gpt-api/logs/Big Fish 64 Bit Games for Mac The Best Casual Games for Your Device.md b/spaces/tialenAdioni/chat-gpt-api/logs/Big Fish 64 Bit Games for Mac The Best Casual Games for Your Device.md deleted file mode 100644 index 3348a53f95a78d4ae9b6619e5690a60357e3ef1a..0000000000000000000000000000000000000000 --- a/spaces/tialenAdioni/chat-gpt-api/logs/Big Fish 64 Bit Games for Mac The Best Casual Games for Your Device.md +++ /dev/null @@ -1,41 +0,0 @@ - -

            Big Fish 64 Bit Games for Mac: A Guide for Gamers

            -

            If you are a fan of Big Fish games, you might be wondering how to play them on your Mac computer. Big Fish is one of the largest distributors of casual games, offering thousands of titles across various genres and platforms. However, not all Big Fish games are compatible with Mac OS, especially the latest versions that require 64 bit architecture.

            -

            big fish 64 bit games for mac


            Download Zip > https://urlcod.com/2uK5gz



            -

            In this article, we will explain what 64 bit means, why it matters for Mac users, and how to find and play Big Fish 64 bit games for Mac. We will also share some tips and tricks to enhance your gaming experience and solve common issues.

            - -

            What is 64 bit and why does it matter?

            -

            64 bit is a term that refers to the amount of data that a processor can handle in one cycle. A 64 bit processor can process 64 bits of data at a time, while a 32 bit processor can only process 32 bits. This means that a 64 bit processor can perform faster and more efficiently than a 32 bit processor, as well as handle larger amounts of memory and data.

            -

            For Mac users, 64 bit matters because Apple has discontinued support for 32 bit applications since macOS Catalina (10.15), which was released in October 2019. This means that any application that is not updated to 64 bit will not run on macOS Catalina or later versions. This includes many Big Fish games that were developed before 2019.

            -

            If you have a Mac computer that runs on macOS Catalina or later, you will need to find Big Fish games that are compatible with 64 bit architecture. Otherwise, you will not be able to play them on your device.

            -

            - -

            How to find and play Big Fish 64 bit games for Mac?

            -

            Fortunately, Big Fish has been updating many of its games to 64 bit, and you can easily find them on their website. Here are some steps to follow:

            -
              -
            1. Go to www.bigfishgames.com and click on the Mac tab.
            2. -
            3. On the left sidebar, click on the filter icon and select "64-bit Support" under the Compatibility section.
            4. -
            5. You will see a list of Big Fish games that are compatible with 64 bit Mac OS. You can browse by genre, popularity, release date, or rating.
            6. -
            7. Once you find a game that you like, click on it and download the trial version or buy the full version.
            8. -
            9. Install the game on your Mac and enjoy playing!
            10. -
            -

            You can also use the Big Fish Games app to manage your games and access new releases and discounts. To download the app, go to www.bigfishgames.com/game-manager/ and follow the instructions.

            - -

            Tips and tricks for playing Big Fish 64 bit games for Mac

            -

            Here are some tips and tricks to help you have a better gaming experience and solve common issues:

            -
              -
            • Make sure your Mac meets the minimum system requirements for each game. You can check them on the game's page or in the app.
            • -
            • Update your Mac OS and your Big Fish Games app regularly to ensure optimal performance and compatibility.
            • -
            • If you encounter any problems with launching or playing a game, try these solutions: -
                -
              • Restart your Mac and try again.
              • -
              • Delete the game and reinstall it from the app or the website.
              • -
              • Contact Big Fish customer support at www.bigfishgames.com/help/ or use the live chat feature in the app.
              • -
              -
            • -
            • If you want to play your old 32 bit Big Fish games that are not compatible with macOS Catalina or later, you have two options: -
                -
              • Create a separate partition on your Mac's hard drive and install an older version of Mac OS that supports 32 bit applications. You can find instructions on how to do this here.
              • -
              • Use a virtual machine software such as Parallels Desktop or VMware Fusion to run Windows on your Mac and play your games there.

                ddb901b051
                -
                -
                \ No newline at end of file diff --git a/spaces/tialenAdioni/chat-gpt-api/logs/Download Bentley WaterCAD V8i Select Series 2 18 The Ultimate Water Distribution Modeling Software.md b/spaces/tialenAdioni/chat-gpt-api/logs/Download Bentley WaterCAD V8i Select Series 2 18 The Ultimate Water Distribution Modeling Software.md deleted file mode 100644 index 8eca0352fd97cb344523715799b9cbdb58c6e336..0000000000000000000000000000000000000000 --- a/spaces/tialenAdioni/chat-gpt-api/logs/Download Bentley WaterCAD V8i Select Series 2 18 The Ultimate Water Distribution Modeling Software.md +++ /dev/null @@ -1,139 +0,0 @@ - -

                Download Bentley WaterCAD V8i SELECTseries 2 18

                -

                If you are looking for a reliable, easy-to-use, and powerful software for water distribution modeling and analysis, you might want to consider Bentley WaterCAD V8i SELECTseries 2 18. This software can help you design new water systems and manage existing water networks effectively, reducing disruption risks and energy use. In this article, we will give you an overview of Bentley WaterCAD, its features, and what's new in this version. We will also show you how to download and install it on your computer.

                -

                download bentley watercad v8i select series 2 18


                Download Ziphttps://urlcod.com/2uK6HV



                -

                What is Bentley WaterCAD?

                -

                Bentley WaterCAD is a hydraulic and water quality modeling application for water distribution systems. It can help you perform various analyses, such as fire flow, constituent concentration, energy cost, pump modeling, pipe renewal, pipe break, and more. You can use Bentley WaterCAD from within MicroStation, AutoCAD, or as a stand-alone application, depending on your preference. Bentley WaterCAD is a subset of Bentley WaterGEMS, which offers additional capabilities for GIS integration and model building.

                -

                What's new in Bentley WaterCAD V8i SELECTseries 2 18?

                -

                Bentley WaterCAD V8i SELECTseries 2 18 introduces many new features and improvements that enhance its functionality and usability. Here are some of the main ones:

                -

                Support for newer platforms

                -

                Bentley WaterCAD now supports the following platforms and applications:

                -
                  -
                • AutoCAD 2010 and 2011 (32-bit version only)
                • -
                • MicroStation V8i SELECTseries 2 (08.11.07.443)
                • -
                • ArcGIS 10 (WaterGEMS only)
                • -
                • ProjectWise 08.11.07.XX
                • -
                -

                Pipe renewal planner

                -

                The pipe renewal planner is a new condition assessment tool that enables you to score and rank existing water mains based on their adequacy in terms of a number of performance indicators called "aspects". The default aspects that the pipe renewal planner primarily considers are pipe break history, capacity (fire flow), and criticality (demand shortfall when pipe is out of service). You can also consider any other aspect of the model, such as calculated model results or user-entered fields (such as installation year, zone, material, etc).

                -

                For each aspect, each pipe is given a normalized score on a 0-100 scale. Each aspect is then given a weight, and an overall score is generated for each pipe. You can use this tool to identify the most critical pipes in your system and prioritize them for renewal or replacement.

                -

                How to download bentley watercad v8i select series 2 18 for free
                -Bentley watercad v8i select series 2 18 crack download
                -Bentley watercad v8i select series 2 18 tutorial pdf download
                -Download bentley watercad v8i select series 2 18 full version
                -Bentley watercad v8i select series 2 18 license key download
                -Download bentley watercad v8i select series 2 18 for windows 10
                -Bentley watercad v8i select series 2 18 software download
                -Download bentley watercad v8i select series 2 18 with patch
                -Bentley watercad v8i select series 2 18 activation code download
                -Download bentley watercad v8i select series 2 18 for mac
                -Bentley watercad v8i select series 2 18 serial number download
                -Download bentley watercad v8i select series 2 18 offline installer
                -Bentley watercad v8i select series 2 18 keygen download
                -Download bentley watercad v8i select series 2 18 for linux
                -Bentley watercad v8i select series 2 18 registration code download
                -Download bentley watercad v8i select series 2 18 portable
                -Bentley watercad v8i select series 2 18 product key download
                -Download bentley watercad v8i select series 2 18 for android
                -Bentley watercad v8i select series 2 18 trial version download
                -Download bentley watercad v8i select series 2 18 latest update
                -Bentley watercad v8i select series 2 18 setup file download
                -Download bentley watercad v8i select series 2 18 for ios
                -Bentley watercad v8i select series 2 18 installation guide download
                -Download bentley watercad v8i select series 2 18 from official website
                -Bentley watercad v8i select series 2 18 system requirements download
                -Download bentley watercad v8i select series 2 18 from torrent
                -Bentley watercad v8i select series 2 18 user manual download
                -Download bentley watercad v8i select series 2 18 from google drive
                -Bentley watercad v8i select series 2 18 features and benefits download
                -Download bentley watercad v8i select series 2 18 from mega.nz
                -Bentley watercad v8i select series 2 18 review and rating download
                -Download bentley watercad v8i select series 2 18 from mediafire
                -Bentley watercad v8i select series 2 18 comparison and alternatives download
                -Download bentley watercad v8i select series

                -

                Pipe break analysis

                -

                The pipe break analysis is a new tool that allows you to analyze the impacts of pipe failures on your system. You can specify different scenarios of pipe breaks, such as location, size, duration, frequency, etc., and evaluate how they affect your system performance, such as pressure, flow, velocity, head loss, etc. You can also assess how pipe breaks affect your customers' service levels, such as demand shortfall or pressure deficiency.

                -

                This tool can help you understand the risks and consequences of pipe breaks in your system and plan appropriate mitigation measures.

                -

                Improved pump curve display

                -

                The pump curve display has been improved to show more information about your pumps' performance. You can now see graphical representations of pump curves that include efficiency curves, power curves, NPSH curves, operating points, control points, etc. You can also compare different pump curves on the same graph and zoom in or out as needed.

                -

                This feature can help you visualize your pumps' behavior better and optimize their operation.

                -

                Pump combination analysis

                -

                The pump combination analysis is a new feature that allows you to model multiple pumps that are in the same structure serving the same pressure zone. You can use a new element type called a pump station to group pumps together and define their operating rules. You can then use a new tool called combination pump curves to generate composite pump curves that represent the combined performance of multiple pumps in a station.

                -

                This feature can help you simplify your model structure and analyze complex pumping scenarios more easily.

                -

                Top-fill tanks with float valves

                -

                You can now model tanks that have variable levels due to top-filling with float valves. You can specify the minimum and maximum levels for these tanks and how they affect the inflow and outflow rates. You can also define different headloss coefficients for inflow and outflow pipes.

                -

                This feature can help you model tanks that have more realistic filling mechanisms and dynamics.

                -

                EPANET calculation engine upgrade

                -

                The EPANET calculation engine has been upgraded to version 2.00.12. This version includes several bug fixes and enhancements that improve the accuracy and stability of hydraulic calculations.

                -

                SCADAConnect ease-of-use improvements

                -

                The SCADAConnect feature that allows you to connect your model to real-time data from SCADA systems has been improved with several ease-of-use enhancements. Some of these are:

                -
                  -
                • A wizard-based interface for creating SCADAConnect simulations
                • -
                • A simplified process for mapping SCADA tags to model elements
                • -
                • A graphical display of SCADA data on model elements
                • -
                • A report generator for comparing SCADA data with model results
                • -
                -

                ProjectWise integration changes

                -

                The integration with ProjectWise has been changed to use i-models instead of native files for storing models. This change improves the performance and reliability of working with models in ProjectWise.

                -

                i-model support

                - a separate file. You can also save your model as a different file name or location using the save-as option.

                -

                How to download Bentley WaterCAD V8i SELECTseries 2 18?

                -

                If you are interested in downloading Bentley WaterCAD V8i SELECTseries 2 18, you can follow these steps:

                -
                  -
                1. Go to the Bentley website and sign in with your credentials.
                2. -
                3. Go to the Software Downloads page and select WaterCAD from the product list.
                4. -
                5. Select the version 08.11.02.31 from the version list and click on Download.
                6. -
                7. Choose the platform that you want to use (MicroStation, AutoCAD, or stand-alone) and click on Download again.
                8. -
                9. Save the file to your computer and run it to start the installation process.
                10. -
                11. Follow the instructions on the screen to complete the installation.
                12. -
                13. Activate your license using the License Management Tool.
                14. -
                -

                Conclusion

                -

                Bentley WaterCAD V8i SELECTseries 2 18 is a great software for water distribution modeling and analysis. It offers many new features and improvements that can help you design and manage your water systems more efficiently and effectively. You can download and install it easily from the Bentley website and use it from within MicroStation, AutoCAD, or as a stand-alone application. If you want to learn more about Bentley WaterCAD, you can visit the Bentley website or contact their support team.

                -

                FAQs

                -

                Here are some frequently asked questions and answers about Bentley WaterCAD V8i SELECTseries 2 18:

                -
                -
                Q: What are the system requirements for Bentley WaterCAD V8i SELECTseries 2 18?
                -
                A: The minimum system requirements for Bentley WaterCAD V8i SELECTseries 2 18 are:
                -
                  -
                • Operating system: Windows XP SP3, Windows Vista SP1, Windows 7, Windows 8, Windows 10
                • -
                • Processor: Intel Pentium IV or higher
                • -
                • Memory: 512 MB RAM minimum, 1 GB recommended
                • -
                • Disk space: 500 MB free disk space minimum
                • -
                • Display: 1024 x 768 resolution minimum
                • -
                • Internet connection: Required for installation and activation
                • -
                -
                Q: How can I get technical support for Bentley WaterCAD V8i SELECTseries 2 18?
                -
                A: You can get technical support for Bentley WaterCAD V8i SELECTseries 2 18 by:
                -
                  -
                • Contacting your local Bentley office or partner
                • -
                • Submitting a service request online
                • -
                • Calling the toll-free number +1-800-BENTLEY (+1-800-236-8539)
                • -
                • Visiting the Bentley Communities website and joining the OpenFlows forum
                • -
                -
                Q: How can I upgrade from an older version of Bentley WaterCAD to V8i SELECTseries 2 18?
                -
                A: You can upgrade from an older version of Bentley WaterCAD to V8i SELECTseries 2 18 by:
                -
                  -
                • Downloading and installing the new version from the Bentley website
                • -
                • Opening your existing model files in the new version and saving them with a new name or location
                • -
                • Migrating your user data extensions and custom reports using the migration tool
                • -
                -
                Q: How can I learn more about Bentley WaterCAD V8i SELECTseries 2 18?
                -
                A: You can learn more about Bentley WaterCAD V8i SELECTseries 2 18 by:
                -
                  -
                • Reading the user's guide and help files that come with the software
                • -
                • Watching online videos and webinars on the Bentley website or YouTube channel
                • -
                • Taking online courses or attending live training sessions offered by Bentley
                • -
                • Browsing through case studies and articles on the Bentley website or publications
                • -
                -
                Q: How can I share my models with other users or applications?
                -
                A: You can share your models with other users or applications by:
                -
                  -
                • Saving your models as i-models and sending them via email or cloud services
                • -
                • Exporting your models to DXF format and opening them in other CAD applications
                • -
                • Connecting your models to SCADA systems using SCADAConnect feature
                • -
                • Integrating your models with ProjectWise for collaboration and management
                • -
                -

                0a6ba089eb
                -
                -
                \ No newline at end of file diff --git a/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/ARK Survival Evolved - Download Now and Start Your Journey on Mac OS X.md b/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/ARK Survival Evolved - Download Now and Start Your Journey on Mac OS X.md deleted file mode 100644 index 8c0a9aea8cb536215c7475515c7273c0560c5ea5..0000000000000000000000000000000000000000 --- a/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/ARK Survival Evolved - Download Now and Start Your Journey on Mac OS X.md +++ /dev/null @@ -1,140 +0,0 @@ - -

                Ark Survival Evolved Download Mac OS X: How to Play This Amazing Game on Your Mac

                -

                Are you a fan of action-adventure survival games? Do you love exploring a vast open world full of dinosaurs and other prehistoric creatures? Do you want to experience the thrill of taming, breeding, and riding these beasts? If you answered yes to any of these questions, then you should definitely check out Ark Survival Evolved, one of the most popular online games of all time.

                -

                Introduction

                -

                What is Ark Survival Evolved?

                -

                Ark Survival Evolved is a game developed by Studio Wildcard that was released in 2017. The game is set on a mysterious island called Ark, where you play as a survivor who has to hunt, gather, craft, build, and fight to survive. The island is inhabited by over 100 different types of creatures, many of which are based on extinct animals such as dinosaurs, mammoths, and sabertooths. You can tame these creatures using various methods, such as knocking them out and feeding them, or using special items and skills. You can also breed them to create new generations with different traits and abilities. You can use your tamed creatures as mounts, companions, or weapons, depending on their characteristics and your preferences.

                -

                ark survival evolved download mac os x


                Downloadhttps://bltlly.com/2uOrbN



                -

                Why play Ark Survival Evolved on Mac OS X?

                -

                If you own a Mac computer, you might be wondering if you can play Ark Survival Evolved on it. The answer is yes, you can! There are several ways to download and play this game on your Mac OS X device, and we will show you how in this article. Playing Ark Survival Evolved on Mac OS X has many benefits, such as:

                -
                  -
                • You can enjoy the stunning graphics and immersive sound effects of the game on your high-resolution screen and speakers.
                • -
                • You can use your Mac's keyboard and mouse to control your character and interact with the game world more easily and accurately.
                • -
                • You can access the Steam community and features, such as achievements, trading cards, workshops, and more.
                • -
                • You can join thousands of other players online and cooperate or compete with them in various game modes.
                • -
                -

                How to download Ark Survival Evolved for Mac OS X

                -

                Method 1: Use the official OS X version from Steam

                -

                Requirements

                -

                The easiest and most straightforward way to play Ark Survival Evolved on your Mac is to use the official OS X version that is available on Steam. However, this method has some limitations and requirements that you need to be aware of before you proceed. These are:

                -
                  -
                • You need to have a Steam account and the Steam app installed on your Mac.
                • -
                • You need to have at least 20 GB of free space on your hard drive.
                • -
                • You need to have a Mac that meets the minimum system requirements for the game. These are:
                • -
                - - - - - -
                OSOSX 10.9 or higher
                Processor2 GHz equivalent CPU
                Memory4 GB RAM
                GraphicsOpenGL 3 compatible GPU with 1 GB video RAM
                StorageSteps -

                If you have a Mac that meets the requirements and you have enough space on your hard drive, you can follow these steps to download and play Ark Survival Evolved on your Mac:

                -
                  -
                1. Open the Steam app on your Mac and log in to your account.
                2. -
                3. Search for Ark Survival Evolved in the Steam store and click on it.
                4. -
                5. Click on the "Add to Cart" button and proceed to checkout.
                6. -
                7. After you purchase the game, it will appear in your Steam library.
                8. -
                9. Click on the "Install" button and wait for the game to download and install on your Mac.
                10. -
                11. Once the installation is complete, click on the "Play" button and enjoy the game!
                12. -
                -

                Method 2: Use cloud gaming services

                -

                What are cloud gaming services?

                -

                If you don't have a Mac that can run Ark Survival Evolved smoothly, or you don't want to download and install the game on your hard drive, you can use another method to play it on your Mac: cloud gaming services. Cloud gaming services are platforms that allow you to stream games from remote servers to your device over the internet. This means that you don't need to worry about the hardware or storage requirements of the game, as they are handled by the cloud service provider. You only need a stable internet connection and a compatible device to access the game.

                -

                Advantages and disadvantages of cloud gaming

                -

                Cloud gaming has some advantages and disadvantages that you should consider before using it. Some of the advantages are:

                -
                  -
                • You can play games that your device can't run normally, as the cloud service provider has powerful hardware and software.
                • -
                • You can save space on your hard drive, as you don't need to download and install the game.
                • -
                • You can access your games from anywhere, as long as you have an internet connection and a compatible device.
                • -
                • You can enjoy high-quality graphics and sound, as the cloud service provider can optimize the game settings for your device.
                • -
                -

                Some of the disadvantages are:

                -
                  -
                • You need a fast and reliable internet connection, as any lag or interruption can affect your gaming experience.
                • -
                • You may experience some latency or input lag, as there is a delay between your actions and the game's response.
                • -
                • You may have limited control over the game settings, as they are determined by the cloud service provider.
                • -
                • You may have to pay a subscription fee or a per-hour fee to use the cloud service, depending on the provider and the plan you choose.
                • -

                Examples of cloud gaming services for Mac OS X

                -

                There are several cloud gaming services that you can use to play Ark Survival Evolved on your Mac OS X device. Here are some of the most popular ones:

                -
                  -
                • OnLive: OnLive is one of the oldest and best options for cloud gaming. It has a large library of games, including Ark Survival Evolved, that you can stream to your Mac via the OnLive app or browser. You can either pay a monthly subscription fee or a per-hour fee to access the games. You can also use the OnLive Game System, a small console that connects to your TV and controller, to play on a bigger screen.
                • -
                • PlayStation Now: PlayStation Now is Sony's cloud gaming service that lets you stream hundreds of PlayStation games to your Mac via the PS Now app. You can play games from PS2, PS3, and PS4, including Ark Survival Evolved, with a monthly or annual subscription fee. You can also download some PS4 games to your Mac and play them offline. You need a PlayStation account and a compatible controller to use this service.
                • -
                • Vortex: Vortex is a cloud gaming service that focuses exclusively on cloud gaming. It has over 100 games, including Ark Survival Evolved, that you can stream to your Mac via the Vortex app or browser. You can either pay a monthly subscription fee or a per-hour fee to access the games. You can also use the Vortex Box, a small device that connects to your TV and controller, to play on a bigger screen.
                • -
                • Project xCloud: Project xCloud is Microsoft's cloud gaming service that lets you stream Xbox games to your Mac via the Xbox.com/play website. You need an Xbox Game Pass Ultimate subscription, which gives you access to over 100 games, including Ark Survival Evolved, as well as other benefits such as online multiplayer and free perks. You also need a compatible controller and a stable internet connection to use this service.
                • -
                • Blacknut: Blacknut is a cloud gaming service that offers over 500 games, including Ark Survival Evolved, that you can stream to your Mac via the Blacknut app or browser. You can pay a monthly subscription fee or a per-hour fee to access the games. You can also use the Blacknut TV app, which is available on some smart TVs and streaming devices, to play on a bigger screen.
                • -
                • GeForce Now: GeForce Now is Nvidia's cloud gaming service that lets you stream PC games to your Mac via the GeForce Now app or browser. You can either use the free tier, which gives you one-hour sessions and standard access, or the paid tier, which gives you six-hour sessions and priority access. You can also use the GeForce Now app on some smart TVs and streaming devices to play on a bigger screen.
                • -

                Tips and tricks for playing Ark Survival Evolved on Mac OS X

                -

                Now that you know how to download and play Ark Survival Evolved on your Mac OS X device, you might want to learn some tips and tricks to improve your gaming experience. Here are some of them:

                -

                ark survival evolved mac os x system requirements
                -how to play ark survival evolved on mac catalina
                -ark survival evolved mac os x free download
                -ark survival evolved mac os x gameplay
                -ark survival evolved mac os x steam
                -ark survival evolved mac os x update
                -ark survival evolved mac os x mods
                -ark survival evolved mac os x cheats
                -ark survival evolved mac os x review
                -ark survival evolved mac os x performance
                -ark survival evolved mac os x graphics settings
                -ark survival evolved mac os x multiplayer
                -ark survival evolved mac os x offline mode
                -ark survival evolved mac os x controller support
                -ark survival evolved mac os x patch notes
                -ark survival evolved mac os x download size
                -ark survival evolved mac os x best settings
                -ark survival evolved mac os x tips and tricks
                -ark survival evolved mac os x error code 15
                -ark survival evolved mac os x launch options
                -ark survival evolved mac os x server hosting
                -ark survival evolved mac os x crossplay
                -ark survival evolved mac os x genesis part 2
                -ark survival evolved mac os x low end pc
                -ark survival evolved mac os x keyboard and mouse
                -ark survival evolved mac os x vr support
                -ark survival evolved mac os x dlc download
                -ark survival evolved mac os x resolution problem
                -ark survival evolved mac os x save location
                -ark survival evolved mac os x split screen
                -ark survival evolved mac os x no sound fix
                -ark survival evolved mac os x epic games store
                -ark survival evolved mac os x admin commands
                -ark survival evolved mac os x black screen issue
                -ark survival evolved mac os x minimum specs
                -ark survival evolved mac os x steam workshop
                -ark survival evolved mac os x game guide
                -ark survival evolved mac os x taming calculator
                -ark survival evolved mac os x breeding guide
                -ark survival evolved mac os x wiki page

                -

                Adjust the graphics settings

                -

                Ark Survival Evolved is a very demanding game in terms of graphics, and it can cause some performance issues on your Mac, especially if you use the official OS X version. To avoid lag, crashes, or overheating, you should adjust the graphics settings to suit your device's capabilities. You can do this by going to the game's options menu and selecting the graphics tab. You can either choose a preset option, such as low, medium, or high, or customize each setting individually, such as resolution, anti-aliasing, shadows, textures, and more. You should experiment with different settings until you find the optimal balance between quality and performance.

                -

                Use a mouse and keyboard

                -

                Ark Survival Evolved is a game that requires a lot of precision and accuracy, especially when it comes to aiming, shooting, and crafting. While you can use a controller to play the game on your Mac, you might find it easier and more comfortable to use a mouse and keyboard instead. A mouse and keyboard will give you more control over your character's movements and actions, as well as access to more hotkeys and shortcuts. You can also customize the key bindings to suit your preferences by going to the game's options menu and selecting the controls tab.

                -

                Join a multiplayer server

                -

                Ark Survival Evolved is a game that can be played solo or with other players online. While playing solo can be fun and challenging, playing with others can offer more opportunities and benefits. For example, you can team up with other players to form tribes, share resources, build bases, trade items, and fight enemies. You can also compete with other players in various game modes, such as PvP (player versus player), PvE (player versus environment), or PvX (a mix of both). You can join a multiplayer server by going to the game's main menu and selecting the play online option. You can either choose an official server hosted by Studio Wildcard or a unofficial server hosted by other players or communities. You can also filter the servers by region, mode, map, difficulty, and more.

                -

                Conclusion

                -

                Summary of the main points

                -

                In conclusion, Ark Survival Evolved is an amazing game that you can play on your Mac OS X device using various methods. You can either use the official OS X version from Steam, which requires a compatible Mac and enough space on your hard drive, or use cloud gaming services, which only require a stable internet connection and a compatible device. You can also follow some tips and tricks to enhance your gaming experience, such as adjusting the graphics settings, using a mouse and keyboard, and joining a multiplayer server.

                -

                Call to action

                -

                If you are ready to embark on an epic adventure on the island of Ark, don't hesitate any longer and download Ark Survival Evolved for Mac OS X today! You will not regret it!

                -

                Frequently Asked Questions

                -
                  -
                • How much does Ark Survival Evolved cost for Mac OS X?
                • -

                  The official OS X version of Ark Survival Evolved costs $49.99 on Steam. However, you can sometimes find it on sale for a lower price. The cloud gaming services have different pricing plans depending on the provider and the plan you choose.

                  -
                • Can I play Ark Survival Evolved offline on Mac OS X?
                • -

                  You can play Ark Survival Evolved offline on Mac OS X if you use the official OS X version from Steam and download the game to your hard drive. However, you will not be able to access some features such as online multiplayer or updates. You cannot play Ark Survival Evolved offline if you use cloud gaming services.

                  -
                • Can I cross-play Ark Survival Evolved with other platforms on Mac OS X?
                • -

                  You can cross-play Ark Survival Evolved with other platforms on Mac OS X if you use cloud gaming services that support cross-play. For example, PlayStation Now allows you to cross-play with PS4 players, Project xCloud allows you to cross-play with Xbox players, and GeForce Now allows you to cross-play with PC players. However, you cannot cross-play Ark Survival Evolved with other platforms if you use the official OS X version from Steam.

                  -
                • What are the best mods for Ark Survival Evolved on Mac OS X?
                • -

                  Ark Survival Evolved has a vibrant modding community that creates various mods that enhance or change or add to the game. Some of the best mods for Ark Survival Evolved on Mac OS X are:

                  -
                    -
                  • Structures Plus: This mod adds new structures, items, and features that improve the building system and the quality of life in the game. For example, you can use stackable foundations, triangular pieces, glass walls, automatic doors, and more.
                  • -
                  • Classic Flyers: This mod restores the original stats and abilities of the flying creatures in the game, such as speed, stamina, and carry weight. It also adds new features such as breeding, imprinting, and leveling for flyers.
                  • -
                  • Awesome Spyglass: This mod replaces the default spyglass with a more advanced and useful one. You can use it to see detailed information about any creature or object in the game, such as health, level, stats, torpor, inventory, and more.
                  • -
                  • Ark Eternal: This mod adds new creatures, items, and mechanics that make the game more challenging and fun. You can encounter over 600 new creatures with different tiers, abilities, and behaviors. You can also use new items such as potions, saddles, weapons, and more.
                  • -
                  • Primal Fear: This mod adds new creatures, items, and mechanics that make the game more dangerous and exciting. You can encounter over 300 new creatures with different tiers, abilities, and behaviors. You can also use new items such as potions, saddles, weapons, and more.
                  • -
                  -

                  You can find these mods and more on the Steam Workshop or other modding websites. You can install them using the Steam app or a mod manager such as Ark Server Manager.

                  -

                  -

                  Thank you for reading this article on how to download and play Ark Survival Evolved on Mac OS X. I hope you found it helpful and informative. If you have any questions or feedback, please feel free to leave a comment below. Happy gaming!

                  197e85843d
                  -
                  -
                  \ No newline at end of file diff --git a/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/AetherSX2 Turnip APK How to Play PS2 Games on Android with Improved Graphics.md b/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/AetherSX2 Turnip APK How to Play PS2 Games on Android with Improved Graphics.md deleted file mode 100644 index 5e9ad02353bc95d879bf9a2b84071cd2951f6e07..0000000000000000000000000000000000000000 --- a/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/AetherSX2 Turnip APK How to Play PS2 Games on Android with Improved Graphics.md +++ /dev/null @@ -1,113 +0,0 @@ -
                  -

                  AetherSX2 Turnip APK Latest Version: The Best PS2 Emulator for Android

                  -

                  Do you miss playing your favorite PS2 games on your Android phone? Do you want to enjoy the classics like God of War, Final Fantasy, and GTA with enhanced graphics and performance? If so, you need to check out the latest version of aethersx2 turnip apk, the best PS2 emulator for Android.

                  -

                  aethersx2 turnip apk latest version


                  Download File 🗸🗸🗸 https://bltlly.com/2uOiD3



                  -

                  AetherSX2 is a PS2 emulator based on the popular PCSX2 emulator for PC. It was developed by Tahlreth, who got permission from the PCSX2 team to use their code and make it compatible with Android devices. AetherSX2 has been in development since late 2021 and has received many updates and improvements since then.

                  -

                  The latest version of aethersx2 turnip apk is a special build that uses Freedreno Turnip drivers, which are open-source Vulkan drivers for Adreno GPUs. These drivers can fix many graphics bugs and problems with emulation, as well as boost performance and stability. AetherSX2 turnip apk is currently the best PS2 emulator for Android, as it can run many games smoothly and with high compatibility.

                  -

                  Features of AetherSX2 Turnip APK

                  -

                  AetherSX2 turnip apk has many features that make it stand out from other PS2 emulators on Android. Some of these features are:

                  -
                    -
                  • System simulation: AetherSX2 can simulate the PS2 hardware and BIOS, which means it can run games from disc images (iso/chd/cso) or from real discs using an OTG adapter.
                  • -
                  • OpenGL, Vulkan, and Software rendering: AetherSX2 supports three different rendering backends, which can affect the graphics quality and performance of games. OpenGL is the default backend, which works well for most games. Vulkan is the recommended backend for devices with Adreno GPUs, as it can improve performance and fix graphical glitches. Software rendering is a fallback option for devices that don't support OpenGL or Vulkan, but it is very slow and has low resolution.
                  • -
                  • Upscaling of games to 1080p and beyond: AetherSX2 can upscale games to higher resolutions than the original PS2, which can enhance the graphics quality and sharpness. Users can choose from several resolution options, ranging from native (480p) to 4K (2160p).
                  • -
                  • Widescreen patches for games without native support: AetherSX2 can apply widescreen patches to games that don't support widescreen mode natively, which can make them look better on modern displays. Users can enable or disable widescreen patches for each game individually.
                  • -
                  • Save states: AetherSX2 can save and load game states at any point during gameplay, which can be useful for skipping cutscenes, saving progress, or retrying difficult sections. Users can have up to 10 save states per game.
                  • -
                  • Touchscreen and bluetooth controller support: AetherSX2 can use touchscreen controls or external controllers to play games. Users can customize the touchscreen layout and sensitivity, as well as map buttons to their preferred controller.
                  • -
                  • Per game settings: AetherSX2 allows users to adjust various settings for each game individually, such as rendering backend, resolution, speed hacks, cheats, language, etc.
                  • -
                  -

                  Requirements for AetherSX2 Turnip APK

                  -

                  AetherSX2 turnip apk has some minimum and recommended requirements for running on Android devices. These are:

                  - - - -
                  Minimum Requirements Recommended Requirements
                  - Android 7.0 or higher - 2 GB of RAM or more - 4 GB of storage or more - OpenGL ES 3.0 or higher - Quad-core CPU or higher - Adreno 5xx GPU or higher- Android 9.0 or higher - 4 GB of RAM or more - 8 GB of storage or more - Vulkan 1.1 or higher - Octa-core CPU or higher - Adreno 6xx GPU or higher
                  -

                  Note that these requirements are only guidelines and may vary depending on the game and the settings used. Some games may run better or worse than others, so users should experiment with different settings to find the optimal configuration for their device.

                  -

                  aethersx2 turnip apk download free
                  -aethersx2 turnip apk android game
                  -aethersx2 turnip apk emulator for ps2
                  -aethersx2 turnip apk bios required
                  -aethersx2 turnip apk vulkan support
                  -aethersx2 turnip apk freedreno drivers
                  -aethersx2 turnip apk update 2023
                  -aethersx2 turnip apk best settings
                  -aethersx2 turnip apk compatible games
                  -aethersx2 turnip apk performance tips
                  -aethersx2 turnip apk no root needed
                  -aethersx2 turnip apk how to install
                  -aethersx2 turnip apk review and rating
                  -aethersx2 turnip apk mod and hack
                  -aethersx2 turnip apk cheats and codes
                  -aethersx2 turnip apk offline and online
                  -aethersx2 turnip apk multiplayer and co-op
                  -aethersx2 turnip apk save and load state
                  -aethersx2 turnip apk widescreen and upscaling
                  -aethersx2 turnip apk touchscreen and controller
                  -aethersx2 turnip apk iso and chd support
                  -aethersx2 turnip apk faq and guide
                  -aethersx2 turnip apk license and disclaimer
                  -aethersx2 turnip apk bugs and fixes
                  -aethersx2 turnip apk features and benefits
                  -aethersx2 turnip apk alternatives and competitors
                  -aethersx2 turnip apk reddit and forum
                  -aethersx2 turnip apk youtube and video
                  -aethersx2 turnip apk website and blog
                  -aethersx2 turnip apk news and updates
                  -aethersx2 turnip apk comparison and benchmark
                  -aethersx2 turnip apk requirements and compatibility
                  -aethersx2 turnip apk feedback and support
                  -aethersx2 turnip apk donation and premium
                  -aethersx2 turnip apk source code and development
                  -aethersx2 turnip apk community and discord
                  -aethersx2 turnip apk tutorial and walkthrough
                  -aethersx2 turnip apk test and trial
                  -aethersx2 turnip apk fun and entertainment
                  -aethersx2 turnip apk quality and reliability

                  -

                  Download and Install AetherSX2 Turnip APK

                  -

                  AetherSX2 turnip apk is not available on the Google Play Store, as it violates the terms of service of PS2 games. Users can download it from the official website of the developer, which is https://aethersx2.com/. Users should always download the latest version of the apk, as it may contain bug fixes and improvements.

                  -

                  To install aethersx2 turnip apk on their Android device, users need to follow these steps:

                  -
                    -
                  1. Enable unknown sources in the security settings of their device. This will allow them to install apps from outside the Google Play Store.
                  2. -
                  3. Download the apk file from the official website and save it to their device.
                  4. -
                  5. Open the apk file and follow the instructions to install it.
                  6. -
                  7. Launch the app and grant it the necessary permissions, such as storage access and microphone access.
                  8. -
                  9. Enjoy playing PS2 games on their Android device.
                  10. -
                  -

                  Reviews of AetherSX2 Turnip APK

                  -

                  AetherSX2 turnip apk has received many positive reviews and feedback from users who have tried it. Here are some of the user reviews from the official website:

                  -
                  "This is amazing! I can play my favorite PS2 games on my phone with great graphics and speed. Thank you so much for this app!" - John
                  -
                  "I love this emulator! It runs so smoothly and has so many options to customize. I can play games that other emulators can't run, like Kingdom Hearts and Shadow of the Colossus." - Lisa
                  -
                  "This is the best PS2 emulator for Android, hands down. It has Vulkan support, which makes a huge difference in performance and compatibility. I can play games like God of War and Metal Gear Solid 3 with no lag or glitches." - Mike
                  -

                  AetherSX2 turnip apk also has some negative reviews and feedback from users who have encountered some problems with it. Here are some of the user reviews from the official website:

                  -
                  "This emulator is good, but it still has some issues. Some games don't work at all, like Gran Turismo 4 and Tekken 5. Some games have graphical errors, like Persona 4 and Silent Hill 2. I hope these issues will be fixed in future updates." - Alex
                  -
                  "This emulator is too demanding for my device. It runs very slow and crashes often. I have a Samsung Galaxy S8 with Android 9.0, but it still can't handle this emulator. I wish there was a way to lower the settings more." - Sara
                  -
                  "This emulator is not compatible with my device. It says that my device does not support Vulkan, even though I have an Adreno GPU. I can't use OpenGL either, because it is too slow and buggy. I can't play any games with this emulator." - Kevin
                  -

                  AetherSX2 turnip apk also has some mixed reviews and feedback from users who have had different experiences with it. Here are some of the user reviews from the official website:

                  -
                  "This emulator is good for some games, but bad for others. Some games run very well, like Final Fantasy X and Resident Evil 4. Some games run very poorly, like Jak and Daxter and Ratchet and Clank. It depends on the game and the settings." - Amy
                  -
                  "This emulator is decent, but it needs more work. It has some nice features, like upscaling and widescreen patches. It also has some annoying features, like ads and pop-ups. It also crashes sometimes and freezes my phone." - Tom
                  -
                  "This emulator is okay, but it could be better. It has potential, but it also has limitations. It can't run some games that PCS X2 can run on PC, like God of War 2 and Shadow of the Colossus. It also can't run some games that other emulators can run on Android, like Dragon Quest 8 and Star Ocean 3. It is a hit or miss emulator." - Leo
                  -

                  Conclusion

                  -

                  AetherSX2 turnip apk is the latest and best PS2 emulator for Android devices. It can run many PS2 games with high graphics and performance, thanks to the use of Freedreno Turnip drivers. It also has many features that enhance the emulation experience, such as upscaling, widescreen patches, save states, and controller support. However, it also has some drawbacks, such as compatibility issues, hardware requirements, and stability problems. Users should try it out for themselves and see if it works for their device and their favorite games.

                  -

                  Here are some tips and tricks for using AetherSX2 turnip apk:

                  -
                    -
                  • Always download the latest version of the apk from the official website, as it may have bug fixes and improvements.
                  • -
                  • Always check the compatibility list on the website before playing a game, as it may have useful information and tips for running it.
                  • -
                  • Always backup your save data and game states before updating or deleting the app, as they may get lost or corrupted.
                  • -
                  • Always experiment with different settings and options for each game, as they may affect the graphics and performance of the game.
                  • -
                  • Always report any bugs or problems to the developer on the website or on social media, as they may help them fix them in future updates.
                  • -
                  -

                  FAQs

                  -

                  Here are some frequently asked questions about AetherSX2 turnip apk:

                  -
                    -
                  1. Q: Is AetherSX2 turnip apk legal?
                  2. -
                  3. A: AetherSX2 turnip apk is legal as long as you own the original PS2 games and BIOS that you are emulating. However, downloading or distributing PS2 games or BIOS without permission is illegal and may get you in trouble.
                  4. -
                  5. Q: How can I get PS2 games and BIOS for AetherSX2 turnip apk?
                  6. -
                  7. A: You can get PS2 games and BIOS by ripping them from your own PS2 discs using a PC or a PS3. You can also get them from online sources, but this is not recommended or endorsed by the developer.
                  8. -
                  9. Q: How can I update AetherSX2 turnip apk?
                  10. -
                  11. A: You can update AetherSX2 turnip apk by downloading the latest version of the apk from the official website and installing it over the previous version. You don't need to uninstall the previous version first.
                  12. -
                  13. Q: How can I uninstall AetherSX2 turnip apk?
                  14. -
                  15. A: You can uninstall AetherSX2 turnip apk by going to your device settings, finding the app, and tapping on uninstall. You can also delete the app data and cache from your device storage.
                  16. -
                  17. Q: How can I contact the developer of AetherSX2 turnip apk?
                  18. -
                  19. A: You can contact the developer of AetherSX2 turnip apk by visiting their website at https://aethersx2.com/, or by following them on social media platforms like Twitter, Facebook, Instagram, YouTube, etc.
                  20. -

                  197e85843d
                  -
                  -
                  \ No newline at end of file diff --git a/spaces/tioseFevbu/cartoon-converter/Contoh-Sop-Kawalan-Keselamatan-Bilik-Server-Pdf-LINK.md b/spaces/tioseFevbu/cartoon-converter/Contoh-Sop-Kawalan-Keselamatan-Bilik-Server-Pdf-LINK.md deleted file mode 100644 index b48a4100ff4421d7d82cd03a5a2a78b49000cb5d..0000000000000000000000000000000000000000 --- a/spaces/tioseFevbu/cartoon-converter/Contoh-Sop-Kawalan-Keselamatan-Bilik-Server-Pdf-LINK.md +++ /dev/null @@ -1,43 +0,0 @@ -## Contoh Sop Kawalan Keselamatan Bilik Server Pdf - - - - ![Contoh Sop Kawalan Keselamatan Bilik Server Pdf LINK](https://3.bp.blogspot.com/-jJhT6VBZe9E/UmXX0Kt1cLI/AAAAAAAAAEI/5Bm6xZIZ9XI/s1600/SOPKawalanKeselamatan.jpg) - - - -**CLICK HERE - [https://ditzcosupo.blogspot.com/?d=2tx0pe](https://ditzcosupo.blogspot.com/?d=2tx0pe)** - - - -# Contoh Sop Kawalan Keselamatan Bilik Server Pdf: Garis Panduan Pematuhan Bilik Server Kementerian Kesihatan Malaysia - - - -Bilik server adalah tempat yang menyimpan peralatan ICT yang penting untuk menyokong operasi dan perkhidmatan di fasiliti kesihatan. Bilik server perlu dilindungi daripada ancaman fizikal dan digital yang boleh mengganggu atau merosakkan peralatan dan data yang disimpan di dalamnya. Oleh itu, bilik server perlu mematuhi garis panduan keselamatan yang ditetapkan oleh Kementerian Kesihatan Malaysia (KKM). - - - -Garis panduan pematuhan bilik server KKM adalah dokumen yang mengandungi contoh sop kawalan keselamatan bilik server pdf yang boleh dijadikan rujukan oleh pengurus dan kakitangan ICT di fasiliti kesihatan. Garis panduan ini bertujuan untuk memastikan keselamatan terhadap bilik server bagi semua fasiliti KKM selaras dengan pematuhan terhadap peraturan yang telah ditetapkan dalam Dasar Keselamatan ICT KKM Versi 5.0 – DKICT KKM (7-1-1 Kawalan Keselamatan Fizikal) dalam memperkukuhkan keselamatan dan ketersediaan bilik server di fasiliti KKM. - - - -Garis panduan ini merangkumi aspek-aspek seperti lokasi, saiz, struktur, pencahayaan, pengudaraan, bekalan elektrik, sistem pemadam api, sistem pengawasan, sistem akses, sistem penggera, sistem pembersihan, sistem penyelenggaraan, sistem pemantauan dan sistem dokumentasi bilik server. Garis panduan ini juga menyediakan contoh-contoh borang, senarai semak, label dan papan tanda yang berkaitan dengan kawalan keselamatan bilik server. - - - -Garis panduan ini boleh dimuat turun secara percuma dari laman web rasmi KKM di [sini](https://www2.moh.gov.my/moh/resources/Penerbitan/Garis%20Panduan/Garis%20panduan%20Umum%20%28Awam%29/BPM/GARIS_PANDUAN_PEMATUHAN_BILIK_SERVER_KKM.pdf). Garis panduan ini juga boleh dicetak dan disimpan sebagai rujukan di bilik server. Pengurus dan kakitangan ICT di fasiliti kesihatan perlu membaca dan memahami garis panduan ini serta melaksanakan contoh sop kawalan keselamatan bilik server pdf yang sesuai dengan keperluan dan situasi bilik server masing-masing. - - - -Garis panduan pematuhan bilik server KKM juga menekankan kepentingan untuk mengadakan latihan dan ujian berkala bagi memastikan kesiapan dan kebolehan pengurus dan kakitangan ICT dalam menguruskan bilik server dengan selamat dan cekap. Latihan dan ujian ini meliputi aspek-aspek seperti prosedur operasi standard (SOP), prosedur kecemasan, prosedur pemulihan bencana, prosedur audit dan prosedur penyimpanan rekod. Latihan dan ujian ini boleh dijalankan secara dalaman atau dengan bantuan pihak luar yang berkelayakan. - - - -Garis panduan pematuhan bilik server KKM juga menyarankan untuk mengadakan kerjasama dan komunikasi yang baik antara pengurus dan kakitangan ICT dengan pihak-pihak lain yang terlibat atau berkepentingan dengan bilik server. Pihak-pihak ini termasuklah pengurusan fasiliti kesihatan, unit keselamatan, unit kejuruteraan, unit perolehan, unit kewangan, unit undang-undang, unit kualiti, unit audit, pihak berkuasa tempatan, pembekal perkhidmatan dan vendor peralatan. Kerjasama dan komunikasi ini boleh membantu dalam menyelesaikan sebarang isu atau masalah yang berkaitan dengan bilik server dengan lebih cepat dan efektif. - - - -Garis panduan pematuhan bilik server KKM adalah satu inisiatif yang penting dan bermanfaat bagi meningkatkan tahap keselamatan dan kualiti bilik server di fasiliti kesihatan. Dengan mengikuti garis panduan ini, pengurus dan kakitangan ICT boleh menjaga bilik server dengan lebih profesional dan bertanggungjawab. Ini akan memberi impak positif kepada prestasi dan produktiviti perkhidmatan ICT di fasiliti kesihatan serta kepuasan dan kepercayaan pelanggan. - - dfd1c89656 \ No newline at end of file diff --git a/spaces/tioseFevbu/cartoon-converter/scripts/GibbsCAM 2013 V10500 X64 X86torrent.md b/spaces/tioseFevbu/cartoon-converter/scripts/GibbsCAM 2013 V10500 X64 X86torrent.md deleted file mode 100644 index 131471478bdbc6809bfbf3e8cc5999e98d297ca7..0000000000000000000000000000000000000000 --- a/spaces/tioseFevbu/cartoon-converter/scripts/GibbsCAM 2013 V10500 X64 X86torrent.md +++ /dev/null @@ -1,27 +0,0 @@ -
                  -``` -

                  GibbsCAM 2013: A Powerful and Versatile CAM Software for CNC Machining

                  -

                  GibbsCAM 2013 is a state-of-the-art computer-aided manufacturing (CAM) software for programming computer numerically controlled (CNC) machine tools. It offers a range of features and options to suit different machining needs and preferences. Whether you are working with 2-axis milling, turning, multi-task machining, 5-axis milling, or complex turbomachinery parts, GibbsCAM 2013 can help you create efficient and accurate toolpaths with ease.

                  -

                  One of the highlights of GibbsCAM 2013 is the new 5-axis multi-blade option, which supports simplified and versatile programming of turbo-machinery parts such as impellers, blisks, and turbines. This option allows you to create toolpaths based on the geometry of the blades, without the need to define complex surfaces or curves. You can also control various parameters such as clearance, lead-in/out, tilt angle, and step-over to optimize the machining process.

                  -

                  GibbsCAM 2013 V10500 X64 X86torrent


                  Download Zip >> https://urlcod.com/2uHynA



                  -

                  Another new feature in GibbsCAM 2013 is the 5-axis porting option, which provides users with a broad selection of capabilities for programming components with variable internal containment features, such as ports and manifolds. This option enables you to create toolpaths that follow the centerline of the port, while avoiding collisions with the walls and other features. You can also adjust the tool orientation, entry/exit points, and smoothing options to achieve the desired surface finish and accuracy.

                  -

                  GibbsCAM 2013 also includes enhancements to the entire suite of software to improve functionality, quality, reliability, and performance. Some of these enhancements are:

                  -
                    -
                  • Improved support for feature-based machining by offering users the versatility to identify, group, and machine features while maintaining total process control.
                  • -
                  • A more powerful "profiler" tool capable of extracting machining boundaries and drives from a model without the need to create, modify, or chain curves.
                  • -
                  • Enhancements to geometry creation, turning, contouring, pocketing, and improved Swiss machining support.
                  • -
                  -

                  GibbsCAM 2013 is compatible with Windows XP / Vista / Seven operating systems and supports multiple languages such as English, Chinese Simplified, Chinese Traditional, Czech, Dutch, Finnish, French, German, Italian, Japanese, Korean, Polish, Portuguese, Russian, Spanish, Swedish. Turkish.

                  -

                  If you are looking for a robust and powerful CAM software that can handle any CNC machining challenge with ease and efficiency, GibbsCAM 2013 is the right choice for you. You can download GibbsCAM 2013 V10500 X64 X86torrent from various online sources and enjoy its benefits today.

                  -``` - -``` -

                  GibbsCAM 2013 is not only a powerful and versatile CAM software, but also a user-friendly and intuitive one. It has a graphical user interface that was designed for machinists by machinists, resulting in a user environment that is both familiar and efficient. You can easily move between geometry creation, toolpath creation, process visualization/verification, and post processing with GibbsCAM's free-form interaction style. You can also customize the interface to suit your preferences and workflow.

                  -

                  GibbsCAM 2013 also offers seamless integration with various CAD systems, such as SolidWorks, Solid Edge, Autodesk Inventor, CATIA V5, and more. You can import and export models in various formats, such as IGES, STEP, ACIS, Parasolid, STL, and more. You can also use GibbsCAM's powerful geometry creation and editing tools to modify or repair imported models as needed.

                  -

                  GibbsCAM 2013 supports a wide range of CNC machines and controllers, such as Haas, Mazak, Okuma, Siemens, Fanuc, Heidenhain, and more. You can choose from over 11,000 post processors or create your own with GibbsCAM's post processor generator. You can also simulate and verify your toolpaths with GibbsCAM's built-in machine simulation and cut part rendering capabilities.

                  -

                  With GibbsCAM 2013, you can be confident that you are using a CAM software that is reliable, accurate, and efficient. GibbsCAM 2013 has been certified by various CAD vendors and machine tool manufacturers for its quality and compatibility. It has also been tested and proven by thousands of satisfied customers around the world.

                  -

                  -

                  Don't miss this opportunity to take your CNC machining to the next level with GibbsCAM 2013. Download GibbsCAM 2013 V10500 X64 X86torrent today and see for yourself why GibbsCAM is the CAM industry's ease-of-use leader.

                  -```

                  cec2833e83
                  -
                  -
                  \ No newline at end of file diff --git a/spaces/tioseFevbu/cartoon-converter/scripts/ManichithrathazhuHdMovieDownload720p.md b/spaces/tioseFevbu/cartoon-converter/scripts/ManichithrathazhuHdMovieDownload720p.md deleted file mode 100644 index b560e5d756136ed6c27484a93a47ecfe12dcde9d..0000000000000000000000000000000000000000 --- a/spaces/tioseFevbu/cartoon-converter/scripts/ManichithrathazhuHdMovieDownload720p.md +++ /dev/null @@ -1,14 +0,0 @@ -
                  -

                  Manichithrathazhu: A Classic Malayalam Horror-Comedy Film

                  -

                  Manichithrathazhu is a 1993 Malayalam film directed by Fazil and starring Mohanlal, Shobana, Suresh Gopi and Vinaya Prasad. The film is based on a true story of a haunted mansion in Kerala, where a young couple moves in and faces paranormal activities. The film is widely regarded as one of the best horror-comedy films in Indian cinema, and won several awards, including the National Film Award for Best Actress for Shobana.

                  -

                  The film was remade in several languages, such as Tamil (Chandramukhi), Hindi (Bhool Bhulaiyaa), Kannada (Apthamitra) and Bengali (Rajmohol). However, none of them could match the original in terms of popularity and critical acclaim. The film is also known for its memorable songs, composed by M.G. Radhakrishnan and written by Bichu Thirumala.

                  -

                  ManichithrathazhuHdMovieDownload720p


                  Download Ziphttps://urlcod.com/2uHxKr



                  -

                  If you are looking for a thrilling and entertaining film to watch, you can stream Manichithrathazhu on Disney+ Hotstar[^1^]. The film is available in HD quality with 720p resolution. You can also download the film offline and watch it anytime you want. Don't miss this classic film that will keep you hooked till the end.

                  The film revolves around the mystery of Nagavalli, a dancer who was killed by her lover Ramanathan, a king of a nearby province. Nagavalli's spirit is believed to haunt the mansion and possess anyone who tries to unravel her secrets. Ganga becomes fascinated by Nagavalli's story and starts behaving like her, much to the horror of everyone. She even tries to kill Nakulan, whom she mistakes for Ramanathan.

                  -

                  Dr. Sunny Joseph (Mohanlal), a psychiatrist and Nakulan's friend, arrives to help Ganga. He soon realizes that Ganga is suffering from multiple personality disorder, and that Nagavalli is one of her alter egos. He also suspects that someone is trying to manipulate Ganga's condition for their own benefit. He sets out to find the truth behind Nagavalli's death and cure Ganga of her disorder.

                  -

                  The film has many twists and turns, as well as comic moments involving Dr. Sunny and his antics. The climax reveals the real culprit behind the mystery and the motive for their actions. The film ends with Ganga being freed from Nagavalli's influence and reuniting with Nakulan.

                  Cast and Crew

                  -

                  Manichithrathazhu boasts of a stellar cast and crew, who have contributed to the success of the film. The film was directed by Fazil, who is known for his family dramas and romantic comedies. He collaborated with three other directors, Sibi Malayil, Priyadarshan and Siddique-Lal, who served as the second-unit directors. The film was written by Madhu Muttam, who based the story on a real-life incident that happened in his ancestral house. The film was produced by Swargachitra Appachan, under his banner Swargachitra.

                  -

                  The film features some of the finest actors of Malayalam cinema, who have delivered memorable performances. Shobana plays the dual role of Ganga and Nagavalli, and won the National Film Award for Best Actress for her portrayal. Mohanlal plays Dr. Sunny Joseph, a psychiatrist and Nakulan's friend, who brings comic relief and suspense to the film. Suresh Gopi plays Nakulan, a modern and rational engineer, who is married to Ganga. Vinaya Prasad plays Sridevi, Nakulan's cousin and ex-fiancée, who is jealous of Ganga. Nedumudi Venu plays Thampi, Nakulan's uncle and the head of the family, who is superstitious and protective of his heritage. Innocent plays Unnithan, Thampi's brother-in-law and a local politician, who provides comic relief. K.P.A.C.Lalitha plays Bhasura, Thampi's sister and Unnithan's wife, who is a caring and supportive aunt. Sudheesh plays Chanthu, a servant boy who is loyal to Ganga. Thilakan plays Brahmadattan Nampoothirippadu, a tantric expert who tries to exorcise Nagavalli's spirit. K.B.Ganesh Kumar plays Dasappan Kutty, a local goon who is hired by Unnithan to scare Ganga. Rudra plays Alli, Unnithan and Bhasura's daughter and Nakulan's cousin. Kuthiravattam Pappu plays Kattuparamban, a caretaker of the mansion. Sridhar plays Mahadevan, a friend of Nakulan and Dr. Sunny.

                  -

                  -

                  The film also features the voices of Bhagyalakshmi and Durga Sundararajan, who dubbed for Shobana as Ganga and Nagavalli respectively. The film has a musical score by Johnson and songs composed by M.G.Radhakrishnan. The lyrics were written by Bichu Thirumala and Madhu Muttam. The songs were sung by K.J.Yesudas, K.S.Chithra and Sujatha Mohan. The cinematography was by Venu Isc, with Anandakuttan and Sunny Joseph as the second-unit cinematographers. The film was edited by T.R.Shekhar. The art direction was by Mani Suchithra and the costume design was by Velayudhan Keezhillam. The makeup department was headed by P.N.Mani.

                  cec2833e83
                  -
                  -
                  \ No newline at end of file diff --git a/spaces/tioseFevbu/cartoon-converter/scripts/Mercury Flatbed Scanner 1200cu Driver FREE Download For Windows 7 29.md b/spaces/tioseFevbu/cartoon-converter/scripts/Mercury Flatbed Scanner 1200cu Driver FREE Download For Windows 7 29.md deleted file mode 100644 index 47e0f0820424abc5a8a2a86df12e29efef790e77..0000000000000000000000000000000000000000 --- a/spaces/tioseFevbu/cartoon-converter/scripts/Mercury Flatbed Scanner 1200cu Driver FREE Download For Windows 7 29.md +++ /dev/null @@ -1,39 +0,0 @@ -
                  -

                  How to Install Mercury Flatbed Scanner 1200cu Driver on Windows 7

                  -

                  If you have a Mercury Flatbed Scanner 1200cu and want to use it on Windows 7, you may encounter some difficulties. The scanner is an old model that does not have a compatible driver for Windows 7. However, there are some ways to make it work on your computer. In this article, we will show you how to install the Mercury Flatbed Scanner 1200cu driver on Windows 7 using different methods.

                  -

                  Mercury Flatbed Scanner 1200cu Driver Download For Windows 7 29


                  Download File - https://urlcod.com/2uHy58



                  -

                  Method 1: Use a Compatible Driver from Another Manufacturer

                  -

                  One of the easiest ways to install the Mercury Flatbed Scanner 1200cu driver on Windows 7 is to use a compatible driver from another manufacturer. For example, you can use the driver for the Mustek BearPaw 1200CU Plus scanner, which has the same hardware as the Mercury scanner. To do this, follow these steps:

                  -
                    -
                  1. Download the Mustek BearPaw 1200CU Plus driver from here. Make sure you choose the correct version for your Windows 7 (32-bit or 64-bit).
                  2. -
                  3. Extract the downloaded file and run the setup.exe file.
                  4. -
                  5. Follow the on-screen instructions to install the driver.
                  6. -
                  7. Restart your computer and connect your scanner to a USB port.
                  8. -
                  9. Windows 7 should recognize your scanner and install it automatically.
                  10. -
                  -

                  You can now use your scanner with any scanning software that supports TWAIN drivers.

                  -

                  Method 2: Use a Generic Scanner Driver

                  -

                  Another way to install the Mercury Flatbed Scanner 1200cu driver on Windows 7 is to use a generic scanner driver that works with most scanners. For example, you can use the VueScan software, which is a scanning program that supports over 6000 scanners. To do this, follow these steps:

                  -

                  -
                    -
                  1. Download VueScan from here. You can use the free trial version or buy the full version.
                  2. -
                  3. Install VueScan on your computer.
                  4. -
                  5. Connect your scanner to a USB port and turn it on.
                  6. -
                  7. Launch VueScan and select your scanner from the list.
                  8. -
                  9. You can now scan your documents or images using VueScan.
                  10. -
                  -

                  VueScan has many features and options that allow you to adjust the scanning quality and settings.

                  -

                  Method 3: Use a Virtual Machine

                  -

                  A third way to install the Mercury Flatbed Scanner 1200cu driver on Windows 7 is to use a virtual machine. A virtual machine is a software that allows you to run another operating system inside your current one. For example, you can use VirtualBox to run Windows XP inside Windows 7. To do this, follow these steps:

                  -
                    -
                  1. Download VirtualBox from here. Install it on your computer.
                  2. -
                  3. Download a Windows XP ISO image from here. This is an archived version of Windows XP that you can use for free.
                  4. -
                  5. Create a new virtual machine in VirtualBox and select Windows XP as the operating system.
                  6. -
                  7. Follow the wizard to configure the virtual machine settings, such as memory, disk space, etc.
                  8. -
                  9. Mount the Windows XP ISO image as a virtual CD-ROM drive in the virtual machine.
                  10. -
                  11. Start the virtual machine and install Windows XP inside it.
                  12. -
                  13. Download the Mercury Flatbed Scanner 1200cu driver from here. Make sure you choose the correct version for Windows XP (32-bit or 64-bit).
                  14. -
                  15. Install the driver inside the virtual machine.
                  16. -
                  17. Connect your scanner to a USB port

                    e93f5a0c3f
                    -
                    -
                    \ No newline at end of file diff --git a/spaces/tioseFevbu/cartoon-converter/scripts/Mikroc Pro For Arm Keygen 601 [PORTABLE].md b/spaces/tioseFevbu/cartoon-converter/scripts/Mikroc Pro For Arm Keygen 601 [PORTABLE].md deleted file mode 100644 index fc4675ca66c2e7b8143232919beebd5d6ccc762a..0000000000000000000000000000000000000000 --- a/spaces/tioseFevbu/cartoon-converter/scripts/Mikroc Pro For Arm Keygen 601 [PORTABLE].md +++ /dev/null @@ -1,28 +0,0 @@ - -

                    How to Use mikroC PRO for ARM 601 to Develop Applications for ARM Microcontrollers

                    -

                    mikroC PRO for ARM is a powerful, feature-rich development tool for ARM microcontrollers. It is designed to provide the programmer with the easiest possible solution to developing applications for embedded systems, without compromising performance or control. In this article, we will show you how to use mikroC PRO for ARM 601 to create and debug your projects for various ARM MCUs.

                    -

                    mikroc pro for arm keygen 601


                    DOWNLOAD ››› https://urlcod.com/2uHwTc



                    -

                    What is mikroC PRO for ARM?

                    -

                    mikroC PRO for ARM is a full-featured ANSI C compiler for ARM Cortex-M0, M0+, M3, M4, and M7 microcontrollers. It supports over 1312 ARM MCUs from leading manufacturers, such as STMicroelectronics, NXP, Microchip, Texas Instruments, and more[^1^]. It also comes with a comprehensive set of libraries that cover data acquisition, memory, displays, conversions, communication, and more[^2^]. You can also install and manage third-party libraries using the package manager and library manager.

                    -

                    How to Install mikroC PRO for ARM 601?

                    -

                    To install mikroC PRO for ARM 601, you need to download the setup file from the official website[^1^]. The setup file is about 300 MB in size and requires Windows XP or later operating system. After downloading the file, run it and follow the instructions on the screen. You will need to enter your license key during the installation process. You can also choose the installation directory and the components you want to install.

                    -

                    How to Create a New Project in mikroC PRO for ARM 601?

                    -

                    To create a new project in mikroC PRO for ARM 601, you need to launch the IDE and click on the New Project icon on the toolbar. Alternatively, you can go to File > New Project. A wizard will guide you through the steps of creating a new project. You will need to select the target device, the project name and location, the project type (application or library), and the configuration settings (clock frequency, optimization level, etc.). You can also choose to add some predefined code templates or examples to your project.

                    -

                    -

                    How to Write Code in mikroC PRO for ARM 601?

                    -

                    To write code in mikroC PRO for ARM 601, you can use the built-in code editor that offers many features to help you write better code. Some of these features are:

                    -
                      -
                    • Code and Parameter Assistants: These are pop-up windows that show you the syntax and parameters of functions, variables, constants, etc. as you type.
                    • -
                    • Code Folding: This allows you to collapse or expand blocks of code to improve readability.
                    • -
                    • Syntax Highlighting: This colors different parts of your code according to their meaning and function.
                    • -
                    • Auto Correct: This automatically corrects common typing errors and typos.
                    • -
                    • Code Templates: These are snippets of code that you can insert into your code by typing a shortcut or selecting from a menu.
                    • -
                    • Active Comments: These are comments that can contain images, links, or executable commands.
                    • -
                    -

                    You can also use the Code Explorer window to view and navigate through your project structure, variables, and functions. You can also use the Find and Replace tool to search and modify your code.

                    -

                    How to Compile and Debug Code in mikroC PRO for ARM 601?

                    -

                    To compile your code in mikroC PRO for ARM 601, you can click on the Build icon on the toolbar or press F9. The compiler will check your code for errors and warnings and generate an output file in HEX format that can be programmed into your target device. You can also view detailed reports and graphs about your code size, RAM and ROM usage, assembly listing, calling tree, etc.

                    -

                    To debug your code in mikroC PRO for ARM 601, you can use either the hardware or software debugger. The hardware debugger requires a compatible programmer/debugger device such as mikroProg or Stellaris[^1^]. The software debugger simulates your code execution on your PC. Both debuggers support step-by-step execution, breakpoints, watch variables, stack trace, etc.

                    - 7b8c122e87
                    -
                    -
                    \ No newline at end of file diff --git a/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/pip/_vendor/chardet/langbulgarianmodel.py b/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/pip/_vendor/chardet/langbulgarianmodel.py deleted file mode 100644 index 994668219dd4def6404e0afd3f538b29a0e50f8b..0000000000000000000000000000000000000000 --- a/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/pip/_vendor/chardet/langbulgarianmodel.py +++ /dev/null @@ -1,4649 +0,0 @@ -from pip._vendor.chardet.sbcharsetprober import SingleByteCharSetModel - -# 3: Positive -# 2: Likely -# 1: Unlikely -# 0: Negative - -BULGARIAN_LANG_MODEL = { - 63: { # 'e' - 63: 1, # 'e' - 45: 0, # '\xad' - 31: 0, # 'А' - 32: 0, # 'Б' - 35: 0, # 'В' - 43: 0, # 'Г' - 37: 0, # 'Д' - 44: 0, # 'Е' - 55: 0, # 'Ж' - 47: 0, # 'З' - 40: 0, # 'И' - 59: 0, # 'Й' - 33: 0, # 'К' - 46: 0, # 'Л' - 38: 0, # 'М' - 36: 0, # 'Н' - 41: 0, # 'О' - 30: 0, # 'П' - 39: 0, # 'Р' - 28: 0, # 'С' - 34: 0, # 'Т' - 51: 0, # 'У' - 48: 0, # 'Ф' - 49: 0, # 'Х' - 53: 0, # 'Ц' - 50: 0, # 'Ч' - 54: 0, # 'Ш' - 57: 0, # 'Щ' - 61: 0, # 'Ъ' - 60: 0, # 'Ю' - 56: 0, # 'Я' - 1: 0, # 'а' - 18: 1, # 'б' - 9: 1, # 'в' - 20: 1, # 'г' - 11: 1, # 'д' - 3: 1, # 'е' - 23: 1, # 'ж' - 15: 1, # 'з' - 2: 0, # 'и' - 26: 1, # 'й' - 12: 1, # 'к' - 10: 1, # 'л' - 14: 1, # 'м' - 6: 1, # 'н' - 4: 1, # 'о' - 13: 1, # 'п' - 7: 1, # 'р' - 8: 1, # 'с' - 5: 1, # 'т' - 19: 0, # 'у' - 29: 1, # 'ф' - 25: 1, # 'х' - 22: 0, # 'ц' - 21: 1, # 'ч' - 27: 1, # 'ш' - 24: 1, # 'щ' - 17: 0, # 'ъ' - 52: 0, # 'ь' - 42: 0, # 'ю' - 16: 1, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 45: { # '\xad' - 63: 0, # 'e' - 45: 0, # '\xad' - 31: 0, # 'А' - 32: 1, # 'Б' - 35: 1, # 'В' - 43: 0, # 'Г' - 37: 1, # 'Д' - 44: 0, # 'Е' - 55: 0, # 'Ж' - 47: 0, # 'З' - 40: 1, # 'И' - 59: 0, # 'Й' - 33: 1, # 'К' - 46: 0, # 'Л' - 38: 1, # 'М' - 36: 0, # 'Н' - 41: 1, # 'О' - 30: 1, # 'П' - 39: 1, # 'Р' - 28: 1, # 'С' - 34: 0, # 'Т' - 51: 0, # 'У' - 48: 0, # 'Ф' - 49: 1, # 'Х' - 53: 0, # 'Ц' - 50: 0, # 'Ч' - 54: 0, # 'Ш' - 57: 0, # 'Щ' - 61: 0, # 'Ъ' - 60: 0, # 'Ю' - 56: 0, # 'Я' - 1: 0, # 'а' - 18: 0, # 'б' - 9: 0, # 'в' - 20: 0, # 'г' - 11: 0, # 'д' - 3: 0, # 'е' - 23: 0, # 'ж' - 15: 0, # 'з' - 2: 0, # 'и' - 26: 0, # 'й' - 12: 0, # 'к' - 10: 0, # 'л' - 14: 0, # 'м' - 6: 0, # 'н' - 4: 0, # 'о' - 13: 0, # 'п' - 7: 0, # 'р' - 8: 0, # 'с' - 5: 0, # 'т' - 19: 0, # 'у' - 29: 0, # 'ф' - 25: 0, # 'х' - 22: 0, # 'ц' - 21: 0, # 'ч' - 27: 0, # 'ш' - 24: 0, # 'щ' - 17: 0, # 'ъ' - 52: 0, # 'ь' - 42: 0, # 'ю' - 16: 0, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 31: { # 'А' - 63: 0, # 'e' - 45: 1, # '\xad' - 31: 1, # 'А' - 32: 1, # 'Б' - 35: 2, # 'В' - 43: 1, # 'Г' - 37: 2, # 'Д' - 44: 2, # 'Е' - 55: 1, # 'Ж' - 47: 2, # 'З' - 40: 1, # 'И' - 59: 1, # 'Й' - 33: 1, # 'К' - 46: 2, # 'Л' - 38: 1, # 'М' - 36: 2, # 'Н' - 41: 1, # 'О' - 30: 2, # 'П' - 39: 2, # 'Р' - 28: 2, # 'С' - 34: 2, # 'Т' - 51: 1, # 'У' - 48: 2, # 'Ф' - 49: 1, # 'Х' - 53: 1, # 'Ц' - 50: 1, # 'Ч' - 54: 1, # 'Ш' - 57: 2, # 'Щ' - 61: 0, # 'Ъ' - 60: 0, # 'Ю' - 56: 1, # 'Я' - 1: 1, # 'а' - 18: 2, # 'б' - 9: 2, # 'в' - 20: 2, # 'г' - 11: 2, # 'д' - 3: 1, # 'е' - 23: 1, # 'ж' - 15: 2, # 'з' - 2: 0, # 'и' - 26: 2, # 'й' - 12: 2, # 'к' - 10: 3, # 'л' - 14: 2, # 'м' - 6: 3, # 'н' - 4: 0, # 'о' - 13: 2, # 'п' - 7: 2, # 'р' - 8: 2, # 'с' - 5: 2, # 'т' - 19: 1, # 'у' - 29: 2, # 'ф' - 25: 1, # 'х' - 22: 1, # 'ц' - 21: 1, # 'ч' - 27: 1, # 'ш' - 24: 0, # 'щ' - 17: 0, # 'ъ' - 52: 0, # 'ь' - 42: 0, # 'ю' - 16: 1, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 32: { # 'Б' - 63: 0, # 'e' - 45: 0, # '\xad' - 31: 2, # 'А' - 32: 2, # 'Б' - 35: 1, # 'В' - 43: 1, # 'Г' - 37: 2, # 'Д' - 44: 1, # 'Е' - 55: 1, # 'Ж' - 47: 2, # 'З' - 40: 1, # 'И' - 59: 0, # 'Й' - 33: 1, # 'К' - 46: 1, # 'Л' - 38: 1, # 'М' - 36: 2, # 'Н' - 41: 2, # 'О' - 30: 1, # 'П' - 39: 1, # 'Р' - 28: 2, # 'С' - 34: 2, # 'Т' - 51: 1, # 'У' - 48: 2, # 'Ф' - 49: 1, # 'Х' - 53: 1, # 'Ц' - 50: 1, # 'Ч' - 54: 0, # 'Ш' - 57: 1, # 'Щ' - 61: 2, # 'Ъ' - 60: 1, # 'Ю' - 56: 1, # 'Я' - 1: 3, # 'а' - 18: 0, # 'б' - 9: 0, # 'в' - 20: 0, # 'г' - 11: 1, # 'д' - 3: 3, # 'е' - 23: 0, # 'ж' - 15: 0, # 'з' - 2: 2, # 'и' - 26: 0, # 'й' - 12: 0, # 'к' - 10: 2, # 'л' - 14: 0, # 'м' - 6: 0, # 'н' - 4: 3, # 'о' - 13: 0, # 'п' - 7: 2, # 'р' - 8: 1, # 'с' - 5: 0, # 'т' - 19: 2, # 'у' - 29: 0, # 'ф' - 25: 1, # 'х' - 22: 0, # 'ц' - 21: 0, # 'ч' - 27: 0, # 'ш' - 24: 0, # 'щ' - 17: 3, # 'ъ' - 52: 1, # 'ь' - 42: 1, # 'ю' - 16: 2, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 35: { # 'В' - 63: 0, # 'e' - 45: 0, # '\xad' - 31: 2, # 'А' - 32: 1, # 'Б' - 35: 1, # 'В' - 43: 0, # 'Г' - 37: 1, # 'Д' - 44: 2, # 'Е' - 55: 0, # 'Ж' - 47: 0, # 'З' - 40: 2, # 'И' - 59: 0, # 'Й' - 33: 1, # 'К' - 46: 1, # 'Л' - 38: 1, # 'М' - 36: 1, # 'Н' - 41: 1, # 'О' - 30: 1, # 'П' - 39: 2, # 'Р' - 28: 2, # 'С' - 34: 1, # 'Т' - 51: 1, # 'У' - 48: 2, # 'Ф' - 49: 0, # 'Х' - 53: 1, # 'Ц' - 50: 0, # 'Ч' - 54: 0, # 'Ш' - 57: 0, # 'Щ' - 61: 1, # 'Ъ' - 60: 1, # 'Ю' - 56: 2, # 'Я' - 1: 3, # 'а' - 18: 1, # 'б' - 9: 0, # 'в' - 20: 0, # 'г' - 11: 1, # 'д' - 3: 3, # 'е' - 23: 1, # 'ж' - 15: 2, # 'з' - 2: 3, # 'и' - 26: 0, # 'й' - 12: 1, # 'к' - 10: 2, # 'л' - 14: 1, # 'м' - 6: 2, # 'н' - 4: 2, # 'о' - 13: 1, # 'п' - 7: 2, # 'р' - 8: 2, # 'с' - 5: 2, # 'т' - 19: 1, # 'у' - 29: 0, # 'ф' - 25: 1, # 'х' - 22: 0, # 'ц' - 21: 2, # 'ч' - 27: 0, # 'ш' - 24: 0, # 'щ' - 17: 2, # 'ъ' - 52: 1, # 'ь' - 42: 1, # 'ю' - 16: 1, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 43: { # 'Г' - 63: 0, # 'e' - 45: 0, # '\xad' - 31: 2, # 'А' - 32: 1, # 'Б' - 35: 0, # 'В' - 43: 0, # 'Г' - 37: 1, # 'Д' - 44: 2, # 'Е' - 55: 0, # 'Ж' - 47: 1, # 'З' - 40: 1, # 'И' - 59: 0, # 'Й' - 33: 1, # 'К' - 46: 1, # 'Л' - 38: 0, # 'М' - 36: 1, # 'Н' - 41: 1, # 'О' - 30: 0, # 'П' - 39: 1, # 'Р' - 28: 1, # 'С' - 34: 0, # 'Т' - 51: 1, # 'У' - 48: 1, # 'Ф' - 49: 0, # 'Х' - 53: 0, # 'Ц' - 50: 0, # 'Ч' - 54: 0, # 'Ш' - 57: 1, # 'Щ' - 61: 1, # 'Ъ' - 60: 0, # 'Ю' - 56: 0, # 'Я' - 1: 2, # 'а' - 18: 1, # 'б' - 9: 1, # 'в' - 20: 0, # 'г' - 11: 1, # 'д' - 3: 3, # 'е' - 23: 1, # 'ж' - 15: 0, # 'з' - 2: 2, # 'и' - 26: 0, # 'й' - 12: 1, # 'к' - 10: 2, # 'л' - 14: 1, # 'м' - 6: 1, # 'н' - 4: 2, # 'о' - 13: 0, # 'п' - 7: 2, # 'р' - 8: 0, # 'с' - 5: 0, # 'т' - 19: 2, # 'у' - 29: 0, # 'ф' - 25: 0, # 'х' - 22: 0, # 'ц' - 21: 0, # 'ч' - 27: 0, # 'ш' - 24: 1, # 'щ' - 17: 2, # 'ъ' - 52: 1, # 'ь' - 42: 1, # 'ю' - 16: 1, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 37: { # 'Д' - 63: 0, # 'e' - 45: 0, # '\xad' - 31: 2, # 'А' - 32: 1, # 'Б' - 35: 2, # 'В' - 43: 1, # 'Г' - 37: 2, # 'Д' - 44: 2, # 'Е' - 55: 2, # 'Ж' - 47: 1, # 'З' - 40: 2, # 'И' - 59: 0, # 'Й' - 33: 1, # 'К' - 46: 1, # 'Л' - 38: 1, # 'М' - 36: 1, # 'Н' - 41: 2, # 'О' - 30: 2, # 'П' - 39: 1, # 'Р' - 28: 2, # 'С' - 34: 1, # 'Т' - 51: 1, # 'У' - 48: 1, # 'Ф' - 49: 0, # 'Х' - 53: 1, # 'Ц' - 50: 1, # 'Ч' - 54: 0, # 'Ш' - 57: 0, # 'Щ' - 61: 1, # 'Ъ' - 60: 1, # 'Ю' - 56: 1, # 'Я' - 1: 3, # 'а' - 18: 0, # 'б' - 9: 2, # 'в' - 20: 0, # 'г' - 11: 0, # 'д' - 3: 3, # 'е' - 23: 3, # 'ж' - 15: 1, # 'з' - 2: 3, # 'и' - 26: 0, # 'й' - 12: 0, # 'к' - 10: 1, # 'л' - 14: 1, # 'м' - 6: 2, # 'н' - 4: 3, # 'о' - 13: 0, # 'п' - 7: 2, # 'р' - 8: 0, # 'с' - 5: 0, # 'т' - 19: 2, # 'у' - 29: 0, # 'ф' - 25: 0, # 'х' - 22: 0, # 'ц' - 21: 0, # 'ч' - 27: 0, # 'ш' - 24: 0, # 'щ' - 17: 2, # 'ъ' - 52: 1, # 'ь' - 42: 2, # 'ю' - 16: 1, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 44: { # 'Е' - 63: 0, # 'e' - 45: 0, # '\xad' - 31: 1, # 'А' - 32: 1, # 'Б' - 35: 2, # 'В' - 43: 1, # 'Г' - 37: 1, # 'Д' - 44: 1, # 'Е' - 55: 1, # 'Ж' - 47: 1, # 'З' - 40: 1, # 'И' - 59: 1, # 'Й' - 33: 2, # 'К' - 46: 2, # 'Л' - 38: 1, # 'М' - 36: 2, # 'Н' - 41: 2, # 'О' - 30: 1, # 'П' - 39: 2, # 'Р' - 28: 2, # 'С' - 34: 2, # 'Т' - 51: 1, # 'У' - 48: 2, # 'Ф' - 49: 1, # 'Х' - 53: 2, # 'Ц' - 50: 1, # 'Ч' - 54: 1, # 'Ш' - 57: 1, # 'Щ' - 61: 0, # 'Ъ' - 60: 0, # 'Ю' - 56: 1, # 'Я' - 1: 0, # 'а' - 18: 1, # 'б' - 9: 2, # 'в' - 20: 1, # 'г' - 11: 2, # 'д' - 3: 0, # 'е' - 23: 1, # 'ж' - 15: 1, # 'з' - 2: 0, # 'и' - 26: 1, # 'й' - 12: 2, # 'к' - 10: 2, # 'л' - 14: 2, # 'м' - 6: 2, # 'н' - 4: 0, # 'о' - 13: 1, # 'п' - 7: 2, # 'р' - 8: 2, # 'с' - 5: 1, # 'т' - 19: 1, # 'у' - 29: 1, # 'ф' - 25: 1, # 'х' - 22: 0, # 'ц' - 21: 1, # 'ч' - 27: 1, # 'ш' - 24: 1, # 'щ' - 17: 1, # 'ъ' - 52: 0, # 'ь' - 42: 1, # 'ю' - 16: 1, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 55: { # 'Ж' - 63: 0, # 'e' - 45: 0, # '\xad' - 31: 1, # 'А' - 32: 0, # 'Б' - 35: 1, # 'В' - 43: 0, # 'Г' - 37: 1, # 'Д' - 44: 1, # 'Е' - 55: 0, # 'Ж' - 47: 0, # 'З' - 40: 1, # 'И' - 59: 0, # 'Й' - 33: 1, # 'К' - 46: 0, # 'Л' - 38: 0, # 'М' - 36: 1, # 'Н' - 41: 1, # 'О' - 30: 0, # 'П' - 39: 0, # 'Р' - 28: 0, # 'С' - 34: 0, # 'Т' - 51: 1, # 'У' - 48: 0, # 'Ф' - 49: 0, # 'Х' - 53: 0, # 'Ц' - 50: 0, # 'Ч' - 54: 0, # 'Ш' - 57: 0, # 'Щ' - 61: 0, # 'Ъ' - 60: 0, # 'Ю' - 56: 0, # 'Я' - 1: 2, # 'а' - 18: 0, # 'б' - 9: 0, # 'в' - 20: 0, # 'г' - 11: 1, # 'д' - 3: 2, # 'е' - 23: 0, # 'ж' - 15: 0, # 'з' - 2: 2, # 'и' - 26: 0, # 'й' - 12: 0, # 'к' - 10: 0, # 'л' - 14: 0, # 'м' - 6: 0, # 'н' - 4: 2, # 'о' - 13: 1, # 'п' - 7: 1, # 'р' - 8: 0, # 'с' - 5: 0, # 'т' - 19: 1, # 'у' - 29: 0, # 'ф' - 25: 0, # 'х' - 22: 0, # 'ц' - 21: 0, # 'ч' - 27: 0, # 'ш' - 24: 0, # 'щ' - 17: 1, # 'ъ' - 52: 1, # 'ь' - 42: 1, # 'ю' - 16: 0, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 47: { # 'З' - 63: 0, # 'e' - 45: 0, # '\xad' - 31: 2, # 'А' - 32: 1, # 'Б' - 35: 1, # 'В' - 43: 1, # 'Г' - 37: 1, # 'Д' - 44: 1, # 'Е' - 55: 0, # 'Ж' - 47: 1, # 'З' - 40: 1, # 'И' - 59: 0, # 'Й' - 33: 1, # 'К' - 46: 1, # 'Л' - 38: 1, # 'М' - 36: 2, # 'Н' - 41: 1, # 'О' - 30: 1, # 'П' - 39: 1, # 'Р' - 28: 1, # 'С' - 34: 1, # 'Т' - 51: 1, # 'У' - 48: 0, # 'Ф' - 49: 1, # 'Х' - 53: 1, # 'Ц' - 50: 0, # 'Ч' - 54: 0, # 'Ш' - 57: 0, # 'Щ' - 61: 1, # 'Ъ' - 60: 0, # 'Ю' - 56: 1, # 'Я' - 1: 3, # 'а' - 18: 1, # 'б' - 9: 2, # 'в' - 20: 1, # 'г' - 11: 2, # 'д' - 3: 2, # 'е' - 23: 0, # 'ж' - 15: 0, # 'з' - 2: 1, # 'и' - 26: 0, # 'й' - 12: 0, # 'к' - 10: 2, # 'л' - 14: 1, # 'м' - 6: 1, # 'н' - 4: 1, # 'о' - 13: 0, # 'п' - 7: 1, # 'р' - 8: 0, # 'с' - 5: 0, # 'т' - 19: 1, # 'у' - 29: 0, # 'ф' - 25: 0, # 'х' - 22: 0, # 'ц' - 21: 0, # 'ч' - 27: 0, # 'ш' - 24: 0, # 'щ' - 17: 1, # 'ъ' - 52: 0, # 'ь' - 42: 1, # 'ю' - 16: 0, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 40: { # 'И' - 63: 0, # 'e' - 45: 1, # '\xad' - 31: 1, # 'А' - 32: 1, # 'Б' - 35: 1, # 'В' - 43: 1, # 'Г' - 37: 1, # 'Д' - 44: 2, # 'Е' - 55: 1, # 'Ж' - 47: 2, # 'З' - 40: 1, # 'И' - 59: 1, # 'Й' - 33: 2, # 'К' - 46: 2, # 'Л' - 38: 2, # 'М' - 36: 2, # 'Н' - 41: 1, # 'О' - 30: 1, # 'П' - 39: 2, # 'Р' - 28: 2, # 'С' - 34: 2, # 'Т' - 51: 0, # 'У' - 48: 1, # 'Ф' - 49: 1, # 'Х' - 53: 1, # 'Ц' - 50: 1, # 'Ч' - 54: 1, # 'Ш' - 57: 1, # 'Щ' - 61: 0, # 'Ъ' - 60: 0, # 'Ю' - 56: 2, # 'Я' - 1: 1, # 'а' - 18: 1, # 'б' - 9: 3, # 'в' - 20: 2, # 'г' - 11: 1, # 'д' - 3: 1, # 'е' - 23: 0, # 'ж' - 15: 3, # 'з' - 2: 0, # 'и' - 26: 1, # 'й' - 12: 1, # 'к' - 10: 2, # 'л' - 14: 2, # 'м' - 6: 2, # 'н' - 4: 0, # 'о' - 13: 1, # 'п' - 7: 2, # 'р' - 8: 2, # 'с' - 5: 2, # 'т' - 19: 0, # 'у' - 29: 1, # 'ф' - 25: 1, # 'х' - 22: 1, # 'ц' - 21: 1, # 'ч' - 27: 1, # 'ш' - 24: 1, # 'щ' - 17: 0, # 'ъ' - 52: 0, # 'ь' - 42: 0, # 'ю' - 16: 0, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 59: { # 'Й' - 63: 0, # 'e' - 45: 0, # '\xad' - 31: 0, # 'А' - 32: 0, # 'Б' - 35: 0, # 'В' - 43: 0, # 'Г' - 37: 1, # 'Д' - 44: 1, # 'Е' - 55: 0, # 'Ж' - 47: 0, # 'З' - 40: 0, # 'И' - 59: 0, # 'Й' - 33: 1, # 'К' - 46: 1, # 'Л' - 38: 1, # 'М' - 36: 1, # 'Н' - 41: 1, # 'О' - 30: 0, # 'П' - 39: 0, # 'Р' - 28: 1, # 'С' - 34: 1, # 'Т' - 51: 0, # 'У' - 48: 0, # 'Ф' - 49: 0, # 'Х' - 53: 0, # 'Ц' - 50: 1, # 'Ч' - 54: 0, # 'Ш' - 57: 0, # 'Щ' - 61: 0, # 'Ъ' - 60: 0, # 'Ю' - 56: 1, # 'Я' - 1: 0, # 'а' - 18: 0, # 'б' - 9: 0, # 'в' - 20: 0, # 'г' - 11: 0, # 'д' - 3: 1, # 'е' - 23: 0, # 'ж' - 15: 0, # 'з' - 2: 0, # 'и' - 26: 0, # 'й' - 12: 0, # 'к' - 10: 0, # 'л' - 14: 0, # 'м' - 6: 0, # 'н' - 4: 2, # 'о' - 13: 0, # 'п' - 7: 0, # 'р' - 8: 0, # 'с' - 5: 0, # 'т' - 19: 0, # 'у' - 29: 0, # 'ф' - 25: 0, # 'х' - 22: 0, # 'ц' - 21: 0, # 'ч' - 27: 0, # 'ш' - 24: 0, # 'щ' - 17: 1, # 'ъ' - 52: 0, # 'ь' - 42: 0, # 'ю' - 16: 0, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 33: { # 'К' - 63: 0, # 'e' - 45: 1, # '\xad' - 31: 2, # 'А' - 32: 1, # 'Б' - 35: 1, # 'В' - 43: 1, # 'Г' - 37: 1, # 'Д' - 44: 1, # 'Е' - 55: 0, # 'Ж' - 47: 1, # 'З' - 40: 2, # 'И' - 59: 0, # 'Й' - 33: 1, # 'К' - 46: 1, # 'Л' - 38: 0, # 'М' - 36: 2, # 'Н' - 41: 2, # 'О' - 30: 2, # 'П' - 39: 1, # 'Р' - 28: 2, # 'С' - 34: 1, # 'Т' - 51: 1, # 'У' - 48: 1, # 'Ф' - 49: 1, # 'Х' - 53: 1, # 'Ц' - 50: 0, # 'Ч' - 54: 0, # 'Ш' - 57: 0, # 'Щ' - 61: 1, # 'Ъ' - 60: 1, # 'Ю' - 56: 0, # 'Я' - 1: 3, # 'а' - 18: 0, # 'б' - 9: 1, # 'в' - 20: 0, # 'г' - 11: 0, # 'д' - 3: 2, # 'е' - 23: 1, # 'ж' - 15: 0, # 'з' - 2: 2, # 'и' - 26: 0, # 'й' - 12: 0, # 'к' - 10: 2, # 'л' - 14: 1, # 'м' - 6: 2, # 'н' - 4: 3, # 'о' - 13: 0, # 'п' - 7: 3, # 'р' - 8: 1, # 'с' - 5: 0, # 'т' - 19: 2, # 'у' - 29: 0, # 'ф' - 25: 1, # 'х' - 22: 0, # 'ц' - 21: 0, # 'ч' - 27: 1, # 'ш' - 24: 0, # 'щ' - 17: 2, # 'ъ' - 52: 1, # 'ь' - 42: 2, # 'ю' - 16: 0, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 46: { # 'Л' - 63: 1, # 'e' - 45: 0, # '\xad' - 31: 2, # 'А' - 32: 1, # 'Б' - 35: 1, # 'В' - 43: 2, # 'Г' - 37: 1, # 'Д' - 44: 2, # 'Е' - 55: 0, # 'Ж' - 47: 1, # 'З' - 40: 2, # 'И' - 59: 0, # 'Й' - 33: 1, # 'К' - 46: 1, # 'Л' - 38: 0, # 'М' - 36: 1, # 'Н' - 41: 2, # 'О' - 30: 1, # 'П' - 39: 0, # 'Р' - 28: 1, # 'С' - 34: 1, # 'Т' - 51: 1, # 'У' - 48: 0, # 'Ф' - 49: 1, # 'Х' - 53: 1, # 'Ц' - 50: 1, # 'Ч' - 54: 0, # 'Ш' - 57: 0, # 'Щ' - 61: 1, # 'Ъ' - 60: 1, # 'Ю' - 56: 1, # 'Я' - 1: 2, # 'а' - 18: 0, # 'б' - 9: 1, # 'в' - 20: 0, # 'г' - 11: 0, # 'д' - 3: 3, # 'е' - 23: 0, # 'ж' - 15: 0, # 'з' - 2: 2, # 'и' - 26: 0, # 'й' - 12: 0, # 'к' - 10: 0, # 'л' - 14: 0, # 'м' - 6: 0, # 'н' - 4: 2, # 'о' - 13: 0, # 'п' - 7: 0, # 'р' - 8: 0, # 'с' - 5: 0, # 'т' - 19: 2, # 'у' - 29: 0, # 'ф' - 25: 0, # 'х' - 22: 0, # 'ц' - 21: 0, # 'ч' - 27: 0, # 'ш' - 24: 0, # 'щ' - 17: 1, # 'ъ' - 52: 1, # 'ь' - 42: 2, # 'ю' - 16: 1, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 38: { # 'М' - 63: 0, # 'e' - 45: 0, # '\xad' - 31: 2, # 'А' - 32: 1, # 'Б' - 35: 2, # 'В' - 43: 0, # 'Г' - 37: 1, # 'Д' - 44: 1, # 'Е' - 55: 0, # 'Ж' - 47: 1, # 'З' - 40: 2, # 'И' - 59: 0, # 'Й' - 33: 1, # 'К' - 46: 1, # 'Л' - 38: 1, # 'М' - 36: 1, # 'Н' - 41: 2, # 'О' - 30: 1, # 'П' - 39: 1, # 'Р' - 28: 2, # 'С' - 34: 1, # 'Т' - 51: 1, # 'У' - 48: 1, # 'Ф' - 49: 0, # 'Х' - 53: 1, # 'Ц' - 50: 0, # 'Ч' - 54: 0, # 'Ш' - 57: 0, # 'Щ' - 61: 1, # 'Ъ' - 60: 0, # 'Ю' - 56: 1, # 'Я' - 1: 3, # 'а' - 18: 0, # 'б' - 9: 0, # 'в' - 20: 0, # 'г' - 11: 0, # 'д' - 3: 3, # 'е' - 23: 0, # 'ж' - 15: 0, # 'з' - 2: 3, # 'и' - 26: 0, # 'й' - 12: 0, # 'к' - 10: 2, # 'л' - 14: 0, # 'м' - 6: 2, # 'н' - 4: 3, # 'о' - 13: 0, # 'п' - 7: 1, # 'р' - 8: 0, # 'с' - 5: 0, # 'т' - 19: 2, # 'у' - 29: 0, # 'ф' - 25: 0, # 'х' - 22: 0, # 'ц' - 21: 0, # 'ч' - 27: 0, # 'ш' - 24: 0, # 'щ' - 17: 2, # 'ъ' - 52: 1, # 'ь' - 42: 2, # 'ю' - 16: 1, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 36: { # 'Н' - 63: 0, # 'e' - 45: 0, # '\xad' - 31: 2, # 'А' - 32: 2, # 'Б' - 35: 1, # 'В' - 43: 1, # 'Г' - 37: 2, # 'Д' - 44: 2, # 'Е' - 55: 1, # 'Ж' - 47: 1, # 'З' - 40: 2, # 'И' - 59: 1, # 'Й' - 33: 2, # 'К' - 46: 1, # 'Л' - 38: 1, # 'М' - 36: 1, # 'Н' - 41: 2, # 'О' - 30: 1, # 'П' - 39: 1, # 'Р' - 28: 2, # 'С' - 34: 2, # 'Т' - 51: 1, # 'У' - 48: 1, # 'Ф' - 49: 1, # 'Х' - 53: 1, # 'Ц' - 50: 1, # 'Ч' - 54: 1, # 'Ш' - 57: 0, # 'Щ' - 61: 1, # 'Ъ' - 60: 1, # 'Ю' - 56: 1, # 'Я' - 1: 3, # 'а' - 18: 0, # 'б' - 9: 0, # 'в' - 20: 1, # 'г' - 11: 0, # 'д' - 3: 3, # 'е' - 23: 0, # 'ж' - 15: 0, # 'з' - 2: 3, # 'и' - 26: 0, # 'й' - 12: 0, # 'к' - 10: 0, # 'л' - 14: 0, # 'м' - 6: 0, # 'н' - 4: 3, # 'о' - 13: 0, # 'п' - 7: 0, # 'р' - 8: 0, # 'с' - 5: 1, # 'т' - 19: 1, # 'у' - 29: 0, # 'ф' - 25: 0, # 'х' - 22: 0, # 'ц' - 21: 0, # 'ч' - 27: 1, # 'ш' - 24: 0, # 'щ' - 17: 0, # 'ъ' - 52: 0, # 'ь' - 42: 2, # 'ю' - 16: 2, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 41: { # 'О' - 63: 0, # 'e' - 45: 0, # '\xad' - 31: 1, # 'А' - 32: 1, # 'Б' - 35: 2, # 'В' - 43: 1, # 'Г' - 37: 2, # 'Д' - 44: 1, # 'Е' - 55: 1, # 'Ж' - 47: 1, # 'З' - 40: 1, # 'И' - 59: 1, # 'Й' - 33: 2, # 'К' - 46: 2, # 'Л' - 38: 2, # 'М' - 36: 2, # 'Н' - 41: 2, # 'О' - 30: 1, # 'П' - 39: 2, # 'Р' - 28: 2, # 'С' - 34: 2, # 'Т' - 51: 1, # 'У' - 48: 1, # 'Ф' - 49: 1, # 'Х' - 53: 0, # 'Ц' - 50: 1, # 'Ч' - 54: 1, # 'Ш' - 57: 1, # 'Щ' - 61: 0, # 'Ъ' - 60: 0, # 'Ю' - 56: 1, # 'Я' - 1: 1, # 'а' - 18: 2, # 'б' - 9: 2, # 'в' - 20: 2, # 'г' - 11: 1, # 'д' - 3: 1, # 'е' - 23: 1, # 'ж' - 15: 1, # 'з' - 2: 0, # 'и' - 26: 1, # 'й' - 12: 2, # 'к' - 10: 2, # 'л' - 14: 1, # 'м' - 6: 1, # 'н' - 4: 0, # 'о' - 13: 2, # 'п' - 7: 2, # 'р' - 8: 2, # 'с' - 5: 3, # 'т' - 19: 1, # 'у' - 29: 1, # 'ф' - 25: 1, # 'х' - 22: 1, # 'ц' - 21: 2, # 'ч' - 27: 0, # 'ш' - 24: 2, # 'щ' - 17: 0, # 'ъ' - 52: 0, # 'ь' - 42: 0, # 'ю' - 16: 1, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 30: { # 'П' - 63: 0, # 'e' - 45: 1, # '\xad' - 31: 2, # 'А' - 32: 1, # 'Б' - 35: 1, # 'В' - 43: 1, # 'Г' - 37: 1, # 'Д' - 44: 1, # 'Е' - 55: 0, # 'Ж' - 47: 1, # 'З' - 40: 2, # 'И' - 59: 0, # 'Й' - 33: 1, # 'К' - 46: 1, # 'Л' - 38: 1, # 'М' - 36: 1, # 'Н' - 41: 2, # 'О' - 30: 2, # 'П' - 39: 2, # 'Р' - 28: 2, # 'С' - 34: 1, # 'Т' - 51: 2, # 'У' - 48: 1, # 'Ф' - 49: 0, # 'Х' - 53: 1, # 'Ц' - 50: 1, # 'Ч' - 54: 1, # 'Ш' - 57: 0, # 'Щ' - 61: 1, # 'Ъ' - 60: 1, # 'Ю' - 56: 0, # 'Я' - 1: 3, # 'а' - 18: 0, # 'б' - 9: 0, # 'в' - 20: 0, # 'г' - 11: 2, # 'д' - 3: 3, # 'е' - 23: 0, # 'ж' - 15: 0, # 'з' - 2: 2, # 'и' - 26: 0, # 'й' - 12: 1, # 'к' - 10: 3, # 'л' - 14: 0, # 'м' - 6: 1, # 'н' - 4: 3, # 'о' - 13: 0, # 'п' - 7: 3, # 'р' - 8: 1, # 'с' - 5: 1, # 'т' - 19: 2, # 'у' - 29: 1, # 'ф' - 25: 1, # 'х' - 22: 0, # 'ц' - 21: 1, # 'ч' - 27: 1, # 'ш' - 24: 0, # 'щ' - 17: 2, # 'ъ' - 52: 1, # 'ь' - 42: 1, # 'ю' - 16: 1, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 39: { # 'Р' - 63: 0, # 'e' - 45: 1, # '\xad' - 31: 2, # 'А' - 32: 1, # 'Б' - 35: 1, # 'В' - 43: 2, # 'Г' - 37: 2, # 'Д' - 44: 2, # 'Е' - 55: 0, # 'Ж' - 47: 1, # 'З' - 40: 2, # 'И' - 59: 0, # 'Й' - 33: 1, # 'К' - 46: 0, # 'Л' - 38: 1, # 'М' - 36: 1, # 'Н' - 41: 2, # 'О' - 30: 2, # 'П' - 39: 1, # 'Р' - 28: 1, # 'С' - 34: 1, # 'Т' - 51: 1, # 'У' - 48: 1, # 'Ф' - 49: 1, # 'Х' - 53: 1, # 'Ц' - 50: 1, # 'Ч' - 54: 0, # 'Ш' - 57: 0, # 'Щ' - 61: 1, # 'Ъ' - 60: 1, # 'Ю' - 56: 1, # 'Я' - 1: 3, # 'а' - 18: 0, # 'б' - 9: 0, # 'в' - 20: 0, # 'г' - 11: 0, # 'д' - 3: 2, # 'е' - 23: 0, # 'ж' - 15: 0, # 'з' - 2: 2, # 'и' - 26: 0, # 'й' - 12: 0, # 'к' - 10: 0, # 'л' - 14: 0, # 'м' - 6: 1, # 'н' - 4: 3, # 'о' - 13: 0, # 'п' - 7: 0, # 'р' - 8: 1, # 'с' - 5: 0, # 'т' - 19: 3, # 'у' - 29: 0, # 'ф' - 25: 0, # 'х' - 22: 0, # 'ц' - 21: 0, # 'ч' - 27: 0, # 'ш' - 24: 0, # 'щ' - 17: 1, # 'ъ' - 52: 0, # 'ь' - 42: 1, # 'ю' - 16: 1, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 28: { # 'С' - 63: 1, # 'e' - 45: 0, # '\xad' - 31: 3, # 'А' - 32: 2, # 'Б' - 35: 2, # 'В' - 43: 1, # 'Г' - 37: 2, # 'Д' - 44: 2, # 'Е' - 55: 1, # 'Ж' - 47: 1, # 'З' - 40: 2, # 'И' - 59: 0, # 'Й' - 33: 2, # 'К' - 46: 1, # 'Л' - 38: 1, # 'М' - 36: 1, # 'Н' - 41: 2, # 'О' - 30: 2, # 'П' - 39: 1, # 'Р' - 28: 2, # 'С' - 34: 2, # 'Т' - 51: 1, # 'У' - 48: 1, # 'Ф' - 49: 0, # 'Х' - 53: 0, # 'Ц' - 50: 0, # 'Ч' - 54: 0, # 'Ш' - 57: 0, # 'Щ' - 61: 1, # 'Ъ' - 60: 1, # 'Ю' - 56: 1, # 'Я' - 1: 3, # 'а' - 18: 1, # 'б' - 9: 2, # 'в' - 20: 1, # 'г' - 11: 1, # 'д' - 3: 3, # 'е' - 23: 0, # 'ж' - 15: 0, # 'з' - 2: 3, # 'и' - 26: 0, # 'й' - 12: 2, # 'к' - 10: 3, # 'л' - 14: 2, # 'м' - 6: 1, # 'н' - 4: 3, # 'о' - 13: 3, # 'п' - 7: 2, # 'р' - 8: 0, # 'с' - 5: 3, # 'т' - 19: 2, # 'у' - 29: 2, # 'ф' - 25: 1, # 'х' - 22: 1, # 'ц' - 21: 1, # 'ч' - 27: 0, # 'ш' - 24: 0, # 'щ' - 17: 3, # 'ъ' - 52: 1, # 'ь' - 42: 1, # 'ю' - 16: 1, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 34: { # 'Т' - 63: 0, # 'e' - 45: 0, # '\xad' - 31: 2, # 'А' - 32: 2, # 'Б' - 35: 1, # 'В' - 43: 0, # 'Г' - 37: 1, # 'Д' - 44: 2, # 'Е' - 55: 0, # 'Ж' - 47: 0, # 'З' - 40: 2, # 'И' - 59: 0, # 'Й' - 33: 2, # 'К' - 46: 1, # 'Л' - 38: 1, # 'М' - 36: 1, # 'Н' - 41: 2, # 'О' - 30: 1, # 'П' - 39: 2, # 'Р' - 28: 2, # 'С' - 34: 1, # 'Т' - 51: 1, # 'У' - 48: 1, # 'Ф' - 49: 0, # 'Х' - 53: 1, # 'Ц' - 50: 0, # 'Ч' - 54: 0, # 'Ш' - 57: 0, # 'Щ' - 61: 1, # 'Ъ' - 60: 0, # 'Ю' - 56: 1, # 'Я' - 1: 3, # 'а' - 18: 1, # 'б' - 9: 1, # 'в' - 20: 0, # 'г' - 11: 0, # 'д' - 3: 3, # 'е' - 23: 0, # 'ж' - 15: 0, # 'з' - 2: 2, # 'и' - 26: 0, # 'й' - 12: 1, # 'к' - 10: 1, # 'л' - 14: 0, # 'м' - 6: 0, # 'н' - 4: 3, # 'о' - 13: 0, # 'п' - 7: 3, # 'р' - 8: 0, # 'с' - 5: 0, # 'т' - 19: 2, # 'у' - 29: 0, # 'ф' - 25: 0, # 'х' - 22: 0, # 'ц' - 21: 0, # 'ч' - 27: 0, # 'ш' - 24: 0, # 'щ' - 17: 2, # 'ъ' - 52: 0, # 'ь' - 42: 1, # 'ю' - 16: 2, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 51: { # 'У' - 63: 0, # 'e' - 45: 1, # '\xad' - 31: 1, # 'А' - 32: 1, # 'Б' - 35: 1, # 'В' - 43: 1, # 'Г' - 37: 1, # 'Д' - 44: 2, # 'Е' - 55: 1, # 'Ж' - 47: 1, # 'З' - 40: 1, # 'И' - 59: 0, # 'Й' - 33: 1, # 'К' - 46: 1, # 'Л' - 38: 1, # 'М' - 36: 1, # 'Н' - 41: 0, # 'О' - 30: 1, # 'П' - 39: 1, # 'Р' - 28: 1, # 'С' - 34: 2, # 'Т' - 51: 0, # 'У' - 48: 1, # 'Ф' - 49: 1, # 'Х' - 53: 1, # 'Ц' - 50: 1, # 'Ч' - 54: 1, # 'Ш' - 57: 0, # 'Щ' - 61: 0, # 'Ъ' - 60: 0, # 'Ю' - 56: 0, # 'Я' - 1: 1, # 'а' - 18: 1, # 'б' - 9: 2, # 'в' - 20: 1, # 'г' - 11: 1, # 'д' - 3: 2, # 'е' - 23: 1, # 'ж' - 15: 1, # 'з' - 2: 2, # 'и' - 26: 1, # 'й' - 12: 2, # 'к' - 10: 1, # 'л' - 14: 1, # 'м' - 6: 2, # 'н' - 4: 2, # 'о' - 13: 1, # 'п' - 7: 1, # 'р' - 8: 2, # 'с' - 5: 1, # 'т' - 19: 1, # 'у' - 29: 0, # 'ф' - 25: 1, # 'х' - 22: 0, # 'ц' - 21: 2, # 'ч' - 27: 1, # 'ш' - 24: 0, # 'щ' - 17: 1, # 'ъ' - 52: 0, # 'ь' - 42: 0, # 'ю' - 16: 0, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 48: { # 'Ф' - 63: 0, # 'e' - 45: 0, # '\xad' - 31: 2, # 'А' - 32: 1, # 'Б' - 35: 1, # 'В' - 43: 0, # 'Г' - 37: 0, # 'Д' - 44: 1, # 'Е' - 55: 0, # 'Ж' - 47: 0, # 'З' - 40: 2, # 'И' - 59: 0, # 'Й' - 33: 1, # 'К' - 46: 1, # 'Л' - 38: 0, # 'М' - 36: 1, # 'Н' - 41: 1, # 'О' - 30: 2, # 'П' - 39: 1, # 'Р' - 28: 2, # 'С' - 34: 1, # 'Т' - 51: 1, # 'У' - 48: 0, # 'Ф' - 49: 0, # 'Х' - 53: 0, # 'Ц' - 50: 0, # 'Ч' - 54: 0, # 'Ш' - 57: 0, # 'Щ' - 61: 0, # 'Ъ' - 60: 0, # 'Ю' - 56: 0, # 'Я' - 1: 2, # 'а' - 18: 0, # 'б' - 9: 0, # 'в' - 20: 0, # 'г' - 11: 0, # 'д' - 3: 2, # 'е' - 23: 0, # 'ж' - 15: 0, # 'з' - 2: 2, # 'и' - 26: 0, # 'й' - 12: 0, # 'к' - 10: 2, # 'л' - 14: 0, # 'м' - 6: 0, # 'н' - 4: 2, # 'о' - 13: 0, # 'п' - 7: 2, # 'р' - 8: 0, # 'с' - 5: 0, # 'т' - 19: 1, # 'у' - 29: 0, # 'ф' - 25: 0, # 'х' - 22: 0, # 'ц' - 21: 0, # 'ч' - 27: 0, # 'ш' - 24: 0, # 'щ' - 17: 1, # 'ъ' - 52: 1, # 'ь' - 42: 1, # 'ю' - 16: 0, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 49: { # 'Х' - 63: 0, # 'e' - 45: 0, # '\xad' - 31: 1, # 'А' - 32: 0, # 'Б' - 35: 1, # 'В' - 43: 1, # 'Г' - 37: 1, # 'Д' - 44: 1, # 'Е' - 55: 0, # 'Ж' - 47: 0, # 'З' - 40: 1, # 'И' - 59: 0, # 'Й' - 33: 0, # 'К' - 46: 1, # 'Л' - 38: 1, # 'М' - 36: 1, # 'Н' - 41: 1, # 'О' - 30: 1, # 'П' - 39: 1, # 'Р' - 28: 0, # 'С' - 34: 0, # 'Т' - 51: 0, # 'У' - 48: 0, # 'Ф' - 49: 1, # 'Х' - 53: 0, # 'Ц' - 50: 0, # 'Ч' - 54: 0, # 'Ш' - 57: 0, # 'Щ' - 61: 0, # 'Ъ' - 60: 0, # 'Ю' - 56: 0, # 'Я' - 1: 2, # 'а' - 18: 0, # 'б' - 9: 1, # 'в' - 20: 0, # 'г' - 11: 0, # 'д' - 3: 2, # 'е' - 23: 0, # 'ж' - 15: 0, # 'з' - 2: 2, # 'и' - 26: 0, # 'й' - 12: 0, # 'к' - 10: 1, # 'л' - 14: 1, # 'м' - 6: 0, # 'н' - 4: 2, # 'о' - 13: 0, # 'п' - 7: 2, # 'р' - 8: 0, # 'с' - 5: 0, # 'т' - 19: 2, # 'у' - 29: 0, # 'ф' - 25: 0, # 'х' - 22: 0, # 'ц' - 21: 0, # 'ч' - 27: 0, # 'ш' - 24: 0, # 'щ' - 17: 2, # 'ъ' - 52: 1, # 'ь' - 42: 1, # 'ю' - 16: 0, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 53: { # 'Ц' - 63: 0, # 'e' - 45: 0, # '\xad' - 31: 1, # 'А' - 32: 0, # 'Б' - 35: 1, # 'В' - 43: 0, # 'Г' - 37: 0, # 'Д' - 44: 1, # 'Е' - 55: 0, # 'Ж' - 47: 0, # 'З' - 40: 2, # 'И' - 59: 0, # 'Й' - 33: 2, # 'К' - 46: 1, # 'Л' - 38: 1, # 'М' - 36: 0, # 'Н' - 41: 0, # 'О' - 30: 0, # 'П' - 39: 1, # 'Р' - 28: 2, # 'С' - 34: 0, # 'Т' - 51: 1, # 'У' - 48: 0, # 'Ф' - 49: 0, # 'Х' - 53: 0, # 'Ц' - 50: 0, # 'Ч' - 54: 0, # 'Ш' - 57: 0, # 'Щ' - 61: 0, # 'Ъ' - 60: 0, # 'Ю' - 56: 0, # 'Я' - 1: 2, # 'а' - 18: 0, # 'б' - 9: 2, # 'в' - 20: 0, # 'г' - 11: 0, # 'д' - 3: 2, # 'е' - 23: 0, # 'ж' - 15: 1, # 'з' - 2: 2, # 'и' - 26: 0, # 'й' - 12: 0, # 'к' - 10: 0, # 'л' - 14: 0, # 'м' - 6: 0, # 'н' - 4: 1, # 'о' - 13: 0, # 'п' - 7: 1, # 'р' - 8: 0, # 'с' - 5: 0, # 'т' - 19: 1, # 'у' - 29: 0, # 'ф' - 25: 0, # 'х' - 22: 0, # 'ц' - 21: 0, # 'ч' - 27: 0, # 'ш' - 24: 0, # 'щ' - 17: 1, # 'ъ' - 52: 0, # 'ь' - 42: 1, # 'ю' - 16: 1, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 50: { # 'Ч' - 63: 0, # 'e' - 45: 0, # '\xad' - 31: 2, # 'А' - 32: 1, # 'Б' - 35: 0, # 'В' - 43: 0, # 'Г' - 37: 0, # 'Д' - 44: 1, # 'Е' - 55: 0, # 'Ж' - 47: 1, # 'З' - 40: 1, # 'И' - 59: 0, # 'Й' - 33: 1, # 'К' - 46: 1, # 'Л' - 38: 0, # 'М' - 36: 1, # 'Н' - 41: 1, # 'О' - 30: 0, # 'П' - 39: 0, # 'Р' - 28: 0, # 'С' - 34: 0, # 'Т' - 51: 1, # 'У' - 48: 0, # 'Ф' - 49: 0, # 'Х' - 53: 0, # 'Ц' - 50: 0, # 'Ч' - 54: 0, # 'Ш' - 57: 0, # 'Щ' - 61: 0, # 'Ъ' - 60: 0, # 'Ю' - 56: 0, # 'Я' - 1: 2, # 'а' - 18: 0, # 'б' - 9: 0, # 'в' - 20: 0, # 'г' - 11: 0, # 'д' - 3: 3, # 'е' - 23: 1, # 'ж' - 15: 0, # 'з' - 2: 2, # 'и' - 26: 0, # 'й' - 12: 0, # 'к' - 10: 1, # 'л' - 14: 0, # 'м' - 6: 0, # 'н' - 4: 2, # 'о' - 13: 0, # 'п' - 7: 1, # 'р' - 8: 0, # 'с' - 5: 0, # 'т' - 19: 2, # 'у' - 29: 0, # 'ф' - 25: 0, # 'х' - 22: 0, # 'ц' - 21: 0, # 'ч' - 27: 0, # 'ш' - 24: 0, # 'щ' - 17: 1, # 'ъ' - 52: 1, # 'ь' - 42: 0, # 'ю' - 16: 0, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 54: { # 'Ш' - 63: 0, # 'e' - 45: 0, # '\xad' - 31: 1, # 'А' - 32: 0, # 'Б' - 35: 0, # 'В' - 43: 0, # 'Г' - 37: 0, # 'Д' - 44: 1, # 'Е' - 55: 0, # 'Ж' - 47: 1, # 'З' - 40: 1, # 'И' - 59: 0, # 'Й' - 33: 1, # 'К' - 46: 0, # 'Л' - 38: 0, # 'М' - 36: 1, # 'Н' - 41: 1, # 'О' - 30: 0, # 'П' - 39: 0, # 'Р' - 28: 0, # 'С' - 34: 0, # 'Т' - 51: 1, # 'У' - 48: 0, # 'Ф' - 49: 0, # 'Х' - 53: 0, # 'Ц' - 50: 0, # 'Ч' - 54: 0, # 'Ш' - 57: 0, # 'Щ' - 61: 0, # 'Ъ' - 60: 0, # 'Ю' - 56: 0, # 'Я' - 1: 2, # 'а' - 18: 0, # 'б' - 9: 2, # 'в' - 20: 0, # 'г' - 11: 0, # 'д' - 3: 2, # 'е' - 23: 0, # 'ж' - 15: 0, # 'з' - 2: 2, # 'и' - 26: 0, # 'й' - 12: 1, # 'к' - 10: 1, # 'л' - 14: 1, # 'м' - 6: 1, # 'н' - 4: 2, # 'о' - 13: 1, # 'п' - 7: 1, # 'р' - 8: 0, # 'с' - 5: 0, # 'т' - 19: 2, # 'у' - 29: 0, # 'ф' - 25: 0, # 'х' - 22: 0, # 'ц' - 21: 1, # 'ч' - 27: 0, # 'ш' - 24: 0, # 'щ' - 17: 1, # 'ъ' - 52: 1, # 'ь' - 42: 0, # 'ю' - 16: 0, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 57: { # 'Щ' - 63: 0, # 'e' - 45: 0, # '\xad' - 31: 1, # 'А' - 32: 0, # 'Б' - 35: 0, # 'В' - 43: 0, # 'Г' - 37: 0, # 'Д' - 44: 1, # 'Е' - 55: 0, # 'Ж' - 47: 0, # 'З' - 40: 1, # 'И' - 59: 0, # 'Й' - 33: 0, # 'К' - 46: 0, # 'Л' - 38: 0, # 'М' - 36: 0, # 'Н' - 41: 1, # 'О' - 30: 0, # 'П' - 39: 0, # 'Р' - 28: 0, # 'С' - 34: 0, # 'Т' - 51: 0, # 'У' - 48: 0, # 'Ф' - 49: 0, # 'Х' - 53: 0, # 'Ц' - 50: 0, # 'Ч' - 54: 0, # 'Ш' - 57: 0, # 'Щ' - 61: 0, # 'Ъ' - 60: 0, # 'Ю' - 56: 0, # 'Я' - 1: 2, # 'а' - 18: 0, # 'б' - 9: 0, # 'в' - 20: 0, # 'г' - 11: 0, # 'д' - 3: 2, # 'е' - 23: 0, # 'ж' - 15: 0, # 'з' - 2: 1, # 'и' - 26: 0, # 'й' - 12: 0, # 'к' - 10: 0, # 'л' - 14: 0, # 'м' - 6: 0, # 'н' - 4: 1, # 'о' - 13: 0, # 'п' - 7: 1, # 'р' - 8: 0, # 'с' - 5: 0, # 'т' - 19: 1, # 'у' - 29: 0, # 'ф' - 25: 0, # 'х' - 22: 0, # 'ц' - 21: 0, # 'ч' - 27: 0, # 'ш' - 24: 0, # 'щ' - 17: 1, # 'ъ' - 52: 0, # 'ь' - 42: 0, # 'ю' - 16: 1, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 61: { # 'Ъ' - 63: 0, # 'e' - 45: 0, # '\xad' - 31: 0, # 'А' - 32: 1, # 'Б' - 35: 1, # 'В' - 43: 0, # 'Г' - 37: 1, # 'Д' - 44: 0, # 'Е' - 55: 1, # 'Ж' - 47: 1, # 'З' - 40: 0, # 'И' - 59: 0, # 'Й' - 33: 1, # 'К' - 46: 2, # 'Л' - 38: 1, # 'М' - 36: 1, # 'Н' - 41: 0, # 'О' - 30: 1, # 'П' - 39: 2, # 'Р' - 28: 1, # 'С' - 34: 1, # 'Т' - 51: 0, # 'У' - 48: 0, # 'Ф' - 49: 1, # 'Х' - 53: 1, # 'Ц' - 50: 1, # 'Ч' - 54: 1, # 'Ш' - 57: 1, # 'Щ' - 61: 0, # 'Ъ' - 60: 0, # 'Ю' - 56: 0, # 'Я' - 1: 0, # 'а' - 18: 0, # 'б' - 9: 0, # 'в' - 20: 0, # 'г' - 11: 0, # 'д' - 3: 0, # 'е' - 23: 0, # 'ж' - 15: 0, # 'з' - 2: 0, # 'и' - 26: 0, # 'й' - 12: 0, # 'к' - 10: 1, # 'л' - 14: 0, # 'м' - 6: 1, # 'н' - 4: 0, # 'о' - 13: 0, # 'п' - 7: 1, # 'р' - 8: 0, # 'с' - 5: 0, # 'т' - 19: 0, # 'у' - 29: 0, # 'ф' - 25: 0, # 'х' - 22: 0, # 'ц' - 21: 0, # 'ч' - 27: 0, # 'ш' - 24: 0, # 'щ' - 17: 0, # 'ъ' - 52: 0, # 'ь' - 42: 0, # 'ю' - 16: 0, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 60: { # 'Ю' - 63: 0, # 'e' - 45: 0, # '\xad' - 31: 1, # 'А' - 32: 1, # 'Б' - 35: 0, # 'В' - 43: 1, # 'Г' - 37: 1, # 'Д' - 44: 0, # 'Е' - 55: 1, # 'Ж' - 47: 0, # 'З' - 40: 0, # 'И' - 59: 0, # 'Й' - 33: 1, # 'К' - 46: 1, # 'Л' - 38: 0, # 'М' - 36: 1, # 'Н' - 41: 0, # 'О' - 30: 0, # 'П' - 39: 1, # 'Р' - 28: 1, # 'С' - 34: 0, # 'Т' - 51: 0, # 'У' - 48: 0, # 'Ф' - 49: 0, # 'Х' - 53: 0, # 'Ц' - 50: 0, # 'Ч' - 54: 0, # 'Ш' - 57: 0, # 'Щ' - 61: 0, # 'Ъ' - 60: 0, # 'Ю' - 56: 0, # 'Я' - 1: 0, # 'а' - 18: 1, # 'б' - 9: 1, # 'в' - 20: 2, # 'г' - 11: 1, # 'д' - 3: 0, # 'е' - 23: 2, # 'ж' - 15: 1, # 'з' - 2: 1, # 'и' - 26: 0, # 'й' - 12: 1, # 'к' - 10: 1, # 'л' - 14: 1, # 'м' - 6: 1, # 'н' - 4: 0, # 'о' - 13: 1, # 'п' - 7: 1, # 'р' - 8: 1, # 'с' - 5: 1, # 'т' - 19: 0, # 'у' - 29: 0, # 'ф' - 25: 1, # 'х' - 22: 0, # 'ц' - 21: 0, # 'ч' - 27: 0, # 'ш' - 24: 0, # 'щ' - 17: 0, # 'ъ' - 52: 0, # 'ь' - 42: 0, # 'ю' - 16: 0, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 56: { # 'Я' - 63: 0, # 'e' - 45: 0, # '\xad' - 31: 0, # 'А' - 32: 1, # 'Б' - 35: 1, # 'В' - 43: 1, # 'Г' - 37: 1, # 'Д' - 44: 0, # 'Е' - 55: 0, # 'Ж' - 47: 0, # 'З' - 40: 0, # 'И' - 59: 0, # 'Й' - 33: 1, # 'К' - 46: 1, # 'Л' - 38: 1, # 'М' - 36: 1, # 'Н' - 41: 0, # 'О' - 30: 0, # 'П' - 39: 0, # 'Р' - 28: 1, # 'С' - 34: 2, # 'Т' - 51: 0, # 'У' - 48: 0, # 'Ф' - 49: 0, # 'Х' - 53: 0, # 'Ц' - 50: 0, # 'Ч' - 54: 0, # 'Ш' - 57: 0, # 'Щ' - 61: 0, # 'Ъ' - 60: 0, # 'Ю' - 56: 0, # 'Я' - 1: 0, # 'а' - 18: 1, # 'б' - 9: 1, # 'в' - 20: 1, # 'г' - 11: 1, # 'д' - 3: 0, # 'е' - 23: 0, # 'ж' - 15: 1, # 'з' - 2: 1, # 'и' - 26: 1, # 'й' - 12: 1, # 'к' - 10: 1, # 'л' - 14: 2, # 'м' - 6: 2, # 'н' - 4: 0, # 'о' - 13: 2, # 'п' - 7: 1, # 'р' - 8: 1, # 'с' - 5: 1, # 'т' - 19: 0, # 'у' - 29: 0, # 'ф' - 25: 1, # 'х' - 22: 0, # 'ц' - 21: 0, # 'ч' - 27: 1, # 'ш' - 24: 0, # 'щ' - 17: 0, # 'ъ' - 52: 0, # 'ь' - 42: 1, # 'ю' - 16: 0, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 1: { # 'а' - 63: 1, # 'e' - 45: 1, # '\xad' - 31: 1, # 'А' - 32: 0, # 'Б' - 35: 0, # 'В' - 43: 0, # 'Г' - 37: 0, # 'Д' - 44: 1, # 'Е' - 55: 0, # 'Ж' - 47: 0, # 'З' - 40: 0, # 'И' - 59: 0, # 'Й' - 33: 0, # 'К' - 46: 0, # 'Л' - 38: 0, # 'М' - 36: 0, # 'Н' - 41: 0, # 'О' - 30: 0, # 'П' - 39: 0, # 'Р' - 28: 0, # 'С' - 34: 0, # 'Т' - 51: 0, # 'У' - 48: 0, # 'Ф' - 49: 0, # 'Х' - 53: 0, # 'Ц' - 50: 0, # 'Ч' - 54: 0, # 'Ш' - 57: 0, # 'Щ' - 61: 0, # 'Ъ' - 60: 0, # 'Ю' - 56: 0, # 'Я' - 1: 1, # 'а' - 18: 3, # 'б' - 9: 3, # 'в' - 20: 3, # 'г' - 11: 3, # 'д' - 3: 3, # 'е' - 23: 3, # 'ж' - 15: 3, # 'з' - 2: 3, # 'и' - 26: 3, # 'й' - 12: 3, # 'к' - 10: 3, # 'л' - 14: 3, # 'м' - 6: 3, # 'н' - 4: 2, # 'о' - 13: 3, # 'п' - 7: 3, # 'р' - 8: 3, # 'с' - 5: 3, # 'т' - 19: 3, # 'у' - 29: 3, # 'ф' - 25: 3, # 'х' - 22: 3, # 'ц' - 21: 3, # 'ч' - 27: 3, # 'ш' - 24: 3, # 'щ' - 17: 0, # 'ъ' - 52: 0, # 'ь' - 42: 1, # 'ю' - 16: 3, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 18: { # 'б' - 63: 1, # 'e' - 45: 0, # '\xad' - 31: 0, # 'А' - 32: 0, # 'Б' - 35: 0, # 'В' - 43: 0, # 'Г' - 37: 0, # 'Д' - 44: 0, # 'Е' - 55: 0, # 'Ж' - 47: 0, # 'З' - 40: 0, # 'И' - 59: 0, # 'Й' - 33: 0, # 'К' - 46: 0, # 'Л' - 38: 0, # 'М' - 36: 0, # 'Н' - 41: 0, # 'О' - 30: 0, # 'П' - 39: 0, # 'Р' - 28: 0, # 'С' - 34: 0, # 'Т' - 51: 0, # 'У' - 48: 0, # 'Ф' - 49: 0, # 'Х' - 53: 0, # 'Ц' - 50: 0, # 'Ч' - 54: 0, # 'Ш' - 57: 0, # 'Щ' - 61: 0, # 'Ъ' - 60: 0, # 'Ю' - 56: 0, # 'Я' - 1: 3, # 'а' - 18: 0, # 'б' - 9: 3, # 'в' - 20: 1, # 'г' - 11: 2, # 'д' - 3: 3, # 'е' - 23: 1, # 'ж' - 15: 1, # 'з' - 2: 3, # 'и' - 26: 0, # 'й' - 12: 1, # 'к' - 10: 3, # 'л' - 14: 2, # 'м' - 6: 3, # 'н' - 4: 3, # 'о' - 13: 1, # 'п' - 7: 3, # 'р' - 8: 3, # 'с' - 5: 0, # 'т' - 19: 3, # 'у' - 29: 0, # 'ф' - 25: 2, # 'х' - 22: 1, # 'ц' - 21: 1, # 'ч' - 27: 1, # 'ш' - 24: 3, # 'щ' - 17: 3, # 'ъ' - 52: 1, # 'ь' - 42: 2, # 'ю' - 16: 3, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 9: { # 'в' - 63: 1, # 'e' - 45: 1, # '\xad' - 31: 0, # 'А' - 32: 1, # 'Б' - 35: 0, # 'В' - 43: 0, # 'Г' - 37: 0, # 'Д' - 44: 0, # 'Е' - 55: 0, # 'Ж' - 47: 0, # 'З' - 40: 0, # 'И' - 59: 0, # 'Й' - 33: 0, # 'К' - 46: 0, # 'Л' - 38: 0, # 'М' - 36: 0, # 'Н' - 41: 0, # 'О' - 30: 0, # 'П' - 39: 0, # 'Р' - 28: 0, # 'С' - 34: 0, # 'Т' - 51: 0, # 'У' - 48: 1, # 'Ф' - 49: 0, # 'Х' - 53: 0, # 'Ц' - 50: 0, # 'Ч' - 54: 0, # 'Ш' - 57: 0, # 'Щ' - 61: 0, # 'Ъ' - 60: 0, # 'Ю' - 56: 0, # 'Я' - 1: 3, # 'а' - 18: 1, # 'б' - 9: 0, # 'в' - 20: 2, # 'г' - 11: 3, # 'д' - 3: 3, # 'е' - 23: 1, # 'ж' - 15: 3, # 'з' - 2: 3, # 'и' - 26: 0, # 'й' - 12: 3, # 'к' - 10: 3, # 'л' - 14: 2, # 'м' - 6: 3, # 'н' - 4: 3, # 'о' - 13: 2, # 'п' - 7: 3, # 'р' - 8: 3, # 'с' - 5: 3, # 'т' - 19: 2, # 'у' - 29: 0, # 'ф' - 25: 2, # 'х' - 22: 2, # 'ц' - 21: 3, # 'ч' - 27: 2, # 'ш' - 24: 1, # 'щ' - 17: 3, # 'ъ' - 52: 1, # 'ь' - 42: 2, # 'ю' - 16: 3, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 20: { # 'г' - 63: 0, # 'e' - 45: 0, # '\xad' - 31: 0, # 'А' - 32: 0, # 'Б' - 35: 0, # 'В' - 43: 0, # 'Г' - 37: 0, # 'Д' - 44: 0, # 'Е' - 55: 0, # 'Ж' - 47: 0, # 'З' - 40: 0, # 'И' - 59: 0, # 'Й' - 33: 0, # 'К' - 46: 0, # 'Л' - 38: 0, # 'М' - 36: 0, # 'Н' - 41: 0, # 'О' - 30: 0, # 'П' - 39: 0, # 'Р' - 28: 0, # 'С' - 34: 0, # 'Т' - 51: 0, # 'У' - 48: 0, # 'Ф' - 49: 0, # 'Х' - 53: 0, # 'Ц' - 50: 0, # 'Ч' - 54: 0, # 'Ш' - 57: 0, # 'Щ' - 61: 0, # 'Ъ' - 60: 0, # 'Ю' - 56: 0, # 'Я' - 1: 3, # 'а' - 18: 1, # 'б' - 9: 2, # 'в' - 20: 1, # 'г' - 11: 2, # 'д' - 3: 3, # 'е' - 23: 0, # 'ж' - 15: 1, # 'з' - 2: 3, # 'и' - 26: 0, # 'й' - 12: 1, # 'к' - 10: 3, # 'л' - 14: 1, # 'м' - 6: 3, # 'н' - 4: 3, # 'о' - 13: 1, # 'п' - 7: 3, # 'р' - 8: 2, # 'с' - 5: 2, # 'т' - 19: 3, # 'у' - 29: 1, # 'ф' - 25: 1, # 'х' - 22: 0, # 'ц' - 21: 1, # 'ч' - 27: 0, # 'ш' - 24: 0, # 'щ' - 17: 3, # 'ъ' - 52: 1, # 'ь' - 42: 1, # 'ю' - 16: 1, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 11: { # 'д' - 63: 1, # 'e' - 45: 0, # '\xad' - 31: 0, # 'А' - 32: 0, # 'Б' - 35: 0, # 'В' - 43: 0, # 'Г' - 37: 0, # 'Д' - 44: 0, # 'Е' - 55: 0, # 'Ж' - 47: 0, # 'З' - 40: 0, # 'И' - 59: 0, # 'Й' - 33: 0, # 'К' - 46: 0, # 'Л' - 38: 0, # 'М' - 36: 0, # 'Н' - 41: 0, # 'О' - 30: 0, # 'П' - 39: 0, # 'Р' - 28: 0, # 'С' - 34: 0, # 'Т' - 51: 0, # 'У' - 48: 0, # 'Ф' - 49: 0, # 'Х' - 53: 0, # 'Ц' - 50: 0, # 'Ч' - 54: 0, # 'Ш' - 57: 0, # 'Щ' - 61: 0, # 'Ъ' - 60: 0, # 'Ю' - 56: 0, # 'Я' - 1: 3, # 'а' - 18: 2, # 'б' - 9: 3, # 'в' - 20: 2, # 'г' - 11: 2, # 'д' - 3: 3, # 'е' - 23: 3, # 'ж' - 15: 2, # 'з' - 2: 3, # 'и' - 26: 0, # 'й' - 12: 3, # 'к' - 10: 3, # 'л' - 14: 3, # 'м' - 6: 3, # 'н' - 4: 3, # 'о' - 13: 3, # 'п' - 7: 3, # 'р' - 8: 3, # 'с' - 5: 1, # 'т' - 19: 3, # 'у' - 29: 1, # 'ф' - 25: 2, # 'х' - 22: 2, # 'ц' - 21: 2, # 'ч' - 27: 1, # 'ш' - 24: 1, # 'щ' - 17: 3, # 'ъ' - 52: 1, # 'ь' - 42: 1, # 'ю' - 16: 3, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 3: { # 'е' - 63: 0, # 'e' - 45: 1, # '\xad' - 31: 0, # 'А' - 32: 0, # 'Б' - 35: 0, # 'В' - 43: 0, # 'Г' - 37: 0, # 'Д' - 44: 0, # 'Е' - 55: 0, # 'Ж' - 47: 0, # 'З' - 40: 0, # 'И' - 59: 0, # 'Й' - 33: 0, # 'К' - 46: 0, # 'Л' - 38: 0, # 'М' - 36: 0, # 'Н' - 41: 0, # 'О' - 30: 0, # 'П' - 39: 0, # 'Р' - 28: 0, # 'С' - 34: 0, # 'Т' - 51: 0, # 'У' - 48: 0, # 'Ф' - 49: 0, # 'Х' - 53: 0, # 'Ц' - 50: 0, # 'Ч' - 54: 0, # 'Ш' - 57: 0, # 'Щ' - 61: 0, # 'Ъ' - 60: 0, # 'Ю' - 56: 0, # 'Я' - 1: 2, # 'а' - 18: 3, # 'б' - 9: 3, # 'в' - 20: 3, # 'г' - 11: 3, # 'д' - 3: 2, # 'е' - 23: 3, # 'ж' - 15: 3, # 'з' - 2: 2, # 'и' - 26: 3, # 'й' - 12: 3, # 'к' - 10: 3, # 'л' - 14: 3, # 'м' - 6: 3, # 'н' - 4: 3, # 'о' - 13: 3, # 'п' - 7: 3, # 'р' - 8: 3, # 'с' - 5: 3, # 'т' - 19: 2, # 'у' - 29: 3, # 'ф' - 25: 3, # 'х' - 22: 3, # 'ц' - 21: 3, # 'ч' - 27: 3, # 'ш' - 24: 3, # 'щ' - 17: 1, # 'ъ' - 52: 0, # 'ь' - 42: 1, # 'ю' - 16: 3, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 23: { # 'ж' - 63: 0, # 'e' - 45: 0, # '\xad' - 31: 0, # 'А' - 32: 0, # 'Б' - 35: 0, # 'В' - 43: 0, # 'Г' - 37: 0, # 'Д' - 44: 0, # 'Е' - 55: 0, # 'Ж' - 47: 0, # 'З' - 40: 0, # 'И' - 59: 0, # 'Й' - 33: 0, # 'К' - 46: 0, # 'Л' - 38: 0, # 'М' - 36: 0, # 'Н' - 41: 0, # 'О' - 30: 0, # 'П' - 39: 0, # 'Р' - 28: 0, # 'С' - 34: 0, # 'Т' - 51: 0, # 'У' - 48: 0, # 'Ф' - 49: 0, # 'Х' - 53: 0, # 'Ц' - 50: 0, # 'Ч' - 54: 0, # 'Ш' - 57: 0, # 'Щ' - 61: 0, # 'Ъ' - 60: 0, # 'Ю' - 56: 0, # 'Я' - 1: 3, # 'а' - 18: 3, # 'б' - 9: 2, # 'в' - 20: 1, # 'г' - 11: 3, # 'д' - 3: 3, # 'е' - 23: 0, # 'ж' - 15: 0, # 'з' - 2: 3, # 'и' - 26: 0, # 'й' - 12: 2, # 'к' - 10: 1, # 'л' - 14: 1, # 'м' - 6: 3, # 'н' - 4: 2, # 'о' - 13: 1, # 'п' - 7: 1, # 'р' - 8: 1, # 'с' - 5: 1, # 'т' - 19: 2, # 'у' - 29: 0, # 'ф' - 25: 0, # 'х' - 22: 1, # 'ц' - 21: 1, # 'ч' - 27: 0, # 'ш' - 24: 0, # 'щ' - 17: 2, # 'ъ' - 52: 0, # 'ь' - 42: 0, # 'ю' - 16: 1, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 15: { # 'з' - 63: 1, # 'e' - 45: 0, # '\xad' - 31: 0, # 'А' - 32: 0, # 'Б' - 35: 0, # 'В' - 43: 0, # 'Г' - 37: 0, # 'Д' - 44: 0, # 'Е' - 55: 0, # 'Ж' - 47: 0, # 'З' - 40: 0, # 'И' - 59: 0, # 'Й' - 33: 0, # 'К' - 46: 0, # 'Л' - 38: 0, # 'М' - 36: 0, # 'Н' - 41: 0, # 'О' - 30: 0, # 'П' - 39: 0, # 'Р' - 28: 0, # 'С' - 34: 0, # 'Т' - 51: 0, # 'У' - 48: 0, # 'Ф' - 49: 0, # 'Х' - 53: 0, # 'Ц' - 50: 0, # 'Ч' - 54: 0, # 'Ш' - 57: 0, # 'Щ' - 61: 0, # 'Ъ' - 60: 0, # 'Ю' - 56: 0, # 'Я' - 1: 3, # 'а' - 18: 3, # 'б' - 9: 3, # 'в' - 20: 3, # 'г' - 11: 3, # 'д' - 3: 3, # 'е' - 23: 1, # 'ж' - 15: 1, # 'з' - 2: 3, # 'и' - 26: 0, # 'й' - 12: 3, # 'к' - 10: 3, # 'л' - 14: 3, # 'м' - 6: 3, # 'н' - 4: 3, # 'о' - 13: 3, # 'п' - 7: 3, # 'р' - 8: 3, # 'с' - 5: 3, # 'т' - 19: 3, # 'у' - 29: 1, # 'ф' - 25: 2, # 'х' - 22: 2, # 'ц' - 21: 2, # 'ч' - 27: 2, # 'ш' - 24: 1, # 'щ' - 17: 2, # 'ъ' - 52: 1, # 'ь' - 42: 1, # 'ю' - 16: 2, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 2: { # 'и' - 63: 1, # 'e' - 45: 1, # '\xad' - 31: 0, # 'А' - 32: 0, # 'Б' - 35: 0, # 'В' - 43: 1, # 'Г' - 37: 0, # 'Д' - 44: 0, # 'Е' - 55: 0, # 'Ж' - 47: 0, # 'З' - 40: 0, # 'И' - 59: 0, # 'Й' - 33: 1, # 'К' - 46: 0, # 'Л' - 38: 0, # 'М' - 36: 0, # 'Н' - 41: 0, # 'О' - 30: 1, # 'П' - 39: 0, # 'Р' - 28: 0, # 'С' - 34: 0, # 'Т' - 51: 0, # 'У' - 48: 1, # 'Ф' - 49: 0, # 'Х' - 53: 0, # 'Ц' - 50: 0, # 'Ч' - 54: 0, # 'Ш' - 57: 0, # 'Щ' - 61: 0, # 'Ъ' - 60: 0, # 'Ю' - 56: 0, # 'Я' - 1: 3, # 'а' - 18: 3, # 'б' - 9: 3, # 'в' - 20: 3, # 'г' - 11: 3, # 'д' - 3: 3, # 'е' - 23: 3, # 'ж' - 15: 3, # 'з' - 2: 3, # 'и' - 26: 3, # 'й' - 12: 3, # 'к' - 10: 3, # 'л' - 14: 3, # 'м' - 6: 3, # 'н' - 4: 3, # 'о' - 13: 3, # 'п' - 7: 3, # 'р' - 8: 3, # 'с' - 5: 3, # 'т' - 19: 2, # 'у' - 29: 3, # 'ф' - 25: 3, # 'х' - 22: 3, # 'ц' - 21: 3, # 'ч' - 27: 3, # 'ш' - 24: 3, # 'щ' - 17: 2, # 'ъ' - 52: 0, # 'ь' - 42: 1, # 'ю' - 16: 3, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 26: { # 'й' - 63: 0, # 'e' - 45: 0, # '\xad' - 31: 0, # 'А' - 32: 0, # 'Б' - 35: 0, # 'В' - 43: 0, # 'Г' - 37: 0, # 'Д' - 44: 0, # 'Е' - 55: 0, # 'Ж' - 47: 0, # 'З' - 40: 0, # 'И' - 59: 0, # 'Й' - 33: 0, # 'К' - 46: 0, # 'Л' - 38: 0, # 'М' - 36: 0, # 'Н' - 41: 0, # 'О' - 30: 0, # 'П' - 39: 0, # 'Р' - 28: 0, # 'С' - 34: 0, # 'Т' - 51: 0, # 'У' - 48: 0, # 'Ф' - 49: 0, # 'Х' - 53: 0, # 'Ц' - 50: 0, # 'Ч' - 54: 0, # 'Ш' - 57: 0, # 'Щ' - 61: 0, # 'Ъ' - 60: 0, # 'Ю' - 56: 0, # 'Я' - 1: 1, # 'а' - 18: 2, # 'б' - 9: 2, # 'в' - 20: 1, # 'г' - 11: 2, # 'д' - 3: 2, # 'е' - 23: 0, # 'ж' - 15: 2, # 'з' - 2: 1, # 'и' - 26: 0, # 'й' - 12: 3, # 'к' - 10: 2, # 'л' - 14: 2, # 'м' - 6: 3, # 'н' - 4: 2, # 'о' - 13: 1, # 'п' - 7: 2, # 'р' - 8: 3, # 'с' - 5: 3, # 'т' - 19: 1, # 'у' - 29: 2, # 'ф' - 25: 1, # 'х' - 22: 2, # 'ц' - 21: 2, # 'ч' - 27: 1, # 'ш' - 24: 1, # 'щ' - 17: 1, # 'ъ' - 52: 0, # 'ь' - 42: 0, # 'ю' - 16: 1, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 12: { # 'к' - 63: 1, # 'e' - 45: 0, # '\xad' - 31: 0, # 'А' - 32: 0, # 'Б' - 35: 1, # 'В' - 43: 0, # 'Г' - 37: 0, # 'Д' - 44: 0, # 'Е' - 55: 0, # 'Ж' - 47: 0, # 'З' - 40: 1, # 'И' - 59: 0, # 'Й' - 33: 0, # 'К' - 46: 0, # 'Л' - 38: 0, # 'М' - 36: 0, # 'Н' - 41: 0, # 'О' - 30: 0, # 'П' - 39: 0, # 'Р' - 28: 0, # 'С' - 34: 0, # 'Т' - 51: 0, # 'У' - 48: 0, # 'Ф' - 49: 0, # 'Х' - 53: 0, # 'Ц' - 50: 0, # 'Ч' - 54: 0, # 'Ш' - 57: 0, # 'Щ' - 61: 0, # 'Ъ' - 60: 0, # 'Ю' - 56: 0, # 'Я' - 1: 3, # 'а' - 18: 1, # 'б' - 9: 3, # 'в' - 20: 2, # 'г' - 11: 1, # 'д' - 3: 3, # 'е' - 23: 0, # 'ж' - 15: 2, # 'з' - 2: 3, # 'и' - 26: 0, # 'й' - 12: 1, # 'к' - 10: 3, # 'л' - 14: 2, # 'м' - 6: 3, # 'н' - 4: 3, # 'о' - 13: 1, # 'п' - 7: 3, # 'р' - 8: 3, # 'с' - 5: 3, # 'т' - 19: 3, # 'у' - 29: 1, # 'ф' - 25: 1, # 'х' - 22: 3, # 'ц' - 21: 2, # 'ч' - 27: 1, # 'ш' - 24: 0, # 'щ' - 17: 3, # 'ъ' - 52: 1, # 'ь' - 42: 2, # 'ю' - 16: 1, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 10: { # 'л' - 63: 1, # 'e' - 45: 1, # '\xad' - 31: 0, # 'А' - 32: 0, # 'Б' - 35: 0, # 'В' - 43: 0, # 'Г' - 37: 0, # 'Д' - 44: 0, # 'Е' - 55: 0, # 'Ж' - 47: 0, # 'З' - 40: 0, # 'И' - 59: 0, # 'Й' - 33: 0, # 'К' - 46: 0, # 'Л' - 38: 0, # 'М' - 36: 0, # 'Н' - 41: 0, # 'О' - 30: 0, # 'П' - 39: 0, # 'Р' - 28: 1, # 'С' - 34: 0, # 'Т' - 51: 0, # 'У' - 48: 0, # 'Ф' - 49: 0, # 'Х' - 53: 0, # 'Ц' - 50: 0, # 'Ч' - 54: 0, # 'Ш' - 57: 0, # 'Щ' - 61: 0, # 'Ъ' - 60: 0, # 'Ю' - 56: 0, # 'Я' - 1: 3, # 'а' - 18: 3, # 'б' - 9: 3, # 'в' - 20: 3, # 'г' - 11: 2, # 'д' - 3: 3, # 'е' - 23: 3, # 'ж' - 15: 2, # 'з' - 2: 3, # 'и' - 26: 0, # 'й' - 12: 3, # 'к' - 10: 1, # 'л' - 14: 2, # 'м' - 6: 3, # 'н' - 4: 3, # 'о' - 13: 2, # 'п' - 7: 2, # 'р' - 8: 3, # 'с' - 5: 3, # 'т' - 19: 3, # 'у' - 29: 2, # 'ф' - 25: 2, # 'х' - 22: 2, # 'ц' - 21: 2, # 'ч' - 27: 2, # 'ш' - 24: 1, # 'щ' - 17: 3, # 'ъ' - 52: 2, # 'ь' - 42: 3, # 'ю' - 16: 3, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 14: { # 'м' - 63: 1, # 'e' - 45: 0, # '\xad' - 31: 1, # 'А' - 32: 0, # 'Б' - 35: 0, # 'В' - 43: 0, # 'Г' - 37: 0, # 'Д' - 44: 0, # 'Е' - 55: 0, # 'Ж' - 47: 0, # 'З' - 40: 0, # 'И' - 59: 0, # 'Й' - 33: 0, # 'К' - 46: 0, # 'Л' - 38: 0, # 'М' - 36: 0, # 'Н' - 41: 0, # 'О' - 30: 0, # 'П' - 39: 0, # 'Р' - 28: 0, # 'С' - 34: 0, # 'Т' - 51: 0, # 'У' - 48: 0, # 'Ф' - 49: 0, # 'Х' - 53: 0, # 'Ц' - 50: 0, # 'Ч' - 54: 0, # 'Ш' - 57: 0, # 'Щ' - 61: 0, # 'Ъ' - 60: 0, # 'Ю' - 56: 0, # 'Я' - 1: 3, # 'а' - 18: 3, # 'б' - 9: 3, # 'в' - 20: 1, # 'г' - 11: 1, # 'д' - 3: 3, # 'е' - 23: 1, # 'ж' - 15: 1, # 'з' - 2: 3, # 'и' - 26: 0, # 'й' - 12: 2, # 'к' - 10: 3, # 'л' - 14: 1, # 'м' - 6: 3, # 'н' - 4: 3, # 'о' - 13: 3, # 'п' - 7: 2, # 'р' - 8: 2, # 'с' - 5: 1, # 'т' - 19: 3, # 'у' - 29: 2, # 'ф' - 25: 1, # 'х' - 22: 2, # 'ц' - 21: 2, # 'ч' - 27: 2, # 'ш' - 24: 1, # 'щ' - 17: 3, # 'ъ' - 52: 1, # 'ь' - 42: 2, # 'ю' - 16: 3, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 6: { # 'н' - 63: 1, # 'e' - 45: 0, # '\xad' - 31: 0, # 'А' - 32: 0, # 'Б' - 35: 0, # 'В' - 43: 0, # 'Г' - 37: 0, # 'Д' - 44: 0, # 'Е' - 55: 0, # 'Ж' - 47: 0, # 'З' - 40: 0, # 'И' - 59: 0, # 'Й' - 33: 0, # 'К' - 46: 0, # 'Л' - 38: 0, # 'М' - 36: 0, # 'Н' - 41: 0, # 'О' - 30: 0, # 'П' - 39: 1, # 'Р' - 28: 0, # 'С' - 34: 0, # 'Т' - 51: 0, # 'У' - 48: 0, # 'Ф' - 49: 0, # 'Х' - 53: 0, # 'Ц' - 50: 0, # 'Ч' - 54: 0, # 'Ш' - 57: 0, # 'Щ' - 61: 0, # 'Ъ' - 60: 0, # 'Ю' - 56: 0, # 'Я' - 1: 3, # 'а' - 18: 2, # 'б' - 9: 2, # 'в' - 20: 3, # 'г' - 11: 3, # 'д' - 3: 3, # 'е' - 23: 2, # 'ж' - 15: 2, # 'з' - 2: 3, # 'и' - 26: 0, # 'й' - 12: 3, # 'к' - 10: 2, # 'л' - 14: 1, # 'м' - 6: 3, # 'н' - 4: 3, # 'о' - 13: 1, # 'п' - 7: 2, # 'р' - 8: 3, # 'с' - 5: 3, # 'т' - 19: 3, # 'у' - 29: 3, # 'ф' - 25: 2, # 'х' - 22: 3, # 'ц' - 21: 3, # 'ч' - 27: 2, # 'ш' - 24: 1, # 'щ' - 17: 3, # 'ъ' - 52: 2, # 'ь' - 42: 2, # 'ю' - 16: 3, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 4: { # 'о' - 63: 0, # 'e' - 45: 1, # '\xad' - 31: 0, # 'А' - 32: 0, # 'Б' - 35: 0, # 'В' - 43: 0, # 'Г' - 37: 0, # 'Д' - 44: 0, # 'Е' - 55: 0, # 'Ж' - 47: 0, # 'З' - 40: 0, # 'И' - 59: 0, # 'Й' - 33: 0, # 'К' - 46: 0, # 'Л' - 38: 0, # 'М' - 36: 0, # 'Н' - 41: 0, # 'О' - 30: 0, # 'П' - 39: 0, # 'Р' - 28: 0, # 'С' - 34: 0, # 'Т' - 51: 0, # 'У' - 48: 0, # 'Ф' - 49: 0, # 'Х' - 53: 0, # 'Ц' - 50: 0, # 'Ч' - 54: 0, # 'Ш' - 57: 0, # 'Щ' - 61: 0, # 'Ъ' - 60: 0, # 'Ю' - 56: 0, # 'Я' - 1: 2, # 'а' - 18: 3, # 'б' - 9: 3, # 'в' - 20: 3, # 'г' - 11: 3, # 'д' - 3: 3, # 'е' - 23: 3, # 'ж' - 15: 3, # 'з' - 2: 3, # 'и' - 26: 3, # 'й' - 12: 3, # 'к' - 10: 3, # 'л' - 14: 3, # 'м' - 6: 3, # 'н' - 4: 2, # 'о' - 13: 3, # 'п' - 7: 3, # 'р' - 8: 3, # 'с' - 5: 3, # 'т' - 19: 2, # 'у' - 29: 3, # 'ф' - 25: 3, # 'х' - 22: 3, # 'ц' - 21: 3, # 'ч' - 27: 3, # 'ш' - 24: 3, # 'щ' - 17: 1, # 'ъ' - 52: 0, # 'ь' - 42: 1, # 'ю' - 16: 3, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 13: { # 'п' - 63: 1, # 'e' - 45: 0, # '\xad' - 31: 0, # 'А' - 32: 0, # 'Б' - 35: 0, # 'В' - 43: 0, # 'Г' - 37: 0, # 'Д' - 44: 0, # 'Е' - 55: 0, # 'Ж' - 47: 0, # 'З' - 40: 0, # 'И' - 59: 0, # 'Й' - 33: 0, # 'К' - 46: 0, # 'Л' - 38: 0, # 'М' - 36: 0, # 'Н' - 41: 0, # 'О' - 30: 0, # 'П' - 39: 0, # 'Р' - 28: 0, # 'С' - 34: 0, # 'Т' - 51: 0, # 'У' - 48: 0, # 'Ф' - 49: 0, # 'Х' - 53: 0, # 'Ц' - 50: 0, # 'Ч' - 54: 0, # 'Ш' - 57: 0, # 'Щ' - 61: 0, # 'Ъ' - 60: 0, # 'Ю' - 56: 0, # 'Я' - 1: 3, # 'а' - 18: 1, # 'б' - 9: 2, # 'в' - 20: 1, # 'г' - 11: 1, # 'д' - 3: 3, # 'е' - 23: 0, # 'ж' - 15: 1, # 'з' - 2: 3, # 'и' - 26: 1, # 'й' - 12: 2, # 'к' - 10: 3, # 'л' - 14: 1, # 'м' - 6: 2, # 'н' - 4: 3, # 'о' - 13: 1, # 'п' - 7: 3, # 'р' - 8: 2, # 'с' - 5: 2, # 'т' - 19: 3, # 'у' - 29: 1, # 'ф' - 25: 1, # 'х' - 22: 2, # 'ц' - 21: 2, # 'ч' - 27: 1, # 'ш' - 24: 1, # 'щ' - 17: 3, # 'ъ' - 52: 1, # 'ь' - 42: 2, # 'ю' - 16: 2, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 7: { # 'р' - 63: 1, # 'e' - 45: 0, # '\xad' - 31: 0, # 'А' - 32: 0, # 'Б' - 35: 0, # 'В' - 43: 0, # 'Г' - 37: 0, # 'Д' - 44: 0, # 'Е' - 55: 0, # 'Ж' - 47: 0, # 'З' - 40: 0, # 'И' - 59: 0, # 'Й' - 33: 0, # 'К' - 46: 0, # 'Л' - 38: 0, # 'М' - 36: 0, # 'Н' - 41: 0, # 'О' - 30: 0, # 'П' - 39: 0, # 'Р' - 28: 0, # 'С' - 34: 0, # 'Т' - 51: 0, # 'У' - 48: 0, # 'Ф' - 49: 0, # 'Х' - 53: 0, # 'Ц' - 50: 0, # 'Ч' - 54: 0, # 'Ш' - 57: 0, # 'Щ' - 61: 0, # 'Ъ' - 60: 0, # 'Ю' - 56: 0, # 'Я' - 1: 3, # 'а' - 18: 3, # 'б' - 9: 3, # 'в' - 20: 3, # 'г' - 11: 3, # 'д' - 3: 3, # 'е' - 23: 3, # 'ж' - 15: 2, # 'з' - 2: 3, # 'и' - 26: 0, # 'й' - 12: 3, # 'к' - 10: 3, # 'л' - 14: 3, # 'м' - 6: 3, # 'н' - 4: 3, # 'о' - 13: 2, # 'п' - 7: 1, # 'р' - 8: 3, # 'с' - 5: 3, # 'т' - 19: 3, # 'у' - 29: 2, # 'ф' - 25: 3, # 'х' - 22: 3, # 'ц' - 21: 2, # 'ч' - 27: 3, # 'ш' - 24: 1, # 'щ' - 17: 3, # 'ъ' - 52: 1, # 'ь' - 42: 2, # 'ю' - 16: 3, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 8: { # 'с' - 63: 1, # 'e' - 45: 0, # '\xad' - 31: 0, # 'А' - 32: 0, # 'Б' - 35: 0, # 'В' - 43: 0, # 'Г' - 37: 0, # 'Д' - 44: 0, # 'Е' - 55: 0, # 'Ж' - 47: 0, # 'З' - 40: 0, # 'И' - 59: 0, # 'Й' - 33: 0, # 'К' - 46: 0, # 'Л' - 38: 0, # 'М' - 36: 0, # 'Н' - 41: 0, # 'О' - 30: 0, # 'П' - 39: 0, # 'Р' - 28: 0, # 'С' - 34: 0, # 'Т' - 51: 0, # 'У' - 48: 0, # 'Ф' - 49: 0, # 'Х' - 53: 0, # 'Ц' - 50: 0, # 'Ч' - 54: 0, # 'Ш' - 57: 0, # 'Щ' - 61: 0, # 'Ъ' - 60: 0, # 'Ю' - 56: 0, # 'Я' - 1: 3, # 'а' - 18: 2, # 'б' - 9: 3, # 'в' - 20: 2, # 'г' - 11: 2, # 'д' - 3: 3, # 'е' - 23: 0, # 'ж' - 15: 1, # 'з' - 2: 3, # 'и' - 26: 0, # 'й' - 12: 3, # 'к' - 10: 3, # 'л' - 14: 3, # 'м' - 6: 3, # 'н' - 4: 3, # 'о' - 13: 3, # 'п' - 7: 3, # 'р' - 8: 1, # 'с' - 5: 3, # 'т' - 19: 3, # 'у' - 29: 2, # 'ф' - 25: 2, # 'х' - 22: 2, # 'ц' - 21: 2, # 'ч' - 27: 2, # 'ш' - 24: 0, # 'щ' - 17: 3, # 'ъ' - 52: 2, # 'ь' - 42: 2, # 'ю' - 16: 3, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 5: { # 'т' - 63: 1, # 'e' - 45: 0, # '\xad' - 31: 0, # 'А' - 32: 0, # 'Б' - 35: 0, # 'В' - 43: 0, # 'Г' - 37: 0, # 'Д' - 44: 0, # 'Е' - 55: 0, # 'Ж' - 47: 0, # 'З' - 40: 0, # 'И' - 59: 0, # 'Й' - 33: 0, # 'К' - 46: 0, # 'Л' - 38: 0, # 'М' - 36: 0, # 'Н' - 41: 0, # 'О' - 30: 0, # 'П' - 39: 0, # 'Р' - 28: 0, # 'С' - 34: 0, # 'Т' - 51: 0, # 'У' - 48: 0, # 'Ф' - 49: 0, # 'Х' - 53: 0, # 'Ц' - 50: 0, # 'Ч' - 54: 0, # 'Ш' - 57: 0, # 'Щ' - 61: 0, # 'Ъ' - 60: 0, # 'Ю' - 56: 0, # 'Я' - 1: 3, # 'а' - 18: 3, # 'б' - 9: 3, # 'в' - 20: 2, # 'г' - 11: 2, # 'д' - 3: 3, # 'е' - 23: 1, # 'ж' - 15: 1, # 'з' - 2: 3, # 'и' - 26: 0, # 'й' - 12: 3, # 'к' - 10: 3, # 'л' - 14: 2, # 'м' - 6: 3, # 'н' - 4: 3, # 'о' - 13: 2, # 'п' - 7: 3, # 'р' - 8: 3, # 'с' - 5: 3, # 'т' - 19: 3, # 'у' - 29: 1, # 'ф' - 25: 2, # 'х' - 22: 2, # 'ц' - 21: 2, # 'ч' - 27: 1, # 'ш' - 24: 1, # 'щ' - 17: 3, # 'ъ' - 52: 2, # 'ь' - 42: 2, # 'ю' - 16: 3, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 19: { # 'у' - 63: 0, # 'e' - 45: 0, # '\xad' - 31: 0, # 'А' - 32: 0, # 'Б' - 35: 0, # 'В' - 43: 0, # 'Г' - 37: 0, # 'Д' - 44: 0, # 'Е' - 55: 0, # 'Ж' - 47: 0, # 'З' - 40: 0, # 'И' - 59: 0, # 'Й' - 33: 0, # 'К' - 46: 0, # 'Л' - 38: 0, # 'М' - 36: 0, # 'Н' - 41: 0, # 'О' - 30: 0, # 'П' - 39: 0, # 'Р' - 28: 0, # 'С' - 34: 0, # 'Т' - 51: 0, # 'У' - 48: 0, # 'Ф' - 49: 0, # 'Х' - 53: 0, # 'Ц' - 50: 0, # 'Ч' - 54: 0, # 'Ш' - 57: 0, # 'Щ' - 61: 0, # 'Ъ' - 60: 0, # 'Ю' - 56: 0, # 'Я' - 1: 3, # 'а' - 18: 3, # 'б' - 9: 3, # 'в' - 20: 3, # 'г' - 11: 3, # 'д' - 3: 2, # 'е' - 23: 3, # 'ж' - 15: 3, # 'з' - 2: 2, # 'и' - 26: 2, # 'й' - 12: 3, # 'к' - 10: 3, # 'л' - 14: 3, # 'м' - 6: 3, # 'н' - 4: 2, # 'о' - 13: 3, # 'п' - 7: 3, # 'р' - 8: 3, # 'с' - 5: 3, # 'т' - 19: 1, # 'у' - 29: 2, # 'ф' - 25: 2, # 'х' - 22: 2, # 'ц' - 21: 3, # 'ч' - 27: 3, # 'ш' - 24: 2, # 'щ' - 17: 1, # 'ъ' - 52: 0, # 'ь' - 42: 1, # 'ю' - 16: 1, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 29: { # 'ф' - 63: 1, # 'e' - 45: 0, # '\xad' - 31: 0, # 'А' - 32: 0, # 'Б' - 35: 0, # 'В' - 43: 0, # 'Г' - 37: 0, # 'Д' - 44: 0, # 'Е' - 55: 0, # 'Ж' - 47: 0, # 'З' - 40: 0, # 'И' - 59: 0, # 'Й' - 33: 0, # 'К' - 46: 0, # 'Л' - 38: 0, # 'М' - 36: 0, # 'Н' - 41: 0, # 'О' - 30: 0, # 'П' - 39: 0, # 'Р' - 28: 0, # 'С' - 34: 0, # 'Т' - 51: 0, # 'У' - 48: 0, # 'Ф' - 49: 0, # 'Х' - 53: 0, # 'Ц' - 50: 0, # 'Ч' - 54: 0, # 'Ш' - 57: 0, # 'Щ' - 61: 0, # 'Ъ' - 60: 0, # 'Ю' - 56: 0, # 'Я' - 1: 3, # 'а' - 18: 1, # 'б' - 9: 1, # 'в' - 20: 1, # 'г' - 11: 0, # 'д' - 3: 3, # 'е' - 23: 0, # 'ж' - 15: 0, # 'з' - 2: 3, # 'и' - 26: 0, # 'й' - 12: 2, # 'к' - 10: 2, # 'л' - 14: 1, # 'м' - 6: 1, # 'н' - 4: 3, # 'о' - 13: 0, # 'п' - 7: 2, # 'р' - 8: 2, # 'с' - 5: 2, # 'т' - 19: 2, # 'у' - 29: 0, # 'ф' - 25: 1, # 'х' - 22: 0, # 'ц' - 21: 1, # 'ч' - 27: 1, # 'ш' - 24: 0, # 'щ' - 17: 2, # 'ъ' - 52: 2, # 'ь' - 42: 1, # 'ю' - 16: 1, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 25: { # 'х' - 63: 0, # 'e' - 45: 0, # '\xad' - 31: 0, # 'А' - 32: 0, # 'Б' - 35: 0, # 'В' - 43: 0, # 'Г' - 37: 0, # 'Д' - 44: 0, # 'Е' - 55: 0, # 'Ж' - 47: 0, # 'З' - 40: 0, # 'И' - 59: 0, # 'Й' - 33: 0, # 'К' - 46: 0, # 'Л' - 38: 0, # 'М' - 36: 0, # 'Н' - 41: 0, # 'О' - 30: 0, # 'П' - 39: 0, # 'Р' - 28: 0, # 'С' - 34: 0, # 'Т' - 51: 0, # 'У' - 48: 0, # 'Ф' - 49: 0, # 'Х' - 53: 0, # 'Ц' - 50: 0, # 'Ч' - 54: 0, # 'Ш' - 57: 0, # 'Щ' - 61: 0, # 'Ъ' - 60: 0, # 'Ю' - 56: 0, # 'Я' - 1: 3, # 'а' - 18: 1, # 'б' - 9: 3, # 'в' - 20: 0, # 'г' - 11: 1, # 'д' - 3: 2, # 'е' - 23: 0, # 'ж' - 15: 1, # 'з' - 2: 3, # 'и' - 26: 0, # 'й' - 12: 1, # 'к' - 10: 2, # 'л' - 14: 2, # 'м' - 6: 3, # 'н' - 4: 3, # 'о' - 13: 1, # 'п' - 7: 3, # 'р' - 8: 1, # 'с' - 5: 2, # 'т' - 19: 3, # 'у' - 29: 0, # 'ф' - 25: 1, # 'х' - 22: 0, # 'ц' - 21: 1, # 'ч' - 27: 0, # 'ш' - 24: 0, # 'щ' - 17: 2, # 'ъ' - 52: 0, # 'ь' - 42: 1, # 'ю' - 16: 1, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 22: { # 'ц' - 63: 1, # 'e' - 45: 0, # '\xad' - 31: 0, # 'А' - 32: 0, # 'Б' - 35: 0, # 'В' - 43: 0, # 'Г' - 37: 0, # 'Д' - 44: 0, # 'Е' - 55: 0, # 'Ж' - 47: 0, # 'З' - 40: 0, # 'И' - 59: 0, # 'Й' - 33: 0, # 'К' - 46: 0, # 'Л' - 38: 0, # 'М' - 36: 0, # 'Н' - 41: 0, # 'О' - 30: 0, # 'П' - 39: 0, # 'Р' - 28: 0, # 'С' - 34: 0, # 'Т' - 51: 0, # 'У' - 48: 0, # 'Ф' - 49: 0, # 'Х' - 53: 0, # 'Ц' - 50: 0, # 'Ч' - 54: 0, # 'Ш' - 57: 0, # 'Щ' - 61: 0, # 'Ъ' - 60: 0, # 'Ю' - 56: 0, # 'Я' - 1: 3, # 'а' - 18: 1, # 'б' - 9: 2, # 'в' - 20: 1, # 'г' - 11: 1, # 'д' - 3: 3, # 'е' - 23: 0, # 'ж' - 15: 1, # 'з' - 2: 3, # 'и' - 26: 0, # 'й' - 12: 2, # 'к' - 10: 1, # 'л' - 14: 1, # 'м' - 6: 1, # 'н' - 4: 2, # 'о' - 13: 1, # 'п' - 7: 1, # 'р' - 8: 1, # 'с' - 5: 1, # 'т' - 19: 2, # 'у' - 29: 1, # 'ф' - 25: 1, # 'х' - 22: 1, # 'ц' - 21: 1, # 'ч' - 27: 1, # 'ш' - 24: 1, # 'щ' - 17: 2, # 'ъ' - 52: 1, # 'ь' - 42: 0, # 'ю' - 16: 2, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 21: { # 'ч' - 63: 1, # 'e' - 45: 0, # '\xad' - 31: 0, # 'А' - 32: 0, # 'Б' - 35: 0, # 'В' - 43: 0, # 'Г' - 37: 0, # 'Д' - 44: 0, # 'Е' - 55: 0, # 'Ж' - 47: 0, # 'З' - 40: 0, # 'И' - 59: 0, # 'Й' - 33: 0, # 'К' - 46: 0, # 'Л' - 38: 0, # 'М' - 36: 0, # 'Н' - 41: 0, # 'О' - 30: 0, # 'П' - 39: 0, # 'Р' - 28: 0, # 'С' - 34: 0, # 'Т' - 51: 0, # 'У' - 48: 0, # 'Ф' - 49: 0, # 'Х' - 53: 0, # 'Ц' - 50: 0, # 'Ч' - 54: 0, # 'Ш' - 57: 0, # 'Щ' - 61: 0, # 'Ъ' - 60: 0, # 'Ю' - 56: 0, # 'Я' - 1: 3, # 'а' - 18: 1, # 'б' - 9: 3, # 'в' - 20: 1, # 'г' - 11: 0, # 'д' - 3: 3, # 'е' - 23: 1, # 'ж' - 15: 0, # 'з' - 2: 3, # 'и' - 26: 0, # 'й' - 12: 3, # 'к' - 10: 2, # 'л' - 14: 2, # 'м' - 6: 3, # 'н' - 4: 3, # 'о' - 13: 0, # 'п' - 7: 2, # 'р' - 8: 0, # 'с' - 5: 2, # 'т' - 19: 3, # 'у' - 29: 0, # 'ф' - 25: 0, # 'х' - 22: 0, # 'ц' - 21: 0, # 'ч' - 27: 1, # 'ш' - 24: 0, # 'щ' - 17: 2, # 'ъ' - 52: 0, # 'ь' - 42: 1, # 'ю' - 16: 0, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 27: { # 'ш' - 63: 1, # 'e' - 45: 0, # '\xad' - 31: 0, # 'А' - 32: 0, # 'Б' - 35: 0, # 'В' - 43: 0, # 'Г' - 37: 0, # 'Д' - 44: 0, # 'Е' - 55: 0, # 'Ж' - 47: 0, # 'З' - 40: 0, # 'И' - 59: 0, # 'Й' - 33: 0, # 'К' - 46: 0, # 'Л' - 38: 0, # 'М' - 36: 0, # 'Н' - 41: 0, # 'О' - 30: 0, # 'П' - 39: 0, # 'Р' - 28: 0, # 'С' - 34: 0, # 'Т' - 51: 0, # 'У' - 48: 0, # 'Ф' - 49: 0, # 'Х' - 53: 0, # 'Ц' - 50: 0, # 'Ч' - 54: 0, # 'Ш' - 57: 0, # 'Щ' - 61: 0, # 'Ъ' - 60: 0, # 'Ю' - 56: 0, # 'Я' - 1: 3, # 'а' - 18: 0, # 'б' - 9: 2, # 'в' - 20: 0, # 'г' - 11: 1, # 'д' - 3: 3, # 'е' - 23: 0, # 'ж' - 15: 0, # 'з' - 2: 3, # 'и' - 26: 0, # 'й' - 12: 3, # 'к' - 10: 2, # 'л' - 14: 1, # 'м' - 6: 3, # 'н' - 4: 2, # 'о' - 13: 2, # 'п' - 7: 1, # 'р' - 8: 0, # 'с' - 5: 1, # 'т' - 19: 2, # 'у' - 29: 1, # 'ф' - 25: 0, # 'х' - 22: 0, # 'ц' - 21: 1, # 'ч' - 27: 0, # 'ш' - 24: 0, # 'щ' - 17: 2, # 'ъ' - 52: 1, # 'ь' - 42: 1, # 'ю' - 16: 0, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 24: { # 'щ' - 63: 1, # 'e' - 45: 0, # '\xad' - 31: 0, # 'А' - 32: 0, # 'Б' - 35: 0, # 'В' - 43: 0, # 'Г' - 37: 0, # 'Д' - 44: 0, # 'Е' - 55: 0, # 'Ж' - 47: 0, # 'З' - 40: 0, # 'И' - 59: 0, # 'Й' - 33: 0, # 'К' - 46: 0, # 'Л' - 38: 0, # 'М' - 36: 0, # 'Н' - 41: 0, # 'О' - 30: 0, # 'П' - 39: 0, # 'Р' - 28: 0, # 'С' - 34: 0, # 'Т' - 51: 0, # 'У' - 48: 0, # 'Ф' - 49: 0, # 'Х' - 53: 0, # 'Ц' - 50: 0, # 'Ч' - 54: 0, # 'Ш' - 57: 0, # 'Щ' - 61: 0, # 'Ъ' - 60: 0, # 'Ю' - 56: 0, # 'Я' - 1: 3, # 'а' - 18: 0, # 'б' - 9: 1, # 'в' - 20: 0, # 'г' - 11: 0, # 'д' - 3: 3, # 'е' - 23: 0, # 'ж' - 15: 0, # 'з' - 2: 3, # 'и' - 26: 0, # 'й' - 12: 1, # 'к' - 10: 0, # 'л' - 14: 0, # 'м' - 6: 2, # 'н' - 4: 3, # 'о' - 13: 0, # 'п' - 7: 1, # 'р' - 8: 0, # 'с' - 5: 2, # 'т' - 19: 3, # 'у' - 29: 0, # 'ф' - 25: 0, # 'х' - 22: 1, # 'ц' - 21: 0, # 'ч' - 27: 0, # 'ш' - 24: 0, # 'щ' - 17: 1, # 'ъ' - 52: 0, # 'ь' - 42: 0, # 'ю' - 16: 2, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 17: { # 'ъ' - 63: 0, # 'e' - 45: 0, # '\xad' - 31: 0, # 'А' - 32: 0, # 'Б' - 35: 0, # 'В' - 43: 0, # 'Г' - 37: 0, # 'Д' - 44: 0, # 'Е' - 55: 0, # 'Ж' - 47: 0, # 'З' - 40: 0, # 'И' - 59: 0, # 'Й' - 33: 0, # 'К' - 46: 0, # 'Л' - 38: 0, # 'М' - 36: 0, # 'Н' - 41: 0, # 'О' - 30: 0, # 'П' - 39: 0, # 'Р' - 28: 0, # 'С' - 34: 0, # 'Т' - 51: 0, # 'У' - 48: 0, # 'Ф' - 49: 0, # 'Х' - 53: 0, # 'Ц' - 50: 0, # 'Ч' - 54: 0, # 'Ш' - 57: 0, # 'Щ' - 61: 0, # 'Ъ' - 60: 0, # 'Ю' - 56: 0, # 'Я' - 1: 1, # 'а' - 18: 3, # 'б' - 9: 3, # 'в' - 20: 3, # 'г' - 11: 3, # 'д' - 3: 2, # 'е' - 23: 3, # 'ж' - 15: 3, # 'з' - 2: 1, # 'и' - 26: 2, # 'й' - 12: 3, # 'к' - 10: 3, # 'л' - 14: 3, # 'м' - 6: 3, # 'н' - 4: 3, # 'о' - 13: 3, # 'п' - 7: 3, # 'р' - 8: 3, # 'с' - 5: 3, # 'т' - 19: 1, # 'у' - 29: 1, # 'ф' - 25: 2, # 'х' - 22: 2, # 'ц' - 21: 3, # 'ч' - 27: 2, # 'ш' - 24: 3, # 'щ' - 17: 0, # 'ъ' - 52: 0, # 'ь' - 42: 2, # 'ю' - 16: 0, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 52: { # 'ь' - 63: 0, # 'e' - 45: 0, # '\xad' - 31: 0, # 'А' - 32: 0, # 'Б' - 35: 0, # 'В' - 43: 0, # 'Г' - 37: 0, # 'Д' - 44: 0, # 'Е' - 55: 0, # 'Ж' - 47: 0, # 'З' - 40: 0, # 'И' - 59: 0, # 'Й' - 33: 0, # 'К' - 46: 0, # 'Л' - 38: 0, # 'М' - 36: 0, # 'Н' - 41: 0, # 'О' - 30: 0, # 'П' - 39: 0, # 'Р' - 28: 0, # 'С' - 34: 0, # 'Т' - 51: 0, # 'У' - 48: 0, # 'Ф' - 49: 0, # 'Х' - 53: 0, # 'Ц' - 50: 0, # 'Ч' - 54: 0, # 'Ш' - 57: 0, # 'Щ' - 61: 0, # 'Ъ' - 60: 0, # 'Ю' - 56: 0, # 'Я' - 1: 0, # 'а' - 18: 0, # 'б' - 9: 0, # 'в' - 20: 0, # 'г' - 11: 0, # 'д' - 3: 1, # 'е' - 23: 0, # 'ж' - 15: 0, # 'з' - 2: 0, # 'и' - 26: 0, # 'й' - 12: 1, # 'к' - 10: 0, # 'л' - 14: 0, # 'м' - 6: 1, # 'н' - 4: 3, # 'о' - 13: 0, # 'п' - 7: 0, # 'р' - 8: 0, # 'с' - 5: 1, # 'т' - 19: 0, # 'у' - 29: 0, # 'ф' - 25: 0, # 'х' - 22: 1, # 'ц' - 21: 0, # 'ч' - 27: 0, # 'ш' - 24: 0, # 'щ' - 17: 0, # 'ъ' - 52: 0, # 'ь' - 42: 1, # 'ю' - 16: 0, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 42: { # 'ю' - 63: 0, # 'e' - 45: 0, # '\xad' - 31: 0, # 'А' - 32: 0, # 'Б' - 35: 0, # 'В' - 43: 0, # 'Г' - 37: 0, # 'Д' - 44: 0, # 'Е' - 55: 0, # 'Ж' - 47: 0, # 'З' - 40: 0, # 'И' - 59: 0, # 'Й' - 33: 0, # 'К' - 46: 0, # 'Л' - 38: 0, # 'М' - 36: 0, # 'Н' - 41: 0, # 'О' - 30: 0, # 'П' - 39: 0, # 'Р' - 28: 0, # 'С' - 34: 0, # 'Т' - 51: 0, # 'У' - 48: 0, # 'Ф' - 49: 0, # 'Х' - 53: 0, # 'Ц' - 50: 0, # 'Ч' - 54: 0, # 'Ш' - 57: 0, # 'Щ' - 61: 0, # 'Ъ' - 60: 0, # 'Ю' - 56: 0, # 'Я' - 1: 1, # 'а' - 18: 2, # 'б' - 9: 1, # 'в' - 20: 2, # 'г' - 11: 2, # 'д' - 3: 1, # 'е' - 23: 2, # 'ж' - 15: 2, # 'з' - 2: 1, # 'и' - 26: 1, # 'й' - 12: 2, # 'к' - 10: 2, # 'л' - 14: 2, # 'м' - 6: 2, # 'н' - 4: 1, # 'о' - 13: 1, # 'п' - 7: 2, # 'р' - 8: 2, # 'с' - 5: 2, # 'т' - 19: 1, # 'у' - 29: 1, # 'ф' - 25: 1, # 'х' - 22: 2, # 'ц' - 21: 3, # 'ч' - 27: 1, # 'ш' - 24: 1, # 'щ' - 17: 1, # 'ъ' - 52: 0, # 'ь' - 42: 0, # 'ю' - 16: 1, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 16: { # 'я' - 63: 0, # 'e' - 45: 1, # '\xad' - 31: 0, # 'А' - 32: 0, # 'Б' - 35: 0, # 'В' - 43: 0, # 'Г' - 37: 0, # 'Д' - 44: 0, # 'Е' - 55: 0, # 'Ж' - 47: 0, # 'З' - 40: 0, # 'И' - 59: 0, # 'Й' - 33: 0, # 'К' - 46: 0, # 'Л' - 38: 0, # 'М' - 36: 0, # 'Н' - 41: 0, # 'О' - 30: 0, # 'П' - 39: 0, # 'Р' - 28: 0, # 'С' - 34: 0, # 'Т' - 51: 0, # 'У' - 48: 0, # 'Ф' - 49: 0, # 'Х' - 53: 0, # 'Ц' - 50: 0, # 'Ч' - 54: 0, # 'Ш' - 57: 0, # 'Щ' - 61: 0, # 'Ъ' - 60: 0, # 'Ю' - 56: 0, # 'Я' - 1: 0, # 'а' - 18: 3, # 'б' - 9: 3, # 'в' - 20: 2, # 'г' - 11: 3, # 'д' - 3: 2, # 'е' - 23: 1, # 'ж' - 15: 2, # 'з' - 2: 1, # 'и' - 26: 2, # 'й' - 12: 3, # 'к' - 10: 3, # 'л' - 14: 3, # 'м' - 6: 3, # 'н' - 4: 1, # 'о' - 13: 2, # 'п' - 7: 2, # 'р' - 8: 3, # 'с' - 5: 3, # 'т' - 19: 1, # 'у' - 29: 1, # 'ф' - 25: 3, # 'х' - 22: 2, # 'ц' - 21: 1, # 'ч' - 27: 1, # 'ш' - 24: 2, # 'щ' - 17: 0, # 'ъ' - 52: 0, # 'ь' - 42: 0, # 'ю' - 16: 1, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 58: { # 'є' - 63: 0, # 'e' - 45: 0, # '\xad' - 31: 0, # 'А' - 32: 0, # 'Б' - 35: 0, # 'В' - 43: 0, # 'Г' - 37: 0, # 'Д' - 44: 0, # 'Е' - 55: 0, # 'Ж' - 47: 0, # 'З' - 40: 0, # 'И' - 59: 0, # 'Й' - 33: 0, # 'К' - 46: 0, # 'Л' - 38: 0, # 'М' - 36: 0, # 'Н' - 41: 0, # 'О' - 30: 0, # 'П' - 39: 0, # 'Р' - 28: 0, # 'С' - 34: 0, # 'Т' - 51: 0, # 'У' - 48: 0, # 'Ф' - 49: 0, # 'Х' - 53: 0, # 'Ц' - 50: 0, # 'Ч' - 54: 0, # 'Ш' - 57: 0, # 'Щ' - 61: 0, # 'Ъ' - 60: 0, # 'Ю' - 56: 0, # 'Я' - 1: 0, # 'а' - 18: 0, # 'б' - 9: 0, # 'в' - 20: 0, # 'г' - 11: 0, # 'д' - 3: 0, # 'е' - 23: 0, # 'ж' - 15: 0, # 'з' - 2: 0, # 'и' - 26: 0, # 'й' - 12: 0, # 'к' - 10: 0, # 'л' - 14: 0, # 'м' - 6: 0, # 'н' - 4: 0, # 'о' - 13: 0, # 'п' - 7: 0, # 'р' - 8: 0, # 'с' - 5: 0, # 'т' - 19: 0, # 'у' - 29: 0, # 'ф' - 25: 0, # 'х' - 22: 0, # 'ц' - 21: 0, # 'ч' - 27: 0, # 'ш' - 24: 0, # 'щ' - 17: 0, # 'ъ' - 52: 0, # 'ь' - 42: 0, # 'ю' - 16: 0, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 62: { # '№' - 63: 0, # 'e' - 45: 0, # '\xad' - 31: 0, # 'А' - 32: 0, # 'Б' - 35: 0, # 'В' - 43: 0, # 'Г' - 37: 0, # 'Д' - 44: 0, # 'Е' - 55: 0, # 'Ж' - 47: 0, # 'З' - 40: 0, # 'И' - 59: 0, # 'Й' - 33: 0, # 'К' - 46: 0, # 'Л' - 38: 0, # 'М' - 36: 0, # 'Н' - 41: 0, # 'О' - 30: 0, # 'П' - 39: 0, # 'Р' - 28: 0, # 'С' - 34: 0, # 'Т' - 51: 0, # 'У' - 48: 0, # 'Ф' - 49: 0, # 'Х' - 53: 0, # 'Ц' - 50: 0, # 'Ч' - 54: 0, # 'Ш' - 57: 0, # 'Щ' - 61: 0, # 'Ъ' - 60: 0, # 'Ю' - 56: 0, # 'Я' - 1: 0, # 'а' - 18: 0, # 'б' - 9: 0, # 'в' - 20: 0, # 'г' - 11: 0, # 'д' - 3: 0, # 'е' - 23: 0, # 'ж' - 15: 0, # 'з' - 2: 0, # 'и' - 26: 0, # 'й' - 12: 0, # 'к' - 10: 0, # 'л' - 14: 0, # 'м' - 6: 0, # 'н' - 4: 0, # 'о' - 13: 0, # 'п' - 7: 0, # 'р' - 8: 0, # 'с' - 5: 0, # 'т' - 19: 0, # 'у' - 29: 0, # 'ф' - 25: 0, # 'х' - 22: 0, # 'ц' - 21: 0, # 'ч' - 27: 0, # 'ш' - 24: 0, # 'щ' - 17: 0, # 'ъ' - 52: 0, # 'ь' - 42: 0, # 'ю' - 16: 0, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, -} - -# 255: Undefined characters that did not exist in training text -# 254: Carriage/Return -# 253: symbol (punctuation) that does not belong to word -# 252: 0 - 9 -# 251: Control characters - -# Character Mapping Table(s): -ISO_8859_5_BULGARIAN_CHAR_TO_ORDER = { - 0: 255, # '\x00' - 1: 255, # '\x01' - 2: 255, # '\x02' - 3: 255, # '\x03' - 4: 255, # '\x04' - 5: 255, # '\x05' - 6: 255, # '\x06' - 7: 255, # '\x07' - 8: 255, # '\x08' - 9: 255, # '\t' - 10: 254, # '\n' - 11: 255, # '\x0b' - 12: 255, # '\x0c' - 13: 254, # '\r' - 14: 255, # '\x0e' - 15: 255, # '\x0f' - 16: 255, # '\x10' - 17: 255, # '\x11' - 18: 255, # '\x12' - 19: 255, # '\x13' - 20: 255, # '\x14' - 21: 255, # '\x15' - 22: 255, # '\x16' - 23: 255, # '\x17' - 24: 255, # '\x18' - 25: 255, # '\x19' - 26: 255, # '\x1a' - 27: 255, # '\x1b' - 28: 255, # '\x1c' - 29: 255, # '\x1d' - 30: 255, # '\x1e' - 31: 255, # '\x1f' - 32: 253, # ' ' - 33: 253, # '!' - 34: 253, # '"' - 35: 253, # '#' - 36: 253, # '$' - 37: 253, # '%' - 38: 253, # '&' - 39: 253, # "'" - 40: 253, # '(' - 41: 253, # ')' - 42: 253, # '*' - 43: 253, # '+' - 44: 253, # ',' - 45: 253, # '-' - 46: 253, # '.' - 47: 253, # '/' - 48: 252, # '0' - 49: 252, # '1' - 50: 252, # '2' - 51: 252, # '3' - 52: 252, # '4' - 53: 252, # '5' - 54: 252, # '6' - 55: 252, # '7' - 56: 252, # '8' - 57: 252, # '9' - 58: 253, # ':' - 59: 253, # ';' - 60: 253, # '<' - 61: 253, # '=' - 62: 253, # '>' - 63: 253, # '?' - 64: 253, # '@' - 65: 77, # 'A' - 66: 90, # 'B' - 67: 99, # 'C' - 68: 100, # 'D' - 69: 72, # 'E' - 70: 109, # 'F' - 71: 107, # 'G' - 72: 101, # 'H' - 73: 79, # 'I' - 74: 185, # 'J' - 75: 81, # 'K' - 76: 102, # 'L' - 77: 76, # 'M' - 78: 94, # 'N' - 79: 82, # 'O' - 80: 110, # 'P' - 81: 186, # 'Q' - 82: 108, # 'R' - 83: 91, # 'S' - 84: 74, # 'T' - 85: 119, # 'U' - 86: 84, # 'V' - 87: 96, # 'W' - 88: 111, # 'X' - 89: 187, # 'Y' - 90: 115, # 'Z' - 91: 253, # '[' - 92: 253, # '\\' - 93: 253, # ']' - 94: 253, # '^' - 95: 253, # '_' - 96: 253, # '`' - 97: 65, # 'a' - 98: 69, # 'b' - 99: 70, # 'c' - 100: 66, # 'd' - 101: 63, # 'e' - 102: 68, # 'f' - 103: 112, # 'g' - 104: 103, # 'h' - 105: 92, # 'i' - 106: 194, # 'j' - 107: 104, # 'k' - 108: 95, # 'l' - 109: 86, # 'm' - 110: 87, # 'n' - 111: 71, # 'o' - 112: 116, # 'p' - 113: 195, # 'q' - 114: 85, # 'r' - 115: 93, # 's' - 116: 97, # 't' - 117: 113, # 'u' - 118: 196, # 'v' - 119: 197, # 'w' - 120: 198, # 'x' - 121: 199, # 'y' - 122: 200, # 'z' - 123: 253, # '{' - 124: 253, # '|' - 125: 253, # '}' - 126: 253, # '~' - 127: 253, # '\x7f' - 128: 194, # '\x80' - 129: 195, # '\x81' - 130: 196, # '\x82' - 131: 197, # '\x83' - 132: 198, # '\x84' - 133: 199, # '\x85' - 134: 200, # '\x86' - 135: 201, # '\x87' - 136: 202, # '\x88' - 137: 203, # '\x89' - 138: 204, # '\x8a' - 139: 205, # '\x8b' - 140: 206, # '\x8c' - 141: 207, # '\x8d' - 142: 208, # '\x8e' - 143: 209, # '\x8f' - 144: 210, # '\x90' - 145: 211, # '\x91' - 146: 212, # '\x92' - 147: 213, # '\x93' - 148: 214, # '\x94' - 149: 215, # '\x95' - 150: 216, # '\x96' - 151: 217, # '\x97' - 152: 218, # '\x98' - 153: 219, # '\x99' - 154: 220, # '\x9a' - 155: 221, # '\x9b' - 156: 222, # '\x9c' - 157: 223, # '\x9d' - 158: 224, # '\x9e' - 159: 225, # '\x9f' - 160: 81, # '\xa0' - 161: 226, # 'Ё' - 162: 227, # 'Ђ' - 163: 228, # 'Ѓ' - 164: 229, # 'Є' - 165: 230, # 'Ѕ' - 166: 105, # 'І' - 167: 231, # 'Ї' - 168: 232, # 'Ј' - 169: 233, # 'Љ' - 170: 234, # 'Њ' - 171: 235, # 'Ћ' - 172: 236, # 'Ќ' - 173: 45, # '\xad' - 174: 237, # 'Ў' - 175: 238, # 'Џ' - 176: 31, # 'А' - 177: 32, # 'Б' - 178: 35, # 'В' - 179: 43, # 'Г' - 180: 37, # 'Д' - 181: 44, # 'Е' - 182: 55, # 'Ж' - 183: 47, # 'З' - 184: 40, # 'И' - 185: 59, # 'Й' - 186: 33, # 'К' - 187: 46, # 'Л' - 188: 38, # 'М' - 189: 36, # 'Н' - 190: 41, # 'О' - 191: 30, # 'П' - 192: 39, # 'Р' - 193: 28, # 'С' - 194: 34, # 'Т' - 195: 51, # 'У' - 196: 48, # 'Ф' - 197: 49, # 'Х' - 198: 53, # 'Ц' - 199: 50, # 'Ч' - 200: 54, # 'Ш' - 201: 57, # 'Щ' - 202: 61, # 'Ъ' - 203: 239, # 'Ы' - 204: 67, # 'Ь' - 205: 240, # 'Э' - 206: 60, # 'Ю' - 207: 56, # 'Я' - 208: 1, # 'а' - 209: 18, # 'б' - 210: 9, # 'в' - 211: 20, # 'г' - 212: 11, # 'д' - 213: 3, # 'е' - 214: 23, # 'ж' - 215: 15, # 'з' - 216: 2, # 'и' - 217: 26, # 'й' - 218: 12, # 'к' - 219: 10, # 'л' - 220: 14, # 'м' - 221: 6, # 'н' - 222: 4, # 'о' - 223: 13, # 'п' - 224: 7, # 'р' - 225: 8, # 'с' - 226: 5, # 'т' - 227: 19, # 'у' - 228: 29, # 'ф' - 229: 25, # 'х' - 230: 22, # 'ц' - 231: 21, # 'ч' - 232: 27, # 'ш' - 233: 24, # 'щ' - 234: 17, # 'ъ' - 235: 75, # 'ы' - 236: 52, # 'ь' - 237: 241, # 'э' - 238: 42, # 'ю' - 239: 16, # 'я' - 240: 62, # '№' - 241: 242, # 'ё' - 242: 243, # 'ђ' - 243: 244, # 'ѓ' - 244: 58, # 'є' - 245: 245, # 'ѕ' - 246: 98, # 'і' - 247: 246, # 'ї' - 248: 247, # 'ј' - 249: 248, # 'љ' - 250: 249, # 'њ' - 251: 250, # 'ћ' - 252: 251, # 'ќ' - 253: 91, # '§' - 254: 252, # 'ў' - 255: 253, # 'џ' -} - -ISO_8859_5_BULGARIAN_MODEL = SingleByteCharSetModel( - charset_name="ISO-8859-5", - language="Bulgarian", - char_to_order_map=ISO_8859_5_BULGARIAN_CHAR_TO_ORDER, - language_model=BULGARIAN_LANG_MODEL, - typical_positive_ratio=0.969392, - keep_ascii_letters=False, - alphabet="АБВГДЕЖЗИЙКЛМНОПРСТУФХЦЧШЩЪЬЮЯабвгдежзийклмнопрстуфхцчшщъьюя", -) - -WINDOWS_1251_BULGARIAN_CHAR_TO_ORDER = { - 0: 255, # '\x00' - 1: 255, # '\x01' - 2: 255, # '\x02' - 3: 255, # '\x03' - 4: 255, # '\x04' - 5: 255, # '\x05' - 6: 255, # '\x06' - 7: 255, # '\x07' - 8: 255, # '\x08' - 9: 255, # '\t' - 10: 254, # '\n' - 11: 255, # '\x0b' - 12: 255, # '\x0c' - 13: 254, # '\r' - 14: 255, # '\x0e' - 15: 255, # '\x0f' - 16: 255, # '\x10' - 17: 255, # '\x11' - 18: 255, # '\x12' - 19: 255, # '\x13' - 20: 255, # '\x14' - 21: 255, # '\x15' - 22: 255, # '\x16' - 23: 255, # '\x17' - 24: 255, # '\x18' - 25: 255, # '\x19' - 26: 255, # '\x1a' - 27: 255, # '\x1b' - 28: 255, # '\x1c' - 29: 255, # '\x1d' - 30: 255, # '\x1e' - 31: 255, # '\x1f' - 32: 253, # ' ' - 33: 253, # '!' - 34: 253, # '"' - 35: 253, # '#' - 36: 253, # '$' - 37: 253, # '%' - 38: 253, # '&' - 39: 253, # "'" - 40: 253, # '(' - 41: 253, # ')' - 42: 253, # '*' - 43: 253, # '+' - 44: 253, # ',' - 45: 253, # '-' - 46: 253, # '.' - 47: 253, # '/' - 48: 252, # '0' - 49: 252, # '1' - 50: 252, # '2' - 51: 252, # '3' - 52: 252, # '4' - 53: 252, # '5' - 54: 252, # '6' - 55: 252, # '7' - 56: 252, # '8' - 57: 252, # '9' - 58: 253, # ':' - 59: 253, # ';' - 60: 253, # '<' - 61: 253, # '=' - 62: 253, # '>' - 63: 253, # '?' - 64: 253, # '@' - 65: 77, # 'A' - 66: 90, # 'B' - 67: 99, # 'C' - 68: 100, # 'D' - 69: 72, # 'E' - 70: 109, # 'F' - 71: 107, # 'G' - 72: 101, # 'H' - 73: 79, # 'I' - 74: 185, # 'J' - 75: 81, # 'K' - 76: 102, # 'L' - 77: 76, # 'M' - 78: 94, # 'N' - 79: 82, # 'O' - 80: 110, # 'P' - 81: 186, # 'Q' - 82: 108, # 'R' - 83: 91, # 'S' - 84: 74, # 'T' - 85: 119, # 'U' - 86: 84, # 'V' - 87: 96, # 'W' - 88: 111, # 'X' - 89: 187, # 'Y' - 90: 115, # 'Z' - 91: 253, # '[' - 92: 253, # '\\' - 93: 253, # ']' - 94: 253, # '^' - 95: 253, # '_' - 96: 253, # '`' - 97: 65, # 'a' - 98: 69, # 'b' - 99: 70, # 'c' - 100: 66, # 'd' - 101: 63, # 'e' - 102: 68, # 'f' - 103: 112, # 'g' - 104: 103, # 'h' - 105: 92, # 'i' - 106: 194, # 'j' - 107: 104, # 'k' - 108: 95, # 'l' - 109: 86, # 'm' - 110: 87, # 'n' - 111: 71, # 'o' - 112: 116, # 'p' - 113: 195, # 'q' - 114: 85, # 'r' - 115: 93, # 's' - 116: 97, # 't' - 117: 113, # 'u' - 118: 196, # 'v' - 119: 197, # 'w' - 120: 198, # 'x' - 121: 199, # 'y' - 122: 200, # 'z' - 123: 253, # '{' - 124: 253, # '|' - 125: 253, # '}' - 126: 253, # '~' - 127: 253, # '\x7f' - 128: 206, # 'Ђ' - 129: 207, # 'Ѓ' - 130: 208, # '‚' - 131: 209, # 'ѓ' - 132: 210, # '„' - 133: 211, # '…' - 134: 212, # '†' - 135: 213, # '‡' - 136: 120, # '€' - 137: 214, # '‰' - 138: 215, # 'Љ' - 139: 216, # '‹' - 140: 217, # 'Њ' - 141: 218, # 'Ќ' - 142: 219, # 'Ћ' - 143: 220, # 'Џ' - 144: 221, # 'ђ' - 145: 78, # '‘' - 146: 64, # '’' - 147: 83, # '“' - 148: 121, # '”' - 149: 98, # '•' - 150: 117, # '–' - 151: 105, # '—' - 152: 222, # None - 153: 223, # '™' - 154: 224, # 'љ' - 155: 225, # '›' - 156: 226, # 'њ' - 157: 227, # 'ќ' - 158: 228, # 'ћ' - 159: 229, # 'џ' - 160: 88, # '\xa0' - 161: 230, # 'Ў' - 162: 231, # 'ў' - 163: 232, # 'Ј' - 164: 233, # '¤' - 165: 122, # 'Ґ' - 166: 89, # '¦' - 167: 106, # '§' - 168: 234, # 'Ё' - 169: 235, # '©' - 170: 236, # 'Є' - 171: 237, # '«' - 172: 238, # '¬' - 173: 45, # '\xad' - 174: 239, # '®' - 175: 240, # 'Ї' - 176: 73, # '°' - 177: 80, # '±' - 178: 118, # 'І' - 179: 114, # 'і' - 180: 241, # 'ґ' - 181: 242, # 'µ' - 182: 243, # '¶' - 183: 244, # '·' - 184: 245, # 'ё' - 185: 62, # '№' - 186: 58, # 'є' - 187: 246, # '»' - 188: 247, # 'ј' - 189: 248, # 'Ѕ' - 190: 249, # 'ѕ' - 191: 250, # 'ї' - 192: 31, # 'А' - 193: 32, # 'Б' - 194: 35, # 'В' - 195: 43, # 'Г' - 196: 37, # 'Д' - 197: 44, # 'Е' - 198: 55, # 'Ж' - 199: 47, # 'З' - 200: 40, # 'И' - 201: 59, # 'Й' - 202: 33, # 'К' - 203: 46, # 'Л' - 204: 38, # 'М' - 205: 36, # 'Н' - 206: 41, # 'О' - 207: 30, # 'П' - 208: 39, # 'Р' - 209: 28, # 'С' - 210: 34, # 'Т' - 211: 51, # 'У' - 212: 48, # 'Ф' - 213: 49, # 'Х' - 214: 53, # 'Ц' - 215: 50, # 'Ч' - 216: 54, # 'Ш' - 217: 57, # 'Щ' - 218: 61, # 'Ъ' - 219: 251, # 'Ы' - 220: 67, # 'Ь' - 221: 252, # 'Э' - 222: 60, # 'Ю' - 223: 56, # 'Я' - 224: 1, # 'а' - 225: 18, # 'б' - 226: 9, # 'в' - 227: 20, # 'г' - 228: 11, # 'д' - 229: 3, # 'е' - 230: 23, # 'ж' - 231: 15, # 'з' - 232: 2, # 'и' - 233: 26, # 'й' - 234: 12, # 'к' - 235: 10, # 'л' - 236: 14, # 'м' - 237: 6, # 'н' - 238: 4, # 'о' - 239: 13, # 'п' - 240: 7, # 'р' - 241: 8, # 'с' - 242: 5, # 'т' - 243: 19, # 'у' - 244: 29, # 'ф' - 245: 25, # 'х' - 246: 22, # 'ц' - 247: 21, # 'ч' - 248: 27, # 'ш' - 249: 24, # 'щ' - 250: 17, # 'ъ' - 251: 75, # 'ы' - 252: 52, # 'ь' - 253: 253, # 'э' - 254: 42, # 'ю' - 255: 16, # 'я' -} - -WINDOWS_1251_BULGARIAN_MODEL = SingleByteCharSetModel( - charset_name="windows-1251", - language="Bulgarian", - char_to_order_map=WINDOWS_1251_BULGARIAN_CHAR_TO_ORDER, - language_model=BULGARIAN_LANG_MODEL, - typical_positive_ratio=0.969392, - keep_ascii_letters=False, - alphabet="АБВГДЕЖЗИЙКЛМНОПРСТУФХЦЧШЩЪЬЮЯабвгдежзийклмнопрстуфхцчшщъьюя", -) diff --git a/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/pip/_vendor/rich/theme.py b/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/pip/_vendor/rich/theme.py deleted file mode 100644 index bfb3c7f82155ba74b5d2b933c252d6ce80fd059d..0000000000000000000000000000000000000000 --- a/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/pip/_vendor/rich/theme.py +++ /dev/null @@ -1,112 +0,0 @@ -import configparser -from typing import Dict, List, IO, Mapping, Optional - -from .default_styles import DEFAULT_STYLES -from .style import Style, StyleType - - -class Theme: - """A container for style information, used by :class:`~rich.console.Console`. - - Args: - styles (Dict[str, Style], optional): A mapping of style names on to styles. Defaults to None for a theme with no styles. - inherit (bool, optional): Inherit default styles. Defaults to True. - """ - - styles: Dict[str, Style] - - def __init__( - self, styles: Optional[Mapping[str, StyleType]] = None, inherit: bool = True - ): - self.styles = DEFAULT_STYLES.copy() if inherit else {} - if styles is not None: - self.styles.update( - { - name: style if isinstance(style, Style) else Style.parse(style) - for name, style in styles.items() - } - ) - - @property - def config(self) -> str: - """Get contents of a config file for this theme.""" - config = "[styles]\n" + "\n".join( - f"{name} = {style}" for name, style in sorted(self.styles.items()) - ) - return config - - @classmethod - def from_file( - cls, config_file: IO[str], source: Optional[str] = None, inherit: bool = True - ) -> "Theme": - """Load a theme from a text mode file. - - Args: - config_file (IO[str]): An open conf file. - source (str, optional): The filename of the open file. Defaults to None. - inherit (bool, optional): Inherit default styles. Defaults to True. - - Returns: - Theme: A New theme instance. - """ - config = configparser.ConfigParser() - config.read_file(config_file, source=source) - styles = {name: Style.parse(value) for name, value in config.items("styles")} - theme = Theme(styles, inherit=inherit) - return theme - - @classmethod - def read(cls, path: str, inherit: bool = True) -> "Theme": - """Read a theme from a path. - - Args: - path (str): Path to a config file readable by Python configparser module. - inherit (bool, optional): Inherit default styles. Defaults to True. - - Returns: - Theme: A new theme instance. - """ - with open(path, "rt") as config_file: - return cls.from_file(config_file, source=path, inherit=inherit) - - -class ThemeStackError(Exception): - """Base exception for errors related to the theme stack.""" - - -class ThemeStack: - """A stack of themes. - - Args: - theme (Theme): A theme instance - """ - - def __init__(self, theme: Theme) -> None: - self._entries: List[Dict[str, Style]] = [theme.styles] - self.get = self._entries[-1].get - - def push_theme(self, theme: Theme, inherit: bool = True) -> None: - """Push a theme on the top of the stack. - - Args: - theme (Theme): A Theme instance. - inherit (boolean, optional): Inherit styles from current top of stack. - """ - styles: Dict[str, Style] - styles = ( - {**self._entries[-1], **theme.styles} if inherit else theme.styles.copy() - ) - self._entries.append(styles) - self.get = self._entries[-1].get - - def pop_theme(self) -> None: - """Pop (and discard) the top-most theme.""" - if len(self._entries) == 1: - raise ThemeStackError("Unable to pop base theme") - self._entries.pop() - self.get = self._entries[-1].get - - -if __name__ == "__main__": # pragma: no cover - theme = Theme() - print(theme.config) diff --git a/spaces/tomofi/MaskTextSpotterV3-OCR/maskrcnn_benchmark/data/datasets/total_text.py b/spaces/tomofi/MaskTextSpotterV3-OCR/maskrcnn_benchmark/data/datasets/total_text.py deleted file mode 100644 index 390d01c8d8aa88dfb9b5dcdc50c3816a113b942f..0000000000000000000000000000000000000000 --- a/spaces/tomofi/MaskTextSpotterV3-OCR/maskrcnn_benchmark/data/datasets/total_text.py +++ /dev/null @@ -1,316 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved. -""" -Simple dataset class that wraps a list of path names -""" - -import os - -import numpy as np -import torch -from maskrcnn_benchmark.structures.bounding_box import BoxList -from maskrcnn_benchmark.structures.segmentation_mask import ( - CharPolygons, - SegmentationCharMask, - SegmentationMask, -) -from PIL import Image, ImageDraw - - -class TotaltextDataset(object): - def __init__(self, use_charann, imgs_dir, gts_dir, transforms=None, ignore_difficult=False): - self.use_charann = use_charann - self.image_lists = [os.path.join(imgs_dir, img) for img in os.listdir(imgs_dir)] - self.gts_dir = gts_dir - self.transforms = transforms - self.min_proposal_size = 2 - self.char_classes = "_0123456789abcdefghijklmnopqrstuvwxyz" - self.vis = False - self.ignore_difficult = ignore_difficult - if self.ignore_difficult and (self.gts_dir is not None) and 'train' in self.gts_dir: - self.image_lists = self.filter_image_lists() - - def filter_image_lists(self): - new_image_lists = [] - for img_path in self.image_lists: - has_positive = False - im_name = os.path.basename(img_path) - gt_path = os.path.join(self.gts_dir, im_name + ".txt") - if not os.path.isfile(gt_path): - gt_path = os.path.join( - self.gts_dir, "gt_" + im_name.split(".")[0] + ".txt" - ) - lines = open(gt_path, 'r').readlines() - for line in lines: - charbbs = [] - strs, loc = self.line2boxes(line) - word = strs[0] - if word == "###": - continue - else: - has_positive = True - if has_positive: - new_image_lists.append(img_path) - return new_image_lists - - def __getitem__(self, item): - im_name = os.path.basename(self.image_lists[item]) - # print(self.image_lists[item]) - img = Image.open(self.image_lists[item]).convert("RGB") - width, height = img.size - if self.gts_dir is not None: - gt_path = os.path.join(self.gts_dir, im_name + ".txt") - words, boxes, charsbbs, segmentations, labels = self.load_gt_from_txt( - gt_path, height, width - ) - if words[0] == "": - use_char_ann = False - else: - use_char_ann = True - if not self.use_charann: - use_char_ann = False - target = BoxList( - boxes[:, :4], img.size, mode="xyxy", use_char_ann=use_char_ann - ) - if self.ignore_difficult: - labels = torch.from_numpy(np.array(labels)) - else: - labels = torch.ones(len(boxes)) - target.add_field("labels", labels) - masks = SegmentationMask(segmentations, img.size) - target.add_field("masks", masks) - char_masks = SegmentationCharMask( - charsbbs, words=words, use_char_ann=use_char_ann, size=img.size, char_num_classes=len(self.char_classes) - ) - target.add_field("char_masks", char_masks) - else: - target = None - if self.transforms is not None: - img, target = self.transforms(img, target) - if self.vis: - new_im = img.numpy().copy().transpose([1, 2, 0]) + [ - 102.9801, - 115.9465, - 122.7717, - ] - new_im = Image.fromarray(new_im.astype(np.uint8)).convert("RGB") - mask = target.extra_fields["masks"].polygons[0].convert("mask") - mask = Image.fromarray((mask.numpy() * 255).astype(np.uint8)).convert("RGB") - if self.use_charann: - m, _ = ( - target.extra_fields["char_masks"] - .chars_boxes[0] - .convert("char_mask") - ) - color = self.creat_color_map(37, 255) - color_map = color[m.numpy().astype(np.uint8)] - char = Image.fromarray(color_map.astype(np.uint8)).convert("RGB") - char = Image.blend(char, new_im, 0.5) - else: - char = new_im - new = Image.blend(char, mask, 0.5) - img_draw = ImageDraw.Draw(new) - for box in target.bbox.numpy(): - box = list(box) - box = box[:2] + [box[2], box[1]] + box[2:] + [box[0], box[3]] + box[:2] - img_draw.line(box, fill=(255, 0, 0), width=2) - new.save("./vis/char_" + im_name) - return img, target, self.image_lists[item] - - def creat_color_map(self, n_class, width): - splits = int(np.ceil(np.power((n_class * 1.0), 1.0 / 3))) - maps = [] - for i in range(splits): - r = int(i * width * 1.0 / (splits - 1)) - for j in range(splits): - g = int(j * width * 1.0 / (splits - 1)) - for k in range(splits - 1): - b = int(k * width * 1.0 / (splits - 1)) - maps.append([r, g, b]) - return np.array(maps) - - def __len__(self): - return len(self.image_lists) - - # def load_gt_from_txt(self, gt_path, height=None, width=None): - # words, boxes, charsboxes, segmentations, labels = [], [], [], [], [] - # lines = open(gt_path).readlines() - # for line in lines: - # charbbs = [] - # strs, loc = self.line2boxes(line) - # word = strs[0] - # if word == "###": - # labels.append(-1) - # continue - # else: - # labels.append(1) - # rect = list(loc[0]) - # min_x = min(rect[::2]) - 1 - # min_y = min(rect[1::2]) - 1 - # max_x = max(rect[::2]) - 1 - # max_y = max(rect[1::2]) - 1 - # box = [min_x, min_y, max_x, max_y] - # segmentations.append([loc[0, :]]) - # tindex = len(boxes) - # boxes.append(box) - # words.append(word) - # c_class = self.char2num(strs[1:]) - # charbb = np.zeros((10,), dtype=np.float32) - # if loc.shape[0] > 1: - # for i in range(1, loc.shape[0]): - # charbb[:8] = loc[i, :] - # charbb[8] = c_class[i - 1] - # charbb[9] = tindex - # charbbs.append(charbb.copy()) - # charsboxes.append(charbbs) - # num_boxes = len(boxes) - # if len(boxes) > 0: - # keep_boxes = np.zeros((num_boxes, 5)) - # keep_boxes[:, :4] = np.array(boxes) - # keep_boxes[:, 4] = range( - # num_boxes - # ) # the 5th column is the box label,same as the 10th column of all charsboxes which belong to the box - # if self.use_charann: - # return words, np.array(keep_boxes), charsboxes, segmentations, labels - # else: - # charbbs = np.zeros((10,), dtype=np.float32) - # for i in range(len(words)): - # charsboxes.append([charbbs]) - # return words, np.array(keep_boxes), charsboxes, segmentations, labels - # else: - # words.append("") - # charbbs = np.zeros((10,), dtype=np.float32) - # return ( - # words, - # np.zeros((1, 5), dtype=np.float32), - # [[charbbs]], - # [[np.zeros((8,), dtype=np.float32)]], - # labels - # ) - - def load_gt_from_txt(self, gt_path, height=None, width=None): - words, boxes, charsboxes, segmentations, labels = [], [], [], [], [] - lines = open(gt_path).readlines() - for line in lines: - charbbs = [] - strs, loc = self.line2boxes(line) - word = strs[0] - if word == "###": - if self.ignore_difficult: - rect = list(loc[0]) - min_x = min(rect[::2]) - 1 - min_y = min(rect[1::2]) - 1 - max_x = max(rect[::2]) - 1 - max_y = max(rect[1::2]) - 1 - box = [min_x, min_y, max_x, max_y] - # segmentations.append([loc[0, :]]) - segmentations.append([[min_x, min_y, max_x, min_y, max_x, max_y, min_x, max_y]]) - tindex = len(boxes) - boxes.append(box) - words.append(word) - labels.append(-1) - charbbs = np.zeros((10,), dtype=np.float32) - if loc.shape[0] > 1: - for i in range(1, loc.shape[0]): - charbb[9] = tindex - charbbs.append(charbb.copy()) - charsboxes.append(charbbs) - else: - continue - else: - rect = list(loc[0]) - min_x = min(rect[::2]) - 1 - min_y = min(rect[1::2]) - 1 - max_x = max(rect[::2]) - 1 - max_y = max(rect[1::2]) - 1 - box = [min_x, min_y, max_x, max_y] - segmentations.append([loc[0, :]]) - tindex = len(boxes) - boxes.append(box) - words.append(word) - labels.append(1) - c_class = self.char2num(strs[1:]) - charbb = np.zeros((10,), dtype=np.float32) - if loc.shape[0] > 1: - for i in range(1, loc.shape[0]): - charbb[:8] = loc[i, :] - charbb[8] = c_class[i - 1] - charbb[9] = tindex - charbbs.append(charbb.copy()) - charsboxes.append(charbbs) - num_boxes = len(boxes) - if len(boxes) > 0: - keep_boxes = np.zeros((num_boxes, 5)) - keep_boxes[:, :4] = np.array(boxes) - keep_boxes[:, 4] = range( - num_boxes - ) - # the 5th column is the box label, - # same as the 10th column of all charsboxes which belong to the box - if self.use_charann: - return words, np.array(keep_boxes), charsboxes, segmentations, labels - else: - charbbs = np.zeros((10,), dtype=np.float32) - if len(charsboxes) == 0: - for _ in range(len(words)): - charsboxes.append([charbbs]) - return words, np.array(keep_boxes), charsboxes, segmentations, labels - else: - words.append("") - charbbs = np.zeros((10,), dtype=np.float32) - return ( - words, - np.zeros((1, 5), dtype=np.float32), - [[charbbs]], - [[np.zeros((8,), dtype=np.float32)]], - [1] - ) - - - def line2boxes(self, line): - parts = line.strip().split(",") - return [parts[-1]], np.array([[float(x) for x in parts[:-1]]]) - - def check_charbbs(self, charbbs): - xmins = np.minimum.reduce( - [charbbs[:, 0], charbbs[:, 2], charbbs[:, 4], charbbs[:, 6]] - ) - xmaxs = np.maximum.reduce( - [charbbs[:, 0], charbbs[:, 2], charbbs[:, 4], charbbs[:, 6]] - ) - ymins = np.minimum.reduce( - [charbbs[:, 1], charbbs[:, 3], charbbs[:, 5], charbbs[:, 7]] - ) - ymaxs = np.maximum.reduce( - [charbbs[:, 1], charbbs[:, 3], charbbs[:, 5], charbbs[:, 7]] - ) - return np.logical_and( - xmaxs - xmins > self.min_proposal_size, - ymaxs - ymins > self.min_proposal_size, - ) - - def check_charbb(self, charbb): - xmins = min(charbb[0], charbb[2], charbb[4], charbb[6]) - xmaxs = max(charbb[0], charbb[2], charbb[4], charbb[6]) - ymins = min(charbb[1], charbb[3], charbb[5], charbb[7]) - ymaxs = max(charbb[1], charbb[3], charbb[5], charbb[7]) - return ( - xmaxs - xmins > self.min_proposal_size - and ymaxs - ymins > self.min_proposal_size - ) - - def char2num(self, chars): - ## chars ['h', 'e', 'l', 'l', 'o'] - nums = [self.char_classes.index(c.lower()) for c in chars] - return nums - - def get_img_info(self, item): - """ - Return the image dimensions for the image, without - loading and pre-processing it - """ - - im_name = os.path.basename(self.image_lists[item]) - img = Image.open(self.image_lists[item]) - width, height = img.size - img_info = {"im_name": im_name, "height": height, "width": width} - return img_info diff --git a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/tests/test_models/test_backbones/__init__.py b/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/tests/test_models/test_backbones/__init__.py deleted file mode 100644 index ce4596a6a53dbd308963152612dafda2f84e185c..0000000000000000000000000000000000000000 --- a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/tests/test_models/test_backbones/__init__.py +++ /dev/null @@ -1,3 +0,0 @@ -from .utils import check_norm_state, is_block, is_norm - -__all__ = ['is_block', 'is_norm', 'check_norm_state'] diff --git a/spaces/trttung1610/musicgen/audiocraft/grids/musicgen/musicgen_base_32khz.py b/spaces/trttung1610/musicgen/audiocraft/grids/musicgen/musicgen_base_32khz.py deleted file mode 100644 index 4e364614537e426f21c18a2c2a9d94b3babce051..0000000000000000000000000000000000000000 --- a/spaces/trttung1610/musicgen/audiocraft/grids/musicgen/musicgen_base_32khz.py +++ /dev/null @@ -1,43 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -from ._explorers import LMExplorer -from ...environment import AudioCraftEnvironment - - -@LMExplorer -def explorer(launcher): - partitions = AudioCraftEnvironment.get_slurm_partitions(['team', 'global']) - launcher.slurm_(gpus=32, partition=partitions) - launcher.bind_(solver='musicgen/musicgen_base_32khz') - # replace this by the desired music dataset - launcher.bind_(dset='internal/music_400k_32khz') - - fsdp = {'autocast': False, 'fsdp.use': True} - medium = {'model/lm/model_scale': 'medium'} - large = {'model/lm/model_scale': 'large'} - - cfg_low = {'classifier_free_guidance.training_dropout': 0.2} - wd_low = {'conditioners.description.t5.word_dropout': 0.2} - - adam = {'optim.optimizer': 'adamw', 'optim.lr': 1e-4} - - launcher.bind_(fsdp) - - launcher.slurm_(gpus=32).bind_(label='32gpus') - with launcher.job_array(): - sub = launcher.bind() - sub() - - launcher.slurm_(gpus=64).bind_(label='64gpus') - with launcher.job_array(): - sub = launcher.bind() - sub(medium, adam) - - launcher.slurm_(gpus=96).bind_(label='96gpus') - with launcher.job_array(): - sub = launcher.bind() - sub(large, cfg_low, wd_low, adam, {'optim.max_norm': 3}) diff --git a/spaces/ttt246/brain/Extension/src/pages/Content/content.styles.css b/spaces/ttt246/brain/Extension/src/pages/Content/content.styles.css deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/tumuyan/demucs/README.md b/spaces/tumuyan/demucs/README.md deleted file mode 100644 index 28e96c4dc363dfb0b60df5b44a07bac96c1a3798..0000000000000000000000000000000000000000 --- a/spaces/tumuyan/demucs/README.md +++ /dev/null @@ -1,9 +0,0 @@ ---- -title: Demucs to two stems -emoji: ⚡ -colorFrom: pink -colorTo: indigo -sdk: gradio -app_file: app.py -pinned: false ---- \ No newline at end of file diff --git a/spaces/ucalyptus/PTI/configs/__init__.py b/spaces/ucalyptus/PTI/configs/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/vivsmouret/Dipl0-pepe-diffuser/README.md b/spaces/vivsmouret/Dipl0-pepe-diffuser/README.md deleted file mode 100644 index 4446a4cbbd9afb27fb420696fc69bf4aeaae6d19..0000000000000000000000000000000000000000 --- a/spaces/vivsmouret/Dipl0-pepe-diffuser/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Dipl0 Pepe Diffuser -emoji: 🏃 -colorFrom: green -colorTo: pink -sdk: gradio -sdk_version: 3.23.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/vruizext/transformers-xray-classification/README.md b/spaces/vruizext/transformers-xray-classification/README.md deleted file mode 100644 index eaf7248035fdfdf2e7b96c764c9bb217f176317e..0000000000000000000000000000000000000000 --- a/spaces/vruizext/transformers-xray-classification/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Edem Xray Classification -emoji: 🐠 -colorFrom: purple -colorTo: yellow -sdk: gradio -sdk_version: 3.34.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/vumichien/canvas_controlnet/annotator/uniformer/configs/_base_/models/pspnet_unet_s5-d16.py b/spaces/vumichien/canvas_controlnet/annotator/uniformer/configs/_base_/models/pspnet_unet_s5-d16.py deleted file mode 100644 index fcff9ec4f41fad158344ecd77313dc14564f3682..0000000000000000000000000000000000000000 --- a/spaces/vumichien/canvas_controlnet/annotator/uniformer/configs/_base_/models/pspnet_unet_s5-d16.py +++ /dev/null @@ -1,50 +0,0 @@ -# model settings -norm_cfg = dict(type='SyncBN', requires_grad=True) -model = dict( - type='EncoderDecoder', - pretrained=None, - backbone=dict( - type='UNet', - in_channels=3, - base_channels=64, - num_stages=5, - strides=(1, 1, 1, 1, 1), - enc_num_convs=(2, 2, 2, 2, 2), - dec_num_convs=(2, 2, 2, 2), - downsamples=(True, True, True, True), - enc_dilations=(1, 1, 1, 1, 1), - dec_dilations=(1, 1, 1, 1), - with_cp=False, - conv_cfg=None, - norm_cfg=norm_cfg, - act_cfg=dict(type='ReLU'), - upsample_cfg=dict(type='InterpConv'), - norm_eval=False), - decode_head=dict( - type='PSPHead', - in_channels=64, - in_index=4, - channels=16, - pool_scales=(1, 2, 3, 6), - dropout_ratio=0.1, - num_classes=2, - norm_cfg=norm_cfg, - align_corners=False, - loss_decode=dict( - type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0)), - auxiliary_head=dict( - type='FCNHead', - in_channels=128, - in_index=3, - channels=64, - num_convs=1, - concat_input=False, - dropout_ratio=0.1, - num_classes=2, - norm_cfg=norm_cfg, - align_corners=False, - loss_decode=dict( - type='CrossEntropyLoss', use_sigmoid=False, loss_weight=0.4)), - # model training and testing settings - train_cfg=dict(), - test_cfg=dict(mode='slide', crop_size=256, stride=170)) diff --git a/spaces/w601sxs/b1ade-1b/app.py b/spaces/w601sxs/b1ade-1b/app.py deleted file mode 100644 index 5bf2335e1760d1859ed0330a0fb36bee9c5c49e1..0000000000000000000000000000000000000000 --- a/spaces/w601sxs/b1ade-1b/app.py +++ /dev/null @@ -1,149 +0,0 @@ -import gradio as gr -import torch -from peft import PeftModel, PeftConfig, LoraConfig -from transformers import AutoTokenizer, AutoModelForCausalLM -from datasets import load_dataset -from trl import SFTTrainer -# import torch -from transformers import StoppingCriteria, AutoModelForCausalLM, AutoTokenizer, StoppingCriteriaList - - -ref_model = AutoModelForCausalLM.from_pretrained("w601sxs/b1ade-1b", torch_dtype=torch.bfloat16) -ref_model = ref_model -ref_model.eval() - -tokenizer = AutoTokenizer.from_pretrained("w601sxs/b1ade-1b") - - -class KeywordsStoppingCriteria(StoppingCriteria): - def __init__(self, keywords_ids:list): - self.keywords = keywords_ids - - def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor, **kwargs) -> bool: - if input_ids[0][-1] in self.keywords: - return True - return False - - -stop_words = ['>', ' >','> '] -stop_ids = [tokenizer.encode(w)[0] for w in stop_words] -stop_criteria = KeywordsStoppingCriteria(stop_ids) - -import numpy as np - -if tokenizer.pad_token_id is None: - tokenizer.pad_token_id = tokenizer.eos_token_id - ref_model.config.pad_token_id = ref_model.config.eos_token_id - -# Define your color-coding labels; if prob > x, then label = y; Sorted in descending probability order! -probs_to_label = [ - (0.99, "99%"), - (0.95, "95%"), - (0.9, "90%"), - (0.5, "50%"), - (0.1, "10%"), - (0.01, "1%"), - -] -import numpy as np -def get_tokens_and_labels(prompt): - """ - Given the prompt (text), return a list of tuples (decoded_token, label) - """ - inputs = tokenizer([prompt], return_tensors="pt") - outputs = ref_model.generate( - **inputs, - max_new_tokens=1000, - return_dict_in_generate=True, - output_scores=True, - stopping_criteria=StoppingCriteriaList([stop_criteria]) - ) - # Important: don't forget to set `normalize_logits=True` to obtain normalized probabilities (i.e. sum(p) = 1) - transition_scores = ref_model.compute_transition_scores(outputs.sequences, outputs.scores, normalize_logits=True) - transition_proba = np.exp(transition_scores.double().cpu()) - - # print(transition_proba) - # print(inputs) - # We only have scores for the generated tokens, so pop out the prompt tokens - input_length = inputs.input_ids.shape[1] - generated_ids = outputs.sequences[:, input_length:] - - generated_tokens = tokenizer.convert_ids_to_tokens(generated_ids[0]) - - # Important: you might need to find a tokenization character to replace (e.g. "Ġ" for BPE) and get the correct - # spacing into the final output 👼 - if ref_model.config.is_encoder_decoder: - highlighted_out = [] - else: - input_tokens = tokenizer.convert_ids_to_tokens(inputs.input_ids[0]) - highlighted_out = [(token.replace("▁", " "), None) for token in input_tokens] - # Get the (decoded_token, label) pairs for the generated tokens - for token, proba in zip(generated_tokens, transition_proba[0]): - this_label = None - assert 0. <= proba <= 1.0 - for min_proba, label in probs_to_label: - if proba >= min_proba: - this_label = label - break - highlighted_out.append((token.replace("▁", " "), this_label)) - - return highlighted_out - -import spacy -from spacy import displacy -from spacy.tokens import Span -from spacy.tokens import Doc - -def render_output(context, question): - output = get_tokens_and_labels(f"context:<{context}>\nquestion:<{question}>\nanswer:<") - nlp = spacy.blank("en") - doc = nlp(''.join([a[0] for a in output]).replace('Ġ',' ').replace('Ċ','\n')) - words = [a[0].replace('Ġ',' ').replace('Ċ','\n') for a in output]#[:indices[2]] - doc = Doc(nlp.vocab, words=words) - - doc.spans["sc"]=[] - c = 0 - - for outs in output: - tmpouts = outs[0].replace('Ġ','').replace('Ċ','\n') - # print(c, "to", c+len(tmpouts)," : ", tmpouts) - - if outs[1] is not None: - doc.spans["sc"].append(Span(doc, c, c+1, outs[1] )) - - c+=1 - - # if c>indices[2]-1: - # break - - - options = {'colors' : { - '99%': '#44ce1b', - '95%': '#bbdb44', - '90%': '#f7e379', - '50%': '#fec12a', - '10%': '#f2a134', - '1%': '#e51f1f', - '': '#e51f1f', - }} - - return displacy.render(doc, style="span", options = options) - - -def predict(text): - inputs = tokenizer(text, return_tensors="pt") - with torch.no_grad(): - outputs = ref_model.generate(input_ids=inputs["input_ids"], max_new_tokens=128) - out_text = tokenizer.batch_decode(outputs.detach().cpu().numpy(), skip_special_tokens=True)[0].split("answer:")[-1] - - return out_text.split(text)[-1] - - -demo = gr.Interface( - fn=render_output, - inputs=[gr.Textbox(label='context',value='As an AI assistant provide helpful, accurate and detailed answers to user questions'),gr.Textbox(label='question')], - outputs='html', - examples=[['As an AI assistant provide helpful, accurate and detailed answers to user questions','Given the fact that Inhaling, or breathing in, increases the size of the chest, which decreases air pressure inside the lungs. If Mona is done with a race and her chest contracts, what happens to the amount of air pressure in her lungs increases or decreases?'],['As an AI assistant provide helpful, accurate and detailed answers to user questions','In this task, you\'re given the title of a five-sentence story, the first four sentences, and two options for the fifth sentence as a and b. Your job is to pick the sentence option that seamlessly connects with the rest of the story, indicating your choice as "a" or "b". If both sentences are plausible, pick the one that makes more sense. Title: Missing Radio. Sentence 1: Josh was very sad to find out he could not find his radio. Sentence 2: He searched all day and night. Sentence 3: He even went back to school to find his radio. Sentence 4: Later on, someone turned in his radio to the lost and found. Choices: a. Once he got his new car, Stuart was very happy and relieved. b. Now, James was able to listen to the game on his radio']]) - - -demo.launch() \ No newline at end of file diff --git a/spaces/whitphx/gradio-static-test/dist/assets/index-5fc98735.js b/spaces/whitphx/gradio-static-test/dist/assets/index-5fc98735.js deleted file mode 100644 index e5b2fe729d8e5d3f6bcc7e5e0826b8a0e76cabd7..0000000000000000000000000000000000000000 --- a/spaces/whitphx/gradio-static-test/dist/assets/index-5fc98735.js +++ /dev/null @@ -1,2 +0,0 @@ -import{L as o}from"./index-46909c92.js";import{s,t as r,L as n,i as P,w as a,f as i,a as Q,b as p}from"./index-1040e6d9.js";import"../lite.js";import"./Blocks-99723874.js";import"./Button-0391b19a.js";import"./BlockLabel-a3ec523d.js";import"./Empty-91947ea3.js";/* empty css */import"./Copy-d654b047.js";import"./Download-35908774.js";const c=s({String:r.string,Number:r.number,"True False":r.bool,PropertyName:r.propertyName,Null:r.null,",":r.separator,"[ ]":r.squareBracket,"{ }":r.brace}),g=o.deserialize({version:14,states:"$bOVQPOOOOQO'#Cb'#CbOnQPO'#CeOvQPO'#CjOOQO'#Cp'#CpQOQPOOOOQO'#Cg'#CgO}QPO'#CfO!SQPO'#CrOOQO,59P,59PO![QPO,59PO!aQPO'#CuOOQO,59U,59UO!iQPO,59UOVQPO,59QOqQPO'#CkO!nQPO,59^OOQO1G.k1G.kOVQPO'#ClO!vQPO,59aOOQO1G.p1G.pOOQO1G.l1G.lOOQO,59V,59VOOQO-E6i-E6iOOQO,59W,59WOOQO-E6j-E6j",stateData:"#O~OcOS~OQSORSOSSOTSOWQO]ROePO~OVXOeUO~O[[O~PVOg^O~Oh_OVfX~OVaO~OhbO[iX~O[dO~Oh_OVfa~OhbO[ia~O",goto:"!kjPPPPPPkPPkqwPPk{!RPPP!XP!ePP!hXSOR^bQWQRf_TVQ_Q`WRg`QcZRicQTOQZRQe^RhbRYQR]R",nodeNames:"⚠ JsonText True False Null Number String } { Object Property PropertyName ] [ Array",maxTerm:25,nodeProps:[["openedBy",7,"{",12,"["],["closedBy",8,"}",13,"]"]],propSources:[c],skippedNodes:[0],repeatNodeCount:2,tokenData:"(p~RaXY!WYZ!W]^!Wpq!Wrs!]|}$i}!O$n!Q!R$w!R![&V![!]&h!}#O&m#P#Q&r#Y#Z&w#b#c'f#h#i'}#o#p(f#q#r(k~!]Oc~~!`Upq!]qr!]rs!rs#O!]#O#P!w#P~!]~!wOe~~!zXrs!]!P!Q!]#O#P!]#U#V!]#Y#Z!]#b#c!]#f#g!]#h#i!]#i#j#g~#jR!Q![#s!c!i#s#T#Z#s~#vR!Q![$P!c!i$P#T#Z$P~$SR!Q![$]!c!i$]#T#Z$]~$`R!Q![!]!c!i!]#T#Z!]~$nOh~~$qQ!Q!R$w!R![&V~$|RT~!O!P%V!g!h%k#X#Y%k~%YP!Q![%]~%bRT~!Q![%]!g!h%k#X#Y%k~%nR{|%w}!O%w!Q![%}~%zP!Q![%}~&SPT~!Q![%}~&[ST~!O!P%V!Q![&V!g!h%k#X#Y%k~&mOg~~&rO]~~&wO[~~&zP#T#U&}~'QP#`#a'T~'WP#g#h'Z~'^P#X#Y'a~'fOR~~'iP#i#j'l~'oP#`#a'r~'uP#`#a'x~'}OS~~(QP#f#g(T~(WP#i#j(Z~(^P#X#Y(a~(fOQ~~(kOW~~(pOV~",tokenizers:[0],topRules:{JsonText:[0,1]},tokenPrec:0}),$=()=>t=>{try{JSON.parse(t.state.doc.toString())}catch(O){if(!(O instanceof SyntaxError))throw O;const e=m(O,t.state.doc);return[{from:e,message:O.message,severity:"error",to:e}]}return[]};function m(t,O){let e;return(e=t.message.match(/at position (\d+)/))?Math.min(+e[1],O.length):(e=t.message.match(/at line (\d+) column (\d+)/))?Math.min(O.line(+e[1]).from+ +e[2]-1,O.length):0}const u=n.define({name:"json",parser:g.configure({props:[P.add({Object:a({except:/^\s*\}/}),Array:a({except:/^\s*\]/})}),i.add({"Object Array":Q})]}),languageData:{closeBrackets:{brackets:["[","{",'"']},indentOnInput:/^\s*[\}\]]$/}});function j(){return new p(u)}export{j as json,u as jsonLanguage,$ as jsonParseLinter}; -//# sourceMappingURL=index-5fc98735.js.map diff --git a/spaces/whitphx/gradio-static-test/dist/assets/wrapper-b7460963-69b64cfb.js b/spaces/whitphx/gradio-static-test/dist/assets/wrapper-b7460963-69b64cfb.js deleted file mode 100644 index 02049f8e8fbbc52f6fd3d807e42bbbe9e811c2f6..0000000000000000000000000000000000000000 --- a/spaces/whitphx/gradio-static-test/dist/assets/wrapper-b7460963-69b64cfb.js +++ /dev/null @@ -1,8 +0,0 @@ -import S from"./__vite-browser-external-b25bb000.js";function _t(s){if(s.__esModule)return s;var e=s.default;if(typeof e=="function"){var t=function r(){if(this instanceof r){var i=[null];i.push.apply(i,arguments);var n=Function.bind.apply(e,i);return new n}return e.apply(this,arguments)};t.prototype=e.prototype}else t={};return Object.defineProperty(t,"__esModule",{value:!0}),Object.keys(s).forEach(function(r){var i=Object.getOwnPropertyDescriptor(s,r);Object.defineProperty(t,r,i.get?i:{enumerable:!0,get:function(){return s[r]}})}),t}const{Duplex:pt}=S;function xe(s){s.emit("close")}function mt(){!this.destroyed&&this._writableState.finished&&this.destroy()}function Ke(s){this.removeListener("error",Ke),this.destroy(),this.listenerCount("error")===0&&this.emit("error",s)}function gt(s,e){let t=!0;const r=new pt({...e,autoDestroy:!1,emitClose:!1,objectMode:!1,writableObjectMode:!1});return s.on("message",function(n,o){const l=!o&&r._readableState.objectMode?n.toString():n;r.push(l)||s.pause()}),s.once("error",function(n){r.destroyed||(t=!1,r.destroy(n))}),s.once("close",function(){r.destroyed||r.push(null)}),r._destroy=function(i,n){if(s.readyState===s.CLOSED){n(i),process.nextTick(xe,r);return}let o=!1;s.once("error",function(f){o=!0,n(f)}),s.once("close",function(){o||n(i),process.nextTick(xe,r)}),t&&s.terminate()},r._final=function(i){if(s.readyState===s.CONNECTING){s.once("open",function(){r._final(i)});return}s._socket!==null&&(s._socket._writableState.finished?(i(),r._readableState.endEmitted&&r.destroy()):(s._socket.once("finish",function(){i()}),s.close()))},r._read=function(){s.isPaused&&s.resume()},r._write=function(i,n,o){if(s.readyState===s.CONNECTING){s.once("open",function(){r._write(i,n,o)});return}s.send(i,o)},r.on("end",mt),r.on("error",Ke),r}var yt=gt;const Gs=yt;var L={},vt={get exports(){return L},set exports(s){L=s}},$={BINARY_TYPES:["nodebuffer","arraybuffer","fragments"],EMPTY_BUFFER:Buffer.alloc(0),GUID:"258EAFA5-E914-47DA-95CA-C5AB0DC85B11",kForOnEventAttribute:Symbol("kIsForOnEventAttribute"),kListener:Symbol("kListener"),kStatusCode:Symbol("status-code"),kWebSocket:Symbol("websocket"),NOOP:()=>{}},St,Et;const{EMPTY_BUFFER:bt}=$,ge=Buffer[Symbol.species];function xt(s,e){if(s.length===0)return bt;if(s.length===1)return s[0];const t=Buffer.allocUnsafe(e);let r=0;for(let i=0;i{this.pending--,this[fe]()},this.concurrency=e||1/0,this.jobs=[],this.pending=0}add(e){this.jobs.push(e),this[fe]()}[fe](){if(this.pending!==this.concurrency&&this.jobs.length){const e=this.jobs.shift();this.pending++,e(this[ke])}}};var Ot=wt;const F=S,we=L,Tt=Ot,{kStatusCode:Qe}=$,Ct=Buffer[Symbol.species],Lt=Buffer.from([0,0,255,255]),se=Symbol("permessage-deflate"),w=Symbol("total-length"),z=Symbol("callback"),T=Symbol("buffers"),ee=Symbol("error");let X,Nt=class{constructor(e,t,r){if(this._maxPayload=r|0,this._options=e||{},this._threshold=this._options.threshold!==void 0?this._options.threshold:1024,this._isServer=!!t,this._deflate=null,this._inflate=null,this.params=null,!X){const i=this._options.concurrencyLimit!==void 0?this._options.concurrencyLimit:10;X=new Tt(i)}}static get extensionName(){return"permessage-deflate"}offer(){const e={};return this._options.serverNoContextTakeover&&(e.server_no_context_takeover=!0),this._options.clientNoContextTakeover&&(e.client_no_context_takeover=!0),this._options.serverMaxWindowBits&&(e.server_max_window_bits=this._options.serverMaxWindowBits),this._options.clientMaxWindowBits?e.client_max_window_bits=this._options.clientMaxWindowBits:this._options.clientMaxWindowBits==null&&(e.client_max_window_bits=!0),e}accept(e){return e=this.normalizeParams(e),this.params=this._isServer?this.acceptAsServer(e):this.acceptAsClient(e),this.params}cleanup(){if(this._inflate&&(this._inflate.close(),this._inflate=null),this._deflate){const e=this._deflate[z];this._deflate.close(),this._deflate=null,e&&e(new Error("The deflate stream was closed while data was being processed"))}}acceptAsServer(e){const t=this._options,r=e.find(i=>!(t.serverNoContextTakeover===!1&&i.server_no_context_takeover||i.server_max_window_bits&&(t.serverMaxWindowBits===!1||typeof t.serverMaxWindowBits=="number"&&t.serverMaxWindowBits>i.server_max_window_bits)||typeof t.clientMaxWindowBits=="number"&&!i.client_max_window_bits));if(!r)throw new Error("None of the extension offers can be accepted");return t.serverNoContextTakeover&&(r.server_no_context_takeover=!0),t.clientNoContextTakeover&&(r.client_no_context_takeover=!0),typeof t.serverMaxWindowBits=="number"&&(r.server_max_window_bits=t.serverMaxWindowBits),typeof t.clientMaxWindowBits=="number"?r.client_max_window_bits=t.clientMaxWindowBits:(r.client_max_window_bits===!0||t.clientMaxWindowBits===!1)&&delete r.client_max_window_bits,r}acceptAsClient(e){const t=e[0];if(this._options.clientNoContextTakeover===!1&&t.client_no_context_takeover)throw new Error('Unexpected parameter "client_no_context_takeover"');if(!t.client_max_window_bits)typeof this._options.clientMaxWindowBits=="number"&&(t.client_max_window_bits=this._options.clientMaxWindowBits);else if(this._options.clientMaxWindowBits===!1||typeof this._options.clientMaxWindowBits=="number"&&t.client_max_window_bits>this._options.clientMaxWindowBits)throw new Error('Unexpected or invalid parameter "client_max_window_bits"');return t}normalizeParams(e){return e.forEach(t=>{Object.keys(t).forEach(r=>{let i=t[r];if(i.length>1)throw new Error(`Parameter "${r}" must have only a single value`);if(i=i[0],r==="client_max_window_bits"){if(i!==!0){const n=+i;if(!Number.isInteger(n)||n<8||n>15)throw new TypeError(`Invalid value for parameter "${r}": ${i}`);i=n}else if(!this._isServer)throw new TypeError(`Invalid value for parameter "${r}": ${i}`)}else if(r==="server_max_window_bits"){const n=+i;if(!Number.isInteger(n)||n<8||n>15)throw new TypeError(`Invalid value for parameter "${r}": ${i}`);i=n}else if(r==="client_no_context_takeover"||r==="server_no_context_takeover"){if(i!==!0)throw new TypeError(`Invalid value for parameter "${r}": ${i}`)}else throw new Error(`Unknown parameter "${r}"`);t[r]=i})}),e}decompress(e,t,r){X.add(i=>{this._decompress(e,t,(n,o)=>{i(),r(n,o)})})}compress(e,t,r){X.add(i=>{this._compress(e,t,(n,o)=>{i(),r(n,o)})})}_decompress(e,t,r){const i=this._isServer?"client":"server";if(!this._inflate){const n=`${i}_max_window_bits`,o=typeof this.params[n]!="number"?F.Z_DEFAULT_WINDOWBITS:this.params[n];this._inflate=F.createInflateRaw({...this._options.zlibInflateOptions,windowBits:o}),this._inflate[se]=this,this._inflate[w]=0,this._inflate[T]=[],this._inflate.on("error",Rt),this._inflate.on("data",Je)}this._inflate[z]=r,this._inflate.write(e),t&&this._inflate.write(Lt),this._inflate.flush(()=>{const n=this._inflate[ee];if(n){this._inflate.close(),this._inflate=null,r(n);return}const o=we.concat(this._inflate[T],this._inflate[w]);this._inflate._readableState.endEmitted?(this._inflate.close(),this._inflate=null):(this._inflate[w]=0,this._inflate[T]=[],t&&this.params[`${i}_no_context_takeover`]&&this._inflate.reset()),r(null,o)})}_compress(e,t,r){const i=this._isServer?"server":"client";if(!this._deflate){const n=`${i}_max_window_bits`,o=typeof this.params[n]!="number"?F.Z_DEFAULT_WINDOWBITS:this.params[n];this._deflate=F.createDeflateRaw({...this._options.zlibDeflateOptions,windowBits:o}),this._deflate[w]=0,this._deflate[T]=[],this._deflate.on("data",Pt)}this._deflate[z]=r,this._deflate.write(e),this._deflate.flush(F.Z_SYNC_FLUSH,()=>{if(!this._deflate)return;let n=we.concat(this._deflate[T],this._deflate[w]);t&&(n=new Ct(n.buffer,n.byteOffset,n.length-4)),this._deflate[z]=null,this._deflate[w]=0,this._deflate[T]=[],t&&this.params[`${i}_no_context_takeover`]&&this._deflate.reset(),r(null,n)})}};var ie=Nt;function Pt(s){this[T].push(s),this[w]+=s.length}function Je(s){if(this[w]+=s.length,this[se]._maxPayload<1||this[w]<=this[se]._maxPayload){this[T].push(s);return}this[ee]=new RangeError("Max payload size exceeded"),this[ee].code="WS_ERR_UNSUPPORTED_MESSAGE_LENGTH",this[ee][Qe]=1009,this.removeListener("data",Je),this.reset()}function Rt(s){this[se]._inflate=null,s[Qe]=1007,this[z](s)}var N={},Ut={get exports(){return N},set exports(s){N=s}};const Bt={},$t=Object.freeze(Object.defineProperty({__proto__:null,default:Bt},Symbol.toStringTag,{value:"Module"})),Mt=_t($t);var Oe;const{isUtf8:Te}=S,It=[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,1,1,1,1,1,0,0,1,1,0,1,1,0,1,1,1,1,1,1,1,1,1,1,0,0,0,0,0,0,0,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,0,0,0,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,0,1,0,1,0];function Dt(s){return s>=1e3&&s<=1014&&s!==1004&&s!==1005&&s!==1006||s>=3e3&&s<=4999}function ve(s){const e=s.length;let t=0;for(;t=e||(s[t+1]&192)!==128||(s[t+2]&192)!==128||s[t]===224&&(s[t+1]&224)===128||s[t]===237&&(s[t+1]&224)===160)return!1;t+=3}else if((s[t]&248)===240){if(t+3>=e||(s[t+1]&192)!==128||(s[t+2]&192)!==128||(s[t+3]&192)!==128||s[t]===240&&(s[t+1]&240)===128||s[t]===244&&s[t+1]>143||s[t]>244)return!1;t+=4}else return!1;return!0}Ut.exports={isValidStatusCode:Dt,isValidUTF8:ve,tokenChars:It};if(Te)Oe=N.isValidUTF8=function(s){return s.length<24?ve(s):Te(s)};else if(!{}.WS_NO_UTF_8_VALIDATE)try{const s=Mt;Oe=N.isValidUTF8=function(e){return e.length<32?ve(e):s(e)}}catch{}const{Writable:Wt}=S,Ce=ie,{BINARY_TYPES:At,EMPTY_BUFFER:Le,kStatusCode:Ft,kWebSocket:jt}=$,{concat:he,toArrayBuffer:Gt,unmask:Vt}=L,{isValidStatusCode:Ht,isValidUTF8:Ne}=N,Z=Buffer[Symbol.species],j=0,Pe=1,Re=2,Ue=3,ce=4,zt=5;let Yt=class extends Wt{constructor(e={}){super(),this._binaryType=e.binaryType||At[0],this._extensions=e.extensions||{},this._isServer=!!e.isServer,this._maxPayload=e.maxPayload|0,this._skipUTF8Validation=!!e.skipUTF8Validation,this[jt]=void 0,this._bufferedBytes=0,this._buffers=[],this._compressed=!1,this._payloadLength=0,this._mask=void 0,this._fragmented=0,this._masked=!1,this._fin=!1,this._opcode=0,this._totalPayloadLength=0,this._messageLength=0,this._fragments=[],this._state=j,this._loop=!1}_write(e,t,r){if(this._opcode===8&&this._state==j)return r();this._bufferedBytes+=e.length,this._buffers.push(e),this.startLoop(r)}consume(e){if(this._bufferedBytes-=e,e===this._buffers[0].length)return this._buffers.shift();if(e=r.length?t.set(this._buffers.shift(),i):(t.set(new Uint8Array(r.buffer,r.byteOffset,e),i),this._buffers[0]=new Z(r.buffer,r.byteOffset+e,r.length-e)),e-=r.length}while(e>0);return t}startLoop(e){let t;this._loop=!0;do switch(this._state){case j:t=this.getInfo();break;case Pe:t=this.getPayloadLength16();break;case Re:t=this.getPayloadLength64();break;case Ue:this.getMask();break;case ce:t=this.getData(e);break;default:this._loop=!1;return}while(this._loop);e(t)}getInfo(){if(this._bufferedBytes<2){this._loop=!1;return}const e=this.consume(2);if(e[0]&48)return this._loop=!1,g(RangeError,"RSV2 and RSV3 must be clear",!0,1002,"WS_ERR_UNEXPECTED_RSV_2_3");const t=(e[0]&64)===64;if(t&&!this._extensions[Ce.extensionName])return this._loop=!1,g(RangeError,"RSV1 must be clear",!0,1002,"WS_ERR_UNEXPECTED_RSV_1");if(this._fin=(e[0]&128)===128,this._opcode=e[0]&15,this._payloadLength=e[1]&127,this._opcode===0){if(t)return this._loop=!1,g(RangeError,"RSV1 must be clear",!0,1002,"WS_ERR_UNEXPECTED_RSV_1");if(!this._fragmented)return this._loop=!1,g(RangeError,"invalid opcode 0",!0,1002,"WS_ERR_INVALID_OPCODE");this._opcode=this._fragmented}else if(this._opcode===1||this._opcode===2){if(this._fragmented)return this._loop=!1,g(RangeError,`invalid opcode ${this._opcode}`,!0,1002,"WS_ERR_INVALID_OPCODE");this._compressed=t}else if(this._opcode>7&&this._opcode<11){if(!this._fin)return this._loop=!1,g(RangeError,"FIN must be set",!0,1002,"WS_ERR_EXPECTED_FIN");if(t)return this._loop=!1,g(RangeError,"RSV1 must be clear",!0,1002,"WS_ERR_UNEXPECTED_RSV_1");if(this._payloadLength>125||this._opcode===8&&this._payloadLength===1)return this._loop=!1,g(RangeError,`invalid payload length ${this._payloadLength}`,!0,1002,"WS_ERR_INVALID_CONTROL_PAYLOAD_LENGTH")}else return this._loop=!1,g(RangeError,`invalid opcode ${this._opcode}`,!0,1002,"WS_ERR_INVALID_OPCODE");if(!this._fin&&!this._fragmented&&(this._fragmented=this._opcode),this._masked=(e[1]&128)===128,this._isServer){if(!this._masked)return this._loop=!1,g(RangeError,"MASK must be set",!0,1002,"WS_ERR_EXPECTED_MASK")}else if(this._masked)return this._loop=!1,g(RangeError,"MASK must be clear",!0,1002,"WS_ERR_UNEXPECTED_MASK");if(this._payloadLength===126)this._state=Pe;else if(this._payloadLength===127)this._state=Re;else return this.haveLength()}getPayloadLength16(){if(this._bufferedBytes<2){this._loop=!1;return}return this._payloadLength=this.consume(2).readUInt16BE(0),this.haveLength()}getPayloadLength64(){if(this._bufferedBytes<8){this._loop=!1;return}const e=this.consume(8),t=e.readUInt32BE(0);return t>Math.pow(2,53-32)-1?(this._loop=!1,g(RangeError,"Unsupported WebSocket frame: payload length > 2^53 - 1",!1,1009,"WS_ERR_UNSUPPORTED_DATA_PAYLOAD_LENGTH")):(this._payloadLength=t*Math.pow(2,32)+e.readUInt32BE(4),this.haveLength())}haveLength(){if(this._payloadLength&&this._opcode<8&&(this._totalPayloadLength+=this._payloadLength,this._totalPayloadLength>this._maxPayload&&this._maxPayload>0))return this._loop=!1,g(RangeError,"Max payload size exceeded",!1,1009,"WS_ERR_UNSUPPORTED_MESSAGE_LENGTH");this._masked?this._state=Ue:this._state=ce}getMask(){if(this._bufferedBytes<4){this._loop=!1;return}this._mask=this.consume(4),this._state=ce}getData(e){let t=Le;if(this._payloadLength){if(this._bufferedBytes7)return this.controlMessage(t);if(this._compressed){this._state=zt,this.decompress(t,e);return}return t.length&&(this._messageLength=this._totalPayloadLength,this._fragments.push(t)),this.dataMessage()}decompress(e,t){this._extensions[Ce.extensionName].decompress(e,this._fin,(i,n)=>{if(i)return t(i);if(n.length){if(this._messageLength+=n.length,this._messageLength>this._maxPayload&&this._maxPayload>0)return t(g(RangeError,"Max payload size exceeded",!1,1009,"WS_ERR_UNSUPPORTED_MESSAGE_LENGTH"));this._fragments.push(n)}const o=this.dataMessage();if(o)return t(o);this.startLoop(t)})}dataMessage(){if(this._fin){const e=this._messageLength,t=this._fragments;if(this._totalPayloadLength=0,this._messageLength=0,this._fragmented=0,this._fragments=[],this._opcode===2){let r;this._binaryType==="nodebuffer"?r=he(t,e):this._binaryType==="arraybuffer"?r=Gt(he(t,e)):r=t,this.emit("message",r,!0)}else{const r=he(t,e);if(!this._skipUTF8Validation&&!Ne(r))return this._loop=!1,g(Error,"invalid UTF-8 sequence",!0,1007,"WS_ERR_INVALID_UTF8");this.emit("message",r,!1)}}this._state=j}controlMessage(e){if(this._opcode===8)if(this._loop=!1,e.length===0)this.emit("conclude",1005,Le),this.end();else{const t=e.readUInt16BE(0);if(!Ht(t))return g(RangeError,`invalid status code ${t}`,!0,1002,"WS_ERR_INVALID_CLOSE_CODE");const r=new Z(e.buffer,e.byteOffset+2,e.length-2);if(!this._skipUTF8Validation&&!Ne(r))return g(Error,"invalid UTF-8 sequence",!0,1007,"WS_ERR_INVALID_UTF8");this.emit("conclude",t,r),this.end()}else this._opcode===9?this.emit("ping",e):this.emit("pong",e);this._state=j}};var et=Yt;function g(s,e,t,r,i){const n=new s(t?`Invalid WebSocket frame: ${e}`:e);return Error.captureStackTrace(n,g),n.code=i,n[Ft]=r,n}const Ys=et,{randomFillSync:qt}=S,Be=ie,{EMPTY_BUFFER:Kt}=$,{isValidStatusCode:Xt}=N,{mask:$e,toBuffer:D}=L,x=Symbol("kByteLength"),Zt=Buffer.alloc(4);let Qt=class U{constructor(e,t,r){this._extensions=t||{},r&&(this._generateMask=r,this._maskBuffer=Buffer.alloc(4)),this._socket=e,this._firstFragment=!0,this._compress=!1,this._bufferedBytes=0,this._deflating=!1,this._queue=[]}static frame(e,t){let r,i=!1,n=2,o=!1;t.mask&&(r=t.maskBuffer||Zt,t.generateMask?t.generateMask(r):qt(r,0,4),o=(r[0]|r[1]|r[2]|r[3])===0,n=6);let l;typeof e=="string"?(!t.mask||o)&&t[x]!==void 0?l=t[x]:(e=Buffer.from(e),l=e.length):(l=e.length,i=t.mask&&t.readOnly&&!o);let f=l;l>=65536?(n+=8,f=127):l>125&&(n+=2,f=126);const a=Buffer.allocUnsafe(i?l+n:n);return a[0]=t.fin?t.opcode|128:t.opcode,t.rsv1&&(a[0]|=64),a[1]=f,f===126?a.writeUInt16BE(l,2):f===127&&(a[2]=a[3]=0,a.writeUIntBE(l,4,6)),t.mask?(a[1]|=128,a[n-4]=r[0],a[n-3]=r[1],a[n-2]=r[2],a[n-1]=r[3],o?[a,e]:i?($e(e,r,a,n,l),[a]):($e(e,r,e,0,l),[a,e])):[a,e]}close(e,t,r,i){let n;if(e===void 0)n=Kt;else{if(typeof e!="number"||!Xt(e))throw new TypeError("First argument must be a valid error code number");if(t===void 0||!t.length)n=Buffer.allocUnsafe(2),n.writeUInt16BE(e,0);else{const l=Buffer.byteLength(t);if(l>123)throw new RangeError("The message must not be greater than 123 bytes");n=Buffer.allocUnsafe(2+l),n.writeUInt16BE(e,0),typeof t=="string"?n.write(t,2):n.set(t,2)}}const o={[x]:n.length,fin:!0,generateMask:this._generateMask,mask:r,maskBuffer:this._maskBuffer,opcode:8,readOnly:!1,rsv1:!1};this._deflating?this.enqueue([this.dispatch,n,!1,o,i]):this.sendFrame(U.frame(n,o),i)}ping(e,t,r){let i,n;if(typeof e=="string"?(i=Buffer.byteLength(e),n=!1):(e=D(e),i=e.length,n=D.readOnly),i>125)throw new RangeError("The data size must not be greater than 125 bytes");const o={[x]:i,fin:!0,generateMask:this._generateMask,mask:t,maskBuffer:this._maskBuffer,opcode:9,readOnly:n,rsv1:!1};this._deflating?this.enqueue([this.dispatch,e,!1,o,r]):this.sendFrame(U.frame(e,o),r)}pong(e,t,r){let i,n;if(typeof e=="string"?(i=Buffer.byteLength(e),n=!1):(e=D(e),i=e.length,n=D.readOnly),i>125)throw new RangeError("The data size must not be greater than 125 bytes");const o={[x]:i,fin:!0,generateMask:this._generateMask,mask:t,maskBuffer:this._maskBuffer,opcode:10,readOnly:n,rsv1:!1};this._deflating?this.enqueue([this.dispatch,e,!1,o,r]):this.sendFrame(U.frame(e,o),r)}send(e,t,r){const i=this._extensions[Be.extensionName];let n=t.binary?2:1,o=t.compress,l,f;if(typeof e=="string"?(l=Buffer.byteLength(e),f=!1):(e=D(e),l=e.length,f=D.readOnly),this._firstFragment?(this._firstFragment=!1,o&&i&&i.params[i._isServer?"server_no_context_takeover":"client_no_context_takeover"]&&(o=l>=i._threshold),this._compress=o):(o=!1,n=0),t.fin&&(this._firstFragment=!0),i){const a={[x]:l,fin:t.fin,generateMask:this._generateMask,mask:t.mask,maskBuffer:this._maskBuffer,opcode:n,readOnly:f,rsv1:o};this._deflating?this.enqueue([this.dispatch,e,this._compress,a,r]):this.dispatch(e,this._compress,a,r)}else this.sendFrame(U.frame(e,{[x]:l,fin:t.fin,generateMask:this._generateMask,mask:t.mask,maskBuffer:this._maskBuffer,opcode:n,readOnly:f,rsv1:!1}),r)}dispatch(e,t,r,i){if(!t){this.sendFrame(U.frame(e,r),i);return}const n=this._extensions[Be.extensionName];this._bufferedBytes+=r[x],this._deflating=!0,n.compress(e,r.fin,(o,l)=>{if(this._socket.destroyed){const f=new Error("The socket was closed while data was being compressed");typeof i=="function"&&i(f);for(let a=0;a{let t=s[e];return Array.isArray(t)||(t=[t]),t.map(r=>[e].concat(Object.keys(r).map(i=>{let n=r[i];return Array.isArray(n)||(n=[n]),n.map(o=>o===!0?i:`${i}=${o}`).join("; ")})).join("; ")).join(", ")}).join(", ")}var st={format:ss,parse:ts};const rs=S,is=S,ns=S,rt=S,os=S,{randomBytes:as,createHash:ls}=S,{URL:de}=S,C=ie,fs=et,hs=tt,{BINARY_TYPES:Ge,EMPTY_BUFFER:J,GUID:cs,kForOnEventAttribute:_e,kListener:us,kStatusCode:ds,kWebSocket:y,NOOP:it}=$,{EventTarget:{addEventListener:_s,removeEventListener:ps}}=es,{format:ms,parse:gs}=st,{toBuffer:ys}=L,vs=30*1e3,nt=Symbol("kAborted"),pe=[8,13],O=["CONNECTING","OPEN","CLOSING","CLOSED"],Ss=/^[!#$%&'*+\-.0-9A-Z^_`|a-z~]+$/;let m=class d extends rs{constructor(e,t,r){super(),this._binaryType=Ge[0],this._closeCode=1006,this._closeFrameReceived=!1,this._closeFrameSent=!1,this._closeMessage=J,this._closeTimer=null,this._extensions={},this._paused=!1,this._protocol="",this._readyState=d.CONNECTING,this._receiver=null,this._sender=null,this._socket=null,e!==null?(this._bufferedAmount=0,this._isServer=!1,this._redirects=0,t===void 0?t=[]:Array.isArray(t)||(typeof t=="object"&&t!==null?(r=t,t=[]):t=[t]),at(this,e,t,r)):this._isServer=!0}get binaryType(){return this._binaryType}set binaryType(e){Ge.includes(e)&&(this._binaryType=e,this._receiver&&(this._receiver._binaryType=e))}get bufferedAmount(){return this._socket?this._socket._writableState.length+this._sender._bufferedBytes:this._bufferedAmount}get extensions(){return Object.keys(this._extensions).join()}get isPaused(){return this._paused}get onclose(){return null}get onerror(){return null}get onopen(){return null}get onmessage(){return null}get protocol(){return this._protocol}get readyState(){return this._readyState}get url(){return this._url}setSocket(e,t,r){const i=new fs({binaryType:this.binaryType,extensions:this._extensions,isServer:this._isServer,maxPayload:r.maxPayload,skipUTF8Validation:r.skipUTF8Validation});this._sender=new hs(e,this._extensions,r.generateMask),this._receiver=i,this._socket=e,i[y]=this,e[y]=this,i.on("conclude",xs),i.on("drain",ks),i.on("error",ws),i.on("message",Os),i.on("ping",Ts),i.on("pong",Cs),e.setTimeout(0),e.setNoDelay(),t.length>0&&e.unshift(t),e.on("close",ft),e.on("data",oe),e.on("end",ht),e.on("error",ct),this._readyState=d.OPEN,this.emit("open")}emitClose(){if(!this._socket){this._readyState=d.CLOSED,this.emit("close",this._closeCode,this._closeMessage);return}this._extensions[C.extensionName]&&this._extensions[C.extensionName].cleanup(),this._receiver.removeAllListeners(),this._readyState=d.CLOSED,this.emit("close",this._closeCode,this._closeMessage)}close(e,t){if(this.readyState!==d.CLOSED){if(this.readyState===d.CONNECTING){const r="WebSocket was closed before the connection was established";b(this,this._req,r);return}if(this.readyState===d.CLOSING){this._closeFrameSent&&(this._closeFrameReceived||this._receiver._writableState.errorEmitted)&&this._socket.end();return}this._readyState=d.CLOSING,this._sender.close(e,t,!this._isServer,r=>{r||(this._closeFrameSent=!0,(this._closeFrameReceived||this._receiver._writableState.errorEmitted)&&this._socket.end())}),this._closeTimer=setTimeout(this._socket.destroy.bind(this._socket),vs)}}pause(){this.readyState===d.CONNECTING||this.readyState===d.CLOSED||(this._paused=!0,this._socket.pause())}ping(e,t,r){if(this.readyState===d.CONNECTING)throw new Error("WebSocket is not open: readyState 0 (CONNECTING)");if(typeof e=="function"?(r=e,e=t=void 0):typeof t=="function"&&(r=t,t=void 0),typeof e=="number"&&(e=e.toString()),this.readyState!==d.OPEN){me(this,e,r);return}t===void 0&&(t=!this._isServer),this._sender.ping(e||J,t,r)}pong(e,t,r){if(this.readyState===d.CONNECTING)throw new Error("WebSocket is not open: readyState 0 (CONNECTING)");if(typeof e=="function"?(r=e,e=t=void 0):typeof t=="function"&&(r=t,t=void 0),typeof e=="number"&&(e=e.toString()),this.readyState!==d.OPEN){me(this,e,r);return}t===void 0&&(t=!this._isServer),this._sender.pong(e||J,t,r)}resume(){this.readyState===d.CONNECTING||this.readyState===d.CLOSED||(this._paused=!1,this._receiver._writableState.needDrain||this._socket.resume())}send(e,t,r){if(this.readyState===d.CONNECTING)throw new Error("WebSocket is not open: readyState 0 (CONNECTING)");if(typeof t=="function"&&(r=t,t={}),typeof e=="number"&&(e=e.toString()),this.readyState!==d.OPEN){me(this,e,r);return}const i={binary:typeof e!="string",mask:!this._isServer,compress:!0,fin:!0,...t};this._extensions[C.extensionName]||(i.compress=!1),this._sender.send(e||J,i,r)}terminate(){if(this.readyState!==d.CLOSED){if(this.readyState===d.CONNECTING){const e="WebSocket was closed before the connection was established";b(this,this._req,e);return}this._socket&&(this._readyState=d.CLOSING,this._socket.destroy())}}};Object.defineProperty(m,"CONNECTING",{enumerable:!0,value:O.indexOf("CONNECTING")});Object.defineProperty(m.prototype,"CONNECTING",{enumerable:!0,value:O.indexOf("CONNECTING")});Object.defineProperty(m,"OPEN",{enumerable:!0,value:O.indexOf("OPEN")});Object.defineProperty(m.prototype,"OPEN",{enumerable:!0,value:O.indexOf("OPEN")});Object.defineProperty(m,"CLOSING",{enumerable:!0,value:O.indexOf("CLOSING")});Object.defineProperty(m.prototype,"CLOSING",{enumerable:!0,value:O.indexOf("CLOSING")});Object.defineProperty(m,"CLOSED",{enumerable:!0,value:O.indexOf("CLOSED")});Object.defineProperty(m.prototype,"CLOSED",{enumerable:!0,value:O.indexOf("CLOSED")});["binaryType","bufferedAmount","extensions","isPaused","protocol","readyState","url"].forEach(s=>{Object.defineProperty(m.prototype,s,{enumerable:!0})});["open","error","close","message"].forEach(s=>{Object.defineProperty(m.prototype,`on${s}`,{enumerable:!0,get(){for(const e of this.listeners(s))if(e[_e])return e[us];return null},set(e){for(const t of this.listeners(s))if(t[_e]){this.removeListener(s,t);break}typeof e=="function"&&this.addEventListener(s,e,{[_e]:!0})}})});m.prototype.addEventListener=_s;m.prototype.removeEventListener=ps;var ot=m;function at(s,e,t,r){const i={protocolVersion:pe[1],maxPayload:104857600,skipUTF8Validation:!1,perMessageDeflate:!0,followRedirects:!1,maxRedirects:10,...r,createConnection:void 0,socketPath:void 0,hostname:void 0,protocol:void 0,timeout:void 0,method:"GET",host:void 0,path:void 0,port:void 0};if(!pe.includes(i.protocolVersion))throw new RangeError(`Unsupported protocol version: ${i.protocolVersion} (supported versions: ${pe.join(", ")})`);let n;if(e instanceof de)n=e,s._url=e.href;else{try{n=new de(e)}catch{throw new SyntaxError(`Invalid URL: ${e}`)}s._url=e}const o=n.protocol==="wss:",l=n.protocol==="ws+unix:";let f;if(n.protocol!=="ws:"&&!o&&!l?f=`The URL's protocol must be one of "ws:", "wss:", or "ws+unix:"`:l&&!n.pathname?f="The URL's pathname is empty":n.hash&&(f="The URL contains a fragment identifier"),f){const u=new SyntaxError(f);if(s._redirects===0)throw u;te(s,u);return}const a=o?443:80,c=as(16).toString("base64"),h=o?is.request:ns.request,p=new Set;let v;if(i.createConnection=o?bs:Es,i.defaultPort=i.defaultPort||a,i.port=n.port||a,i.host=n.hostname.startsWith("[")?n.hostname.slice(1,-1):n.hostname,i.headers={...i.headers,"Sec-WebSocket-Version":i.protocolVersion,"Sec-WebSocket-Key":c,Connection:"Upgrade",Upgrade:"websocket"},i.path=n.pathname+n.search,i.timeout=i.handshakeTimeout,i.perMessageDeflate&&(v=new C(i.perMessageDeflate!==!0?i.perMessageDeflate:{},!1,i.maxPayload),i.headers["Sec-WebSocket-Extensions"]=ms({[C.extensionName]:v.offer()})),t.length){for(const u of t){if(typeof u!="string"||!Ss.test(u)||p.has(u))throw new SyntaxError("An invalid or duplicated subprotocol was specified");p.add(u)}i.headers["Sec-WebSocket-Protocol"]=t.join(",")}if(i.origin&&(i.protocolVersion<13?i.headers["Sec-WebSocket-Origin"]=i.origin:i.headers.Origin=i.origin),(n.username||n.password)&&(i.auth=`${n.username}:${n.password}`),l){const u=i.path.split(":");i.socketPath=u[0],i.path=u[1]}let _;if(i.followRedirects){if(s._redirects===0){s._originalIpc=l,s._originalSecure=o,s._originalHostOrSocketPath=l?i.socketPath:n.host;const u=r&&r.headers;if(r={...r,headers:{}},u)for(const[E,I]of Object.entries(u))r.headers[E.toLowerCase()]=I}else if(s.listenerCount("redirect")===0){const u=l?s._originalIpc?i.socketPath===s._originalHostOrSocketPath:!1:s._originalIpc?!1:n.host===s._originalHostOrSocketPath;(!u||s._originalSecure&&!o)&&(delete i.headers.authorization,delete i.headers.cookie,u||delete i.headers.host,i.auth=void 0)}i.auth&&!r.headers.authorization&&(r.headers.authorization="Basic "+Buffer.from(i.auth).toString("base64")),_=s._req=h(i),s._redirects&&s.emit("redirect",s.url,_)}else _=s._req=h(i);i.timeout&&_.on("timeout",()=>{b(s,_,"Opening handshake has timed out")}),_.on("error",u=>{_===null||_[nt]||(_=s._req=null,te(s,u))}),_.on("response",u=>{const E=u.headers.location,I=u.statusCode;if(E&&i.followRedirects&&I>=300&&I<400){if(++s._redirects>i.maxRedirects){b(s,_,"Maximum redirects exceeded");return}_.abort();let K;try{K=new de(E,e)}catch{const P=new SyntaxError(`Invalid URL: ${E}`);te(s,P);return}at(s,K,t,r)}else s.emit("unexpected-response",_,u)||b(s,_,`Unexpected server response: ${u.statusCode}`)}),_.on("upgrade",(u,E,I)=>{if(s.emit("upgrade",u),s.readyState!==m.CONNECTING)return;if(_=s._req=null,u.headers.upgrade.toLowerCase()!=="websocket"){b(s,E,"Invalid Upgrade header");return}const K=ls("sha1").update(c+cs).digest("base64");if(u.headers["sec-websocket-accept"]!==K){b(s,E,"Invalid Sec-WebSocket-Accept header");return}const A=u.headers["sec-websocket-protocol"];let P;if(A!==void 0?p.size?p.has(A)||(P="Server sent an invalid subprotocol"):P="Server sent a subprotocol but none was requested":p.size&&(P="Server sent no subprotocol"),P){b(s,E,P);return}A&&(s._protocol=A);const Ee=u.headers["sec-websocket-extensions"];if(Ee!==void 0){if(!v){b(s,E,"Server sent a Sec-WebSocket-Extensions header but no extension was requested");return}let ae;try{ae=gs(Ee)}catch{b(s,E,"Invalid Sec-WebSocket-Extensions header");return}const be=Object.keys(ae);if(be.length!==1||be[0]!==C.extensionName){b(s,E,"Server indicated an extension that was not requested");return}try{v.accept(ae[C.extensionName])}catch{b(s,E,"Invalid Sec-WebSocket-Extensions header");return}s._extensions[C.extensionName]=v}s.setSocket(E,I,{generateMask:i.generateMask,maxPayload:i.maxPayload,skipUTF8Validation:i.skipUTF8Validation})}),i.finishRequest?i.finishRequest(_,s):_.end()}function te(s,e){s._readyState=m.CLOSING,s.emit("error",e),s.emitClose()}function Es(s){return s.path=s.socketPath,rt.connect(s)}function bs(s){return s.path=void 0,!s.servername&&s.servername!==""&&(s.servername=rt.isIP(s.host)?"":s.host),os.connect(s)}function b(s,e,t){s._readyState=m.CLOSING;const r=new Error(t);Error.captureStackTrace(r,b),e.setHeader?(e[nt]=!0,e.abort(),e.socket&&!e.socket.destroyed&&e.socket.destroy(),process.nextTick(te,s,r)):(e.destroy(r),e.once("error",s.emit.bind(s,"error")),e.once("close",s.emitClose.bind(s)))}function me(s,e,t){if(e){const r=ys(e).length;s._socket?s._sender._bufferedBytes+=r:s._bufferedAmount+=r}if(t){const r=new Error(`WebSocket is not open: readyState ${s.readyState} (${O[s.readyState]})`);process.nextTick(t,r)}}function xs(s,e){const t=this[y];t._closeFrameReceived=!0,t._closeMessage=e,t._closeCode=s,t._socket[y]!==void 0&&(t._socket.removeListener("data",oe),process.nextTick(lt,t._socket),s===1005?t.close():t.close(s,e))}function ks(){const s=this[y];s.isPaused||s._socket.resume()}function ws(s){const e=this[y];e._socket[y]!==void 0&&(e._socket.removeListener("data",oe),process.nextTick(lt,e._socket),e.close(s[ds])),e.emit("error",s)}function Ve(){this[y].emitClose()}function Os(s,e){this[y].emit("message",s,e)}function Ts(s){const e=this[y];e.pong(s,!e._isServer,it),e.emit("ping",s)}function Cs(s){this[y].emit("pong",s)}function lt(s){s.resume()}function ft(){const s=this[y];this.removeListener("close",ft),this.removeListener("data",oe),this.removeListener("end",ht),s._readyState=m.CLOSING;let e;!this._readableState.endEmitted&&!s._closeFrameReceived&&!s._receiver._writableState.errorEmitted&&(e=s._socket.read())!==null&&s._receiver.write(e),s._receiver.end(),this[y]=void 0,clearTimeout(s._closeTimer),s._receiver._writableState.finished||s._receiver._writableState.errorEmitted?s.emitClose():(s._receiver.on("error",Ve),s._receiver.on("finish",Ve))}function oe(s){this[y]._receiver.write(s)||this.pause()}function ht(){const s=this[y];s._readyState=m.CLOSING,s._receiver.end(),this.end()}function ct(){const s=this[y];this.removeListener("error",ct),this.on("error",it),s&&(s._readyState=m.CLOSING,this.destroy())}const Ks=ot,{tokenChars:Ls}=N;function Ns(s){const e=new Set;let t=-1,r=-1,i=0;for(i;i{const n=re.STATUS_CODES[426];i.writeHead(426,{"Content-Length":n.length,"Content-Type":"text/plain"}),i.end(n)}),this._server.listen(e.port,e.host,e.backlog,t)):e.server&&(this._server=e.server),this._server){const r=this.emit.bind(this,"connection");this._removeListeners=Fs(this._server,{listening:this.emit.bind(this,"listening"),error:this.emit.bind(this,"error"),upgrade:(i,n,o)=>{this.handleUpgrade(i,n,o,r)}})}e.perMessageDeflate===!0&&(e.perMessageDeflate={}),e.clientTracking&&(this.clients=new Set,this._shouldEmitClose=!1),this.options=e,this._state=ze}address(){if(this.options.noServer)throw new Error('The server is operating in "noServer" mode');return this._server?this._server.address():null}close(e){if(this._state===ut){e&&this.once("close",()=>{e(new Error("The server is not running"))}),process.nextTick(H,this);return}if(e&&this.once("close",e),this._state!==Ye)if(this._state=Ye,this.options.noServer||this.options.server)this._server&&(this._removeListeners(),this._removeListeners=this._server=null),this.clients?this.clients.size?this._shouldEmitClose=!0:process.nextTick(H,this):process.nextTick(H,this);else{const t=this._server;this._removeListeners(),this._removeListeners=this._server=null,t.close(()=>{H(this)})}}shouldHandle(e){if(this.options.path){const t=e.url.indexOf("?");if((t!==-1?e.url.slice(0,t):e.url)!==this.options.path)return!1}return!0}handleUpgrade(e,t,r,i){t.on("error",qe);const n=e.headers["sec-websocket-key"],o=+e.headers["sec-websocket-version"];if(e.method!=="GET"){B(this,e,t,405,"Invalid HTTP method");return}if(e.headers.upgrade.toLowerCase()!=="websocket"){B(this,e,t,400,"Invalid Upgrade header");return}if(!n||!Ds.test(n)){B(this,e,t,400,"Missing or invalid Sec-WebSocket-Key header");return}if(o!==8&&o!==13){B(this,e,t,400,"Missing or invalid Sec-WebSocket-Version header");return}if(!this.shouldHandle(e)){Y(t,400);return}const l=e.headers["sec-websocket-protocol"];let f=new Set;if(l!==void 0)try{f=Bs.parse(l)}catch{B(this,e,t,400,"Invalid Sec-WebSocket-Protocol header");return}const a=e.headers["sec-websocket-extensions"],c={};if(this.options.perMessageDeflate&&a!==void 0){const h=new R(this.options.perMessageDeflate,!0,this.options.maxPayload);try{const p=He.parse(a);p[R.extensionName]&&(h.accept(p[R.extensionName]),c[R.extensionName]=h)}catch{B(this,e,t,400,"Invalid or unacceptable Sec-WebSocket-Extensions header");return}}if(this.options.verifyClient){const h={origin:e.headers[`${o===8?"sec-websocket-origin":"origin"}`],secure:!!(e.socket.authorized||e.socket.encrypted),req:e};if(this.options.verifyClient.length===2){this.options.verifyClient(h,(p,v,_,u)=>{if(!p)return Y(t,v||401,_,u);this.completeUpgrade(c,n,f,e,t,r,i)});return}if(!this.options.verifyClient(h))return Y(t,401)}this.completeUpgrade(c,n,f,e,t,r,i)}completeUpgrade(e,t,r,i,n,o,l){if(!n.readable||!n.writable)return n.destroy();if(n[Is])throw new Error("server.handleUpgrade() was called more than once with the same socket, possibly due to a misconfiguration");if(this._state>ze)return Y(n,503);const a=["HTTP/1.1 101 Switching Protocols","Upgrade: websocket","Connection: Upgrade",`Sec-WebSocket-Accept: ${Us("sha1").update(t+Ms).digest("base64")}`],c=new this.options.WebSocket(null);if(r.size){const h=this.options.handleProtocols?this.options.handleProtocols(r,i):r.values().next().value;h&&(a.push(`Sec-WebSocket-Protocol: ${h}`),c._protocol=h)}if(e[R.extensionName]){const h=e[R.extensionName].params,p=He.format({[R.extensionName]:[h]});a.push(`Sec-WebSocket-Extensions: ${p}`),c._extensions=e}this.emit("headers",a,i),n.write(a.concat(`\r -`).join(`\r -`)),n.removeListener("error",qe),c.setSocket(n,o,{maxPayload:this.options.maxPayload,skipUTF8Validation:this.options.skipUTF8Validation}),this.clients&&(this.clients.add(c),c.on("close",()=>{this.clients.delete(c),this._shouldEmitClose&&!this.clients.size&&process.nextTick(H,this)})),l(c,i)}}var As=Ws;function Fs(s,e){for(const t of Object.keys(e))s.on(t,e[t]);return function(){for(const r of Object.keys(e))s.removeListener(r,e[r])}}function H(s){s._state=ut,s.emit("close")}function qe(){this.destroy()}function Y(s,e,t,r){t=t||re.STATUS_CODES[e],r={Connection:"close","Content-Type":"text/html","Content-Length":Buffer.byteLength(t),...r},s.once("finish",s.destroy),s.end(`HTTP/1.1 ${e} ${re.STATUS_CODES[e]}\r -`+Object.keys(r).map(i=>`${i}: ${r[i]}`).join(`\r -`)+`\r -\r -`+t)}function B(s,e,t,r,i){if(s.listenerCount("wsClientError")){const n=new Error(i);Error.captureStackTrace(n,B),s.emit("wsClientError",n,t,e)}else Y(t,r,i)}const Xs=As;export{Ys as Receiver,qs as Sender,Ks as WebSocket,Xs as WebSocketServer,Gs as createWebSocketStream,Ks as default}; -//# sourceMappingURL=wrapper-b7460963-69b64cfb.js.map diff --git a/spaces/williamcfrancis/Deep-Blind-Motion-Deblurring/sidekick/datah/__init__.py b/spaces/williamcfrancis/Deep-Blind-Motion-Deblurring/sidekick/datah/__init__.py deleted file mode 100644 index f6d584392958cb43c77c94468c1d5feb053fe60a..0000000000000000000000000000000000000000 --- a/spaces/williamcfrancis/Deep-Blind-Motion-Deblurring/sidekick/datah/__init__.py +++ /dev/null @@ -1 +0,0 @@ -from .loader import Loader diff --git a/spaces/wliu88/StructDiffusionDemo/src/StructDiffusion/utils/rotation_continuity.py b/spaces/wliu88/StructDiffusionDemo/src/StructDiffusion/utils/rotation_continuity.py deleted file mode 100644 index 85602d23f2ea89869df57d4ab82c70c0e46df936..0000000000000000000000000000000000000000 --- a/spaces/wliu88/StructDiffusionDemo/src/StructDiffusion/utils/rotation_continuity.py +++ /dev/null @@ -1,395 +0,0 @@ -import torch -import torch.nn as nn -from torch.autograd import Variable -import numpy as np - -# Code adapted from the rotation continuity repo (https://github.com/papagina/RotationContinuity) - -#T_poses num*3 -#r_matrix batch*3*3 -def compute_pose_from_rotation_matrix(T_pose, r_matrix): - batch=r_matrix.shape[0] - joint_num = T_pose.shape[0] - r_matrices = r_matrix.view(batch,1, 3,3).expand(batch,joint_num, 3,3).contiguous().view(batch*joint_num,3,3) - src_poses = T_pose.view(1,joint_num,3,1).expand(batch,joint_num,3,1).contiguous().view(batch*joint_num,3,1) - - out_poses = torch.matmul(r_matrices, src_poses) #(batch*joint_num)*3*1 - - return out_poses.view(batch, joint_num, 3) - -# batch*n -def normalize_vector( v, return_mag =False): - batch=v.shape[0] - v_mag = torch.sqrt(v.pow(2).sum(1))# batch - v_mag = torch.max(v_mag, torch.autograd.Variable(torch.FloatTensor([1e-8]).to(v.device))) - v_mag = v_mag.view(batch,1).expand(batch,v.shape[1]) - v = v/v_mag - if(return_mag==True): - return v, v_mag[:,0] - else: - return v - -# u, v batch*n -def cross_product( u, v): - batch = u.shape[0] - #print (u.shape) - #print (v.shape) - i = u[:,1]*v[:,2] - u[:,2]*v[:,1] - j = u[:,2]*v[:,0] - u[:,0]*v[:,2] - k = u[:,0]*v[:,1] - u[:,1]*v[:,0] - - out = torch.cat((i.view(batch,1), j.view(batch,1), k.view(batch,1)),1)#batch*3 - - return out - - -#poses batch*6 -#poses -def compute_rotation_matrix_from_ortho6d(ortho6d): - x_raw = ortho6d[:,0:3]#batch*3 - y_raw = ortho6d[:,3:6]#batch*3 - - x = normalize_vector(x_raw) #batch*3 - z = cross_product(x,y_raw) #batch*3 - z = normalize_vector(z)#batch*3 - y = cross_product(z,x)#batch*3 - - x = x.view(-1,3,1) - y = y.view(-1,3,1) - z = z.view(-1,3,1) - matrix = torch.cat((x,y,z), 2) #batch*3*3 - return matrix - - -#in batch*6 -#out batch*5 -def stereographic_project(a): - dim = a.shape[1] - a = normalize_vector(a) - out = a[:,0:dim-1]/(1-a[:,dim-1]) - return out - - - -#in a batch*5, axis int -def stereographic_unproject(a, axis=None): - """ - Inverse of stereographic projection: increases dimension by one. - """ - batch=a.shape[0] - if axis is None: - axis = a.shape[1] - s2 = torch.pow(a,2).sum(1) #batch - ans = torch.autograd.Variable(torch.zeros(batch, a.shape[1]+1).cuda()) #batch*6 - unproj = 2*a/(s2+1).view(batch,1).repeat(1,a.shape[1]) #batch*5 - if(axis>0): - ans[:,:axis] = unproj[:,:axis] #batch*(axis-0) - ans[:,axis] = (s2-1)/(s2+1) #batch - ans[:,axis+1:] = unproj[:,axis:] #batch*(5-axis) # Note that this is a no-op if the default option (last axis) is used - return ans - - -#a batch*5 -#out batch*3*3 -def compute_rotation_matrix_from_ortho5d(a): - batch = a.shape[0] - proj_scale_np = np.array([np.sqrt(2)+1, np.sqrt(2)+1, np.sqrt(2)]) #3 - proj_scale = torch.autograd.Variable(torch.FloatTensor(proj_scale_np).cuda()).view(1,3).repeat(batch,1) #batch,3 - - u = stereographic_unproject(a[:, 2:5] * proj_scale, axis=0)#batch*4 - norm = torch.sqrt(torch.pow(u[:,1:],2).sum(1)) #batch - u = u/ norm.view(batch,1).repeat(1,u.shape[1]) #batch*4 - b = torch.cat((a[:,0:2], u),1)#batch*6 - matrix = compute_rotation_matrix_from_ortho6d(b) - return matrix - - -#quaternion batch*4 -def compute_rotation_matrix_from_quaternion( quaternion): - batch=quaternion.shape[0] - - - quat = normalize_vector(quaternion).contiguous() - - qw = quat[...,0].contiguous().view(batch, 1) - qx = quat[...,1].contiguous().view(batch, 1) - qy = quat[...,2].contiguous().view(batch, 1) - qz = quat[...,3].contiguous().view(batch, 1) - - # Unit quaternion rotation matrices computatation - xx = qx*qx - yy = qy*qy - zz = qz*qz - xy = qx*qy - xz = qx*qz - yz = qy*qz - xw = qx*qw - yw = qy*qw - zw = qz*qw - - row0 = torch.cat((1-2*yy-2*zz, 2*xy - 2*zw, 2*xz + 2*yw), 1) #batch*3 - row1 = torch.cat((2*xy+ 2*zw, 1-2*xx-2*zz, 2*yz-2*xw ), 1) #batch*3 - row2 = torch.cat((2*xz-2*yw, 2*yz+2*xw, 1-2*xx-2*yy), 1) #batch*3 - - matrix = torch.cat((row0.view(batch, 1, 3), row1.view(batch,1,3), row2.view(batch,1,3)),1) #batch*3*3 - - return matrix - -#axisAngle batch*4 angle, x,y,z -def compute_rotation_matrix_from_axisAngle( axisAngle): - batch = axisAngle.shape[0] - - theta = torch.tanh(axisAngle[:,0])*np.pi #[-180, 180] - sin = torch.sin(theta*0.5) - axis = normalize_vector(axisAngle[:,1:4]) #batch*3 - qw = torch.cos(theta*0.5) - qx = axis[:,0]*sin - qy = axis[:,1]*sin - qz = axis[:,2]*sin - - # Unit quaternion rotation matrices computatation - xx = (qx*qx).view(batch,1) - yy = (qy*qy).view(batch,1) - zz = (qz*qz).view(batch,1) - xy = (qx*qy).view(batch,1) - xz = (qx*qz).view(batch,1) - yz = (qy*qz).view(batch,1) - xw = (qx*qw).view(batch,1) - yw = (qy*qw).view(batch,1) - zw = (qz*qw).view(batch,1) - - row0 = torch.cat((1-2*yy-2*zz, 2*xy - 2*zw, 2*xz + 2*yw), 1) #batch*3 - row1 = torch.cat((2*xy+ 2*zw, 1-2*xx-2*zz, 2*yz-2*xw ), 1) #batch*3 - row2 = torch.cat((2*xz-2*yw, 2*yz+2*xw, 1-2*xx-2*yy), 1) #batch*3 - - matrix = torch.cat((row0.view(batch, 1, 3), row1.view(batch,1,3), row2.view(batch,1,3)),1) #batch*3*3 - - return matrix - -#axisAngle batch*3 (x,y,z)*theta -def compute_rotation_matrix_from_Rodriguez( rod): - batch = rod.shape[0] - - axis, theta = normalize_vector(rod, return_mag=True) - - sin = torch.sin(theta) - - - qw = torch.cos(theta) - qx = axis[:,0]*sin - qy = axis[:,1]*sin - qz = axis[:,2]*sin - - # Unit quaternion rotation matrices computatation - xx = (qx*qx).view(batch,1) - yy = (qy*qy).view(batch,1) - zz = (qz*qz).view(batch,1) - xy = (qx*qy).view(batch,1) - xz = (qx*qz).view(batch,1) - yz = (qy*qz).view(batch,1) - xw = (qx*qw).view(batch,1) - yw = (qy*qw).view(batch,1) - zw = (qz*qw).view(batch,1) - - row0 = torch.cat((1-2*yy-2*zz, 2*xy - 2*zw, 2*xz + 2*yw), 1) #batch*3 - row1 = torch.cat((2*xy+ 2*zw, 1-2*xx-2*zz, 2*yz-2*xw ), 1) #batch*3 - row2 = torch.cat((2*xz-2*yw, 2*yz+2*xw, 1-2*xx-2*yy), 1) #batch*3 - - matrix = torch.cat((row0.view(batch, 1, 3), row1.view(batch,1,3), row2.view(batch,1,3)),1) #batch*3*3 - - return matrix - -#axisAngle batch*3 a,b,c -def compute_rotation_matrix_from_hopf( hopf): - batch = hopf.shape[0] - - theta = (torch.tanh(hopf[:,0])+1.0)*np.pi/2.0 #[0, pi] - phi = (torch.tanh(hopf[:,1])+1.0)*np.pi #[0,2pi) - tao = (torch.tanh(hopf[:,2])+1.0)*np.pi #[0,2pi) - - qw = torch.cos(theta/2)*torch.cos(tao/2) - qx = torch.cos(theta/2)*torch.sin(tao/2) - qy = torch.sin(theta/2)*torch.cos(phi+tao/2) - qz = torch.sin(theta/2)*torch.sin(phi+tao/2) - - # Unit quaternion rotation matrices computatation - xx = (qx*qx).view(batch,1) - yy = (qy*qy).view(batch,1) - zz = (qz*qz).view(batch,1) - xy = (qx*qy).view(batch,1) - xz = (qx*qz).view(batch,1) - yz = (qy*qz).view(batch,1) - xw = (qx*qw).view(batch,1) - yw = (qy*qw).view(batch,1) - zw = (qz*qw).view(batch,1) - - row0 = torch.cat((1-2*yy-2*zz, 2*xy - 2*zw, 2*xz + 2*yw), 1) #batch*3 - row1 = torch.cat((2*xy+ 2*zw, 1-2*xx-2*zz, 2*yz-2*xw ), 1) #batch*3 - row2 = torch.cat((2*xz-2*yw, 2*yz+2*xw, 1-2*xx-2*yy), 1) #batch*3 - - matrix = torch.cat((row0.view(batch, 1, 3), row1.view(batch,1,3), row2.view(batch,1,3)),1) #batch*3*3 - - return matrix - - -#euler batch*4 -#output cuda batch*3*3 matrices in the rotation order of XZ'Y'' (intrinsic) or YZX (extrinsic) -def compute_rotation_matrix_from_euler(euler): - batch=euler.shape[0] - - c1=torch.cos(euler[:,0]).view(batch,1)#batch*1 - s1=torch.sin(euler[:,0]).view(batch,1)#batch*1 - c2=torch.cos(euler[:,2]).view(batch,1)#batch*1 - s2=torch.sin(euler[:,2]).view(batch,1)#batch*1 - c3=torch.cos(euler[:,1]).view(batch,1)#batch*1 - s3=torch.sin(euler[:,1]).view(batch,1)#batch*1 - - row1=torch.cat((c2*c3, -s2, c2*s3 ), 1).view(-1,1,3) #batch*1*3 - row2=torch.cat((c1*s2*c3+s1*s3, c1*c2, c1*s2*s3-s1*c3), 1).view(-1,1,3) #batch*1*3 - row3=torch.cat((s1*s2*c3-c1*s3, s1*c2, s1*s2*s3+c1*c3), 1).view(-1,1,3) #batch*1*3 - - matrix = torch.cat((row1, row2, row3), 1) #batch*3*3 - - - return matrix - - -#euler_sin_cos batch*6 -#output cuda batch*3*3 matrices in the rotation order of XZ'Y'' (intrinsic) or YZX (extrinsic) -def compute_rotation_matrix_from_euler_sin_cos(euler_sin_cos): - batch=euler_sin_cos.shape[0] - - s1 = euler_sin_cos[:,0].view(batch,1) - c1 = euler_sin_cos[:,1].view(batch,1) - s2 = euler_sin_cos[:,2].view(batch,1) - c2 = euler_sin_cos[:,3].view(batch,1) - s3 = euler_sin_cos[:,4].view(batch,1) - c3 = euler_sin_cos[:,5].view(batch,1) - - - row1=torch.cat((c2*c3, -s2, c2*s3 ), 1).view(-1,1,3) #batch*1*3 - row2=torch.cat((c1*s2*c3+s1*s3, c1*c2, c1*s2*s3-s1*c3), 1).view(-1,1,3) #batch*1*3 - row3=torch.cat((s1*s2*c3-c1*s3, s1*c2, s1*s2*s3+c1*c3), 1).view(-1,1,3) #batch*1*3 - - matrix = torch.cat((row1, row2, row3), 1) #batch*3*3 - - - return matrix - - -#matrices batch*3*3 -#both matrix are orthogonal rotation matrices -#out theta between 0 to 180 degree batch -def compute_geodesic_distance_from_two_matrices(m1, m2): - batch=m1.shape[0] - m = torch.bmm(m1, m2.transpose(1,2)) #batch*3*3 - - cos = ( m[:,0,0] + m[:,1,1] + m[:,2,2] - 1 )/2 - cos = torch.min(cos, torch.autograd.Variable(torch.ones(batch).cuda()) ) - cos = torch.max(cos, torch.autograd.Variable(torch.ones(batch).cuda())*-1 ) - - - theta = torch.acos(cos) - - #theta = torch.min(theta, 2*np.pi - theta) - - - return theta - - -#matrices batch*3*3 -#both matrix are orthogonal rotation matrices -#out theta between 0 to 180 degree batch -def compute_angle_from_r_matrices(m): - - batch=m.shape[0] - - cos = ( m[:,0,0] + m[:,1,1] + m[:,2,2] - 1 )/2 - cos = torch.min(cos, torch.autograd.Variable(torch.ones(batch).cuda()) ) - cos = torch.max(cos, torch.autograd.Variable(torch.ones(batch).cuda())*-1 ) - - theta = torch.acos(cos) - - return theta - -def get_sampled_rotation_matrices_by_quat(batch): - #quat = torch.autograd.Variable(torch.rand(batch,4).cuda()) - quat = torch.autograd.Variable(torch.randn(batch, 4).cuda()) - matrix = compute_rotation_matrix_from_quaternion(quat) - return matrix - -def get_sampled_rotation_matrices_by_hpof(batch): - - theta = torch.autograd.Variable(torch.FloatTensor(np.random.uniform(0,1, batch)*np.pi).cuda()) #[0, pi] - phi = torch.autograd.Variable(torch.FloatTensor(np.random.uniform(0,2,batch)*np.pi).cuda()) #[0,2pi) - tao = torch.autograd.Variable(torch.FloatTensor(np.random.uniform(0,2,batch)*np.pi).cuda()) #[0,2pi) - - - qw = torch.cos(theta/2)*torch.cos(tao/2) - qx = torch.cos(theta/2)*torch.sin(tao/2) - qy = torch.sin(theta/2)*torch.cos(phi+tao/2) - qz = torch.sin(theta/2)*torch.sin(phi+tao/2) - - # Unit quaternion rotation matrices computatation - xx = (qx*qx).view(batch,1) - yy = (qy*qy).view(batch,1) - zz = (qz*qz).view(batch,1) - xy = (qx*qy).view(batch,1) - xz = (qx*qz).view(batch,1) - yz = (qy*qz).view(batch,1) - xw = (qx*qw).view(batch,1) - yw = (qy*qw).view(batch,1) - zw = (qz*qw).view(batch,1) - - row0 = torch.cat((1-2*yy-2*zz, 2*xy - 2*zw, 2*xz + 2*yw), 1) #batch*3 - row1 = torch.cat((2*xy+ 2*zw, 1-2*xx-2*zz, 2*yz-2*xw ), 1) #batch*3 - row2 = torch.cat((2*xz-2*yw, 2*yz+2*xw, 1-2*xx-2*yy), 1) #batch*3 - - matrix = torch.cat((row0.view(batch, 1, 3), row1.view(batch,1,3), row2.view(batch,1,3)),1) #batch*3*3 - - return matrix - -#axisAngle batch*4 angle, x,y,z -def get_sampled_rotation_matrices_by_axisAngle( batch, return_quaternion=False): - - theta = torch.autograd.Variable(torch.FloatTensor(np.random.uniform(-1,1, batch)*np.pi).cuda()) #[0, pi] #[-180, 180] - sin = torch.sin(theta) - axis = torch.autograd.Variable(torch.randn(batch, 3).cuda()) - axis = normalize_vector(axis) #batch*3 - qw = torch.cos(theta) - qx = axis[:,0]*sin - qy = axis[:,1]*sin - qz = axis[:,2]*sin - - quaternion = torch.cat((qw.view(batch,1), qx.view(batch,1), qy.view(batch,1), qz.view(batch,1)), 1 ) - - # Unit quaternion rotation matrices computatation - xx = (qx*qx).view(batch,1) - yy = (qy*qy).view(batch,1) - zz = (qz*qz).view(batch,1) - xy = (qx*qy).view(batch,1) - xz = (qx*qz).view(batch,1) - yz = (qy*qz).view(batch,1) - xw = (qx*qw).view(batch,1) - yw = (qy*qw).view(batch,1) - zw = (qz*qw).view(batch,1) - - row0 = torch.cat((1-2*yy-2*zz, 2*xy - 2*zw, 2*xz + 2*yw), 1) #batch*3 - row1 = torch.cat((2*xy+ 2*zw, 1-2*xx-2*zz, 2*yz-2*xw ), 1) #batch*3 - row2 = torch.cat((2*xz-2*yw, 2*yz+2*xw, 1-2*xx-2*yy), 1) #batch*3 - - matrix = torch.cat((row0.view(batch, 1, 3), row1.view(batch,1,3), row2.view(batch,1,3)),1) #batch*3*3 - - if(return_quaternion==True): - return matrix, quaternion - else: - return matrix - - - - - - - - - diff --git a/spaces/wydgg/bingo-wyd-ai/src/lib/bots/bing/sr.ts b/spaces/wydgg/bingo-wyd-ai/src/lib/bots/bing/sr.ts deleted file mode 100644 index 7cae14da7362bd6cc1e234851c11ca67e5a99f0c..0000000000000000000000000000000000000000 --- a/spaces/wydgg/bingo-wyd-ai/src/lib/bots/bing/sr.ts +++ /dev/null @@ -1,106 +0,0 @@ -// @ts-ignore -const SpeechRecognitionPolyfill: typeof webkitSpeechRecognition = typeof window !== 'undefined' ? ( - // @ts-ignore - window.SpeechRecognition || - window.webkitSpeechRecognition || - // @ts-ignore - window.mozSpeechRecognition || - // @ts-ignore - window.msSpeechRecognition || - // @ts-ignore - window.oSpeechRecognition -) as typeof webkitSpeechRecognition : undefined - -type subscriber = (msg: string, command?: string) => void - -export class SR { - recognition?: SpeechRecognition - onchange?: subscriber - transcript: boolean = false - listening: boolean = false - private commandsRe?: RegExp - constructor(commands: string[]) { - this.recognition = SpeechRecognitionPolyfill ? new SpeechRecognitionPolyfill() : undefined - if (!this.recognition) { - return - } - this.configuration('zh-CN') - if (commands.length) { - this.commandsRe = new RegExp(`^(${commands.join('|')})。?$`) - } - this.recognition.onresult = this.speechRecognition - this.recognition.onerror = (err) => { - console.log('err', err.error) - this.stop() - } - this.recognition.onend = () => { - if (this.recognition && this.listening) { - this.recognition.start() - } - } - } - - speechRecognition = (event: SpeechRecognitionEvent) => { - if (!this.listening) return - for (var i = event.resultIndex; i < event.results.length; i++) { - let result = event.results[i] - if (result.isFinal) { - var alt = result[0] - const text = alt.transcript.trim() - if (this.commandsRe && this.commandsRe.test(text)) { - return this.onchange?.('', RegExp.$1) - } - if (!this.transcript) return - this.onchange?.(text) - } - } - } - - private configuration = async (lang: string = 'zh-CN') => { - return new Promise((resolve) => { - if (this.recognition) { - this.recognition.continuous = true - this.recognition.lang = lang - this.recognition.onstart = resolve - } - }) - } - - start = async () => { - if (this.recognition && !this.listening) { - await this.recognition.start() - this.transcript = true - this.listening = true - } - } - - stop = () => { - if (this.recognition) { - this.recognition.stop() - this.transcript = false - this.listening = false - } - } - - - pause = () => { - if (this.recognition) { - this.transcript = false - } - } - - resume = () => { - if (this.recognition) { - this.transcript = true - } - } - - abort = () => { - if (this.recognition && this.transcript) { - this.recognition.abort() - this.transcript = false - this.listening = false - } - } -} - diff --git a/spaces/xdecoder/Demo/__init__.py b/spaces/xdecoder/Demo/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/xdecoder/Instruct-X-Decoder/utils/distributed.py b/spaces/xdecoder/Instruct-X-Decoder/utils/distributed.py deleted file mode 100644 index 521a934de05bca3159bb595cd0ab997ee08dd61a..0000000000000000000000000000000000000000 --- a/spaces/xdecoder/Instruct-X-Decoder/utils/distributed.py +++ /dev/null @@ -1,180 +0,0 @@ -import os -import time -import torch -import pickle -import torch.distributed as dist - - -def init_distributed(opt): - opt['CUDA'] = opt.get('CUDA', True) and torch.cuda.is_available() - if 'OMPI_COMM_WORLD_SIZE' not in os.environ: - # application was started without MPI - # default to single node with single process - opt['env_info'] = 'no MPI' - opt['world_size'] = 1 - opt['local_size'] = 1 - opt['rank'] = 0 - opt['local_rank'] = 0 - opt['master_address'] = '127.0.0.1' - opt['master_port'] = '8673' - else: - # application was started with MPI - # get MPI parameters - opt['world_size'] = int(os.environ['OMPI_COMM_WORLD_SIZE']) - opt['local_size'] = int(os.environ['OMPI_COMM_WORLD_LOCAL_SIZE']) - opt['rank'] = int(os.environ['OMPI_COMM_WORLD_RANK']) - opt['local_rank'] = int(os.environ['OMPI_COMM_WORLD_LOCAL_RANK']) - - # set up device - if not opt['CUDA']: - assert opt['world_size'] == 1, 'multi-GPU training without CUDA is not supported since we use NCCL as communication backend' - opt['device'] = torch.device("cpu") - else: - torch.cuda.set_device(opt['local_rank']) - opt['device'] = torch.device("cuda", opt['local_rank']) - return opt - -def is_main_process(): - rank = 0 - if 'OMPI_COMM_WORLD_SIZE' in os.environ: - rank = int(os.environ['OMPI_COMM_WORLD_RANK']) - - return rank == 0 - -def get_world_size(): - if not dist.is_available(): - return 1 - if not dist.is_initialized(): - return 1 - return dist.get_world_size() - -def get_rank(): - if not dist.is_available(): - return 0 - if not dist.is_initialized(): - return 0 - return dist.get_rank() - - -def synchronize(): - """ - Helper function to synchronize (barrier) among all processes when - using distributed training - """ - if not dist.is_available(): - return - if not dist.is_initialized(): - return - world_size = dist.get_world_size() - rank = dist.get_rank() - if world_size == 1: - return - - def _send_and_wait(r): - if rank == r: - tensor = torch.tensor(0, device="cuda") - else: - tensor = torch.tensor(1, device="cuda") - dist.broadcast(tensor, r) - while tensor.item() == 1: - time.sleep(1) - - _send_and_wait(0) - # now sync on the main process - _send_and_wait(1) - - -def all_gather(data): - """ - Run all_gather on arbitrary picklable data (not necessarily tensors) - Args: - data: any picklable object - Returns: - list[data]: list of data gathered from each rank - """ - world_size = get_world_size() - if world_size == 1: - return [data] - - # serialized to a Tensor - buffer = pickle.dumps(data) - storage = torch.ByteStorage.from_buffer(buffer) - tensor = torch.ByteTensor(storage).to("cuda") - - # obtain Tensor size of each rank - local_size = torch.IntTensor([tensor.numel()]).to("cuda") - size_list = [torch.IntTensor([0]).to("cuda") for _ in range(world_size)] - dist.all_gather(size_list, local_size) - size_list = [int(size.item()) for size in size_list] - max_size = max(size_list) - - # receiving Tensor from all ranks - # we pad the tensor because torch all_gather does not support - # gathering tensors of different shapes - tensor_list = [] - for _ in size_list: - tensor_list.append(torch.ByteTensor(size=(max_size,)).to("cuda")) - if local_size != max_size: - padding = torch.ByteTensor(size=(max_size - local_size,)).to("cuda") - tensor = torch.cat((tensor, padding), dim=0) - dist.all_gather(tensor_list, tensor) - - data_list = [] - for size, tensor in zip(size_list, tensor_list): - buffer = tensor.cpu().numpy().tobytes()[:size] - data_list.append(pickle.loads(buffer)) - - return data_list - - -def reduce_dict(input_dict, average=True): - """ - Args: - input_dict (dict): all the values will be reduced - average (bool): whether to do average or sum - Reduce the values in the dictionary from all processes so that process with rank - 0 has the averaged results. Returns a dict with the same fields as - input_dict, after reduction. - """ - world_size = get_world_size() - if world_size < 2: - return input_dict - with torch.no_grad(): - names = [] - values = [] - # sort the keys so that they are consistent across processes - for k in sorted(input_dict.keys()): - names.append(k) - values.append(input_dict[k]) - values = torch.stack(values, dim=0) - dist.reduce(values, dst=0) - if dist.get_rank() == 0 and average: - # only main process gets accumulated, so only divide by - # world_size in this case - values /= world_size - reduced_dict = {k: v for k, v in zip(names, values)} - return reduced_dict - - -def broadcast_data(data): - if not torch.distributed.is_initialized(): - return data - rank = dist.get_rank() - if rank == 0: - data_tensor = torch.tensor(data + [0], device="cuda") - else: - data_tensor = torch.tensor(data + [1], device="cuda") - torch.distributed.broadcast(data_tensor, 0) - while data_tensor.cpu().numpy()[-1] == 1: - time.sleep(1) - - return data_tensor.cpu().numpy().tolist()[:-1] - - -def reduce_sum(tensor): - if get_world_size() <= 1: - return tensor - - tensor = tensor.clone() - dist.all_reduce(tensor, op=dist.ReduceOp.SUM) - return tensor \ No newline at end of file diff --git a/spaces/xdecoder/Instruct-X-Decoder/xdecoder/backbone/registry.py b/spaces/xdecoder/Instruct-X-Decoder/xdecoder/backbone/registry.py deleted file mode 100644 index 9e19cc8068fff5f5de219c0739594b404d837e00..0000000000000000000000000000000000000000 --- a/spaces/xdecoder/Instruct-X-Decoder/xdecoder/backbone/registry.py +++ /dev/null @@ -1,14 +0,0 @@ -_model_entrypoints = {} - - -def register_backbone(fn): - module_name_split = fn.__module__.split('.') - model_name = module_name_split[-1] - _model_entrypoints[model_name] = fn - return fn - -def model_entrypoints(model_name): - return _model_entrypoints[model_name] - -def is_model(model_name): - return model_name in _model_entrypoints diff --git a/spaces/xiang-wuu/yolov5/utils/loggers/wandb/wandb_utils.py b/spaces/xiang-wuu/yolov5/utils/loggers/wandb/wandb_utils.py deleted file mode 100644 index 04521bf3681ddc8be3db942820725d9061f47f6a..0000000000000000000000000000000000000000 --- a/spaces/xiang-wuu/yolov5/utils/loggers/wandb/wandb_utils.py +++ /dev/null @@ -1,577 +0,0 @@ -"""Utilities and tools for tracking runs with Weights & Biases.""" - -import logging -import os -import sys -from contextlib import contextmanager -from pathlib import Path -from typing import Dict - -import yaml -from tqdm import tqdm - -FILE = Path(__file__).resolve() -ROOT = FILE.parents[3] # YOLOv5 root directory -if str(ROOT) not in sys.path: - sys.path.append(str(ROOT)) # add ROOT to PATH - -from utils.dataloaders import LoadImagesAndLabels, img2label_paths -from utils.general import LOGGER, check_dataset, check_file - -try: - import wandb - - assert hasattr(wandb, '__version__') # verify package import not local dir -except (ImportError, AssertionError): - wandb = None - -RANK = int(os.getenv('RANK', -1)) -WANDB_ARTIFACT_PREFIX = 'wandb-artifact://' - - -def remove_prefix(from_string, prefix=WANDB_ARTIFACT_PREFIX): - return from_string[len(prefix):] - - -def check_wandb_config_file(data_config_file): - wandb_config = '_wandb.'.join(data_config_file.rsplit('.', 1)) # updated data.yaml path - if Path(wandb_config).is_file(): - return wandb_config - return data_config_file - - -def check_wandb_dataset(data_file): - is_trainset_wandb_artifact = False - is_valset_wandb_artifact = False - if check_file(data_file) and data_file.endswith('.yaml'): - with open(data_file, errors='ignore') as f: - data_dict = yaml.safe_load(f) - is_trainset_wandb_artifact = isinstance(data_dict['train'], - str) and data_dict['train'].startswith(WANDB_ARTIFACT_PREFIX) - is_valset_wandb_artifact = isinstance(data_dict['val'], - str) and data_dict['val'].startswith(WANDB_ARTIFACT_PREFIX) - if is_trainset_wandb_artifact or is_valset_wandb_artifact: - return data_dict - else: - return check_dataset(data_file) - - -def get_run_info(run_path): - run_path = Path(remove_prefix(run_path, WANDB_ARTIFACT_PREFIX)) - run_id = run_path.stem - project = run_path.parent.stem - entity = run_path.parent.parent.stem - model_artifact_name = 'run_' + run_id + '_model' - return entity, project, run_id, model_artifact_name - - -def check_wandb_resume(opt): - process_wandb_config_ddp_mode(opt) if RANK not in [-1, 0] else None - if isinstance(opt.resume, str): - if opt.resume.startswith(WANDB_ARTIFACT_PREFIX): - if RANK not in [-1, 0]: # For resuming DDP runs - entity, project, run_id, model_artifact_name = get_run_info(opt.resume) - api = wandb.Api() - artifact = api.artifact(entity + '/' + project + '/' + model_artifact_name + ':latest') - modeldir = artifact.download() - opt.weights = str(Path(modeldir) / "last.pt") - return True - return None - - -def process_wandb_config_ddp_mode(opt): - with open(check_file(opt.data), errors='ignore') as f: - data_dict = yaml.safe_load(f) # data dict - train_dir, val_dir = None, None - if isinstance(data_dict['train'], str) and data_dict['train'].startswith(WANDB_ARTIFACT_PREFIX): - api = wandb.Api() - train_artifact = api.artifact(remove_prefix(data_dict['train']) + ':' + opt.artifact_alias) - train_dir = train_artifact.download() - train_path = Path(train_dir) / 'data/images/' - data_dict['train'] = str(train_path) - - if isinstance(data_dict['val'], str) and data_dict['val'].startswith(WANDB_ARTIFACT_PREFIX): - api = wandb.Api() - val_artifact = api.artifact(remove_prefix(data_dict['val']) + ':' + opt.artifact_alias) - val_dir = val_artifact.download() - val_path = Path(val_dir) / 'data/images/' - data_dict['val'] = str(val_path) - if train_dir or val_dir: - ddp_data_path = str(Path(val_dir) / 'wandb_local_data.yaml') - with open(ddp_data_path, 'w') as f: - yaml.safe_dump(data_dict, f) - opt.data = ddp_data_path - - -class WandbLogger(): - """Log training runs, datasets, models, and predictions to Weights & Biases. - - This logger sends information to W&B at wandb.ai. By default, this information - includes hyperparameters, system configuration and metrics, model metrics, - and basic data metrics and analyses. - - By providing additional command line arguments to train.py, datasets, - models and predictions can also be logged. - - For more on how this logger is used, see the Weights & Biases documentation: - https://docs.wandb.com/guides/integrations/yolov5 - """ - - def __init__(self, opt, run_id=None, job_type='Training'): - """ - - Initialize WandbLogger instance - - Upload dataset if opt.upload_dataset is True - - Setup trainig processes if job_type is 'Training' - - arguments: - opt (namespace) -- Commandline arguments for this run - run_id (str) -- Run ID of W&B run to be resumed - job_type (str) -- To set the job_type for this run - - """ - # Pre-training routine -- - self.job_type = job_type - self.wandb, self.wandb_run = wandb, None if not wandb else wandb.run - self.val_artifact, self.train_artifact = None, None - self.train_artifact_path, self.val_artifact_path = None, None - self.result_artifact = None - self.val_table, self.result_table = None, None - self.bbox_media_panel_images = [] - self.val_table_path_map = None - self.max_imgs_to_log = 16 - self.wandb_artifact_data_dict = None - self.data_dict = None - # It's more elegant to stick to 1 wandb.init call, - # but useful config data is overwritten in the WandbLogger's wandb.init call - if isinstance(opt.resume, str): # checks resume from artifact - if opt.resume.startswith(WANDB_ARTIFACT_PREFIX): - entity, project, run_id, model_artifact_name = get_run_info(opt.resume) - model_artifact_name = WANDB_ARTIFACT_PREFIX + model_artifact_name - assert wandb, 'install wandb to resume wandb runs' - # Resume wandb-artifact:// runs here| workaround for not overwriting wandb.config - self.wandb_run = wandb.init(id=run_id, - project=project, - entity=entity, - resume='allow', - allow_val_change=True) - opt.resume = model_artifact_name - elif self.wandb: - self.wandb_run = wandb.init(config=opt, - resume="allow", - project='YOLOv5' if opt.project == 'runs/train' else Path(opt.project).stem, - entity=opt.entity, - name=opt.name if opt.name != 'exp' else None, - job_type=job_type, - id=run_id, - allow_val_change=True) if not wandb.run else wandb.run - if self.wandb_run: - if self.job_type == 'Training': - if opt.upload_dataset: - if not opt.resume: - self.wandb_artifact_data_dict = self.check_and_upload_dataset(opt) - - if opt.resume: - # resume from artifact - if isinstance(opt.resume, str) and opt.resume.startswith(WANDB_ARTIFACT_PREFIX): - self.data_dict = dict(self.wandb_run.config.data_dict) - else: # local resume - self.data_dict = check_wandb_dataset(opt.data) - else: - self.data_dict = check_wandb_dataset(opt.data) - self.wandb_artifact_data_dict = self.wandb_artifact_data_dict or self.data_dict - - # write data_dict to config. useful for resuming from artifacts. Do this only when not resuming. - self.wandb_run.config.update({'data_dict': self.wandb_artifact_data_dict}, allow_val_change=True) - self.setup_training(opt) - - if self.job_type == 'Dataset Creation': - self.wandb_run.config.update({"upload_dataset": True}) - self.data_dict = self.check_and_upload_dataset(opt) - - def check_and_upload_dataset(self, opt): - """ - Check if the dataset format is compatible and upload it as W&B artifact - - arguments: - opt (namespace)-- Commandline arguments for current run - - returns: - Updated dataset info dictionary where local dataset paths are replaced by WAND_ARFACT_PREFIX links. - """ - assert wandb, 'Install wandb to upload dataset' - config_path = self.log_dataset_artifact(opt.data, opt.single_cls, - 'YOLOv5' if opt.project == 'runs/train' else Path(opt.project).stem) - with open(config_path, errors='ignore') as f: - wandb_data_dict = yaml.safe_load(f) - return wandb_data_dict - - def setup_training(self, opt): - """ - Setup the necessary processes for training YOLO models: - - Attempt to download model checkpoint and dataset artifacts if opt.resume stats with WANDB_ARTIFACT_PREFIX - - Update data_dict, to contain info of previous run if resumed and the paths of dataset artifact if downloaded - - Setup log_dict, initialize bbox_interval - - arguments: - opt (namespace) -- commandline arguments for this run - - """ - self.log_dict, self.current_epoch = {}, 0 - self.bbox_interval = opt.bbox_interval - if isinstance(opt.resume, str): - modeldir, _ = self.download_model_artifact(opt) - if modeldir: - self.weights = Path(modeldir) / "last.pt" - config = self.wandb_run.config - opt.weights, opt.save_period, opt.batch_size, opt.bbox_interval, opt.epochs, opt.hyp, opt.imgsz = str( - self.weights), config.save_period, config.batch_size, config.bbox_interval, config.epochs,\ - config.hyp, config.imgsz - data_dict = self.data_dict - if self.val_artifact is None: # If --upload_dataset is set, use the existing artifact, don't download - self.train_artifact_path, self.train_artifact = self.download_dataset_artifact( - data_dict.get('train'), opt.artifact_alias) - self.val_artifact_path, self.val_artifact = self.download_dataset_artifact( - data_dict.get('val'), opt.artifact_alias) - - if self.train_artifact_path is not None: - train_path = Path(self.train_artifact_path) / 'data/images/' - data_dict['train'] = str(train_path) - if self.val_artifact_path is not None: - val_path = Path(self.val_artifact_path) / 'data/images/' - data_dict['val'] = str(val_path) - - if self.val_artifact is not None: - self.result_artifact = wandb.Artifact("run_" + wandb.run.id + "_progress", "evaluation") - columns = ["epoch", "id", "ground truth", "prediction"] - columns.extend(self.data_dict['names']) - self.result_table = wandb.Table(columns) - self.val_table = self.val_artifact.get("val") - if self.val_table_path_map is None: - self.map_val_table_path() - if opt.bbox_interval == -1: - self.bbox_interval = opt.bbox_interval = (opt.epochs // 10) if opt.epochs > 10 else 1 - if opt.evolve or opt.noplots: - self.bbox_interval = opt.bbox_interval = opt.epochs + 1 # disable bbox_interval - train_from_artifact = self.train_artifact_path is not None and self.val_artifact_path is not None - # Update the the data_dict to point to local artifacts dir - if train_from_artifact: - self.data_dict = data_dict - - def download_dataset_artifact(self, path, alias): - """ - download the model checkpoint artifact if the path starts with WANDB_ARTIFACT_PREFIX - - arguments: - path -- path of the dataset to be used for training - alias (str)-- alias of the artifact to be download/used for training - - returns: - (str, wandb.Artifact) -- path of the downladed dataset and it's corresponding artifact object if dataset - is found otherwise returns (None, None) - """ - if isinstance(path, str) and path.startswith(WANDB_ARTIFACT_PREFIX): - artifact_path = Path(remove_prefix(path, WANDB_ARTIFACT_PREFIX) + ":" + alias) - dataset_artifact = wandb.use_artifact(artifact_path.as_posix().replace("\\", "/")) - assert dataset_artifact is not None, "'Error: W&B dataset artifact doesn\'t exist'" - datadir = dataset_artifact.download() - return datadir, dataset_artifact - return None, None - - def download_model_artifact(self, opt): - """ - download the model checkpoint artifact if the resume path starts with WANDB_ARTIFACT_PREFIX - - arguments: - opt (namespace) -- Commandline arguments for this run - """ - if opt.resume.startswith(WANDB_ARTIFACT_PREFIX): - model_artifact = wandb.use_artifact(remove_prefix(opt.resume, WANDB_ARTIFACT_PREFIX) + ":latest") - assert model_artifact is not None, 'Error: W&B model artifact doesn\'t exist' - modeldir = model_artifact.download() - # epochs_trained = model_artifact.metadata.get('epochs_trained') - total_epochs = model_artifact.metadata.get('total_epochs') - is_finished = total_epochs is None - assert not is_finished, 'training is finished, can only resume incomplete runs.' - return modeldir, model_artifact - return None, None - - def log_model(self, path, opt, epoch, fitness_score, best_model=False): - """ - Log the model checkpoint as W&B artifact - - arguments: - path (Path) -- Path of directory containing the checkpoints - opt (namespace) -- Command line arguments for this run - epoch (int) -- Current epoch number - fitness_score (float) -- fitness score for current epoch - best_model (boolean) -- Boolean representing if the current checkpoint is the best yet. - """ - model_artifact = wandb.Artifact('run_' + wandb.run.id + '_model', - type='model', - metadata={ - 'original_url': str(path), - 'epochs_trained': epoch + 1, - 'save period': opt.save_period, - 'project': opt.project, - 'total_epochs': opt.epochs, - 'fitness_score': fitness_score}) - model_artifact.add_file(str(path / 'last.pt'), name='last.pt') - wandb.log_artifact(model_artifact, - aliases=['latest', 'last', 'epoch ' + str(self.current_epoch), 'best' if best_model else '']) - LOGGER.info(f"Saving model artifact on epoch {epoch + 1}") - - def log_dataset_artifact(self, data_file, single_cls, project, overwrite_config=False): - """ - Log the dataset as W&B artifact and return the new data file with W&B links - - arguments: - data_file (str) -- the .yaml file with information about the dataset like - path, classes etc. - single_class (boolean) -- train multi-class data as single-class - project (str) -- project name. Used to construct the artifact path - overwrite_config (boolean) -- overwrites the data.yaml file if set to true otherwise creates a new - file with _wandb postfix. Eg -> data_wandb.yaml - - returns: - the new .yaml file with artifact links. it can be used to start training directly from artifacts - """ - upload_dataset = self.wandb_run.config.upload_dataset - log_val_only = isinstance(upload_dataset, str) and upload_dataset == 'val' - self.data_dict = check_dataset(data_file) # parse and check - data = dict(self.data_dict) - nc, names = (1, ['item']) if single_cls else (int(data['nc']), data['names']) - names = {k: v for k, v in enumerate(names)} # to index dictionary - - # log train set - if not log_val_only: - self.train_artifact = self.create_dataset_table(LoadImagesAndLabels(data['train'], rect=True, batch_size=1), - names, - name='train') if data.get('train') else None - if data.get('train'): - data['train'] = WANDB_ARTIFACT_PREFIX + str(Path(project) / 'train') - - self.val_artifact = self.create_dataset_table( - LoadImagesAndLabels(data['val'], rect=True, batch_size=1), names, name='val') if data.get('val') else None - if data.get('val'): - data['val'] = WANDB_ARTIFACT_PREFIX + str(Path(project) / 'val') - - path = Path(data_file) - # create a _wandb.yaml file with artifacts links if both train and test set are logged - if not log_val_only: - path = (path.stem if overwrite_config else path.stem + '_wandb') + '.yaml' # updated data.yaml path - path = ROOT / 'data' / path - data.pop('download', None) - data.pop('path', None) - with open(path, 'w') as f: - yaml.safe_dump(data, f) - LOGGER.info(f"Created dataset config file {path}") - - if self.job_type == 'Training': # builds correct artifact pipeline graph - if not log_val_only: - self.wandb_run.log_artifact( - self.train_artifact) # calling use_artifact downloads the dataset. NOT NEEDED! - self.wandb_run.use_artifact(self.val_artifact) - self.val_artifact.wait() - self.val_table = self.val_artifact.get('val') - self.map_val_table_path() - else: - self.wandb_run.log_artifact(self.train_artifact) - self.wandb_run.log_artifact(self.val_artifact) - return path - - def map_val_table_path(self): - """ - Map the validation dataset Table like name of file -> it's id in the W&B Table. - Useful for - referencing artifacts for evaluation. - """ - self.val_table_path_map = {} - LOGGER.info("Mapping dataset") - for i, data in enumerate(tqdm(self.val_table.data)): - self.val_table_path_map[data[3]] = data[0] - - def create_dataset_table(self, dataset: LoadImagesAndLabels, class_to_id: Dict[int, str], name: str = 'dataset'): - """ - Create and return W&B artifact containing W&B Table of the dataset. - - arguments: - dataset -- instance of LoadImagesAndLabels class used to iterate over the data to build Table - class_to_id -- hash map that maps class ids to labels - name -- name of the artifact - - returns: - dataset artifact to be logged or used - """ - # TODO: Explore multiprocessing to slpit this loop parallely| This is essential for speeding up the the logging - artifact = wandb.Artifact(name=name, type="dataset") - img_files = tqdm([dataset.path]) if isinstance(dataset.path, str) and Path(dataset.path).is_dir() else None - img_files = tqdm(dataset.im_files) if not img_files else img_files - for img_file in img_files: - if Path(img_file).is_dir(): - artifact.add_dir(img_file, name='data/images') - labels_path = 'labels'.join(dataset.path.rsplit('images', 1)) - artifact.add_dir(labels_path, name='data/labels') - else: - artifact.add_file(img_file, name='data/images/' + Path(img_file).name) - label_file = Path(img2label_paths([img_file])[0]) - artifact.add_file(str(label_file), name='data/labels/' + - label_file.name) if label_file.exists() else None - table = wandb.Table(columns=["id", "train_image", "Classes", "name"]) - class_set = wandb.Classes([{'id': id, 'name': name} for id, name in class_to_id.items()]) - for si, (img, labels, paths, shapes) in enumerate(tqdm(dataset)): - box_data, img_classes = [], {} - for cls, *xywh in labels[:, 1:].tolist(): - cls = int(cls) - box_data.append({ - "position": { - "middle": [xywh[0], xywh[1]], - "width": xywh[2], - "height": xywh[3]}, - "class_id": cls, - "box_caption": "%s" % (class_to_id[cls])}) - img_classes[cls] = class_to_id[cls] - boxes = {"ground_truth": {"box_data": box_data, "class_labels": class_to_id}} # inference-space - table.add_data(si, wandb.Image(paths, classes=class_set, boxes=boxes), list(img_classes.values()), - Path(paths).name) - artifact.add(table, name) - return artifact - - def log_training_progress(self, predn, path, names): - """ - Build evaluation Table. Uses reference from validation dataset table. - - arguments: - predn (list): list of predictions in the native space in the format - [xmin, ymin, xmax, ymax, confidence, class] - path (str): local path of the current evaluation image - names (dict(int, str)): hash map that maps class ids to labels - """ - class_set = wandb.Classes([{'id': id, 'name': name} for id, name in names.items()]) - box_data = [] - avg_conf_per_class = [0] * len(self.data_dict['names']) - pred_class_count = {} - for *xyxy, conf, cls in predn.tolist(): - if conf >= 0.25: - cls = int(cls) - box_data.append({ - "position": { - "minX": xyxy[0], - "minY": xyxy[1], - "maxX": xyxy[2], - "maxY": xyxy[3]}, - "class_id": cls, - "box_caption": f"{names[cls]} {conf:.3f}", - "scores": { - "class_score": conf}, - "domain": "pixel"}) - avg_conf_per_class[cls] += conf - - if cls in pred_class_count: - pred_class_count[cls] += 1 - else: - pred_class_count[cls] = 1 - - for pred_class in pred_class_count.keys(): - avg_conf_per_class[pred_class] = avg_conf_per_class[pred_class] / pred_class_count[pred_class] - - boxes = {"predictions": {"box_data": box_data, "class_labels": names}} # inference-space - id = self.val_table_path_map[Path(path).name] - self.result_table.add_data(self.current_epoch, id, self.val_table.data[id][1], - wandb.Image(self.val_table.data[id][1], boxes=boxes, classes=class_set), - *avg_conf_per_class) - - def val_one_image(self, pred, predn, path, names, im): - """ - Log validation data for one image. updates the result Table if validation dataset is uploaded and log bbox media panel - - arguments: - pred (list): list of scaled predictions in the format - [xmin, ymin, xmax, ymax, confidence, class] - predn (list): list of predictions in the native space - [xmin, ymin, xmax, ymax, confidence, class] - path (str): local path of the current evaluation image - """ - if self.val_table and self.result_table: # Log Table if Val dataset is uploaded as artifact - self.log_training_progress(predn, path, names) - - if len(self.bbox_media_panel_images) < self.max_imgs_to_log and self.current_epoch > 0: - if self.current_epoch % self.bbox_interval == 0: - box_data = [{ - "position": { - "minX": xyxy[0], - "minY": xyxy[1], - "maxX": xyxy[2], - "maxY": xyxy[3]}, - "class_id": int(cls), - "box_caption": f"{names[int(cls)]} {conf:.3f}", - "scores": { - "class_score": conf}, - "domain": "pixel"} for *xyxy, conf, cls in pred.tolist()] - boxes = {"predictions": {"box_data": box_data, "class_labels": names}} # inference-space - self.bbox_media_panel_images.append(wandb.Image(im, boxes=boxes, caption=path.name)) - - def log(self, log_dict): - """ - save the metrics to the logging dictionary - - arguments: - log_dict (Dict) -- metrics/media to be logged in current step - """ - if self.wandb_run: - for key, value in log_dict.items(): - self.log_dict[key] = value - - def end_epoch(self, best_result=False): - """ - commit the log_dict, model artifacts and Tables to W&B and flush the log_dict. - - arguments: - best_result (boolean): Boolean representing if the result of this evaluation is best or not - """ - if self.wandb_run: - with all_logging_disabled(): - if self.bbox_media_panel_images: - self.log_dict["BoundingBoxDebugger"] = self.bbox_media_panel_images - try: - wandb.log(self.log_dict) - except BaseException as e: - LOGGER.info( - f"An error occurred in wandb logger. The training will proceed without interruption. More info\n{e}" - ) - self.wandb_run.finish() - self.wandb_run = None - - self.log_dict = {} - self.bbox_media_panel_images = [] - if self.result_artifact: - self.result_artifact.add(self.result_table, 'result') - wandb.log_artifact(self.result_artifact, - aliases=[ - 'latest', 'last', 'epoch ' + str(self.current_epoch), - ('best' if best_result else '')]) - - wandb.log({"evaluation": self.result_table}) - columns = ["epoch", "id", "ground truth", "prediction"] - columns.extend(self.data_dict['names']) - self.result_table = wandb.Table(columns) - self.result_artifact = wandb.Artifact("run_" + wandb.run.id + "_progress", "evaluation") - - def finish_run(self): - """ - Log metrics if any and finish the current W&B run - """ - if self.wandb_run: - if self.log_dict: - with all_logging_disabled(): - wandb.log(self.log_dict) - wandb.run.finish() - - -@contextmanager -def all_logging_disabled(highest_level=logging.CRITICAL): - """ source - https://gist.github.com/simon-weber/7853144 - A context manager that will prevent any logging messages triggered during the body from being processed. - :param highest_level: the maximum logging level in use. - This would only need to be changed if a custom level greater than CRITICAL is defined. - """ - previous_level = logging.root.manager.disable - logging.disable(highest_level) - try: - yield - finally: - logging.disable(previous_level) diff --git a/spaces/xiangdy/chatGPT/modules/base_model.py b/spaces/xiangdy/chatGPT/modules/base_model.py deleted file mode 100644 index 2b55623f6b0989f60d818be6e0e77f5948484b82..0000000000000000000000000000000000000000 --- a/spaces/xiangdy/chatGPT/modules/base_model.py +++ /dev/null @@ -1,561 +0,0 @@ -from __future__ import annotations -from typing import TYPE_CHECKING, List - -import logging -import json -import commentjson as cjson -import os -import sys -import requests -import urllib3 -import traceback - -from tqdm import tqdm -import colorama -from duckduckgo_search import ddg -import asyncio -import aiohttp -from enum import Enum - -from .presets import * -from .llama_func import * -from .utils import * -from . import shared -from .config import retrieve_proxy - - -class ModelType(Enum): - Unknown = -1 - OpenAI = 0 - ChatGLM = 1 - LLaMA = 2 - XMChat = 3 - - @classmethod - def get_type(cls, model_name: str): - model_type = None - model_name_lower = model_name.lower() - if "gpt" in model_name_lower: - model_type = ModelType.OpenAI - elif "chatglm" in model_name_lower: - model_type = ModelType.ChatGLM - elif "llama" in model_name_lower or "alpaca" in model_name_lower: - model_type = ModelType.LLaMA - elif "xmchat" in model_name_lower: - model_type = ModelType.XMChat - else: - model_type = ModelType.Unknown - return model_type - - -class BaseLLMModel: - def __init__( - self, - model_name, - system_prompt="", - temperature=1.0, - top_p=1.0, - n_choices=1, - stop=None, - max_generation_token=None, - presence_penalty=0, - frequency_penalty=0, - logit_bias=None, - user="", - ) -> None: - self.history = [] - self.all_token_counts = [] - self.model_name = model_name - self.model_type = ModelType.get_type(model_name) - try: - self.token_upper_limit = MODEL_TOKEN_LIMIT[model_name] - except KeyError: - self.token_upper_limit = DEFAULT_TOKEN_LIMIT - self.interrupted = False - self.system_prompt = system_prompt - self.api_key = None - self.need_api_key = False - self.single_turn = False - - self.temperature = temperature - self.top_p = top_p - self.n_choices = n_choices - self.stop_sequence = stop - self.max_generation_token = None - self.presence_penalty = presence_penalty - self.frequency_penalty = frequency_penalty - self.logit_bias = logit_bias - self.user_identifier = user - - def get_answer_stream_iter(self): - """stream predict, need to be implemented - conversations are stored in self.history, with the most recent question, in OpenAI format - should return a generator, each time give the next word (str) in the answer - """ - logging.warning("stream predict not implemented, using at once predict instead") - response, _ = self.get_answer_at_once() - yield response - - def get_answer_at_once(self): - """predict at once, need to be implemented - conversations are stored in self.history, with the most recent question, in OpenAI format - Should return: - the answer (str) - total token count (int) - """ - logging.warning("at once predict not implemented, using stream predict instead") - response_iter = self.get_answer_stream_iter() - count = 0 - for response in response_iter: - count += 1 - return response, sum(self.all_token_counts) + count - - def billing_info(self): - """get billing infomation, inplement if needed""" - logging.warning("billing info not implemented, using default") - return BILLING_NOT_APPLICABLE_MSG - - def count_token(self, user_input): - """get token count from input, implement if needed""" - logging.warning("token count not implemented, using default") - return len(user_input) - - def stream_next_chatbot(self, inputs, chatbot, fake_input=None, display_append=""): - def get_return_value(): - return chatbot, status_text - - status_text = i18n("开始实时传输回答……") - if fake_input: - chatbot.append((fake_input, "")) - else: - chatbot.append((inputs, "")) - - user_token_count = self.count_token(inputs) - self.all_token_counts.append(user_token_count) - logging.debug(f"输入token计数: {user_token_count}") - - stream_iter = self.get_answer_stream_iter() - - for partial_text in stream_iter: - chatbot[-1] = (chatbot[-1][0], partial_text + display_append) - self.all_token_counts[-1] += 1 - status_text = self.token_message() - yield get_return_value() - if self.interrupted: - self.recover() - break - self.history.append(construct_assistant(partial_text)) - - def next_chatbot_at_once(self, inputs, chatbot, fake_input=None, display_append=""): - if fake_input: - chatbot.append((fake_input, "")) - else: - chatbot.append((inputs, "")) - if fake_input is not None: - user_token_count = self.count_token(fake_input) - else: - user_token_count = self.count_token(inputs) - self.all_token_counts.append(user_token_count) - ai_reply, total_token_count = self.get_answer_at_once() - self.history.append(construct_assistant(ai_reply)) - if fake_input is not None: - self.history[-2] = construct_user(fake_input) - chatbot[-1] = (chatbot[-1][0], ai_reply + display_append) - if fake_input is not None: - self.all_token_counts[-1] += count_token(construct_assistant(ai_reply)) - else: - self.all_token_counts[-1] = total_token_count - sum(self.all_token_counts) - status_text = self.token_message() - return chatbot, status_text - - def handle_file_upload(self, files, chatbot): - """if the model accepts multi modal input, implement this function""" - status = gr.Markdown.update() - if files: - construct_index(self.api_key, file_src=files) - status = "索引构建完成" - return gr.Files.update(), chatbot, status - - def prepare_inputs(self, real_inputs, use_websearch, files, reply_language, chatbot): - fake_inputs = None - display_append = [] - limited_context = False - fake_inputs = real_inputs - if files: - from llama_index.indices.vector_store.base_query import GPTVectorStoreIndexQuery - from llama_index.indices.query.schema import QueryBundle - from langchain.embeddings.huggingface import HuggingFaceEmbeddings - from langchain.chat_models import ChatOpenAI - from llama_index import ( - GPTSimpleVectorIndex, - ServiceContext, - LangchainEmbedding, - OpenAIEmbedding, - ) - limited_context = True - msg = "加载索引中……" - logging.info(msg) - # yield chatbot + [(inputs, "")], msg - index = construct_index(self.api_key, file_src=files) - assert index is not None, "获取索引失败" - msg = "索引获取成功,生成回答中……" - logging.info(msg) - if local_embedding or self.model_type != ModelType.OpenAI: - embed_model = LangchainEmbedding(HuggingFaceEmbeddings(model_name = "sentence-transformers/distiluse-base-multilingual-cased-v2")) - else: - embed_model = OpenAIEmbedding() - # yield chatbot + [(inputs, "")], msg - with retrieve_proxy(): - prompt_helper = PromptHelper( - max_input_size=4096, - num_output=5, - max_chunk_overlap=20, - chunk_size_limit=600, - ) - from llama_index import ServiceContext - - service_context = ServiceContext.from_defaults( - prompt_helper=prompt_helper, embed_model=embed_model - ) - query_object = GPTVectorStoreIndexQuery( - index.index_struct, - service_context=service_context, - similarity_top_k=5, - vector_store=index._vector_store, - docstore=index._docstore, - ) - query_bundle = QueryBundle(real_inputs) - nodes = query_object.retrieve(query_bundle) - reference_results = [n.node.text for n in nodes] - reference_results = add_source_numbers(reference_results, use_source=False) - display_append = add_details(reference_results) - display_append = "\n\n" + "".join(display_append) - real_inputs = ( - replace_today(PROMPT_TEMPLATE) - .replace("{query_str}", real_inputs) - .replace("{context_str}", "\n\n".join(reference_results)) - .replace("{reply_language}", reply_language) - ) - elif use_websearch: - limited_context = True - search_results = ddg(real_inputs, max_results=5) - reference_results = [] - for idx, result in enumerate(search_results): - logging.debug(f"搜索结果{idx + 1}:{result}") - domain_name = urllib3.util.parse_url(result["href"]).host - reference_results.append([result["body"], result["href"]]) - display_append.append( - # f"{idx+1}. [{domain_name}]({result['href']})\n" - f"
                  18. {domain_name}
                  19. \n" - ) - reference_results = add_source_numbers(reference_results) - display_append = "
                      \n\n" + "".join(display_append) + "
                    " - real_inputs = ( - replace_today(WEBSEARCH_PTOMPT_TEMPLATE) - .replace("{query}", real_inputs) - .replace("{web_results}", "\n\n".join(reference_results)) - .replace("{reply_language}", reply_language) - ) - else: - display_append = "" - return limited_context, fake_inputs, display_append, real_inputs, chatbot - - def predict( - self, - inputs, - chatbot, - stream=False, - use_websearch=False, - files=None, - reply_language="中文", - should_check_token_count=True, - ): # repetition_penalty, top_k - - status_text = "开始生成回答……" - logging.info( - "输入为:" + colorama.Fore.BLUE + f"{inputs}" + colorama.Style.RESET_ALL - ) - if should_check_token_count: - yield chatbot + [(inputs, "")], status_text - if reply_language == "跟随问题语言(不稳定)": - reply_language = "the same language as the question, such as English, 中文, 日本語, Español, Français, or Deutsch." - - limited_context, fake_inputs, display_append, inputs, chatbot = self.prepare_inputs(real_inputs=inputs, use_websearch=use_websearch, files=files, reply_language=reply_language, chatbot=chatbot) - yield chatbot + [(fake_inputs, "")], status_text - - if ( - self.need_api_key and - self.api_key is None - and not shared.state.multi_api_key - ): - status_text = STANDARD_ERROR_MSG + NO_APIKEY_MSG - logging.info(status_text) - chatbot.append((inputs, "")) - if len(self.history) == 0: - self.history.append(construct_user(inputs)) - self.history.append("") - self.all_token_counts.append(0) - else: - self.history[-2] = construct_user(inputs) - yield chatbot + [(inputs, "")], status_text - return - elif len(inputs.strip()) == 0: - status_text = STANDARD_ERROR_MSG + NO_INPUT_MSG - logging.info(status_text) - yield chatbot + [(inputs, "")], status_text - return - - if self.single_turn: - self.history = [] - self.all_token_counts = [] - self.history.append(construct_user(inputs)) - - try: - if stream: - logging.debug("使用流式传输") - iter = self.stream_next_chatbot( - inputs, - chatbot, - fake_input=fake_inputs, - display_append=display_append, - ) - for chatbot, status_text in iter: - yield chatbot, status_text - else: - logging.debug("不使用流式传输") - chatbot, status_text = self.next_chatbot_at_once( - inputs, - chatbot, - fake_input=fake_inputs, - display_append=display_append, - ) - yield chatbot, status_text - except Exception as e: - traceback.print_exc() - status_text = STANDARD_ERROR_MSG + str(e) - yield chatbot, status_text - - if len(self.history) > 1 and self.history[-1]["content"] != inputs: - logging.info( - "回答为:" - + colorama.Fore.BLUE - + f"{self.history[-1]['content']}" - + colorama.Style.RESET_ALL - ) - - if limited_context: - # self.history = self.history[-4:] - # self.all_token_counts = self.all_token_counts[-2:] - self.history = [] - self.all_token_counts = [] - - max_token = self.token_upper_limit - TOKEN_OFFSET - - if sum(self.all_token_counts) > max_token and should_check_token_count: - count = 0 - while ( - sum(self.all_token_counts) - > self.token_upper_limit * REDUCE_TOKEN_FACTOR - and sum(self.all_token_counts) > 0 - ): - count += 1 - del self.all_token_counts[0] - del self.history[:2] - logging.info(status_text) - status_text = f"为了防止token超限,模型忘记了早期的 {count} 轮对话" - yield chatbot, status_text - - def retry( - self, - chatbot, - stream=False, - use_websearch=False, - files=None, - reply_language="中文", - ): - logging.debug("重试中……") - if len(self.history) > 0: - inputs = self.history[-2]["content"] - del self.history[-2:] - self.all_token_counts.pop() - elif len(chatbot) > 0: - inputs = chatbot[-1][0] - else: - yield chatbot, f"{STANDARD_ERROR_MSG}上下文是空的" - return - - iter = self.predict( - inputs, - chatbot, - stream=stream, - use_websearch=use_websearch, - files=files, - reply_language=reply_language, - ) - for x in iter: - yield x - logging.debug("重试完毕") - - # def reduce_token_size(self, chatbot): - # logging.info("开始减少token数量……") - # chatbot, status_text = self.next_chatbot_at_once( - # summarize_prompt, - # chatbot - # ) - # max_token_count = self.token_upper_limit * REDUCE_TOKEN_FACTOR - # num_chat = find_n(self.all_token_counts, max_token_count) - # logging.info(f"previous_token_count: {self.all_token_counts}, keeping {num_chat} chats") - # chatbot = chatbot[:-1] - # self.history = self.history[-2*num_chat:] if num_chat > 0 else [] - # self.all_token_counts = self.all_token_counts[-num_chat:] if num_chat > 0 else [] - # msg = f"保留了最近{num_chat}轮对话" - # logging.info(msg) - # logging.info("减少token数量完毕") - # return chatbot, msg + "," + self.token_message(self.all_token_counts if len(self.all_token_counts) > 0 else [0]) - - def interrupt(self): - self.interrupted = True - - def recover(self): - self.interrupted = False - - def set_token_upper_limit(self, new_upper_limit): - self.token_upper_limit = new_upper_limit - print(f"token上限设置为{new_upper_limit}") - - def set_temperature(self, new_temperature): - self.temperature = new_temperature - - def set_top_p(self, new_top_p): - self.top_p = new_top_p - - def set_n_choices(self, new_n_choices): - self.n_choices = new_n_choices - - def set_stop_sequence(self, new_stop_sequence: str): - new_stop_sequence = new_stop_sequence.split(",") - self.stop_sequence = new_stop_sequence - - def set_max_tokens(self, new_max_tokens): - self.max_generation_token = new_max_tokens - - def set_presence_penalty(self, new_presence_penalty): - self.presence_penalty = new_presence_penalty - - def set_frequency_penalty(self, new_frequency_penalty): - self.frequency_penalty = new_frequency_penalty - - def set_logit_bias(self, logit_bias): - logit_bias = logit_bias.split() - bias_map = {} - encoding = tiktoken.get_encoding("cl100k_base") - for line in logit_bias: - word, bias_amount = line.split(":") - if word: - for token in encoding.encode(word): - bias_map[token] = float(bias_amount) - self.logit_bias = bias_map - - def set_user_identifier(self, new_user_identifier): - self.user_identifier = new_user_identifier - - def set_system_prompt(self, new_system_prompt): - self.system_prompt = new_system_prompt - - def set_key(self, new_access_key): - self.api_key = new_access_key.strip() - msg = i18n("API密钥更改为了") + hide_middle_chars(self.api_key) - logging.info(msg) - return self.api_key, msg - - def set_single_turn(self, new_single_turn): - self.single_turn = new_single_turn - - def reset(self): - self.history = [] - self.all_token_counts = [] - self.interrupted = False - return [], self.token_message([0]) - - def delete_first_conversation(self): - if self.history: - del self.history[:2] - del self.all_token_counts[0] - return self.token_message() - - def delete_last_conversation(self, chatbot): - if len(chatbot) > 0 and STANDARD_ERROR_MSG in chatbot[-1][1]: - msg = "由于包含报错信息,只删除chatbot记录" - chatbot.pop() - return chatbot, self.history - if len(self.history) > 0: - self.history.pop() - self.history.pop() - if len(chatbot) > 0: - msg = "删除了一组chatbot对话" - chatbot.pop() - if len(self.all_token_counts) > 0: - msg = "删除了一组对话的token计数记录" - self.all_token_counts.pop() - msg = "删除了一组对话" - return chatbot, msg - - def token_message(self, token_lst=None): - if token_lst is None: - token_lst = self.all_token_counts - token_sum = 0 - for i in range(len(token_lst)): - token_sum += sum(token_lst[: i + 1]) - return i18n("Token 计数: ") + f"{sum(token_lst)}" + i18n(",本次对话累计消耗了 ") + f"{token_sum} tokens" - - def save_chat_history(self, filename, chatbot, user_name): - if filename == "": - return - if not filename.endswith(".json"): - filename += ".json" - return save_file(filename, self.system_prompt, self.history, chatbot, user_name) - - def export_markdown(self, filename, chatbot, user_name): - if filename == "": - return - if not filename.endswith(".md"): - filename += ".md" - return save_file(filename, self.system_prompt, self.history, chatbot, user_name) - - def load_chat_history(self, filename, chatbot, user_name): - logging.debug(f"{user_name} 加载对话历史中……") - if type(filename) != str: - filename = filename.name - try: - with open(os.path.join(HISTORY_DIR, user_name, filename), "r") as f: - json_s = json.load(f) - try: - if type(json_s["history"][0]) == str: - logging.info("历史记录格式为旧版,正在转换……") - new_history = [] - for index, item in enumerate(json_s["history"]): - if index % 2 == 0: - new_history.append(construct_user(item)) - else: - new_history.append(construct_assistant(item)) - json_s["history"] = new_history - logging.info(new_history) - except: - # 没有对话历史 - pass - logging.debug(f"{user_name} 加载对话历史完毕") - self.history = json_s["history"] - return filename, json_s["system"], json_s["chatbot"] - except FileNotFoundError: - logging.warning(f"{user_name} 没有找到对话历史文件,不执行任何操作") - return filename, self.system_prompt, chatbot - - def like(self): - """like the last response, implement if needed - """ - return gr.update() - - def dislike(self): - """dislike the last response, implement if needed - """ - return gr.update() diff --git a/spaces/xiaoxuezi/spleeter/spleeter/utils/__init__.py b/spaces/xiaoxuezi/spleeter/spleeter/utils/__init__.py deleted file mode 100644 index f2ef6d387c2b576a9fc7854821e0160241e6e6fe..0000000000000000000000000000000000000000 --- a/spaces/xiaoxuezi/spleeter/spleeter/utils/__init__.py +++ /dev/null @@ -1,8 +0,0 @@ -#!/usr/bin/env python -# coding: utf8 - -""" This package provides utility function and classes. """ - -__email__ = "spleeter@deezer.com" -__author__ = "Deezer Research" -__license__ = "MIT License" diff --git a/spaces/xl2533/MakeInstruction/self/prompt.py b/spaces/xl2533/MakeInstruction/self/prompt.py deleted file mode 100644 index 8cad371a1ddd85363d10a111f37d1c963364c6a5..0000000000000000000000000000000000000000 --- a/spaces/xl2533/MakeInstruction/self/prompt.py +++ /dev/null @@ -1,37 +0,0 @@ -# -*-coding:utf-8 -*- -import re - -#20个简化成5个 -self_prompt = """你需要想出{n_instruct}个医学相关不同的任务指令。这些任务指令将输入GPT模型,我们将评估GPT模型完成指令的情况。 -以下是要求: -1. 尽量不要在每个指令中重复使用动词,以最大化多样性 -2. 指令的表达形式需要多样化。例如你可以把问题和祈使句结合起来 -3. 指令的类型应该多样化,包括但不限于开放式生成、分类、抽取、问答、文本编辑等等 -4. 指令应该是GPT模型可以完成的任务。例如,指令不能是输出图像或者视频,另一个例子,不要让助手在下午5点叫醒你或设置提醒,因为GPT不能执行任何动作 -5. 指令必须是中文 -6. 指令应该是1到2句话,可以是祈使句或问句。 -7. 你应该为指令生成一个合适的输入。输入字段应该包含为指令提供的一个具体示例。它应该涉及真实的数据,而不应该包含简单的占位符。输入应该提供足够的内容,使指令具有挑战性,但理想情况下不应超过100个单词。 -8. 不是所有的指令都需要输入。例如,当一个指令询问一些一般信息时,“世界上最高的山峰是什么”,就不需要提供具体的上下文。在这种情况下,我们只需在输入字段中放置“<无输入>”。 -9. 输出应该是对指令和输入的合适回应。确保输出少于100个单词。 -{n_instruct}个任务的列表: -{few_shot} -""" - -one_shot_prompt = "###\n{id}. 指令:{instruction}\n{id}. 输入:{input}\n{id}. 输出:{output}\n" - - -def gen_one_shot_prompt(id, instruction, input, output): - instruction = re.sub(r'\s+'," ",instruction).strip().rstrip(":") - input = '<无输入>' if input == '' else input - few_shot = one_shot_prompt.format(id=id, instruction=instruction, input=input, output=output) - return few_shot - - -def gen_few_shot_prompt(instruction_data): - surfix = '###\n' - prompt = '' - for i, data in enumerate(instruction_data): - prompt += gen_one_shot_prompt(i+1, data['instruction'], data['input'], data['output']) - prompt +=surfix - return prompt - diff --git a/spaces/xuxw98/TAPA/scripts/convert_lora_weights.py b/spaces/xuxw98/TAPA/scripts/convert_lora_weights.py deleted file mode 100644 index ad6071e8785973b5ec3d52170fff428355c5cccc..0000000000000000000000000000000000000000 --- a/spaces/xuxw98/TAPA/scripts/convert_lora_weights.py +++ /dev/null @@ -1,95 +0,0 @@ -import sys -import time -from pathlib import Path -from typing import Optional - -import lightning as L -import torch -import torch.nn as nn - -# support running without installing as a package -wd = Path(__file__).parent.parent.resolve() -sys.path.append(str(wd)) - -from lit_llama import LLaMA -from lit_llama.utils import EmptyInitOnDevice, lazy_load, llama_model_lookup -from lit_llama.lora import lora - -def del_lora_state_dict(model: nn.Module): - base_model_dict = model.state_dict() - key_to_delete = [k for k in base_model_dict if "lora_" in k] - for del_key in key_to_delete: - del base_model_dict[del_key] - return base_model_dict - - -def lora_model_lookup(checkpoint: dict) -> int: - """Returns the LoRA rank from the adapter checkpoint. - - """ - return checkpoint["transformer.h.0.attn.c_attn.lora_B"].shape[1] - - -def main( - accelerator: str = "auto", - lora_path: Optional[Path] = None, - checkpoint_path: Optional[Path] = None, - dtype: str = "bfloat16", -) -> None: - """Merges lora weights to base model. - - Args: - accelerator: The hardware to run on. Possible choices are: - ``"cpu"``, ``"cuda"``, ``"mps"``, ``"gpu"``, ``"tpu"``, ``"auto"``. - lora_path: Path to the checkpoint with trained LoRA weights, which are the output of - `finetune_lora.py`. - checkpoint_path: The checkpoint path to load. - dtype: `torch.dtype` to work with - """ - if not lora_path: - lora_path = Path("out/lora/alpaca/lit-llama-lora-finetuned.pth") - if not checkpoint_path: - checkpoint_path = Path(f"./checkpoints/lit-llama/7B/lit-llama.pth") - - assert lora_path.is_file() - assert checkpoint_path.is_file() - - fabric = L.Fabric(accelerator=accelerator, devices=1) - - dt = getattr(torch, dtype, None) - if not isinstance(dt, torch.dtype): - raise ValueError(f"{dtype} is not a valid dtype.") - dtype = dt - - print("Loading model ...", file=sys.stderr) - t0 = time.time() - - with (lazy_load(checkpoint_path) as pretrained_checkpoint, - lazy_load(lora_path) as lora_checkpoint): - name = llama_model_lookup(pretrained_checkpoint) - rank = lora_model_lookup(lora_checkpoint) - - with EmptyInitOnDevice( - device=fabric.device, dtype=dtype - ), lora(r=rank, alpha=16, dropout=0.05, enabled=True): - model = LLaMA.from_name(name) - - # 1. Load the pretrained weights - model.load_state_dict(pretrained_checkpoint, strict=False) - # 2. Load the fine-tuned lora weights - model.load_state_dict(lora_checkpoint, strict=False) - - print(f"Time to load model: {time.time() - t0:.02f} seconds.", file=sys.stderr) - - model.eval() - base_model_dict = del_lora_state_dict(model) - save_path = lora_path.with_stem(f"{lora_path.stem}-lora-merged-weights") - print("Saving LoRA to base model weights ...") - torch.save(base_model_dict, save_path) - print(f"Model saved at {save_path}") - - -if __name__ == "__main__": - from jsonargparse import CLI - - CLI(main) \ No newline at end of file diff --git a/spaces/xxccc/gpt-academic/request_llm/bridge_stackclaude.py b/spaces/xxccc/gpt-academic/request_llm/bridge_stackclaude.py deleted file mode 100644 index c674a8bfe9d022b6e2b6359e5327b47596a53c68..0000000000000000000000000000000000000000 --- a/spaces/xxccc/gpt-academic/request_llm/bridge_stackclaude.py +++ /dev/null @@ -1,275 +0,0 @@ -from .bridge_newbing import preprocess_newbing_out, preprocess_newbing_out_simple -from multiprocessing import Process, Pipe -from toolbox import update_ui, get_conf, trimmed_format_exc -import threading -import importlib -import logging -import time -from toolbox import get_conf -import asyncio -load_message = "正在加载Claude组件,请稍候..." - -try: - """ - ======================================================================== - 第一部分:Slack API Client - https://github.com/yokonsan/claude-in-slack-api - ======================================================================== - """ - - from slack_sdk.errors import SlackApiError - from slack_sdk.web.async_client import AsyncWebClient - - class SlackClient(AsyncWebClient): - """SlackClient类用于与Slack API进行交互,实现消息发送、接收等功能。 - - 属性: - - CHANNEL_ID:str类型,表示频道ID。 - - 方法: - - open_channel():异步方法。通过调用conversations_open方法打开一个频道,并将返回的频道ID保存在属性CHANNEL_ID中。 - - chat(text: str):异步方法。向已打开的频道发送一条文本消息。 - - get_slack_messages():异步方法。获取已打开频道的最新消息并返回消息列表,目前不支持历史消息查询。 - - get_reply():异步方法。循环监听已打开频道的消息,如果收到"Typing…_"结尾的消息说明Claude还在继续输出,否则结束循环。 - - """ - CHANNEL_ID = None - - async def open_channel(self): - response = await self.conversations_open(users=get_conf('SLACK_CLAUDE_BOT_ID')[0]) - self.CHANNEL_ID = response["channel"]["id"] - - async def chat(self, text): - if not self.CHANNEL_ID: - raise Exception("Channel not found.") - - resp = await self.chat_postMessage(channel=self.CHANNEL_ID, text=text) - self.LAST_TS = resp["ts"] - - async def get_slack_messages(self): - try: - # TODO:暂时不支持历史消息,因为在同一个频道里存在多人使用时历史消息渗透问题 - resp = await self.conversations_history(channel=self.CHANNEL_ID, oldest=self.LAST_TS, limit=1) - msg = [msg for msg in resp["messages"] - if msg.get("user") == get_conf('SLACK_CLAUDE_BOT_ID')[0]] - return msg - except (SlackApiError, KeyError) as e: - raise RuntimeError(f"获取Slack消息失败。") - - async def get_reply(self): - while True: - slack_msgs = await self.get_slack_messages() - if len(slack_msgs) == 0: - await asyncio.sleep(0.5) - continue - - msg = slack_msgs[-1] - if msg["text"].endswith("Typing…_"): - yield False, msg["text"] - else: - yield True, msg["text"] - break -except: - pass - -""" -======================================================================== -第二部分:子进程Worker(调用主体) -======================================================================== -""" - - -class ClaudeHandle(Process): - def __init__(self): - super().__init__(daemon=True) - self.parent, self.child = Pipe() - self.claude_model = None - self.info = "" - self.success = True - self.local_history = [] - self.check_dependency() - if self.success: - self.start() - self.threadLock = threading.Lock() - - def check_dependency(self): - try: - self.success = False - import slack_sdk - self.info = "依赖检测通过,等待Claude响应。注意目前不能多人同时调用Claude接口(有线程锁),否则将导致每个人的Claude问询历史互相渗透。调用Claude时,会自动使用已配置的代理。" - self.success = True - except: - self.info = "缺少的依赖,如果要使用Claude,除了基础的pip依赖以外,您还需要运行`pip install -r request_llm/requirements_slackclaude.txt`安装Claude的依赖,然后重启程序。" - self.success = False - - def ready(self): - return self.claude_model is not None - - async def async_run(self): - await self.claude_model.open_channel() - while True: - # 等待 - kwargs = self.child.recv() - question = kwargs['query'] - history = kwargs['history'] - - # 开始问问题 - prompt = "" - - # 问题 - prompt += question - print('question:', prompt) - - # 提交 - await self.claude_model.chat(prompt) - - # 获取回复 - async for final, response in self.claude_model.get_reply(): - if not final: - print(response) - self.child.send(str(response)) - else: - # 防止丢失最后一条消息 - slack_msgs = await self.claude_model.get_slack_messages() - last_msg = slack_msgs[-1]["text"] if slack_msgs and len(slack_msgs) > 0 else "" - if last_msg: - self.child.send(last_msg) - print('-------- receive final ---------') - self.child.send('[Finish]') - - def run(self): - """ - 这个函数运行在子进程 - """ - # 第一次运行,加载参数 - self.success = False - self.local_history = [] - if (self.claude_model is None) or (not self.success): - # 代理设置 - proxies, = get_conf('proxies') - if proxies is None: - self.proxies_https = None - else: - self.proxies_https = proxies['https'] - - try: - SLACK_CLAUDE_USER_TOKEN, = get_conf('SLACK_CLAUDE_USER_TOKEN') - self.claude_model = SlackClient(token=SLACK_CLAUDE_USER_TOKEN, proxy=self.proxies_https) - print('Claude组件初始化成功。') - except: - self.success = False - tb_str = '\n```\n' + trimmed_format_exc() + '\n```\n' - self.child.send(f'[Local Message] 不能加载Claude组件。{tb_str}') - self.child.send('[Fail]') - self.child.send('[Finish]') - raise RuntimeError(f"不能加载Claude组件。") - - self.success = True - try: - # 进入任务等待状态 - asyncio.run(self.async_run()) - except Exception: - tb_str = '\n```\n' + trimmed_format_exc() + '\n```\n' - self.child.send(f'[Local Message] Claude失败 {tb_str}.') - self.child.send('[Fail]') - self.child.send('[Finish]') - - def stream_chat(self, **kwargs): - """ - 这个函数运行在主进程 - """ - self.threadLock.acquire() - self.parent.send(kwargs) # 发送请求到子进程 - while True: - res = self.parent.recv() # 等待Claude回复的片段 - if res == '[Finish]': - break # 结束 - elif res == '[Fail]': - self.success = False - break - else: - yield res # Claude回复的片段 - self.threadLock.release() - - -""" -======================================================================== -第三部分:主进程统一调用函数接口 -======================================================================== -""" -global claude_handle -claude_handle = None - - -def predict_no_ui_long_connection(inputs, llm_kwargs, history=[], sys_prompt="", observe_window=None, console_slience=False): - """ - 多线程方法 - 函数的说明请见 request_llm/bridge_all.py - """ - global claude_handle - if (claude_handle is None) or (not claude_handle.success): - claude_handle = ClaudeHandle() - observe_window[0] = load_message + "\n\n" + claude_handle.info - if not claude_handle.success: - error = claude_handle.info - claude_handle = None - raise RuntimeError(error) - - # 没有 sys_prompt 接口,因此把prompt加入 history - history_feedin = [] - for i in range(len(history)//2): - history_feedin.append([history[2*i], history[2*i+1]]) - - watch_dog_patience = 5 # 看门狗 (watchdog) 的耐心, 设置5秒即可 - response = "" - observe_window[0] = "[Local Message]: 等待Claude响应中 ..." - for response in claude_handle.stream_chat(query=inputs, history=history_feedin, system_prompt=sys_prompt, max_length=llm_kwargs['max_length'], top_p=llm_kwargs['top_p'], temperature=llm_kwargs['temperature']): - observe_window[0] = preprocess_newbing_out_simple(response) - if len(observe_window) >= 2: - if (time.time()-observe_window[1]) > watch_dog_patience: - raise RuntimeError("程序终止。") - return preprocess_newbing_out_simple(response) - - -def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_prompt='', stream=True, additional_fn=None): - """ - 单线程方法 - 函数的说明请见 request_llm/bridge_all.py - """ - chatbot.append((inputs, "[Local Message]: 等待Claude响应中 ...")) - - global claude_handle - if (claude_handle is None) or (not claude_handle.success): - claude_handle = ClaudeHandle() - chatbot[-1] = (inputs, load_message + "\n\n" + claude_handle.info) - yield from update_ui(chatbot=chatbot, history=[]) - if not claude_handle.success: - claude_handle = None - return - - if additional_fn is not None: - import core_functional - importlib.reload(core_functional) # 热更新prompt - core_functional = core_functional.get_core_functions() - if "PreProcess" in core_functional[additional_fn]: - inputs = core_functional[additional_fn]["PreProcess"]( - inputs) # 获取预处理函数(如果有的话) - inputs = core_functional[additional_fn]["Prefix"] + \ - inputs + core_functional[additional_fn]["Suffix"] - - history_feedin = [] - for i in range(len(history)//2): - history_feedin.append([history[2*i], history[2*i+1]]) - - chatbot[-1] = (inputs, "[Local Message]: 等待Claude响应中 ...") - response = "[Local Message]: 等待Claude响应中 ..." - yield from update_ui(chatbot=chatbot, history=history, msg="Claude响应缓慢,尚未完成全部响应,请耐心完成后再提交新问题。") - for response in claude_handle.stream_chat(query=inputs, history=history_feedin, system_prompt=system_prompt): - chatbot[-1] = (inputs, preprocess_newbing_out(response)) - yield from update_ui(chatbot=chatbot, history=history, msg="Claude响应缓慢,尚未完成全部响应,请耐心完成后再提交新问题。") - if response == "[Local Message]: 等待Claude响应中 ...": - response = "[Local Message]: Claude响应异常,请刷新界面重试 ..." - history.extend([inputs, response]) - logging.info(f'[raw_input] {inputs}') - logging.info(f'[response] {response}') - yield from update_ui(chatbot=chatbot, history=history, msg="完成全部响应,请提交新问题。") diff --git a/spaces/xxccc/gpt-academic/request_llm/edge_gpt_free.py b/spaces/xxccc/gpt-academic/request_llm/edge_gpt_free.py deleted file mode 100644 index ef6187379c470b0f325d50d7642cfc95b933f1ef..0000000000000000000000000000000000000000 --- a/spaces/xxccc/gpt-academic/request_llm/edge_gpt_free.py +++ /dev/null @@ -1,1112 +0,0 @@ -""" -======================================================================== -第一部分:来自EdgeGPT.py -https://github.com/acheong08/EdgeGPT -======================================================================== -""" -""" -Main.py -""" - -import argparse -import asyncio -import json -import os -import random -import re -import ssl -import sys -import time -import uuid -from enum import Enum -from pathlib import Path -from typing import Generator -from typing import Literal -from typing import Optional -from typing import Union - -import aiohttp -import certifi -import httpx -from prompt_toolkit import PromptSession -from prompt_toolkit.auto_suggest import AutoSuggestFromHistory -from prompt_toolkit.completion import WordCompleter -from prompt_toolkit.history import InMemoryHistory -from prompt_toolkit.key_binding import KeyBindings -from rich.live import Live -from rich.markdown import Markdown - -DELIMITER = "\x1e" - - -# Generate random IP between range 13.104.0.0/14 -FORWARDED_IP = ( - f"13.{random.randint(104, 107)}.{random.randint(0, 255)}.{random.randint(0, 255)}" -) - -HEADERS = { - "accept": "application/json", - "accept-language": "en-US,en;q=0.9", - "content-type": "application/json", - "sec-ch-ua": '"Not_A Brand";v="99", "Microsoft Edge";v="110", "Chromium";v="110"', - "sec-ch-ua-arch": '"x86"', - "sec-ch-ua-bitness": '"64"', - "sec-ch-ua-full-version": '"109.0.1518.78"', - "sec-ch-ua-full-version-list": '"Chromium";v="110.0.5481.192", "Not A(Brand";v="24.0.0.0", "Microsoft Edge";v="110.0.1587.69"', - "sec-ch-ua-mobile": "?0", - "sec-ch-ua-model": "", - "sec-ch-ua-platform": '"Windows"', - "sec-ch-ua-platform-version": '"15.0.0"', - "sec-fetch-dest": "empty", - "sec-fetch-mode": "cors", - "sec-fetch-site": "same-origin", - "x-ms-client-request-id": str(uuid.uuid4()), - "x-ms-useragent": "azsdk-js-api-client-factory/1.0.0-beta.1 core-rest-pipeline/1.10.0 OS/Win32", - "Referer": "https://www.bing.com/search?q=Bing+AI&showconv=1&FORM=hpcodx", - "Referrer-Policy": "origin-when-cross-origin", - "x-forwarded-for": FORWARDED_IP, -} - -HEADERS_INIT_CONVER = { - "authority": "edgeservices.bing.com", - "accept": "text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.7", - "accept-language": "en-US,en;q=0.9", - "cache-control": "max-age=0", - "sec-ch-ua": '"Chromium";v="110", "Not A(Brand";v="24", "Microsoft Edge";v="110"', - "sec-ch-ua-arch": '"x86"', - "sec-ch-ua-bitness": '"64"', - "sec-ch-ua-full-version": '"110.0.1587.69"', - "sec-ch-ua-full-version-list": '"Chromium";v="110.0.5481.192", "Not A(Brand";v="24.0.0.0", "Microsoft Edge";v="110.0.1587.69"', - "sec-ch-ua-mobile": "?0", - "sec-ch-ua-model": '""', - "sec-ch-ua-platform": '"Windows"', - "sec-ch-ua-platform-version": '"15.0.0"', - "sec-fetch-dest": "document", - "sec-fetch-mode": "navigate", - "sec-fetch-site": "none", - "sec-fetch-user": "?1", - "upgrade-insecure-requests": "1", - "user-agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/110.0.0.0 Safari/537.36 Edg/110.0.1587.69", - "x-edge-shopping-flag": "1", - "x-forwarded-for": FORWARDED_IP, -} - -ssl_context = ssl.create_default_context() -ssl_context.load_verify_locations(certifi.where()) - - -class NotAllowedToAccess(Exception): - pass - - -class ConversationStyle(Enum): - creative = [ - "nlu_direct_response_filter", - "deepleo", - "disable_emoji_spoken_text", - "responsible_ai_policy_235", - "enablemm", - "h3imaginative", - "travelansgnd", - "dv3sugg", - "clgalileo", - "gencontentv3", - "dv3sugg", - "responseos", - "e2ecachewrite", - "cachewriteext", - "nodlcpcwrite", - "travelansgnd", - "nojbfedge", - ] - balanced = [ - "nlu_direct_response_filter", - "deepleo", - "disable_emoji_spoken_text", - "responsible_ai_policy_235", - "enablemm", - "galileo", - "dv3sugg", - "responseos", - "e2ecachewrite", - "cachewriteext", - "nodlcpcwrite", - "travelansgnd", - "nojbfedge", - ] - precise = [ - "nlu_direct_response_filter", - "deepleo", - "disable_emoji_spoken_text", - "responsible_ai_policy_235", - "enablemm", - "galileo", - "dv3sugg", - "responseos", - "e2ecachewrite", - "cachewriteext", - "nodlcpcwrite", - "travelansgnd", - "h3precise", - "clgalileo", - "nojbfedge", - ] - - -CONVERSATION_STYLE_TYPE = Optional[ - Union[ConversationStyle, Literal["creative", "balanced", "precise"]] -] - - -def _append_identifier(msg: dict) -> str: - """ - Appends special character to end of message to identify end of message - """ - # Convert dict to json string - return json.dumps(msg, ensure_ascii=False) + DELIMITER - - -def _get_ran_hex(length: int = 32) -> str: - """ - Returns random hex string - """ - return "".join(random.choice("0123456789abcdef") for _ in range(length)) - - -class _ChatHubRequest: - """ - Request object for ChatHub - """ - - def __init__( - self, - conversation_signature: str, - client_id: str, - conversation_id: str, - invocation_id: int = 0, - ) -> None: - self.struct: dict = {} - - self.client_id: str = client_id - self.conversation_id: str = conversation_id - self.conversation_signature: str = conversation_signature - self.invocation_id: int = invocation_id - - def update( - self, - prompt: str, - conversation_style: CONVERSATION_STYLE_TYPE, - options = None, - webpage_context = None, - search_result = False, - ) -> None: - """ - Updates request object - """ - if options is None: - options = [ - "deepleo", - "enable_debug_commands", - "disable_emoji_spoken_text", - "enablemm", - ] - if conversation_style: - if not isinstance(conversation_style, ConversationStyle): - conversation_style = getattr(ConversationStyle, conversation_style) - options = conversation_style.value - self.struct = { - "arguments": [ - { - "source": "cib", - "optionsSets": options, - "allowedMessageTypes": [ - "Chat", - "Disengaged", - "AdsQuery", - "SemanticSerp", - "GenerateContentQuery", - "SearchQuery", - ], - "sliceIds": [ - "chk1cf", - "nopreloadsscf", - "winlongmsg2tf", - "perfimpcomb", - "sugdivdis", - "sydnoinputt", - "wpcssopt", - "wintone2tf", - "0404sydicnbs0", - "405suggbs0", - "scctl", - "330uaugs0", - "0329resp", - "udscahrfon", - "udstrblm5", - "404e2ewrt", - "408nodedups0", - "403tvlansgnd", - ], - "traceId": _get_ran_hex(32), - "isStartOfSession": self.invocation_id == 0, - "message": { - "author": "user", - "inputMethod": "Keyboard", - "text": prompt, - "messageType": "Chat", - }, - "conversationSignature": self.conversation_signature, - "participant": { - "id": self.client_id, - }, - "conversationId": self.conversation_id, - }, - ], - "invocationId": str(self.invocation_id), - "target": "chat", - "type": 4, - } - if search_result: - have_search_result = [ - "InternalSearchQuery", - "InternalSearchResult", - "InternalLoaderMessage", - "RenderCardRequest", - ] - self.struct["arguments"][0]["allowedMessageTypes"] += have_search_result - if webpage_context: - self.struct["arguments"][0]["previousMessages"] = [ - { - "author": "user", - "description": webpage_context, - "contextType": "WebPage", - "messageType": "Context", - "messageId": "discover-web--page-ping-mriduna-----", - }, - ] - self.invocation_id += 1 - - -class _Conversation: - """ - Conversation API - """ - - def __init__( - self, - proxy = None, - async_mode = False, - cookies = None, - ) -> None: - if async_mode: - return - self.struct: dict = { - "conversationId": None, - "clientId": None, - "conversationSignature": None, - "result": {"value": "Success", "message": None}, - } - self.proxy = proxy - proxy = ( - proxy - or os.environ.get("all_proxy") - or os.environ.get("ALL_PROXY") - or os.environ.get("https_proxy") - or os.environ.get("HTTPS_PROXY") - or None - ) - if proxy is not None and proxy.startswith("socks5h://"): - proxy = "socks5://" + proxy[len("socks5h://") :] - self.session = httpx.Client( - proxies=proxy, - timeout=30, - headers=HEADERS_INIT_CONVER, - ) - if cookies: - for cookie in cookies: - self.session.cookies.set(cookie["name"], cookie["value"]) - # Send GET request - response = self.session.get( - url=os.environ.get("BING_PROXY_URL") - or "https://edgeservices.bing.com/edgesvc/turing/conversation/create", - ) - if response.status_code != 200: - response = self.session.get( - "https://edge.churchless.tech/edgesvc/turing/conversation/create", - ) - if response.status_code != 200: - print(f"Status code: {response.status_code}") - print(response.text) - print(response.url) - raise Exception("Authentication failed") - try: - self.struct = response.json() - except (json.decoder.JSONDecodeError, NotAllowedToAccess) as exc: - raise Exception( - "Authentication failed. You have not been accepted into the beta.", - ) from exc - if self.struct["result"]["value"] == "UnauthorizedRequest": - raise NotAllowedToAccess(self.struct["result"]["message"]) - - @staticmethod - async def create( - proxy = None, - cookies = None, - ): - self = _Conversation(async_mode=True) - self.struct = { - "conversationId": None, - "clientId": None, - "conversationSignature": None, - "result": {"value": "Success", "message": None}, - } - self.proxy = proxy - proxy = ( - proxy - or os.environ.get("all_proxy") - or os.environ.get("ALL_PROXY") - or os.environ.get("https_proxy") - or os.environ.get("HTTPS_PROXY") - or None - ) - if proxy is not None and proxy.startswith("socks5h://"): - proxy = "socks5://" + proxy[len("socks5h://") :] - transport = httpx.AsyncHTTPTransport(retries=10) - # Convert cookie format to httpx format - formatted_cookies = None - if cookies: - formatted_cookies = httpx.Cookies() - for cookie in cookies: - formatted_cookies.set(cookie["name"], cookie["value"]) - async with httpx.AsyncClient( - proxies=proxy, - timeout=30, - headers=HEADERS_INIT_CONVER, - transport=transport, - cookies=formatted_cookies, - ) as client: - # Send GET request - response = await client.get( - url=os.environ.get("BING_PROXY_URL") - or "https://edgeservices.bing.com/edgesvc/turing/conversation/create", - ) - if response.status_code != 200: - response = await client.get( - "https://edge.churchless.tech/edgesvc/turing/conversation/create", - ) - if response.status_code != 200: - print(f"Status code: {response.status_code}") - print(response.text) - print(response.url) - raise Exception("Authentication failed") - try: - self.struct = response.json() - except (json.decoder.JSONDecodeError, NotAllowedToAccess) as exc: - raise Exception( - "Authentication failed. You have not been accepted into the beta.", - ) from exc - if self.struct["result"]["value"] == "UnauthorizedRequest": - raise NotAllowedToAccess(self.struct["result"]["message"]) - return self - - -class _ChatHub: - """ - Chat API - """ - - def __init__( - self, - conversation: _Conversation, - proxy = None, - cookies = None, - ) -> None: - self.session = None - self.wss = None - self.request: _ChatHubRequest - self.loop: bool - self.task: asyncio.Task - self.request = _ChatHubRequest( - conversation_signature=conversation.struct["conversationSignature"], - client_id=conversation.struct["clientId"], - conversation_id=conversation.struct["conversationId"], - ) - self.cookies = cookies - self.proxy: str = proxy - - async def ask_stream( - self, - prompt: str, - wss_link: str, - conversation_style: CONVERSATION_STYLE_TYPE = None, - raw: bool = False, - options: dict = None, - webpage_context = None, - search_result: bool = False, - ) -> Generator[str, None, None]: - """ - Ask a question to the bot - """ - timeout = aiohttp.ClientTimeout(total=30) - self.session = aiohttp.ClientSession(timeout=timeout) - - if self.wss and not self.wss.closed: - await self.wss.close() - # Check if websocket is closed - self.wss = await self.session.ws_connect( - wss_link, - headers=HEADERS, - ssl=ssl_context, - proxy=self.proxy, - autoping=False, - ) - await self._initial_handshake() - if self.request.invocation_id == 0: - # Construct a ChatHub request - self.request.update( - prompt=prompt, - conversation_style=conversation_style, - options=options, - webpage_context=webpage_context, - search_result=search_result, - ) - else: - async with httpx.AsyncClient() as client: - response = await client.post( - "https://sydney.bing.com/sydney/UpdateConversation/", - json={ - "messages": [ - { - "author": "user", - "description": webpage_context, - "contextType": "WebPage", - "messageType": "Context", - }, - ], - "conversationId": self.request.conversation_id, - "source": "cib", - "traceId": _get_ran_hex(32), - "participant": {"id": self.request.client_id}, - "conversationSignature": self.request.conversation_signature, - }, - ) - if response.status_code != 200: - print(f"Status code: {response.status_code}") - print(response.text) - print(response.url) - raise Exception("Update web page context failed") - # Construct a ChatHub request - self.request.update( - prompt=prompt, - conversation_style=conversation_style, - options=options, - ) - # Send request - await self.wss.send_str(_append_identifier(self.request.struct)) - final = False - draw = False - resp_txt = "" - result_text = "" - resp_txt_no_link = "" - while not final: - msg = await self.wss.receive() - objects = msg.data.split(DELIMITER) - for obj in objects: - if obj is None or not obj: - continue - response = json.loads(obj) - if response.get("type") != 2 and raw: - yield False, response - elif response.get("type") == 1 and response["arguments"][0].get( - "messages", - ): - if not draw: - if ( - response["arguments"][0]["messages"][0].get("messageType") - == "GenerateContentQuery" - ): - async with ImageGenAsync("", True) as image_generator: - images = await image_generator.get_images( - response["arguments"][0]["messages"][0]["text"], - ) - for i, image in enumerate(images): - resp_txt = resp_txt + f"\n![image{i}]({image})" - draw = True - if ( - response["arguments"][0]["messages"][0]["contentOrigin"] - != "Apology" - ) and not draw: - resp_txt = result_text + response["arguments"][0][ - "messages" - ][0]["adaptiveCards"][0]["body"][0].get("text", "") - resp_txt_no_link = result_text + response["arguments"][0][ - "messages" - ][0].get("text", "") - if response["arguments"][0]["messages"][0].get( - "messageType", - ): - resp_txt = ( - resp_txt - + response["arguments"][0]["messages"][0][ - "adaptiveCards" - ][0]["body"][0]["inlines"][0].get("text") - + "\n" - ) - result_text = ( - result_text - + response["arguments"][0]["messages"][0][ - "adaptiveCards" - ][0]["body"][0]["inlines"][0].get("text") - + "\n" - ) - yield False, resp_txt - - elif response.get("type") == 2: - if response["item"]["result"].get("error"): - await self.close() - raise Exception( - f"{response['item']['result']['value']}: {response['item']['result']['message']}", - ) - if draw: - cache = response["item"]["messages"][1]["adaptiveCards"][0][ - "body" - ][0]["text"] - response["item"]["messages"][1]["adaptiveCards"][0]["body"][0][ - "text" - ] = (cache + resp_txt) - if ( - response["item"]["messages"][-1]["contentOrigin"] == "Apology" - and resp_txt - ): - response["item"]["messages"][-1]["text"] = resp_txt_no_link - response["item"]["messages"][-1]["adaptiveCards"][0]["body"][0][ - "text" - ] = resp_txt - print( - "Preserved the message from being deleted", - file=sys.stderr, - ) - final = True - await self.close() - yield True, response - - async def _initial_handshake(self) -> None: - await self.wss.send_str(_append_identifier({"protocol": "json", "version": 1})) - await self.wss.receive() - - async def close(self) -> None: - """ - Close the connection - """ - if self.wss and not self.wss.closed: - await self.wss.close() - if self.session and not self.session.closed: - await self.session.close() - - -class Chatbot: - """ - Combines everything to make it seamless - """ - - def __init__( - self, - proxy = None, - cookies = None, - ) -> None: - self.proxy = proxy - self.chat_hub: _ChatHub = _ChatHub( - _Conversation(self.proxy, cookies=cookies), - proxy=self.proxy, - cookies=cookies, - ) - - @staticmethod - async def create( - proxy = None, - cookies = None, - ): - self = Chatbot.__new__(Chatbot) - self.proxy = proxy - self.chat_hub = _ChatHub( - await _Conversation.create(self.proxy, cookies=cookies), - proxy=self.proxy, - cookies=cookies, - ) - return self - - async def ask( - self, - prompt: str, - wss_link: str = "wss://sydney.bing.com/sydney/ChatHub", - conversation_style: CONVERSATION_STYLE_TYPE = None, - options: dict = None, - webpage_context = None, - search_result: bool = False, - ) -> dict: - """ - Ask a question to the bot - """ - async for final, response in self.chat_hub.ask_stream( - prompt=prompt, - conversation_style=conversation_style, - wss_link=wss_link, - options=options, - webpage_context=webpage_context, - search_result=search_result, - ): - if final: - return response - await self.chat_hub.wss.close() - return {} - - async def ask_stream( - self, - prompt: str, - wss_link: str = "wss://sydney.bing.com/sydney/ChatHub", - conversation_style: CONVERSATION_STYLE_TYPE = None, - raw: bool = False, - options: dict = None, - webpage_context = None, - search_result: bool = False, - ) -> Generator[str, None, None]: - """ - Ask a question to the bot - """ - async for response in self.chat_hub.ask_stream( - prompt=prompt, - conversation_style=conversation_style, - wss_link=wss_link, - raw=raw, - options=options, - webpage_context=webpage_context, - search_result=search_result, - ): - yield response - - async def close(self) -> None: - """ - Close the connection - """ - await self.chat_hub.close() - - async def reset(self) -> None: - """ - Reset the conversation - """ - await self.close() - self.chat_hub = _ChatHub( - await _Conversation.create(self.proxy), - proxy=self.proxy, - cookies=self.chat_hub.cookies, - ) - - -async def _get_input_async( - session: PromptSession = None, - completer: WordCompleter = None, -) -> str: - """ - Multiline input function. - """ - return await session.prompt_async( - completer=completer, - multiline=True, - auto_suggest=AutoSuggestFromHistory(), - ) - - -def _create_session() -> PromptSession: - kb = KeyBindings() - - @kb.add("enter") - def _(event): - buffer_text = event.current_buffer.text - if buffer_text.startswith("!"): - event.current_buffer.validate_and_handle() - else: - event.current_buffer.insert_text("\n") - - @kb.add("escape") - def _(event): - if event.current_buffer.complete_state: - # event.current_buffer.cancel_completion() - event.current_buffer.text = "" - - return PromptSession(key_bindings=kb, history=InMemoryHistory()) - - -def _create_completer(commands: list, pattern_str: str = "$"): - return WordCompleter(words=commands, pattern=re.compile(pattern_str)) - - -async def async_main(args: argparse.Namespace) -> None: - """ - Main function - """ - print("Initializing...") - print("Enter `alt+enter` or `escape+enter` to send a message") - # Read and parse cookies - cookies = None - if args.cookie_file: - cookies = json.loads(open(args.cookie_file, encoding="utf-8").read()) - bot = await Chatbot.create(proxy=args.proxy, cookies=cookies) - session = _create_session() - completer = _create_completer(["!help", "!exit", "!reset"]) - initial_prompt = args.prompt - - while True: - print("\nYou:") - if initial_prompt: - question = initial_prompt - print(question) - initial_prompt = None - else: - question = ( - input() - if args.enter_once - else await _get_input_async(session=session, completer=completer) - ) - print() - if question == "!exit": - break - if question == "!help": - print( - """ - !help - Show this help message - !exit - Exit the program - !reset - Reset the conversation - """, - ) - continue - if question == "!reset": - await bot.reset() - continue - print("Bot:") - if args.no_stream: - print( - ( - await bot.ask( - prompt=question, - conversation_style=args.style, - wss_link=args.wss_link, - ) - )["item"]["messages"][1]["adaptiveCards"][0]["body"][0]["text"], - ) - else: - wrote = 0 - if args.rich: - md = Markdown("") - with Live(md, auto_refresh=False) as live: - async for final, response in bot.ask_stream( - prompt=question, - conversation_style=args.style, - wss_link=args.wss_link, - ): - if not final: - if wrote > len(response): - print(md) - print(Markdown("***Bing revoked the response.***")) - wrote = len(response) - md = Markdown(response) - live.update(md, refresh=True) - else: - async for final, response in bot.ask_stream( - prompt=question, - conversation_style=args.style, - wss_link=args.wss_link, - ): - if not final: - if not wrote: - print(response, end="", flush=True) - else: - print(response[wrote:], end="", flush=True) - wrote = len(response) - print() - await bot.close() - - -def main() -> None: - print( - """ - EdgeGPT - A demo of reverse engineering the Bing GPT chatbot - Repo: github.com/acheong08/EdgeGPT - By: Antonio Cheong - - !help for help - - Type !exit to exit - """, - ) - parser = argparse.ArgumentParser() - parser.add_argument("--enter-once", action="store_true") - parser.add_argument("--no-stream", action="store_true") - parser.add_argument("--rich", action="store_true") - parser.add_argument( - "--proxy", - help="Proxy URL (e.g. socks5://127.0.0.1:1080)", - type=str, - ) - parser.add_argument( - "--wss-link", - help="WSS URL(e.g. wss://sydney.bing.com/sydney/ChatHub)", - type=str, - default="wss://sydney.bing.com/sydney/ChatHub", - ) - parser.add_argument( - "--style", - choices=["creative", "balanced", "precise"], - default="balanced", - ) - parser.add_argument( - "--prompt", - type=str, - default="", - required=False, - help="prompt to start with", - ) - parser.add_argument( - "--cookie-file", - type=str, - default="", - required=False, - help="path to cookie file", - ) - args = parser.parse_args() - asyncio.run(async_main(args)) - - -class Cookie: - """ - Convenience class for Bing Cookie files, data, and configuration. This Class - is updated dynamically by the Query class to allow cycling through >1 - cookie/credentials file e.g. when daily request limits (current 200 per - account per day) are exceeded. - """ - - current_file_index = 0 - dirpath = Path("./").resolve() - search_pattern = "bing_cookies_*.json" - ignore_files = set() - - @classmethod - def fetch_default(cls, path=None): - from selenium import webdriver - from selenium.webdriver.common.by import By - - driver = webdriver.Edge() - driver.get("https://bing.com/chat") - time.sleep(5) - xpath = '//button[@id="bnp_btn_accept"]' - driver.find_element(By.XPATH, xpath).click() - time.sleep(2) - xpath = '//a[@id="codexPrimaryButton"]' - driver.find_element(By.XPATH, xpath).click() - if path is None: - path = Path("./bing_cookies__default.json") - # Double underscore ensures this file is first when sorted - cookies = driver.get_cookies() - Path(path).write_text(json.dumps(cookies, indent=4), encoding="utf-8") - # Path again in case supplied path is: str - print(f"Cookies saved to: {path}") - driver.quit() - - @classmethod - def files(cls): - """Return a sorted list of all cookie files matching .search_pattern""" - all_files = set(cls.dirpath.glob(cls.search_pattern)) - return sorted(list(all_files - cls.ignore_files)) - - @classmethod - def import_data(cls): - """ - Read the active cookie file and populate the following attributes: - - .current_filepath - .current_data - .image_token - """ - try: - cls.current_filepath = cls.files()[cls.current_file_index] - except IndexError: - print( - "> Please set Cookie.current_filepath to a valid cookie file, then run Cookie.import_data()", - ) - return - print(f"> Importing cookies from: {cls.current_filepath.name}") - with open(cls.current_filepath, encoding="utf-8") as file: - cls.current_data = json.load(file) - cls.image_token = [x for x in cls.current_data if x.get("name") == "_U"] - cls.image_token = cls.image_token[0].get("value") - - @classmethod - def import_next(cls): - """ - Cycle through to the next cookies file. Import it. Mark the previous - file to be ignored for the remainder of the current session. - """ - cls.ignore_files.add(cls.current_filepath) - if Cookie.current_file_index >= len(cls.files()): - Cookie.current_file_index = 0 - Cookie.import_data() - - -class Query: - """ - A convenience class that wraps around EdgeGPT.Chatbot to encapsulate input, - config, and output all together. Relies on Cookie class for authentication - """ - - def __init__( - self, - prompt, - style="precise", - content_type="text", - cookie_file=0, - echo=True, - echo_prompt=False, - ): - """ - Arguments: - - prompt: Text to enter into Bing Chat - style: creative, balanced, or precise - content_type: "text" for Bing Chat; "image" for Dall-e - cookie_file: Path, filepath string, or index (int) to list of cookie paths - echo: Print something to confirm request made - echo_prompt: Print confirmation of the evaluated prompt - """ - self.index = [] - self.request_count = {} - self.image_dirpath = Path("./").resolve() - Cookie.import_data() - self.index += [self] - self.prompt = prompt - files = Cookie.files() - if isinstance(cookie_file, int): - index = cookie_file if cookie_file < len(files) else 0 - else: - if not isinstance(cookie_file, (str, Path)): - message = "'cookie_file' must be an int, str, or Path object" - raise TypeError(message) - cookie_file = Path(cookie_file) - if cookie_file in files(): # Supplied filepath IS in Cookie.dirpath - index = files.index(cookie_file) - else: # Supplied filepath is NOT in Cookie.dirpath - if cookie_file.is_file(): - Cookie.dirpath = cookie_file.parent.resolve() - if cookie_file.is_dir(): - Cookie.dirpath = cookie_file.resolve() - index = 0 - Cookie.current_file_index = index - if content_type == "text": - self.style = style - self.log_and_send_query(echo, echo_prompt) - if content_type == "image": - self.create_image() - - def log_and_send_query(self, echo, echo_prompt): - self.response = asyncio.run(self.send_to_bing(echo, echo_prompt)) - name = str(Cookie.current_filepath.name) - if not self.request_count.get(name): - self.request_count[name] = 1 - else: - self.request_count[name] += 1 - - def create_image(self): - image_generator = ImageGen(Cookie.image_token) - image_generator.save_images( - image_generator.get_images(self.prompt), - output_dir=self.image_dirpath, - ) - - async def send_to_bing(self, echo=True, echo_prompt=False): - """Creat, submit, then close a Chatbot instance. Return the response""" - retries = len(Cookie.files()) - while retries: - try: - bot = await Chatbot.create() - if echo_prompt: - print(f"> {self.prompt=}") - if echo: - print("> Waiting for response...") - if self.style.lower() not in "creative balanced precise".split(): - self.style = "precise" - response = await bot.ask( - prompt=self.prompt, - conversation_style=getattr(ConversationStyle, self.style), - # wss_link="wss://sydney.bing.com/sydney/ChatHub" - # What other values can this parameter take? It seems to be optional - ) - return response - except KeyError: - print( - f"> KeyError [{Cookie.current_filepath.name} may have exceeded the daily limit]", - ) - Cookie.import_next() - retries -= 1 - finally: - await bot.close() - - @property - def output(self): - """The response from a completed Chatbot request""" - return self.response["item"]["messages"][1]["text"] - - @property - def sources(self): - """The source names and details parsed from a completed Chatbot request""" - return self.response["item"]["messages"][1]["sourceAttributions"] - - @property - def sources_dict(self): - """The source names and details as a dictionary""" - sources_dict = {} - name = "providerDisplayName" - url = "seeMoreUrl" - for source in self.sources: - if name in source.keys() and url in source.keys(): - sources_dict[source[name]] = source[url] - else: - continue - return sources_dict - - @property - def code(self): - """Extract and join any snippets of Python code in the response""" - code_blocks = self.output.split("```")[1:-1:2] - code_blocks = ["\n".join(x.splitlines()[1:]) for x in code_blocks] - return "\n\n".join(code_blocks) - - @property - def languages(self): - """Extract all programming languages given in code blocks""" - code_blocks = self.output.split("```")[1:-1:2] - return {x.splitlines()[0] for x in code_blocks} - - @property - def suggestions(self): - """Follow-on questions suggested by the Chatbot""" - return [ - x["text"] - for x in self.response["item"]["messages"][1]["suggestedResponses"] - ] - - def __repr__(self): - return f"" - - def __str__(self): - return self.output - - -class ImageQuery(Query): - def __init__(self, prompt, **kwargs): - kwargs.update({"content_type": "image"}) - super().__init__(prompt, **kwargs) - - def __repr__(self): - return f"" - - -if __name__ == "__main__": - main() \ No newline at end of file diff --git a/spaces/yanli01/wrwj/modules/llama_func.py b/spaces/yanli01/wrwj/modules/llama_func.py deleted file mode 100644 index 9f4f799882b4e7c34aa8df815ebeb90ed822ba46..0000000000000000000000000000000000000000 --- a/spaces/yanli01/wrwj/modules/llama_func.py +++ /dev/null @@ -1,137 +0,0 @@ -import os -import logging - -from llama_index import download_loader -from llama_index import ( - Document, - LLMPredictor, - PromptHelper, - QuestionAnswerPrompt, - RefinePrompt, -) -import colorama -import PyPDF2 -from tqdm import tqdm - -from modules.presets import * -from modules.utils import * - -def get_index_name(file_src): - file_paths = [x.name for x in file_src] - file_paths.sort(key=lambda x: os.path.basename(x)) - - md5_hash = hashlib.md5() - for file_path in file_paths: - with open(file_path, "rb") as f: - while chunk := f.read(8192): - md5_hash.update(chunk) - - return md5_hash.hexdigest() - -def block_split(text): - blocks = [] - while len(text) > 0: - blocks.append(Document(text[:1000])) - text = text[1000:] - return blocks - -def get_documents(file_src): - documents = [] - logging.debug("Loading documents...") - logging.debug(f"file_src: {file_src}") - for file in file_src: - filepath = file.name - filename = os.path.basename(filepath) - file_type = os.path.splitext(filepath)[1] - logging.info(f"loading file: {filename}") - if file_type == ".pdf": - logging.debug("Loading PDF...") - try: - from modules.pdf_func import parse_pdf - from modules.config import advance_docs - two_column = advance_docs["pdf"].get("two_column", False) - pdftext = parse_pdf(filepath, two_column).text - except: - pdftext = "" - with open(filepath, 'rb') as pdfFileObj: - pdfReader = PyPDF2.PdfReader(pdfFileObj) - for page in tqdm(pdfReader.pages): - pdftext += page.extract_text() - text_raw = pdftext - elif file_type == ".docx": - logging.debug("Loading Word...") - DocxReader = download_loader("DocxReader") - loader = DocxReader() - text_raw = loader.load_data(file=filepath)[0].text - elif file_type == ".epub": - logging.debug("Loading EPUB...") - EpubReader = download_loader("EpubReader") - loader = EpubReader() - text_raw = loader.load_data(file=filepath)[0].text - elif file_type == ".xlsx": - logging.debug("Loading Excel...") - text_raw = excel_to_string(filepath) - else: - logging.debug("Loading text file...") - with open(filepath, "r", encoding="utf-8") as f: - text_raw = f.read() - text = add_space(text_raw) - # text = block_split(text) - # documents += text - documents += [Document(text)] - logging.debug("Documents loaded.") - return documents - - -def construct_index( - api_key, - file_src, - max_input_size=4096, - num_outputs=5, - max_chunk_overlap=20, - chunk_size_limit=600, - embedding_limit=None, - separator=" " -): - from langchain.chat_models import ChatOpenAI - from llama_index import GPTSimpleVectorIndex, ServiceContext - - os.environ["OPENAI_API_KEY"] = api_key - chunk_size_limit = None if chunk_size_limit == 0 else chunk_size_limit - embedding_limit = None if embedding_limit == 0 else embedding_limit - separator = " " if separator == "" else separator - - llm_predictor = LLMPredictor( - llm=ChatOpenAI(model_name="gpt-3.5-turbo-0301", openai_api_key=api_key) - ) - prompt_helper = PromptHelper(max_input_size = max_input_size, num_output = num_outputs, max_chunk_overlap = max_chunk_overlap, embedding_limit=embedding_limit, chunk_size_limit=600, separator=separator) - index_name = get_index_name(file_src) - if os.path.exists(f"./index/{index_name}.json"): - logging.info("找到了缓存的索引文件,加载中……") - return GPTSimpleVectorIndex.load_from_disk(f"./index/{index_name}.json") - else: - try: - documents = get_documents(file_src) - logging.info("构建索引中……") - with retrieve_proxy(): - service_context = ServiceContext.from_defaults(llm_predictor=llm_predictor, prompt_helper=prompt_helper, chunk_size_limit=chunk_size_limit) - index = GPTSimpleVectorIndex.from_documents( - documents, service_context=service_context - ) - logging.debug("索引构建完成!") - os.makedirs("./index", exist_ok=True) - index.save_to_disk(f"./index/{index_name}.json") - logging.debug("索引已保存至本地!") - return index - - except Exception as e: - logging.error("索引构建失败!", e) - print(e) - return None - - -def add_space(text): - punctuations = {",": ", ", "。": "。 ", "?": "? ", "!": "! ", ":": ": ", ";": "; "} - for cn_punc, en_punc in punctuations.items(): - text = text.replace(cn_punc, en_punc) - return text diff --git a/spaces/yerfor/SyntaSpeech/utils/nn/schedulers.py b/spaces/yerfor/SyntaSpeech/utils/nn/schedulers.py deleted file mode 100644 index c91969dd8e01a8342488e060592700f3957c3651..0000000000000000000000000000000000000000 --- a/spaces/yerfor/SyntaSpeech/utils/nn/schedulers.py +++ /dev/null @@ -1,57 +0,0 @@ -class NoneSchedule(object): - def __init__(self, optimizer, lr): - self.optimizer = optimizer - self.constant_lr = lr - self.step(0) - - def step(self, num_updates): - self.lr = self.constant_lr - for param_group in self.optimizer.param_groups: - param_group['lr'] = self.lr - return self.lr - - def get_lr(self): - return self.optimizer.param_groups[0]['lr'] - - def get_last_lr(self): - return self.get_lr() - - -class RSQRTSchedule(NoneSchedule): - def __init__(self, optimizer, lr, warmup_updates, hidden_size): - self.optimizer = optimizer - self.constant_lr = lr - self.warmup_updates = warmup_updates - self.hidden_size = hidden_size - self.lr = lr - for param_group in optimizer.param_groups: - param_group['lr'] = self.lr - self.step(0) - - def step(self, num_updates): - constant_lr = self.constant_lr - warmup = min(num_updates / self.warmup_updates, 1.0) - rsqrt_decay = max(self.warmup_updates, num_updates) ** -0.5 - rsqrt_hidden = self.hidden_size ** -0.5 - self.lr = max(constant_lr * warmup * rsqrt_decay * rsqrt_hidden, 1e-7) - for param_group in self.optimizer.param_groups: - param_group['lr'] = self.lr - return self.lr - - -class WarmupSchedule(NoneSchedule): - def __init__(self, optimizer, lr, warmup_updates): - self.optimizer = optimizer - self.constant_lr = self.lr = lr - self.warmup_updates = warmup_updates - for param_group in optimizer.param_groups: - param_group['lr'] = self.lr - self.step(0) - - def step(self, num_updates): - constant_lr = self.constant_lr - warmup = min(num_updates / self.warmup_updates, 1.0) - self.lr = max(constant_lr * warmup, 1e-7) - for param_group in self.optimizer.param_groups: - param_group['lr'] = self.lr - return self.lr diff --git a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/convbert/modeling_tf_convbert.py b/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/convbert/modeling_tf_convbert.py deleted file mode 100644 index 4beb01cb78b0acc655ecc063cf2dca35801fc4f2..0000000000000000000000000000000000000000 --- a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/convbert/modeling_tf_convbert.py +++ /dev/null @@ -1,1254 +0,0 @@ -# coding=utf-8 -# Copyright 2021 The HuggingFace Inc. team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -""" TF 2.0 ConvBERT model.""" - - -from __future__ import annotations - -from typing import Optional, Tuple, Union - -import numpy as np -import tensorflow as tf - -from ...activations_tf import get_tf_activation -from ...modeling_tf_outputs import ( - TFBaseModelOutput, - TFMaskedLMOutput, - TFMultipleChoiceModelOutput, - TFQuestionAnsweringModelOutput, - TFSequenceClassifierOutput, - TFTokenClassifierOutput, -) -from ...modeling_tf_utils import ( - TFMaskedLanguageModelingLoss, - TFModelInputType, - TFMultipleChoiceLoss, - TFPreTrainedModel, - TFQuestionAnsweringLoss, - TFSequenceClassificationLoss, - TFSequenceSummary, - TFTokenClassificationLoss, - get_initializer, - keras_serializable, - unpack_inputs, -) -from ...tf_utils import check_embeddings_within_bounds, shape_list, stable_softmax -from ...utils import ( - add_code_sample_docstrings, - add_start_docstrings, - add_start_docstrings_to_model_forward, - logging, -) -from .configuration_convbert import ConvBertConfig - - -logger = logging.get_logger(__name__) - -_CHECKPOINT_FOR_DOC = "YituTech/conv-bert-base" -_CONFIG_FOR_DOC = "ConvBertConfig" - -TF_CONVBERT_PRETRAINED_MODEL_ARCHIVE_LIST = [ - "YituTech/conv-bert-base", - "YituTech/conv-bert-medium-small", - "YituTech/conv-bert-small", - # See all ConvBERT models at https://huggingface.co/models?filter=convbert -] - - -# Copied from transformers.models.albert.modeling_tf_albert.TFAlbertEmbeddings with Albert->ConvBert -class TFConvBertEmbeddings(tf.keras.layers.Layer): - """Construct the embeddings from word, position and token_type embeddings.""" - - def __init__(self, config: ConvBertConfig, **kwargs): - super().__init__(**kwargs) - - self.config = config - self.embedding_size = config.embedding_size - self.max_position_embeddings = config.max_position_embeddings - self.initializer_range = config.initializer_range - self.LayerNorm = tf.keras.layers.LayerNormalization(epsilon=config.layer_norm_eps, name="LayerNorm") - self.dropout = tf.keras.layers.Dropout(rate=config.hidden_dropout_prob) - - def build(self, input_shape: tf.TensorShape): - with tf.name_scope("word_embeddings"): - self.weight = self.add_weight( - name="weight", - shape=[self.config.vocab_size, self.embedding_size], - initializer=get_initializer(self.initializer_range), - ) - - with tf.name_scope("token_type_embeddings"): - self.token_type_embeddings = self.add_weight( - name="embeddings", - shape=[self.config.type_vocab_size, self.embedding_size], - initializer=get_initializer(self.initializer_range), - ) - - with tf.name_scope("position_embeddings"): - self.position_embeddings = self.add_weight( - name="embeddings", - shape=[self.max_position_embeddings, self.embedding_size], - initializer=get_initializer(self.initializer_range), - ) - - super().build(input_shape) - - # Copied from transformers.models.bert.modeling_tf_bert.TFBertEmbeddings.call - def call( - self, - input_ids: tf.Tensor = None, - position_ids: tf.Tensor = None, - token_type_ids: tf.Tensor = None, - inputs_embeds: tf.Tensor = None, - past_key_values_length=0, - training: bool = False, - ) -> tf.Tensor: - """ - Applies embedding based on inputs tensor. - - Returns: - final_embeddings (`tf.Tensor`): output embedding tensor. - """ - if input_ids is None and inputs_embeds is None: - raise ValueError("Need to provide either `input_ids` or `input_embeds`.") - - if input_ids is not None: - check_embeddings_within_bounds(input_ids, self.config.vocab_size) - inputs_embeds = tf.gather(params=self.weight, indices=input_ids) - - input_shape = shape_list(inputs_embeds)[:-1] - - if token_type_ids is None: - token_type_ids = tf.fill(dims=input_shape, value=0) - - if position_ids is None: - position_ids = tf.expand_dims( - tf.range(start=past_key_values_length, limit=input_shape[1] + past_key_values_length), axis=0 - ) - - position_embeds = tf.gather(params=self.position_embeddings, indices=position_ids) - token_type_embeds = tf.gather(params=self.token_type_embeddings, indices=token_type_ids) - final_embeddings = inputs_embeds + position_embeds + token_type_embeds - final_embeddings = self.LayerNorm(inputs=final_embeddings) - final_embeddings = self.dropout(inputs=final_embeddings, training=training) - - return final_embeddings - - -class TFConvBertSelfAttention(tf.keras.layers.Layer): - def __init__(self, config, **kwargs): - super().__init__(**kwargs) - - if config.hidden_size % config.num_attention_heads != 0: - raise ValueError( - f"The hidden size ({config.hidden_size}) is not a multiple of the number of attention " - f"heads ({config.num_attention_heads})" - ) - - new_num_attention_heads = int(config.num_attention_heads / config.head_ratio) - if new_num_attention_heads < 1: - self.head_ratio = config.num_attention_heads - num_attention_heads = 1 - else: - num_attention_heads = new_num_attention_heads - self.head_ratio = config.head_ratio - - self.num_attention_heads = num_attention_heads - self.conv_kernel_size = config.conv_kernel_size - - if config.hidden_size % self.num_attention_heads != 0: - raise ValueError("hidden_size should be divisible by num_attention_heads") - - self.attention_head_size = config.hidden_size // config.num_attention_heads - self.all_head_size = self.num_attention_heads * self.attention_head_size - self.query = tf.keras.layers.Dense( - self.all_head_size, kernel_initializer=get_initializer(config.initializer_range), name="query" - ) - self.key = tf.keras.layers.Dense( - self.all_head_size, kernel_initializer=get_initializer(config.initializer_range), name="key" - ) - self.value = tf.keras.layers.Dense( - self.all_head_size, kernel_initializer=get_initializer(config.initializer_range), name="value" - ) - - self.key_conv_attn_layer = tf.keras.layers.SeparableConv1D( - self.all_head_size, - self.conv_kernel_size, - padding="same", - activation=None, - depthwise_initializer=get_initializer(1 / self.conv_kernel_size), - pointwise_initializer=get_initializer(config.initializer_range), - name="key_conv_attn_layer", - ) - - self.conv_kernel_layer = tf.keras.layers.Dense( - self.num_attention_heads * self.conv_kernel_size, - activation=None, - name="conv_kernel_layer", - kernel_initializer=get_initializer(config.initializer_range), - ) - - self.conv_out_layer = tf.keras.layers.Dense( - self.all_head_size, - activation=None, - name="conv_out_layer", - kernel_initializer=get_initializer(config.initializer_range), - ) - - self.dropout = tf.keras.layers.Dropout(config.attention_probs_dropout_prob) - - def transpose_for_scores(self, x, batch_size): - # Reshape from [batch_size, seq_length, all_head_size] to [batch_size, seq_length, num_attention_heads, attention_head_size] - x = tf.reshape(x, (batch_size, -1, self.num_attention_heads, self.attention_head_size)) - return tf.transpose(x, perm=[0, 2, 1, 3]) - - def call(self, hidden_states, attention_mask, head_mask, output_attentions, training=False): - batch_size = shape_list(hidden_states)[0] - mixed_query_layer = self.query(hidden_states) - mixed_key_layer = self.key(hidden_states) - mixed_value_layer = self.value(hidden_states) - - mixed_key_conv_attn_layer = self.key_conv_attn_layer(hidden_states) - - query_layer = self.transpose_for_scores(mixed_query_layer, batch_size) - key_layer = self.transpose_for_scores(mixed_key_layer, batch_size) - conv_attn_layer = tf.multiply(mixed_key_conv_attn_layer, mixed_query_layer) - - conv_kernel_layer = self.conv_kernel_layer(conv_attn_layer) - conv_kernel_layer = tf.reshape(conv_kernel_layer, [-1, self.conv_kernel_size, 1]) - conv_kernel_layer = stable_softmax(conv_kernel_layer, axis=1) - - paddings = tf.constant( - [ - [ - 0, - 0, - ], - [int((self.conv_kernel_size - 1) / 2), int((self.conv_kernel_size - 1) / 2)], - [0, 0], - ] - ) - - conv_out_layer = self.conv_out_layer(hidden_states) - conv_out_layer = tf.reshape(conv_out_layer, [batch_size, -1, self.all_head_size]) - conv_out_layer = tf.pad(conv_out_layer, paddings, "CONSTANT") - - unfold_conv_out_layer = tf.stack( - [ - tf.slice(conv_out_layer, [0, i, 0], [batch_size, shape_list(mixed_query_layer)[1], self.all_head_size]) - for i in range(self.conv_kernel_size) - ], - axis=-1, - ) - - conv_out_layer = tf.reshape(unfold_conv_out_layer, [-1, self.attention_head_size, self.conv_kernel_size]) - - conv_out_layer = tf.matmul(conv_out_layer, conv_kernel_layer) - conv_out_layer = tf.reshape(conv_out_layer, [-1, self.all_head_size]) - - # Take the dot product between "query" and "key" to get the raw attention scores. - attention_scores = tf.matmul( - query_layer, key_layer, transpose_b=True - ) # (batch size, num_heads, seq_len_q, seq_len_k) - dk = tf.cast(shape_list(key_layer)[-1], attention_scores.dtype) # scale attention_scores - attention_scores = attention_scores / tf.math.sqrt(dk) - - if attention_mask is not None: - # Apply the attention mask is (precomputed for all layers in TFBertModel call() function) - attention_scores = attention_scores + attention_mask - - # Normalize the attention scores to probabilities. - attention_probs = stable_softmax(attention_scores, axis=-1) - - # This is actually dropping out entire tokens to attend to, which might - # seem a bit unusual, but is taken from the original Transformer paper. - attention_probs = self.dropout(attention_probs, training=training) - - # Mask heads if we want to - if head_mask is not None: - attention_probs = attention_probs * head_mask - - value_layer = tf.reshape( - mixed_value_layer, [batch_size, -1, self.num_attention_heads, self.attention_head_size] - ) - value_layer = tf.transpose(value_layer, [0, 2, 1, 3]) - - context_layer = tf.matmul(attention_probs, value_layer) - context_layer = tf.transpose(context_layer, perm=[0, 2, 1, 3]) - - conv_out = tf.reshape(conv_out_layer, [batch_size, -1, self.num_attention_heads, self.attention_head_size]) - context_layer = tf.concat([context_layer, conv_out], 2) - context_layer = tf.reshape( - context_layer, (batch_size, -1, self.head_ratio * self.all_head_size) - ) # (batch_size, seq_len_q, all_head_size) - outputs = (context_layer, attention_probs) if output_attentions else (context_layer,) - - return outputs - - -class TFConvBertSelfOutput(tf.keras.layers.Layer): - def __init__(self, config, **kwargs): - super().__init__(**kwargs) - - self.dense = tf.keras.layers.Dense( - config.hidden_size, kernel_initializer=get_initializer(config.initializer_range), name="dense" - ) - self.LayerNorm = tf.keras.layers.LayerNormalization(epsilon=config.layer_norm_eps, name="LayerNorm") - self.dropout = tf.keras.layers.Dropout(config.hidden_dropout_prob) - - def call(self, hidden_states, input_tensor, training=False): - hidden_states = self.dense(hidden_states) - hidden_states = self.dropout(hidden_states, training=training) - hidden_states = self.LayerNorm(hidden_states + input_tensor) - - return hidden_states - - -class TFConvBertAttention(tf.keras.layers.Layer): - def __init__(self, config, **kwargs): - super().__init__(**kwargs) - - self.self_attention = TFConvBertSelfAttention(config, name="self") - self.dense_output = TFConvBertSelfOutput(config, name="output") - - def prune_heads(self, heads): - raise NotImplementedError - - def call(self, input_tensor, attention_mask, head_mask, output_attentions, training=False): - self_outputs = self.self_attention( - input_tensor, attention_mask, head_mask, output_attentions, training=training - ) - attention_output = self.dense_output(self_outputs[0], input_tensor, training=training) - outputs = (attention_output,) + self_outputs[1:] # add attentions if we output them - - return outputs - - -class GroupedLinearLayer(tf.keras.layers.Layer): - def __init__(self, input_size, output_size, num_groups, kernel_initializer, **kwargs): - super().__init__(**kwargs) - self.input_size = input_size - self.output_size = output_size - self.num_groups = num_groups - self.kernel_initializer = kernel_initializer - self.group_in_dim = self.input_size // self.num_groups - self.group_out_dim = self.output_size // self.num_groups - - def build(self, input_shape=None): - self.kernel = self.add_weight( - "kernel", - shape=[self.group_out_dim, self.group_in_dim, self.num_groups], - initializer=self.kernel_initializer, - trainable=True, - ) - - self.bias = self.add_weight( - "bias", shape=[self.output_size], initializer=self.kernel_initializer, dtype=self.dtype, trainable=True - ) - super().build(input_shape) - - def call(self, hidden_states): - batch_size = shape_list(hidden_states)[0] - x = tf.transpose(tf.reshape(hidden_states, [-1, self.num_groups, self.group_in_dim]), [1, 0, 2]) - x = tf.matmul(x, tf.transpose(self.kernel, [2, 1, 0])) - x = tf.transpose(x, [1, 0, 2]) - x = tf.reshape(x, [batch_size, -1, self.output_size]) - x = tf.nn.bias_add(value=x, bias=self.bias) - return x - - -class TFConvBertIntermediate(tf.keras.layers.Layer): - def __init__(self, config, **kwargs): - super().__init__(**kwargs) - if config.num_groups == 1: - self.dense = tf.keras.layers.Dense( - config.intermediate_size, kernel_initializer=get_initializer(config.initializer_range), name="dense" - ) - else: - self.dense = GroupedLinearLayer( - config.hidden_size, - config.intermediate_size, - num_groups=config.num_groups, - kernel_initializer=get_initializer(config.initializer_range), - name="dense", - ) - - if isinstance(config.hidden_act, str): - self.intermediate_act_fn = get_tf_activation(config.hidden_act) - else: - self.intermediate_act_fn = config.hidden_act - - def call(self, hidden_states): - hidden_states = self.dense(hidden_states) - hidden_states = self.intermediate_act_fn(hidden_states) - - return hidden_states - - -class TFConvBertOutput(tf.keras.layers.Layer): - def __init__(self, config, **kwargs): - super().__init__(**kwargs) - - if config.num_groups == 1: - self.dense = tf.keras.layers.Dense( - config.hidden_size, kernel_initializer=get_initializer(config.initializer_range), name="dense" - ) - else: - self.dense = GroupedLinearLayer( - config.intermediate_size, - config.hidden_size, - num_groups=config.num_groups, - kernel_initializer=get_initializer(config.initializer_range), - name="dense", - ) - self.LayerNorm = tf.keras.layers.LayerNormalization(epsilon=config.layer_norm_eps, name="LayerNorm") - self.dropout = tf.keras.layers.Dropout(config.hidden_dropout_prob) - - def call(self, hidden_states, input_tensor, training=False): - hidden_states = self.dense(hidden_states) - hidden_states = self.dropout(hidden_states, training=training) - hidden_states = self.LayerNorm(hidden_states + input_tensor) - - return hidden_states - - -class TFConvBertLayer(tf.keras.layers.Layer): - def __init__(self, config, **kwargs): - super().__init__(**kwargs) - - self.attention = TFConvBertAttention(config, name="attention") - self.intermediate = TFConvBertIntermediate(config, name="intermediate") - self.bert_output = TFConvBertOutput(config, name="output") - - def call(self, hidden_states, attention_mask, head_mask, output_attentions, training=False): - attention_outputs = self.attention( - hidden_states, attention_mask, head_mask, output_attentions, training=training - ) - attention_output = attention_outputs[0] - intermediate_output = self.intermediate(attention_output) - layer_output = self.bert_output(intermediate_output, attention_output, training=training) - outputs = (layer_output,) + attention_outputs[1:] # add attentions if we output them - - return outputs - - -class TFConvBertEncoder(tf.keras.layers.Layer): - def __init__(self, config, **kwargs): - super().__init__(**kwargs) - - self.layer = [TFConvBertLayer(config, name=f"layer_._{i}") for i in range(config.num_hidden_layers)] - - def call( - self, - hidden_states, - attention_mask, - head_mask, - output_attentions, - output_hidden_states, - return_dict, - training=False, - ): - all_hidden_states = () if output_hidden_states else None - all_attentions = () if output_attentions else None - - for i, layer_module in enumerate(self.layer): - if output_hidden_states: - all_hidden_states = all_hidden_states + (hidden_states,) - - layer_outputs = layer_module( - hidden_states, attention_mask, head_mask[i], output_attentions, training=training - ) - hidden_states = layer_outputs[0] - - if output_attentions: - all_attentions = all_attentions + (layer_outputs[1],) - - # Add last layer - if output_hidden_states: - all_hidden_states = all_hidden_states + (hidden_states,) - - if not return_dict: - return tuple(v for v in [hidden_states, all_hidden_states, all_attentions] if v is not None) - - return TFBaseModelOutput( - last_hidden_state=hidden_states, hidden_states=all_hidden_states, attentions=all_attentions - ) - - -class TFConvBertPredictionHeadTransform(tf.keras.layers.Layer): - def __init__(self, config, **kwargs): - super().__init__(**kwargs) - - self.dense = tf.keras.layers.Dense( - config.embedding_size, kernel_initializer=get_initializer(config.initializer_range), name="dense" - ) - - if isinstance(config.hidden_act, str): - self.transform_act_fn = get_tf_activation(config.hidden_act) - else: - self.transform_act_fn = config.hidden_act - - self.LayerNorm = tf.keras.layers.LayerNormalization(epsilon=config.layer_norm_eps, name="LayerNorm") - - def call(self, hidden_states): - hidden_states = self.dense(hidden_states) - hidden_states = self.transform_act_fn(hidden_states) - hidden_states = self.LayerNorm(hidden_states) - - return hidden_states - - -@keras_serializable -class TFConvBertMainLayer(tf.keras.layers.Layer): - config_class = ConvBertConfig - - def __init__(self, config, **kwargs): - super().__init__(**kwargs) - - self.embeddings = TFConvBertEmbeddings(config, name="embeddings") - - if config.embedding_size != config.hidden_size: - self.embeddings_project = tf.keras.layers.Dense(config.hidden_size, name="embeddings_project") - - self.encoder = TFConvBertEncoder(config, name="encoder") - self.config = config - - def get_input_embeddings(self): - return self.embeddings - - def set_input_embeddings(self, value): - self.embeddings.weight = value - self.embeddings.vocab_size = value.shape[0] - - def _prune_heads(self, heads_to_prune): - """ - Prunes heads of the model. heads_to_prune: dict of {layer_num: list of heads to prune in this layer} See base - class PreTrainedModel - """ - raise NotImplementedError - - def get_extended_attention_mask(self, attention_mask, input_shape, dtype): - if attention_mask is None: - attention_mask = tf.fill(input_shape, 1) - - # We create a 3D attention mask from a 2D tensor mask. - # Sizes are [batch_size, 1, 1, to_seq_length] - # So we can broadcast to [batch_size, num_heads, from_seq_length, to_seq_length] - # this attention mask is more simple than the triangular masking of causal attention - # used in OpenAI GPT, we just need to prepare the broadcast dimension here. - extended_attention_mask = tf.reshape(attention_mask, (input_shape[0], 1, 1, input_shape[1])) - - # Since attention_mask is 1.0 for positions we want to attend and 0.0 for - # masked positions, this operation will create a tensor which is 0.0 for - # positions we want to attend and -10000.0 for masked positions. - # Since we are adding it to the raw scores before the softmax, this is - # effectively the same as removing these entirely. - extended_attention_mask = tf.cast(extended_attention_mask, dtype) - extended_attention_mask = (1.0 - extended_attention_mask) * -10000.0 - - return extended_attention_mask - - def get_head_mask(self, head_mask): - if head_mask is not None: - raise NotImplementedError - else: - head_mask = [None] * self.config.num_hidden_layers - - return head_mask - - @unpack_inputs - def call( - self, - input_ids=None, - attention_mask=None, - token_type_ids=None, - position_ids=None, - head_mask=None, - inputs_embeds=None, - output_attentions=None, - output_hidden_states=None, - return_dict=None, - training=False, - ): - if input_ids is not None and inputs_embeds is not None: - raise ValueError("You cannot specify both input_ids and inputs_embeds at the same time") - elif input_ids is not None: - input_shape = shape_list(input_ids) - elif inputs_embeds is not None: - input_shape = shape_list(inputs_embeds)[:-1] - else: - raise ValueError("You have to specify either input_ids or inputs_embeds") - - if attention_mask is None: - attention_mask = tf.fill(input_shape, 1) - - if token_type_ids is None: - token_type_ids = tf.fill(input_shape, 0) - - hidden_states = self.embeddings(input_ids, position_ids, token_type_ids, inputs_embeds, training=training) - extended_attention_mask = self.get_extended_attention_mask(attention_mask, input_shape, hidden_states.dtype) - head_mask = self.get_head_mask(head_mask) - - if hasattr(self, "embeddings_project"): - hidden_states = self.embeddings_project(hidden_states, training=training) - - hidden_states = self.encoder( - hidden_states, - extended_attention_mask, - head_mask, - output_attentions, - output_hidden_states, - return_dict, - training=training, - ) - - return hidden_states - - -class TFConvBertPreTrainedModel(TFPreTrainedModel): - """ - An abstract class to handle weights initialization and a simple interface for downloading and loading pretrained - models. - """ - - config_class = ConvBertConfig - base_model_prefix = "convbert" - - -CONVBERT_START_DOCSTRING = r""" - - This model inherits from [`TFPreTrainedModel`]. Check the superclass documentation for the generic methods the - library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads - etc.) - - This model is also a [tf.keras.Model](https://www.tensorflow.org/api_docs/python/tf/keras/Model) subclass. Use it - as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matter related to general usage and - behavior. - - - - TensorFlow models and layers in `transformers` accept two formats as input: - - - having all inputs as keyword arguments (like PyTorch models), or - - having all inputs as a list, tuple or dict in the first positional argument. - - The reason the second format is supported is that Keras methods prefer this format when passing inputs to models - and layers. Because of this support, when using methods like `model.fit()` things should "just work" for you - just - pass your inputs and labels in any format that `model.fit()` supports! If, however, you want to use the second - format outside of Keras methods like `fit()` and `predict()`, such as when creating your own layers or models with - the Keras `Functional` API, there are three possibilities you can use to gather all the input Tensors in the first - positional argument: - - - a single Tensor with `input_ids` only and nothing else: `model(input_ids)` - - a list of varying length with one or several input Tensors IN THE ORDER given in the docstring: - `model([input_ids, attention_mask])` or `model([input_ids, attention_mask, token_type_ids])` - - a dictionary with one or several input Tensors associated to the input names given in the docstring: - `model({"input_ids": input_ids, "token_type_ids": token_type_ids})` - - Note that when creating models and layers with - [subclassing](https://keras.io/guides/making_new_layers_and_models_via_subclassing/) then you don't need to worry - about any of this, as you can just pass inputs like you would to any other Python function! - - - - Args: - config ([`ConvBertConfig`]): Model configuration class with all the parameters of the model. - Initializing with a config file does not load the weights associated with the model, only the - configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. -""" - -CONVBERT_INPUTS_DOCSTRING = r""" - Args: - input_ids (`Numpy array` or `tf.Tensor` of shape `({0})`): - Indices of input sequence tokens in the vocabulary. - - Indices can be obtained using [`AutoTokenizer`]. See [`PreTrainedTokenizer.__call__`] and - [`PreTrainedTokenizer.encode`] for details. - - [What are input IDs?](../glossary#input-ids) - attention_mask (`Numpy array` or `tf.Tensor` of shape `({0})`, *optional*): - Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`: - - - 1 for tokens that are **not masked**, - - 0 for tokens that are **masked**. - - [What are attention masks?](../glossary#attention-mask) - token_type_ids (`Numpy array` or `tf.Tensor` of shape `({0})`, *optional*): - Segment token indices to indicate first and second portions of the inputs. Indices are selected in `[0, - 1]`: - - - 0 corresponds to a *sentence A* token, - - 1 corresponds to a *sentence B* token. - - [What are token type IDs?](../glossary#token-type-ids) - position_ids (`Numpy array` or `tf.Tensor` of shape `({0})`, *optional*): - Indices of positions of each input sequence tokens in the position embeddings. Selected in the range `[0, - config.max_position_embeddings - 1]`. - - [What are position IDs?](../glossary#position-ids) - head_mask (`Numpy array` or `tf.Tensor` of shape `(num_heads,)` or `(num_layers, num_heads)`, *optional*): - Mask to nullify selected heads of the self-attention modules. Mask values selected in `[0, 1]`: - - - 1 indicates the head is **not masked**, - - 0 indicates the head is **masked**. - - inputs_embeds (`tf.Tensor` of shape `({0}, hidden_size)`, *optional*): - Optionally, instead of passing `input_ids` you can choose to directly pass an embedded representation. This - is useful if you want more control over how to convert `input_ids` indices into associated vectors than the - model's internal embedding lookup matrix. - output_attentions (`bool`, *optional*): - Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned - tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the - config will be used instead. - output_hidden_states (`bool`, *optional*): - Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for - more detail. This argument can be used only in eager mode, in graph mode the value in the config will be - used instead. - return_dict (`bool`, *optional*): - Whether or not to return a [`~utils.ModelOutput`] instead of a plain tuple. This argument can be used in - eager mode, in graph mode the value will always be set to True. - training (`bool`, *optional*, defaults to `False`): - Whether or not to use the model in training mode (some modules like dropout modules have different - behaviors between training and evaluation). -""" - - -@add_start_docstrings( - "The bare ConvBERT Model transformer outputting raw hidden-states without any specific head on top.", - CONVBERT_START_DOCSTRING, -) -class TFConvBertModel(TFConvBertPreTrainedModel): - def __init__(self, config, *inputs, **kwargs): - super().__init__(config, *inputs, **kwargs) - - self.convbert = TFConvBertMainLayer(config, name="convbert") - - @unpack_inputs - @add_start_docstrings_to_model_forward(CONVBERT_INPUTS_DOCSTRING.format("batch_size, sequence_length")) - @add_code_sample_docstrings( - checkpoint=_CHECKPOINT_FOR_DOC, - output_type=TFBaseModelOutput, - config_class=_CONFIG_FOR_DOC, - ) - def call( - self, - input_ids: TFModelInputType | None = None, - attention_mask: Optional[Union[np.array, tf.Tensor]] = None, - token_type_ids: Optional[Union[np.array, tf.Tensor]] = None, - position_ids: Optional[Union[np.array, tf.Tensor]] = None, - head_mask: Optional[Union[np.array, tf.Tensor]] = None, - inputs_embeds: tf.Tensor | None = None, - output_attentions: Optional[bool] = None, - output_hidden_states: Optional[bool] = None, - return_dict: Optional[bool] = None, - training: bool = False, - ) -> Union[TFBaseModelOutput, Tuple[tf.Tensor]]: - outputs = self.convbert( - input_ids=input_ids, - attention_mask=attention_mask, - token_type_ids=token_type_ids, - position_ids=position_ids, - head_mask=head_mask, - inputs_embeds=inputs_embeds, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - training=training, - ) - - return outputs - - -class TFConvBertMaskedLMHead(tf.keras.layers.Layer): - def __init__(self, config, input_embeddings, **kwargs): - super().__init__(**kwargs) - - self.config = config - self.embedding_size = config.embedding_size - self.input_embeddings = input_embeddings - - def build(self, input_shape): - self.bias = self.add_weight(shape=(self.config.vocab_size,), initializer="zeros", trainable=True, name="bias") - - super().build(input_shape) - - def get_output_embeddings(self): - return self.input_embeddings - - def set_output_embeddings(self, value): - self.input_embeddings.weight = value - self.input_embeddings.vocab_size = shape_list(value)[0] - - def get_bias(self): - return {"bias": self.bias} - - def set_bias(self, value): - self.bias = value["bias"] - self.config.vocab_size = shape_list(value["bias"])[0] - - def call(self, hidden_states): - seq_length = shape_list(tensor=hidden_states)[1] - hidden_states = tf.reshape(tensor=hidden_states, shape=[-1, self.embedding_size]) - hidden_states = tf.matmul(a=hidden_states, b=self.input_embeddings.weight, transpose_b=True) - hidden_states = tf.reshape(tensor=hidden_states, shape=[-1, seq_length, self.config.vocab_size]) - hidden_states = tf.nn.bias_add(value=hidden_states, bias=self.bias) - - return hidden_states - - -class TFConvBertGeneratorPredictions(tf.keras.layers.Layer): - def __init__(self, config, **kwargs): - super().__init__(**kwargs) - - self.LayerNorm = tf.keras.layers.LayerNormalization(epsilon=config.layer_norm_eps, name="LayerNorm") - self.dense = tf.keras.layers.Dense(config.embedding_size, name="dense") - - def call(self, generator_hidden_states, training=False): - hidden_states = self.dense(generator_hidden_states) - hidden_states = get_tf_activation("gelu")(hidden_states) - hidden_states = self.LayerNorm(hidden_states) - - return hidden_states - - -@add_start_docstrings("""ConvBERT Model with a `language modeling` head on top.""", CONVBERT_START_DOCSTRING) -class TFConvBertForMaskedLM(TFConvBertPreTrainedModel, TFMaskedLanguageModelingLoss): - def __init__(self, config, *inputs, **kwargs): - super().__init__(config, **kwargs) - - self.config = config - self.convbert = TFConvBertMainLayer(config, name="convbert") - self.generator_predictions = TFConvBertGeneratorPredictions(config, name="generator_predictions") - - if isinstance(config.hidden_act, str): - self.activation = get_tf_activation(config.hidden_act) - else: - self.activation = config.hidden_act - - self.generator_lm_head = TFConvBertMaskedLMHead(config, self.convbert.embeddings, name="generator_lm_head") - - def get_lm_head(self): - return self.generator_lm_head - - def get_prefix_bias_name(self): - return self.name + "/" + self.generator_lm_head.name - - @unpack_inputs - @add_start_docstrings_to_model_forward(CONVBERT_INPUTS_DOCSTRING.format("batch_size, sequence_length")) - @add_code_sample_docstrings( - checkpoint=_CHECKPOINT_FOR_DOC, - output_type=TFMaskedLMOutput, - config_class=_CONFIG_FOR_DOC, - ) - def call( - self, - input_ids: TFModelInputType | None = None, - attention_mask: np.ndarray | tf.Tensor | None = None, - token_type_ids: np.ndarray | tf.Tensor | None = None, - position_ids: np.ndarray | tf.Tensor | None = None, - head_mask: np.ndarray | tf.Tensor | None = None, - inputs_embeds: tf.Tensor | None = None, - output_attentions: Optional[bool] = None, - output_hidden_states: Optional[bool] = None, - return_dict: Optional[bool] = None, - labels: tf.Tensor | None = None, - training: Optional[bool] = False, - ) -> Union[Tuple, TFMaskedLMOutput]: - r""" - labels (`tf.Tensor` of shape `(batch_size, sequence_length)`, *optional*): - Labels for computing the masked language modeling loss. Indices should be in `[-100, 0, ..., - config.vocab_size]` (see `input_ids` docstring) Tokens with indices set to `-100` are ignored (masked), the - loss is only computed for the tokens with labels in `[0, ..., config.vocab_size]` - """ - generator_hidden_states = self.convbert( - input_ids=input_ids, - attention_mask=attention_mask, - token_type_ids=token_type_ids, - position_ids=position_ids, - head_mask=head_mask, - inputs_embeds=inputs_embeds, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - training=training, - ) - generator_sequence_output = generator_hidden_states[0] - prediction_scores = self.generator_predictions(generator_sequence_output, training=training) - prediction_scores = self.generator_lm_head(prediction_scores, training=training) - loss = None if labels is None else self.hf_compute_loss(labels, prediction_scores) - - if not return_dict: - output = (prediction_scores,) + generator_hidden_states[1:] - - return ((loss,) + output) if loss is not None else output - - return TFMaskedLMOutput( - loss=loss, - logits=prediction_scores, - hidden_states=generator_hidden_states.hidden_states, - attentions=generator_hidden_states.attentions, - ) - - -class TFConvBertClassificationHead(tf.keras.layers.Layer): - """Head for sentence-level classification tasks.""" - - def __init__(self, config, **kwargs): - super().__init__(**kwargs) - - self.dense = tf.keras.layers.Dense( - config.hidden_size, kernel_initializer=get_initializer(config.initializer_range), name="dense" - ) - classifier_dropout = ( - config.classifier_dropout if config.classifier_dropout is not None else config.hidden_dropout_prob - ) - self.dropout = tf.keras.layers.Dropout(classifier_dropout) - self.out_proj = tf.keras.layers.Dense( - config.num_labels, kernel_initializer=get_initializer(config.initializer_range), name="out_proj" - ) - - self.config = config - - def call(self, hidden_states, **kwargs): - x = hidden_states[:, 0, :] # take token (equiv. to [CLS]) - x = self.dropout(x) - x = self.dense(x) - x = get_tf_activation(self.config.hidden_act)(x) - x = self.dropout(x) - x = self.out_proj(x) - - return x - - -@add_start_docstrings( - """ - ConvBERT Model transformer with a sequence classification/regression head on top e.g., for GLUE tasks. - """, - CONVBERT_START_DOCSTRING, -) -class TFConvBertForSequenceClassification(TFConvBertPreTrainedModel, TFSequenceClassificationLoss): - def __init__(self, config, *inputs, **kwargs): - super().__init__(config, *inputs, **kwargs) - self.num_labels = config.num_labels - self.convbert = TFConvBertMainLayer(config, name="convbert") - self.classifier = TFConvBertClassificationHead(config, name="classifier") - - @unpack_inputs - @add_start_docstrings_to_model_forward(CONVBERT_INPUTS_DOCSTRING.format("batch_size, sequence_length")) - @add_code_sample_docstrings( - checkpoint=_CHECKPOINT_FOR_DOC, - output_type=TFSequenceClassifierOutput, - config_class=_CONFIG_FOR_DOC, - ) - def call( - self, - input_ids: TFModelInputType | None = None, - attention_mask: np.ndarray | tf.Tensor | None = None, - token_type_ids: np.ndarray | tf.Tensor | None = None, - position_ids: np.ndarray | tf.Tensor | None = None, - head_mask: np.ndarray | tf.Tensor | None = None, - inputs_embeds: tf.Tensor | None = None, - output_attentions: Optional[bool] = None, - output_hidden_states: Optional[bool] = None, - return_dict: Optional[bool] = None, - labels: tf.Tensor | None = None, - training: Optional[bool] = False, - ) -> Union[Tuple, TFSequenceClassifierOutput]: - r""" - labels (`tf.Tensor` of shape `(batch_size,)`, *optional*): - Labels for computing the sequence classification/regression loss. Indices should be in `[0, ..., - config.num_labels - 1]`. If `config.num_labels == 1` a regression loss is computed (Mean-Square loss), If - `config.num_labels > 1` a classification loss is computed (Cross-Entropy). - """ - outputs = self.convbert( - input_ids, - attention_mask=attention_mask, - token_type_ids=token_type_ids, - position_ids=position_ids, - head_mask=head_mask, - inputs_embeds=inputs_embeds, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - training=training, - ) - logits = self.classifier(outputs[0], training=training) - loss = None if labels is None else self.hf_compute_loss(labels, logits) - - if not return_dict: - output = (logits,) + outputs[1:] - - return ((loss,) + output) if loss is not None else output - - return TFSequenceClassifierOutput( - loss=loss, - logits=logits, - hidden_states=outputs.hidden_states, - attentions=outputs.attentions, - ) - - -@add_start_docstrings( - """ - ConvBERT Model with a multiple choice classification head on top (a linear layer on top of the pooled output and a - softmax) e.g. for RocStories/SWAG tasks. - """, - CONVBERT_START_DOCSTRING, -) -class TFConvBertForMultipleChoice(TFConvBertPreTrainedModel, TFMultipleChoiceLoss): - def __init__(self, config, *inputs, **kwargs): - super().__init__(config, *inputs, **kwargs) - - self.convbert = TFConvBertMainLayer(config, name="convbert") - self.sequence_summary = TFSequenceSummary( - config, initializer_range=config.initializer_range, name="sequence_summary" - ) - self.classifier = tf.keras.layers.Dense( - 1, kernel_initializer=get_initializer(config.initializer_range), name="classifier" - ) - - @unpack_inputs - @add_start_docstrings_to_model_forward( - CONVBERT_INPUTS_DOCSTRING.format("batch_size, num_choices, sequence_length") - ) - @add_code_sample_docstrings( - checkpoint=_CHECKPOINT_FOR_DOC, - output_type=TFMultipleChoiceModelOutput, - config_class=_CONFIG_FOR_DOC, - ) - def call( - self, - input_ids: TFModelInputType | None = None, - attention_mask: np.ndarray | tf.Tensor | None = None, - token_type_ids: np.ndarray | tf.Tensor | None = None, - position_ids: np.ndarray | tf.Tensor | None = None, - head_mask: np.ndarray | tf.Tensor | None = None, - inputs_embeds: tf.Tensor | None = None, - output_attentions: Optional[bool] = None, - output_hidden_states: Optional[bool] = None, - return_dict: Optional[bool] = None, - labels: tf.Tensor | None = None, - training: Optional[bool] = False, - ) -> Union[Tuple, TFMultipleChoiceModelOutput]: - r""" - labels (`tf.Tensor` of shape `(batch_size,)`, *optional*): - Labels for computing the multiple choice classification loss. Indices should be in `[0, ..., num_choices]` - where `num_choices` is the size of the second dimension of the input tensors. (See `input_ids` above) - """ - if input_ids is not None: - num_choices = shape_list(input_ids)[1] - seq_length = shape_list(input_ids)[2] - else: - num_choices = shape_list(inputs_embeds)[1] - seq_length = shape_list(inputs_embeds)[2] - - flat_input_ids = tf.reshape(input_ids, (-1, seq_length)) if input_ids is not None else None - flat_attention_mask = tf.reshape(attention_mask, (-1, seq_length)) if attention_mask is not None else None - flat_token_type_ids = tf.reshape(token_type_ids, (-1, seq_length)) if token_type_ids is not None else None - flat_position_ids = tf.reshape(position_ids, (-1, seq_length)) if position_ids is not None else None - flat_inputs_embeds = ( - tf.reshape(inputs_embeds, (-1, seq_length, shape_list(inputs_embeds)[3])) - if inputs_embeds is not None - else None - ) - outputs = self.convbert( - flat_input_ids, - flat_attention_mask, - flat_token_type_ids, - flat_position_ids, - head_mask, - flat_inputs_embeds, - output_attentions, - output_hidden_states, - return_dict=return_dict, - training=training, - ) - logits = self.sequence_summary(outputs[0], training=training) - logits = self.classifier(logits) - reshaped_logits = tf.reshape(logits, (-1, num_choices)) - loss = None if labels is None else self.hf_compute_loss(labels, reshaped_logits) - - if not return_dict: - output = (reshaped_logits,) + outputs[1:] - - return ((loss,) + output) if loss is not None else output - - return TFMultipleChoiceModelOutput( - loss=loss, - logits=reshaped_logits, - hidden_states=outputs.hidden_states, - attentions=outputs.attentions, - ) - - -@add_start_docstrings( - """ - ConvBERT Model with a token classification head on top (a linear layer on top of the hidden-states output) e.g. for - Named-Entity-Recognition (NER) tasks. - """, - CONVBERT_START_DOCSTRING, -) -class TFConvBertForTokenClassification(TFConvBertPreTrainedModel, TFTokenClassificationLoss): - def __init__(self, config, *inputs, **kwargs): - super().__init__(config, *inputs, **kwargs) - - self.num_labels = config.num_labels - self.convbert = TFConvBertMainLayer(config, name="convbert") - classifier_dropout = ( - config.classifier_dropout if config.classifier_dropout is not None else config.hidden_dropout_prob - ) - self.dropout = tf.keras.layers.Dropout(classifier_dropout) - self.classifier = tf.keras.layers.Dense( - config.num_labels, kernel_initializer=get_initializer(config.initializer_range), name="classifier" - ) - - @unpack_inputs - @add_start_docstrings_to_model_forward(CONVBERT_INPUTS_DOCSTRING.format("batch_size, sequence_length")) - @add_code_sample_docstrings( - checkpoint=_CHECKPOINT_FOR_DOC, - output_type=TFTokenClassifierOutput, - config_class=_CONFIG_FOR_DOC, - ) - def call( - self, - input_ids: TFModelInputType | None = None, - attention_mask: np.ndarray | tf.Tensor | None = None, - token_type_ids: np.ndarray | tf.Tensor | None = None, - position_ids: np.ndarray | tf.Tensor | None = None, - head_mask: np.ndarray | tf.Tensor | None = None, - inputs_embeds: tf.Tensor | None = None, - output_attentions: Optional[bool] = None, - output_hidden_states: Optional[bool] = None, - return_dict: Optional[bool] = None, - labels: tf.Tensor | None = None, - training: Optional[bool] = False, - ) -> Union[Tuple, TFTokenClassifierOutput]: - r""" - labels (`tf.Tensor` of shape `(batch_size, sequence_length)`, *optional*): - Labels for computing the token classification loss. Indices should be in `[0, ..., config.num_labels - 1]`. - """ - outputs = self.convbert( - input_ids, - attention_mask=attention_mask, - token_type_ids=token_type_ids, - position_ids=position_ids, - head_mask=head_mask, - inputs_embeds=inputs_embeds, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - training=training, - ) - sequence_output = outputs[0] - sequence_output = self.dropout(sequence_output, training=training) - logits = self.classifier(sequence_output) - loss = None if labels is None else self.hf_compute_loss(labels, logits) - - if not return_dict: - output = (logits,) + outputs[1:] - return ((loss,) + output) if loss is not None else output - - return TFTokenClassifierOutput( - loss=loss, - logits=logits, - hidden_states=outputs.hidden_states, - attentions=outputs.attentions, - ) - - -@add_start_docstrings( - """ - ConvBERT Model with a span classification head on top for extractive question-answering tasks like SQuAD (a linear - layer on top of the hidden-states output to compute `span start logits` and `span end logits`). - """, - CONVBERT_START_DOCSTRING, -) -class TFConvBertForQuestionAnswering(TFConvBertPreTrainedModel, TFQuestionAnsweringLoss): - def __init__(self, config, *inputs, **kwargs): - super().__init__(config, *inputs, **kwargs) - - self.num_labels = config.num_labels - self.convbert = TFConvBertMainLayer(config, name="convbert") - self.qa_outputs = tf.keras.layers.Dense( - config.num_labels, kernel_initializer=get_initializer(config.initializer_range), name="qa_outputs" - ) - - @unpack_inputs - @add_start_docstrings_to_model_forward(CONVBERT_INPUTS_DOCSTRING.format("batch_size, sequence_length")) - @add_code_sample_docstrings( - checkpoint=_CHECKPOINT_FOR_DOC, - output_type=TFQuestionAnsweringModelOutput, - config_class=_CONFIG_FOR_DOC, - ) - def call( - self, - input_ids: TFModelInputType | None = None, - attention_mask: np.ndarray | tf.Tensor | None = None, - token_type_ids: np.ndarray | tf.Tensor | None = None, - position_ids: np.ndarray | tf.Tensor | None = None, - head_mask: np.ndarray | tf.Tensor | None = None, - inputs_embeds: tf.Tensor | None = None, - output_attentions: Optional[bool] = None, - output_hidden_states: Optional[bool] = None, - return_dict: Optional[bool] = None, - start_positions: tf.Tensor | None = None, - end_positions: tf.Tensor | None = None, - training: Optional[bool] = False, - ) -> Union[Tuple, TFQuestionAnsweringModelOutput]: - r""" - start_positions (`tf.Tensor` of shape `(batch_size,)`, *optional*): - Labels for position (index) of the start of the labelled span for computing the token classification loss. - Positions are clamped to the length of the sequence (`sequence_length`). Position outside of the sequence - are not taken into account for computing the loss. - end_positions (`tf.Tensor` of shape `(batch_size,)`, *optional*): - Labels for position (index) of the end of the labelled span for computing the token classification loss. - Positions are clamped to the length of the sequence (`sequence_length`). Position outside of the sequence - are not taken into account for computing the loss. - """ - outputs = self.convbert( - input_ids, - attention_mask=attention_mask, - token_type_ids=token_type_ids, - position_ids=position_ids, - head_mask=head_mask, - inputs_embeds=inputs_embeds, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - training=training, - ) - sequence_output = outputs[0] - logits = self.qa_outputs(sequence_output) - start_logits, end_logits = tf.split(logits, 2, axis=-1) - start_logits = tf.squeeze(start_logits, axis=-1) - end_logits = tf.squeeze(end_logits, axis=-1) - loss = None - - if start_positions is not None and end_positions is not None: - labels = {"start_position": start_positions} - labels["end_position"] = end_positions - loss = self.hf_compute_loss(labels, (start_logits, end_logits)) - - if not return_dict: - output = (start_logits, end_logits) + outputs[1:] - return ((loss,) + output) if loss is not None else output - - return TFQuestionAnsweringModelOutput( - loss=loss, - start_logits=start_logits, - end_logits=end_logits, - hidden_states=outputs.hidden_states, - attentions=outputs.attentions, - ) diff --git a/spaces/ynhe/AskAnything/models/grit_src/third_party/CenterNet2/detectron2/export/caffe2_modeling.py b/spaces/ynhe/AskAnything/models/grit_src/third_party/CenterNet2/detectron2/export/caffe2_modeling.py deleted file mode 100644 index e00de4ad28fd81483c9e1161394b7b508fdad91f..0000000000000000000000000000000000000000 --- a/spaces/ynhe/AskAnything/models/grit_src/third_party/CenterNet2/detectron2/export/caffe2_modeling.py +++ /dev/null @@ -1,419 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. - -import functools -import io -import struct -import types -import torch - -from detectron2.modeling import meta_arch -from detectron2.modeling.box_regression import Box2BoxTransform -from detectron2.modeling.roi_heads import keypoint_head -from detectron2.structures import Boxes, ImageList, Instances, RotatedBoxes - -from .c10 import Caffe2Compatible -from .caffe2_patch import ROIHeadsPatcher, patch_generalized_rcnn -from .shared import ( - alias, - check_set_pb_arg, - get_pb_arg_floats, - get_pb_arg_valf, - get_pb_arg_vali, - get_pb_arg_vals, - mock_torch_nn_functional_interpolate, -) - - -def assemble_rcnn_outputs_by_name(image_sizes, tensor_outputs, force_mask_on=False): - """ - A function to assemble caffe2 model's outputs (i.e. Dict[str, Tensor]) - to detectron2's format (i.e. list of Instances instance). - This only works when the model follows the Caffe2 detectron's naming convention. - - Args: - image_sizes (List[List[int, int]]): [H, W] of every image. - tensor_outputs (Dict[str, Tensor]): external_output to its tensor. - - force_mask_on (Bool): if true, the it make sure there'll be pred_masks even - if the mask is not found from tensor_outputs (usually due to model crash) - """ - - results = [Instances(image_size) for image_size in image_sizes] - - batch_splits = tensor_outputs.get("batch_splits", None) - if batch_splits: - raise NotImplementedError() - assert len(image_sizes) == 1 - result = results[0] - - bbox_nms = tensor_outputs["bbox_nms"] - score_nms = tensor_outputs["score_nms"] - class_nms = tensor_outputs["class_nms"] - # Detection will always success because Conv support 0-batch - assert bbox_nms is not None - assert score_nms is not None - assert class_nms is not None - if bbox_nms.shape[1] == 5: - result.pred_boxes = RotatedBoxes(bbox_nms) - else: - result.pred_boxes = Boxes(bbox_nms) - result.scores = score_nms - result.pred_classes = class_nms.to(torch.int64) - - mask_fcn_probs = tensor_outputs.get("mask_fcn_probs", None) - if mask_fcn_probs is not None: - # finish the mask pred - mask_probs_pred = mask_fcn_probs - num_masks = mask_probs_pred.shape[0] - class_pred = result.pred_classes - indices = torch.arange(num_masks, device=class_pred.device) - mask_probs_pred = mask_probs_pred[indices, class_pred][:, None] - result.pred_masks = mask_probs_pred - elif force_mask_on: - # NOTE: there's no way to know the height/width of mask here, it won't be - # used anyway when batch size is 0, so just set them to 0. - result.pred_masks = torch.zeros([0, 1, 0, 0], dtype=torch.uint8) - - keypoints_out = tensor_outputs.get("keypoints_out", None) - kps_score = tensor_outputs.get("kps_score", None) - if keypoints_out is not None: - # keypoints_out: [N, 4, #kypoints], where 4 is in order of (x, y, score, prob) - keypoints_tensor = keypoints_out - # NOTE: it's possible that prob is not calculated if "should_output_softmax" - # is set to False in HeatmapMaxKeypoint, so just using raw score, seems - # it doesn't affect mAP. TODO: check more carefully. - keypoint_xyp = keypoints_tensor.transpose(1, 2)[:, :, [0, 1, 2]] - result.pred_keypoints = keypoint_xyp - elif kps_score is not None: - # keypoint heatmap to sparse data structure - pred_keypoint_logits = kps_score - keypoint_head.keypoint_rcnn_inference(pred_keypoint_logits, [result]) - - return results - - -def _cast_to_f32(f64): - return struct.unpack("f", struct.pack("f", f64))[0] - - -def set_caffe2_compatible_tensor_mode(model, enable=True): - def _fn(m): - if isinstance(m, Caffe2Compatible): - m.tensor_mode = enable - - model.apply(_fn) - - -def convert_batched_inputs_to_c2_format(batched_inputs, size_divisibility, device): - """ - See get_caffe2_inputs() below. - """ - assert all(isinstance(x, dict) for x in batched_inputs) - assert all(x["image"].dim() == 3 for x in batched_inputs) - - images = [x["image"] for x in batched_inputs] - images = ImageList.from_tensors(images, size_divisibility) - - im_info = [] - for input_per_image, image_size in zip(batched_inputs, images.image_sizes): - target_height = input_per_image.get("height", image_size[0]) - target_width = input_per_image.get("width", image_size[1]) # noqa - # NOTE: The scale inside im_info is kept as convention and for providing - # post-processing information if further processing is needed. For - # current Caffe2 model definitions that don't include post-processing inside - # the model, this number is not used. - # NOTE: There can be a slight difference between width and height - # scales, using a single number can results in numerical difference - # compared with D2's post-processing. - scale = target_height / image_size[0] - im_info.append([image_size[0], image_size[1], scale]) - im_info = torch.Tensor(im_info) - - return images.tensor.to(device), im_info.to(device) - - -class Caffe2MetaArch(Caffe2Compatible, torch.nn.Module): - """ - Base class for caffe2-compatible implementation of a meta architecture. - The forward is traceable and its traced graph can be converted to caffe2 - graph through ONNX. - """ - - def __init__(self, cfg, torch_model): - """ - Args: - cfg (CfgNode): - torch_model (nn.Module): the detectron2 model (meta_arch) to be - converted. - """ - super().__init__() - self._wrapped_model = torch_model - self.eval() - set_caffe2_compatible_tensor_mode(self, True) - - def get_caffe2_inputs(self, batched_inputs): - """ - Convert pytorch-style structured inputs to caffe2-style inputs that - are tuples of tensors. - - Args: - batched_inputs (list[dict]): inputs to a detectron2 model - in its standard format. Each dict has "image" (CHW tensor), and optionally - "height" and "width". - - Returns: - tuple[Tensor]: - tuple of tensors that will be the inputs to the - :meth:`forward` method. For existing models, the first - is an NCHW tensor (padded and batched); the second is - a im_info Nx3 tensor, where the rows are - (height, width, unused legacy parameter) - """ - return convert_batched_inputs_to_c2_format( - batched_inputs, - self._wrapped_model.backbone.size_divisibility, - self._wrapped_model.device, - ) - - def encode_additional_info(self, predict_net, init_net): - """ - Save extra metadata that will be used by inference in the output protobuf. - """ - pass - - def forward(self, inputs): - """ - Run the forward in caffe2-style. It has to use caffe2-compatible ops - and the method will be used for tracing. - - Args: - inputs (tuple[Tensor]): inputs defined by :meth:`get_caffe2_input`. - They will be the inputs of the converted caffe2 graph. - - Returns: - tuple[Tensor]: output tensors. They will be the outputs of the - converted caffe2 graph. - """ - raise NotImplementedError - - def _caffe2_preprocess_image(self, inputs): - """ - Caffe2 implementation of preprocess_image, which is called inside each MetaArch's forward. - It normalizes the input images, and the final caffe2 graph assumes the - inputs have been batched already. - """ - data, im_info = inputs - data = alias(data, "data") - im_info = alias(im_info, "im_info") - mean, std = self._wrapped_model.pixel_mean, self._wrapped_model.pixel_std - normalized_data = (data - mean) / std - normalized_data = alias(normalized_data, "normalized_data") - - # Pack (data, im_info) into ImageList which is recognized by self.inference. - images = ImageList(tensor=normalized_data, image_sizes=im_info) - return images - - @staticmethod - def get_outputs_converter(predict_net, init_net): - """ - Creates a function that converts outputs of the caffe2 model to - detectron2's standard format. - The function uses information in `predict_net` and `init_net` that are - available at inferene time. Therefore the function logic can be used in inference. - - The returned function has the following signature: - - def convert(batched_inputs, c2_inputs, c2_results) -> detectron2_outputs - - Where - - * batched_inputs (list[dict]): the original input format of the meta arch - * c2_inputs (tuple[Tensor]): the caffe2 inputs. - * c2_results (dict[str, Tensor]): the caffe2 output format, - corresponding to the outputs of the :meth:`forward` function. - * detectron2_outputs: the original output format of the meta arch. - - This function can be used to compare the outputs of the original meta arch and - the converted caffe2 graph. - - Returns: - callable: a callable of the above signature. - """ - raise NotImplementedError - - -class Caffe2GeneralizedRCNN(Caffe2MetaArch): - def __init__(self, cfg, torch_model): - assert isinstance(torch_model, meta_arch.GeneralizedRCNN) - torch_model = patch_generalized_rcnn(torch_model) - super().__init__(cfg, torch_model) - - try: - use_heatmap_max_keypoint = cfg.EXPORT_CAFFE2.USE_HEATMAP_MAX_KEYPOINT - except AttributeError: - use_heatmap_max_keypoint = False - self.roi_heads_patcher = ROIHeadsPatcher( - self._wrapped_model.roi_heads, use_heatmap_max_keypoint - ) - - def encode_additional_info(self, predict_net, init_net): - size_divisibility = self._wrapped_model.backbone.size_divisibility - check_set_pb_arg(predict_net, "size_divisibility", "i", size_divisibility) - check_set_pb_arg( - predict_net, "device", "s", str.encode(str(self._wrapped_model.device), "ascii") - ) - check_set_pb_arg(predict_net, "meta_architecture", "s", b"GeneralizedRCNN") - - @mock_torch_nn_functional_interpolate() - def forward(self, inputs): - if not self.tensor_mode: - return self._wrapped_model.inference(inputs) - images = self._caffe2_preprocess_image(inputs) - features = self._wrapped_model.backbone(images.tensor) - proposals, _ = self._wrapped_model.proposal_generator(images, features) - with self.roi_heads_patcher.mock_roi_heads(): - detector_results, _ = self._wrapped_model.roi_heads(images, features, proposals) - return tuple(detector_results[0].flatten()) - - @staticmethod - def get_outputs_converter(predict_net, init_net): - def f(batched_inputs, c2_inputs, c2_results): - _, im_info = c2_inputs - image_sizes = [[int(im[0]), int(im[1])] for im in im_info] - results = assemble_rcnn_outputs_by_name(image_sizes, c2_results) - return meta_arch.GeneralizedRCNN._postprocess(results, batched_inputs, image_sizes) - - return f - - -class Caffe2RetinaNet(Caffe2MetaArch): - def __init__(self, cfg, torch_model): - assert isinstance(torch_model, meta_arch.RetinaNet) - super().__init__(cfg, torch_model) - - @mock_torch_nn_functional_interpolate() - def forward(self, inputs): - assert self.tensor_mode - images = self._caffe2_preprocess_image(inputs) - - # explicitly return the images sizes to avoid removing "im_info" by ONNX - # since it's not used in the forward path - return_tensors = [images.image_sizes] - - features = self._wrapped_model.backbone(images.tensor) - features = [features[f] for f in self._wrapped_model.head_in_features] - for i, feature_i in enumerate(features): - features[i] = alias(feature_i, "feature_{}".format(i), is_backward=True) - return_tensors.append(features[i]) - - pred_logits, pred_anchor_deltas = self._wrapped_model.head(features) - for i, (box_cls_i, box_delta_i) in enumerate(zip(pred_logits, pred_anchor_deltas)): - return_tensors.append(alias(box_cls_i, "box_cls_{}".format(i))) - return_tensors.append(alias(box_delta_i, "box_delta_{}".format(i))) - - return tuple(return_tensors) - - def encode_additional_info(self, predict_net, init_net): - size_divisibility = self._wrapped_model.backbone.size_divisibility - check_set_pb_arg(predict_net, "size_divisibility", "i", size_divisibility) - check_set_pb_arg( - predict_net, "device", "s", str.encode(str(self._wrapped_model.device), "ascii") - ) - check_set_pb_arg(predict_net, "meta_architecture", "s", b"RetinaNet") - - # Inference parameters: - check_set_pb_arg( - predict_net, "score_threshold", "f", _cast_to_f32(self._wrapped_model.test_score_thresh) - ) - check_set_pb_arg( - predict_net, "topk_candidates", "i", self._wrapped_model.test_topk_candidates - ) - check_set_pb_arg( - predict_net, "nms_threshold", "f", _cast_to_f32(self._wrapped_model.test_nms_thresh) - ) - check_set_pb_arg( - predict_net, - "max_detections_per_image", - "i", - self._wrapped_model.max_detections_per_image, - ) - - check_set_pb_arg( - predict_net, - "bbox_reg_weights", - "floats", - [_cast_to_f32(w) for w in self._wrapped_model.box2box_transform.weights], - ) - self._encode_anchor_generator_cfg(predict_net) - - def _encode_anchor_generator_cfg(self, predict_net): - # serialize anchor_generator for future use - serialized_anchor_generator = io.BytesIO() - torch.save(self._wrapped_model.anchor_generator, serialized_anchor_generator) - # Ideally we can put anchor generating inside the model, then we don't - # need to store this information. - bytes = serialized_anchor_generator.getvalue() - check_set_pb_arg(predict_net, "serialized_anchor_generator", "s", bytes) - - @staticmethod - def get_outputs_converter(predict_net, init_net): - self = types.SimpleNamespace() - serialized_anchor_generator = io.BytesIO( - get_pb_arg_vals(predict_net, "serialized_anchor_generator", None) - ) - self.anchor_generator = torch.load(serialized_anchor_generator) - bbox_reg_weights = get_pb_arg_floats(predict_net, "bbox_reg_weights", None) - self.box2box_transform = Box2BoxTransform(weights=tuple(bbox_reg_weights)) - self.test_score_thresh = get_pb_arg_valf(predict_net, "score_threshold", None) - self.test_topk_candidates = get_pb_arg_vali(predict_net, "topk_candidates", None) - self.test_nms_thresh = get_pb_arg_valf(predict_net, "nms_threshold", None) - self.max_detections_per_image = get_pb_arg_vali( - predict_net, "max_detections_per_image", None - ) - - # hack to reuse inference code from RetinaNet - for meth in [ - "forward_inference", - "inference_single_image", - "_transpose_dense_predictions", - "_decode_multi_level_predictions", - "_decode_per_level_predictions", - ]: - setattr(self, meth, functools.partial(getattr(meta_arch.RetinaNet, meth), self)) - - def f(batched_inputs, c2_inputs, c2_results): - _, im_info = c2_inputs - image_sizes = [[int(im[0]), int(im[1])] for im in im_info] - dummy_images = ImageList( - torch.randn( - ( - len(im_info), - 3, - ) - + tuple(image_sizes[0]) - ), - image_sizes, - ) - - num_features = len([x for x in c2_results.keys() if x.startswith("box_cls_")]) - pred_logits = [c2_results["box_cls_{}".format(i)] for i in range(num_features)] - pred_anchor_deltas = [c2_results["box_delta_{}".format(i)] for i in range(num_features)] - - # For each feature level, feature should have the same batch size and - # spatial dimension as the box_cls and box_delta. - dummy_features = [x.clone()[:, 0:0, :, :] for x in pred_logits] - # self.num_classess can be inferred - self.num_classes = pred_logits[0].shape[1] // (pred_anchor_deltas[0].shape[1] // 4) - - results = self.forward_inference( - dummy_images, dummy_features, [pred_logits, pred_anchor_deltas] - ) - return meta_arch.GeneralizedRCNN._postprocess(results, batched_inputs, image_sizes) - - return f - - -META_ARCH_CAFFE2_EXPORT_TYPE_MAP = { - "GeneralizedRCNN": Caffe2GeneralizedRCNN, - "RetinaNet": Caffe2RetinaNet, -} diff --git a/spaces/ypchang/Variance_Reduction-European_call_option-volatility_K-3D/app.py b/spaces/ypchang/Variance_Reduction-European_call_option-volatility_K-3D/app.py deleted file mode 100644 index d36a51ee8c9a5e6f4521a63c31d605fe8afc4fa9..0000000000000000000000000000000000000000 --- a/spaces/ypchang/Variance_Reduction-European_call_option-volatility_K-3D/app.py +++ /dev/null @@ -1,144 +0,0 @@ -# 不使用 for 迴圈。 - -import numpy as np -from scipy import stats -import plotly.graph_objs as go - -def Black_Scholes(S0, K, r, T, sigma, option_type): - d1 = (np.log(S0/K)+(r+sigma**2/2)*T)/(sigma*np.sqrt(T)) - d2 = d1-sigma*np.sqrt(T) - if option_type == "call": - return S0*stats.norm.cdf(d1)-K*np.exp(-r*T)*stats.norm.cdf(d2) - if option_type == "put": - return K*np.exp(-r*T)*stats.norm.cdf(-d2)-S0*stats.norm.cdf(-d1) - -def European_call_simulation(S0, K, r, T, sigma, Z, moment_matching="False", empirical_martingale="False"): - if Z.ndim == 1: - Z = Z.reshape(-1, 1) - - if moment_matching == "True": - Z = (Z-np.mean(Z, axis=0))/np.std(Z, axis=0) - - ST = S0*np.exp((r-0.5*sigma**2)*T+sigma*np.sqrt(T)*Z) - - if empirical_martingale == "True": - ST = ST/np.mean(np.exp(-r*T)*ST, axis=0)*S0 - """ - # check 對所有 j (重複實驗) ST[:,j] 是否滿足 empirical martingale 性質: - np.mean(np.exp(-r*T)*ST, axis=0) - """ - - payoff = np.maximum(ST-K, 0) - prices = np.exp(-r*T)*payoff - return prices - -def plot_European(S0, K_L, K_U, num_Ks, r, T, sigma_L, sigma_U, num_sigmas, n, random_type, seed, reset_seed, moment_matching, empirical_martingale): - S0 = np.float64(S0) - K_L = np.float64(K_L) - K_U = np.float64(K_U) - num_Ks = int(num_Ks) - r = np.float64(r) - T = np.float64(T) - sigma_L = np.float64(sigma_L) - sigma_U = np.float64(sigma_U) - num_sigmas = int(num_sigmas) - - n = int(n) - seed = int(seed) - - Ks = np.linspace(K_L, K_U, num_Ks) - sigmas = np.linspace(sigma_L, sigma_U, num_sigmas) - - prices_true = np.zeros((num_Ks, num_sigmas)) - prices_mean = np.zeros((num_Ks, num_sigmas)) - prices_std = np.zeros((num_Ks, num_sigmas)) - - np.random.seed(seed) - Sobol_seq = stats.qmc.Sobol(d=1, scramble=True, seed=seed) - - for i in range(num_Ks): - for j in range(num_sigmas): - prices_true[i,j] = Black_Scholes(S0, Ks[i], r, T, sigmas[j], option_type="call") - - if reset_seed == "亂數固定": - np.random.seed(seed) - Sobol_seq = stats.qmc.Sobol(d=1, scramble=True, seed=seed) - - if random_type == "pseudo": - u = np.random.rand(n) - z = stats.norm.ppf(u) - # pseudo-random numbers - - if random_type == "quasi": - u = Sobol_seq.random(n=n) - z = stats.norm.ppf(u) - # quasi-random numbers - - if random_type == "np.random.randn": - z = np.random.randn(n) - # pseudo-random numbers - - if random_type == "np.random.normal": - z = np.random.normal(loc=0.0, scale=1.0, size=n) - # pseudo-random numbers - - z = z[np.isfinite(z)] - """ - random_type = "quasi" - => - z 有可能為 inf - => - z = z[np.isfinite(z)] 刪除 inf 資料。 - - google => np.array drop inf => - filter (remove) nan, inf from numpy array - GitHub Gist => - https://gist.github.com/korakot/9103824e49af7477769d1312e0cf0a88 - """ - prices = European_call_simulation(S0, Ks[i], r, T, sigmas[j], z, moment_matching, empirical_martingale) - prices_mean[i,j] = np.mean(prices) - prices_std[i,j] = np.std(prices)/np.sqrt(len(prices)) - - fig = go.Figure(data=[go.Surface(z=prices_mean, x=Ks, y=sigmas)]) - fig.update_layout(title="Surface plot of call option price", - scene=dict(xaxis_title="K", - yaxis_title="sigma", - zaxis_title="option price"), - width=650, - height=550, - margin=dict(l=40, r=20, b=10, t=50)) - - return fig - - -#%% -# https://www.machinelearningnuggets.com/gradio-tutorial/ -import gradio as gr - -S0 = gr.Textbox(value="100", label="S0") # initial stock price -K_L = gr.Textbox(value="80", label="low bound of K") # low bound of strike price -K_U = gr.Textbox(value="120", label="upper bound of K") # upper bound of strike price -num_Ks = gr.Textbox(value="50", label="number of Ks") # number of strike prices -r = gr.Textbox(value="0.02", label="r") # risk-free interest rate -T = gr.Textbox(value="0.5", label="T") # time to maturity -sigma_L = gr.Textbox(value="0.01", label="low bound of sigma") # low bound of volatility -sigma_U = gr.Textbox(value="0.6", label="upper bound of sigma") # upper bound of volatility -num_sigmas = gr.Textbox(value="50", label="number of sigmas") # number of volatilities -n = gr.Textbox(value="1000", label="n") # number of simulations -random_type = gr.Radio(choices=["pseudo", "quasi", "np.random.randn", "np.random.normal"], - value="pseudo", - label="random_type") -seed = gr.Textbox(value="123457", label="seed") -reset_seed = gr.Radio(choices=["亂數不固定", "亂數固定"], value="亂數不固定", label="reset_seed") -moment_matching = gr.Radio(choices=["False", "True"], value="False", label="moment_matching") -empirical_martingale = gr.Radio(choices=["False", "True"], value="False", label="empirical_martingale") - -inputs = [S0, K_L, K_U, num_Ks, r, T, sigma_L, sigma_U, num_sigmas, n, random_type, seed, reset_seed, moment_matching, empirical_martingale] -outputs = [gr.Plot()] -interface = gr.Interface(fn=plot_European, - inputs=inputs, - outputs=outputs, - title="European call option") - -interface.launch() -# share=True 一定要寫,瀏覽器才可以看到結果, -# 但發佈到 Hugging Face 伺服器,則 share=True 不能寫。 diff --git a/spaces/ysharma/LLaVA_v1/llava/constants.py b/spaces/ysharma/LLaVA_v1/llava/constants.py deleted file mode 100644 index be8cf0204969a6c973f442b383d8e425d684e826..0000000000000000000000000000000000000000 --- a/spaces/ysharma/LLaVA_v1/llava/constants.py +++ /dev/null @@ -1,12 +0,0 @@ -CONTROLLER_HEART_BEAT_EXPIRATION = 30 -WORKER_HEART_BEAT_INTERVAL = 15 - -LOGDIR = "." - -# Model Constants -IGNORE_INDEX = -100 -IMAGE_TOKEN_INDEX = -200 -DEFAULT_IMAGE_TOKEN = "" -DEFAULT_IMAGE_PATCH_TOKEN = "" -DEFAULT_IM_START_TOKEN = "" -DEFAULT_IM_END_TOKEN = "" diff --git a/spaces/ywqisok/ysyy/text/symbols.py b/spaces/ywqisok/ysyy/text/symbols.py deleted file mode 100644 index edfbd24247be8c757275ce80b9ec27a0ffa808f3..0000000000000000000000000000000000000000 --- a/spaces/ywqisok/ysyy/text/symbols.py +++ /dev/null @@ -1,39 +0,0 @@ -''' -Defines the set of symbols used in text input to the model. -''' - -'''# japanese_cleaners -_pad = '_' -_punctuation = ',.!?-' -_letters = 'AEINOQUabdefghijkmnoprstuvwyzʃʧ↓↑ ' -''' - -'''# japanese_cleaners2 -_pad = '_' -_punctuation = ',.!?-~…' -_letters = 'AEINOQUabdefghijkmnoprstuvwyzʃʧʦ↓↑ ' -''' - -'''# korean_cleaners -_pad = '_' -_punctuation = ',.!?…~' -_letters = 'ㄱㄴㄷㄹㅁㅂㅅㅇㅈㅊㅋㅌㅍㅎㄲㄸㅃㅆㅉㅏㅓㅗㅜㅡㅣㅐㅔ ' -''' - -'''# chinese_cleaners -_pad = '_' -_punctuation = ',。!?—…' -_letters = 'ㄅㄆㄇㄈㄉㄊㄋㄌㄍㄎㄏㄐㄑㄒㄓㄔㄕㄖㄗㄘㄙㄚㄛㄜㄝㄞㄟㄠㄡㄢㄣㄤㄥㄦㄧㄨㄩˉˊˇˋ˙ ' -''' - -# zh_ja_mixture_cleaners -_pad = '_' -_punctuation = ',.!?-~…' -_letters = 'AEINOQUabdefghijklmnoprstuvwyzʃʧʦɯɹəɥ⁼ʰ`→↓↑ ' - - -# Export all symbols: -symbols = [_pad] + list(_punctuation) + list(_letters) - -# Special symbol ids -SPACE_ID = symbols.index(" ") \ No newline at end of file diff --git a/spaces/zideliu/styledrop/timm/models/layers/classifier.py b/spaces/zideliu/styledrop/timm/models/layers/classifier.py deleted file mode 100644 index 89fe545819dd19f9aee6fbc12ba59c38e0ca1079..0000000000000000000000000000000000000000 --- a/spaces/zideliu/styledrop/timm/models/layers/classifier.py +++ /dev/null @@ -1,43 +0,0 @@ -""" Classifier head and layer factory - -Hacked together by / Copyright 2020 Ross Wightman -""" -from torch import nn as nn -from torch.nn import functional as F - -from .adaptive_avgmax_pool import SelectAdaptivePool2d -from .linear import Linear - - -def create_classifier(num_features, num_classes, pool_type='avg', use_conv=False): - flatten = not use_conv # flatten when we use a Linear layer after pooling - if not pool_type: - assert num_classes == 0 or use_conv,\ - 'Pooling can only be disabled if classifier is also removed or conv classifier is used' - flatten = False # disable flattening if pooling is pass-through (no pooling) - global_pool = SelectAdaptivePool2d(pool_type=pool_type, flatten=flatten) - num_pooled_features = num_features * global_pool.feat_mult() - if num_classes <= 0: - fc = nn.Identity() # pass-through (no classifier) - elif use_conv: - fc = nn.Conv2d(num_pooled_features, num_classes, 1, bias=True) - else: - # NOTE: using my Linear wrapper that fixes AMP + torchscript casting issue - fc = Linear(num_pooled_features, num_classes, bias=True) - return global_pool, fc - - -class ClassifierHead(nn.Module): - """Classifier head w/ configurable global pooling and dropout.""" - - def __init__(self, in_chs, num_classes, pool_type='avg', drop_rate=0.): - super(ClassifierHead, self).__init__() - self.drop_rate = drop_rate - self.global_pool, self.fc = create_classifier(in_chs, num_classes, pool_type=pool_type) - - def forward(self, x): - x = self.global_pool(x) - if self.drop_rate: - x = F.dropout(x, p=float(self.drop_rate), training=self.training) - x = self.fc(x) - return x