diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/4rabet App for PC How to Get It and Why You Need It.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/4rabet App for PC How to Get It and Why You Need It.md deleted file mode 100644 index 1e9f3860cda79d20b74e5828ae06fb7b58735e0e..0000000000000000000000000000000000000000 --- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/4rabet App for PC How to Get It and Why You Need It.md +++ /dev/null @@ -1,27 +0,0 @@ -
-

How to Download 4rabet App for PC and Enjoy Online Betting and Casino

-

4rabet is a popular online platform that offers sports betting and casino games for Indian players. It has a user-friendly interface, a wide range of markets and events, attractive bonuses and promotions, and convenient payment methods. However, if you want to enjoy 4rabet on your PC, you may face some difficulties. The official website does not have a separate app for PC, but only for Android and iOS devices. So, how can you download 4rabet app for PC and access all its features?

-

4rabet app download for pc


DOWNLOADhttps://byltly.com/2uKv3C



-

In this article, we will show you how to download 4rabet app for PC using an emulator software that allows you to run mobile apps on your computer. We will also explain why you should use 4rabet app for PC instead of the website version and what benefits you can get from it.

-

Why Use 4rabet App for PC?

-

Using 4rabet app for PC has several advantages over using the website version. Here are some of them:

- -

How to Download 4rabet App for PC?

-

Downloading 4rabet app for PC is easy and straightforward. Here are the steps to follow:

-

-
    -
  1. Go to r/4raBet and browse through the posts. Look for posts that have a download link or a Github link in the title or the content.
  2. -
  3. Click on the link and check the source. Make sure it is from a reputable website or a verified developer. Avoid links that are from unknown or suspicious sources.
  4. -
  5. Download the file and scan it with your antivirus program before opening it. Make sure it is free of malware or viruses.
  6. -
  7. Install or run the file according to the instructions provided by the developer or the user who posted the link.
  8. -
  9. Enjoy using 4rabet app on your PC!
  10. -
-

Conclusion

-

In this article, we have shown you how to download 4rabet app for PC and why you should use it instead of the website version. Downloading 4rabet app for PC can give you access to the latest version of the app, feedback and advice from other users, additional plugins or extensions that enhance its functionality, and a better user experience and performance on your PC. However, you should always be careful and cautious when downloading anything from the internet. Make sure you check the source, scan the file, and follow the instructions properly.

ddb901b051
-
-
\ No newline at end of file diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Crack AutoCAD Mechanical 2018 Crack LINK.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Crack AutoCAD Mechanical 2018 Crack LINK.md deleted file mode 100644 index b7e8a7fc422a37e9c0c4af4021497fac47537980..0000000000000000000000000000000000000000 --- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Crack AutoCAD Mechanical 2018 Crack LINK.md +++ /dev/null @@ -1,131 +0,0 @@ -
-

How to Crack AutoCAD Mechanical 2018

-

If you are a mechanical engineer or a designer who needs a powerful and versatile software for creating and editing mechanical drawings, you might have heard of AutoCAD Mechanical 2018. This is one of the most popular and widely used applications in the field of mechanical design and engineering. But what if you don't have enough money to buy a license for this software? Is there a way to use it for free without compromising its quality and functionality? The answer is yes, you can crack AutoCAD Mechanical 2018 and enjoy its full features without paying a dime. In this article, we will show you how to do that in a few simple steps. But before we get into that, let's first understand what AutoCAD Mechanical 2018 is and why you might want to crack it.

-

crack AutoCAD Mechanical 2018 crack


Download Filehttps://byltly.com/2uKvzx



-

What is AutoCAD Mechanical 2018?

-

AutoCAD Mechanical 2018 is a software that is designed specifically for mechanical engineering and design. It is part of the Autodesk family of products, which are known for their high-quality and innovative solutions for various industries. AutoCAD Mechanical 2018 includes all the features and functions of AutoCAD, plus a comprehensive library of standards-based parts and tools for automating common mechanical drawing tasks. With AutoCAD Mechanical 2018, you can:

- -

AutoCAD Mechanical 2018 is compatible with Windows 7, Windows 8.1, and Windows 10 operating systems. It also supports both 32-bit and 64-bit architectures.

-

Features and benefits of AutoCAD Mechanical 2018

-

Some of the main features and benefits of AutoCAD Mechanical 2018 are:

- -

System requirements for AutoCAD Mechanical 2018

-

The minimum system requirements for running AutoCAD Mechanical 2018 are:

- - - - - - - -
Processor1 GHz or faster
Memory4 GB (32-bit) or 8 GB (64-bit)
Disk space6 GB
Display1360 x 768 resolution with True Color
Graphics cardWindows display adapter capable of DirectX®9 or DirectX®11 compliant card recommended
Internet connectionNecessary for installation and activation
-

Why do you need to crack AutoCAD Mechanical 2018?

-

If you are wondering why you need to crack AutoCAD Mechanical 2018, there are two main reasons: cost and convenience. Let's explain them in more detail.

-

The advantages of cracking AutoCAD Mechanical 2018

-

The first reason why you might want to crack AutoCAD Mechanical 2018 is cost. As you may know, this software is not cheap. According to the official website of Autodesk, the price of a single-user license for one year is $1,610. That means you have to pay this amount every year if you want to keep using the software. If you want to buy a perpetual license, which means you can use the software forever without paying annual fees, the price is even higher: $4,195. That's a lot of money for most people, especially if you are a student or a freelancer who doesn't have a stable income source. By cracking AutoCAD Mechanical 2018, you can save yourself from these expenses and use the software for free.

-

The risks of cracking AutoCAD Mechanical 2018

-

The second reason why you might want to crack AutoCAD Mechanical 2018 is convenience. As you may know, this software requires an internet connection for installation and activation. That means you have to connect your computer to the internet every time you want to install or activate the software. This can be inconvenient if you don't have access to a reliable internet connection or if you want to use the software offline. By cracking AutoCAD Mechanical 2018, you can bypass this requirement and use the software offline without any hassle.

-

However, before you decide to crack AutoCAD Mechanical 2018, you should also be aware of the risks involved. Cracking any software is illegal and unethical. You are violating the terms and conditions of Autodesk by doing so. You are also exposing your computer to potential viruses and malware that may come with the crack file. You are also losing access to some features and services that are only available for licensed users, such as updates, support, cloud storage, etc. You are also risking legal actions from Autodesk if they find out that you are using their software illegally. Therefore, we do not recommend or endorse cracking AutoCAD Mechanical 2018 or any other software. We are only providing this information for educational purposes only.

-

How to crack AutoCAD Mechanical 2018 for free
-AutoCAD Mechanical 2018 crack download link
-AutoCAD Mechanical 2018 activation code generator
-AutoCAD Mechanical 2018 license key crack
-AutoCAD Mechanical 2018 serial number crack
-AutoCAD Mechanical 2018 keygen crack
-AutoCAD Mechanical 2018 patch crack
-AutoCAD Mechanical 2018 full version crack
-AutoCAD Mechanical 2018 offline installer crack
-AutoCAD Mechanical 2018 xforce crack
-AutoCAD Mechanical 2018 torrent crack
-AutoCAD Mechanical 2018 direct download crack
-AutoCAD Mechanical 2018 portable crack
-AutoCAD Mechanical 2018 cracked by team os
-AutoCAD Mechanical 2018 cracked by cgpersia
-AutoCAD Mechanical 2018 cracked by core x
-AutoCAD Mechanical 2018 cracked by r2r
-AutoCAD Mechanical 2018 cracked by skidrow
-AutoCAD Mechanical 2018 cracked by codex
-AutoCAD Mechanical 2018 cracked by reloaded
-AutoCAD Mechanical 2018 cracked by plaza
-AutoCAD Mechanical 2018 cracked by hoodlum
-AutoCAD Mechanical 2018 cracked by razor1911
-AutoCAD Mechanical 2018 cracked by fitgirl
-AutoCAD Mechanical 2018 cracked by dodi
-AutoCAD Mechanical 2018 crack fix
-AutoCAD Mechanical 2018 crack only
-AutoCAD Mechanical 2018 crack reddit
-AutoCAD Mechanical 2018 crack youtube
-AutoCAD Mechanical 2018 crack forum
-AutoCAD Mechanical 2018 crack blogspot
-AutoCAD Mechanical 2018 crack quora
-AutoCAD Mechanical 2018 crack medium
-AutoCAD Mechanical 2018 crack github
-AutoCAD Mechanical 2018 crack stackoverflow
-AutoCAD Mechanical 2018 crack tutorial
-AutoCAD Mechanical 2018 crack guide
-AutoCAD Mechanical 2018 crack instructions
-AutoCAD Mechanical 2018 crack tips and tricks
-AutoCAD Mechanical 2018 crack review
-AutoCAD Mechanical 2018 crack comparison
-AutoCAD Mechanical 2018 crack alternatives
-AutoCAD Mechanical 2018 crack pros and cons
-AutoCAD Mechanical 2018 crack benefits and drawbacks
-AutoCAD Mechanical 2018 crack features and specifications
-AutoCAD Mechanical 2018 crack system requirements
-AutoCAD Mechanical 2018 crack installation steps
-AutoCAD Mechanical 2018 crack troubleshooting steps
-AutoCAD Mechanical 2018 crack support and helpdesk

-

How to crack AutoCAD Mechanical 2018 step by step

-

If you still want to proceed with cracking AutoCAD Mechanical 2018 despite the risks involved, here are the steps that you need to follow:

-

Step 1: Download the software and the crack file

-

The first step is to download the software and the crack file from reliable sources. You can find many websites that offer these files online, but be careful not to download any files that contain viruses or malware. One of the websites that we found that offers these files is https://iggtech.com/download-x-force-2018/. This website provides both the software installer (in ISO format) and the crack file (in ZIP format) for various Autodesk products, including AutoCAD Mechanical 2018. You can download these files by clicking on their respective links on this website.

-

Step 2: Install the software and disable the internet connection

-

The second step is to install the software on your computer. To do this, you need to mount or extract the ISO file that contains the software installer using an appropriate tool (such as WinRAR or Daemon Tools). Then run setup.exe file from within this folder. Follow the instructions on screen until you reach the product key page. On this page, you need to enter the product key for AutoCAD Mechanical 2018, which is 206J1. You can find the product key for other Autodesk products on the same website. After entering the product key, click Next and follow the rest of the instructions until the installation is complete. The next step is to disable your internet connection. This is important to prevent the software from contacting Autodesk servers and verifying your license. You can do this by unplugging your ethernet cable, turning off your Wi-Fi, or disabling your network adapter from the Control Panel.

Step 3: Run the crack file and generate the product key

-

The third step is to run the crack file that you downloaded in step 1. This file is called X-force 2018 and it is a keygen that can generate product keys for all Autodesk products. To run this file, you need to extract it from the ZIP archive using an appropriate tool (such as WinRAR or 7-Zip). Then right-click on it and choose Run as administrator. You should see a window like this:

- -X-force 2018 window -

On this window, you need to select AutoCAD Mechanical 2018 from the drop-down menu and click on Generate. This will create a product key that you will use to activate the software. Copy this product key and keep it somewhere safe.

-

Step 4: Activate the software with the product key

-

The fourth step is to activate the software with the product key that you generated in step 3. To do this, you need to launch AutoCAD Mechanical 2018 on your computer. You should see a window like this:

- -AutoCAD Mechanical 2018 activation window -

On this window, you need to click on Enter a Serial Number and then click on I Agree. You should see another window like this:

- -AutoCAD Mechanical 2018 serial number window -

On this window, you need to enter any serial number that consists of six groups of four digits each. For example, you can enter 666-69696969 or 111-11111111. Then you need to enter the product key that you copied in step 3. After entering these values, click Next and then click on Close.

-

Step 5: Enjoy the full version of AutoCAD Mechanical 2018

-

The final step is to enjoy the full version of AutoCAD Mechanical 2018 without any limitations or restrictions. You can now use all the features and functions of this software for your mechanical design and engineering projects. You can also update the software if there are any available updates from Autodesk.

-

Conclusion

-

In this article, we have shown you how to crack AutoCAD Mechanical 2018 and use it for free without paying any fees or subscriptions. We have also explained what AutoCAD Mechanical 2018 is and why you might want to crack it. However, we have also warned you about the risks and consequences of cracking any software, which are illegal and unethical. Therefore, we do not recommend or endorse cracking AutoCAD Mechanical 2018 or any other software. We are only providing this information for educational purposes only.

-

FAQs

- -

0a6ba089eb
-
-
\ No newline at end of file diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download Imyfone Ibypasser Cracked.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download Imyfone Ibypasser Cracked.md deleted file mode 100644 index 523cf2d6a3f80a21caeb8e04973298fbb6c61b6a..0000000000000000000000000000000000000000 --- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download Imyfone Ibypasser Cracked.md +++ /dev/null @@ -1,18 +0,0 @@ - -

How to Download and Use iMyFone iBypasser Cracked Version

-

iMyFone iBypasser is a software that can help you bypass iCloud activation lock on your iPhone, iPad, or iPod touch. It can also remove screen lock, Apple ID, and MDM lock from your iOS devices. However, iMyFone iBypasser is not a free software, and you need to pay a fee to use it for one device.

-

download imyfone ibypasser cracked


Download ✓✓✓ https://byltly.com/2uKvrO



-

If you want to use iMyFone iBypasser for free, you might be tempted to download and use a cracked version of it from the internet. However, this is not a good idea, as cracked versions of iMyFone iBypasser can contain malware, viruses, or spyware that can harm your computer or your iOS devices. Moreover, cracked versions of iMyFone iBypasser can also have compatibility issues, bugs, or errors that can affect your bypassing process.

-

Therefore, the best way to use iMyFone iBypasser for free is to download and use the official trial version of it from the official website. The trial version of iMyFone iBypasser allows you to check if your device is supported and preview the bypassing result before you pay. After that, you can either buy a license or uninstall it from your computer.

-

Here are the steps to download and use the trial version of iMyFone iBypasser:

-

-
    -
  1. Go to https://www.imyfone.com/bypass-activation-lock/ and click on the "Free Trial" button.
  2. -
  3. Save the file to your computer and run it to start the installation process.
  4. -
  5. Follow the instructions on the screen and complete the installation.
  6. -
  7. Launch iMyFone iBypasser and connect your iOS device to your computer with a USB cable.
  8. -
  9. Follow the steps on the software to check if your device is supported and preview the bypassing result.
  10. -
-

Note: Do not use any crack, patch, keygen, or serial number to activate iMyFone iBypasser, as they can damage your computer or your iOS devices. Always use the official version of iMyFone iBypasser from the official website.

ddb901b051
-
-
\ No newline at end of file diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download Save Game Marvel Ultimate Alliance 2 Pc All Character Unlock With BEST Full Level.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download Save Game Marvel Ultimate Alliance 2 Pc All Character Unlock With BEST Full Level.md deleted file mode 100644 index f6782ef3f67edb3143c5fd56d51978367b1b71d5..0000000000000000000000000000000000000000 --- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download Save Game Marvel Ultimate Alliance 2 Pc All Character Unlock With BEST Full Level.md +++ /dev/null @@ -1,39 +0,0 @@ -
-

How to Download Save Game Marvel Ultimate Alliance 2 Pc All Character Unlock With Full Level

-

If you are a fan of Marvel comics and video games, you might have played Marvel Ultimate Alliance 2, a role-playing game that lets you create your own team of superheroes and villains. But if you want to enjoy the game to the fullest, you might want to unlock all the characters and level them up to their maximum potential. That's where a save game file comes in handy.

-

A save game file is a data file that stores your progress and settings in a video game. By downloading and using a save game file, you can skip the tedious process of unlocking and leveling up characters and jump right into the action. In this article, we will show you how to download a save game for Marvel Ultimate Alliance 2 on PC with all characters unlocked at full level, and how to use it on your computer.

-

Download Save Game Marvel Ultimate Alliance 2 Pc All Character Unlock With Full Level


DOWNLOADhttps://byltly.com/2uKwhv



-

Step 1: Download the Save Game File

-

The first step is to find and download a save game file that suits your needs. There are many websites that offer save game files for various video games, but not all of them are reliable or safe. You should always scan any file you download with an antivirus program before opening it.

-

One of the websites that we recommend for downloading save game files is SaveGameFiles.com. This website has a large collection of save game files for various platforms and games, including Marvel Ultimate Alliance 2. You can browse the files by category or search for them by name.

-

To download the save game file for Marvel Ultimate Alliance 2 on PC with all characters unlocked at full level, follow these steps:

-
    -
  1. Go to https://www.savegamefiles.com/pc/marvel-ultimate-alliance-2/.
  2. -
  3. Scroll down and find the file named "Marvel Ultimate Alliance 2 - All Characters Unlocked + Max Level".
  4. -
  5. Click on the "Download" button next to the file name.
  6. -
  7. Wait for the file to be downloaded to your computer. The file size is about 1 MB.
  8. -
  9. Extract the file from the zip archive using a program like WinRAR or 7-Zip.
  10. -
  11. You should see a folder named "Marvel Ultimate Alliance 2" with two subfolders named "Data" and "Save".
  12. -
-

Step 2: Backup Your Original Save Game File

-

Before you use the downloaded save game file, you should backup your original save game file in case something goes wrong or you want to revert to your previous progress. To backup your original save game file, follow these steps:

-
    -
  1. Go to the folder where your Marvel Ultimate Alliance 2 game is installed on your computer. The default location is C:\Program Files (x86)\Activision\Marvel - Ultimate Alliance 2.
  2. -
  3. Find and open the folder named "Data".
  4. -
  5. Copy the folder named "Save" and paste it somewhere safe on your computer, such as your desktop or an external drive.
  6. -
  7. Rename the copied folder to something like "Save Backup" or "Original Save".
  8. -
-
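If you prefer to automate this backup instead of copying the folder by hand, a short script can do the same job. The sketch below is only an illustration: it assumes the default install path from the steps above and a "Save Backup" folder on your desktop, so adjust the paths if your setup is different.

```python
import shutil
from pathlib import Path

# Default install location mentioned above; change this if the game is installed elsewhere.
game_dir = Path(r"C:\Program Files (x86)\Activision\Marvel - Ultimate Alliance 2")
save_dir = game_dir / "Data" / "Save"

# Where the backup copy will go (a "Save Backup" folder on the desktop).
backup_dir = Path.home() / "Desktop" / "Save Backup"

# Copy the whole Save folder; the original stays untouched.
# Note: copytree fails if the backup folder already exists.
shutil.copytree(save_dir, backup_dir)
print(f"Backed up {save_dir} to {backup_dir}")
```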

Step 3: Replace Your Original Save Game File with the Downloaded One

-

Now that you have backed up your original save game file, you can replace it with the downloaded one. To do this, follow these steps:

-
    -
  1. Go back to the folder where you extracted the downloaded save game file.
  2. -
  3. Copy the folder named "Marvel Ultimate Alliance 2".
  4. -
  5. Go back to the folder where your Marvel Ultimate Alliance 2 game is installed on your computer.
  6. -
  7. Paste the copied folder and overwrite the existing one.
  8. -
  9. You should see a message asking if you want to replace the files in the destination. Click on "Yes" or "Replace All".
  10. -
-

Step 4: Enjoy Your New Save Game File

-

Congratulations! You have successfully downloaded and used a save game file for Marvel Ultimate Alliance 2 on PC with all characters unlocked at full level. Now you can launch the game and enjoy playing with every hero and villain at their maximum potential.

-

cec2833e83
-
-
\ No newline at end of file diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Edius 7 Crack File Download How to Save Money and Ruin Your Computer.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Edius 7 Crack File Download How to Save Money and Ruin Your Computer.md deleted file mode 100644 index adc435954a1575a807911f1d9e95c9fe84f6cb58..0000000000000000000000000000000000000000 --- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Edius 7 Crack File Download How to Save Money and Ruin Your Computer.md +++ /dev/null @@ -1,23 +0,0 @@ -
-

How to Download and Install Edius 7 Crack File for Free

-

Edius 7 is a powerful video editing software that can handle multiple formats and resolutions. It offers a range of features and tools to create professional-looking videos with ease. However, Edius 7 is not a free software and requires a license key to activate it. If you want to use Edius 7 without paying for it, you might be tempted to download and install a crack file from the internet. But is it safe and legal to do so? In this article, we will explain what a crack file is, how it works, and what are the risks and consequences of using it.

-

edius 7 crack file download


Downloadhttps://byltly.com/2uKvrR



-

What is a crack file?

-

A crack file is a modified version of an original software file that bypasses or removes the security features that prevent unauthorized use. A crack file can be an executable file, a patch, a keygen, or a serial number generator. A crack file is usually created by hackers or crackers who want to break the protection of a software and distribute it for free or for profit.

-

How does a crack file work?

-

A crack file works by altering the code or data of the original software file in order to disable or fool the activation process. For example, a crack file might replace the original license key verification function with a fake one that always returns true, or it might change the expiration date of the trial version to never expire. A crack file can also inject malicious code into the software that can harm your computer or steal your personal information.

-

What are the risks and consequences of using a crack file?

-

Using a crack file is not only illegal but also dangerous. Here are some of the risks and consequences of using a crack file:

-

- -

How to download and install Edius 7 legally?

-

The best way to download and install Edius 7 is to purchase it from the official website or an authorized reseller. You will get a genuine license key that will activate your software and grant you access to all its features and benefits. You will also get regular updates, technical support, and customer service from the software developer. You will also avoid any legal or ethical issues that might arise from using a crack file.

-

To purchase Edius 7, visit https://www.edius.net/ and choose the edition that suits your needs. You can also download a free trial version for 30 days before you buy. Follow the instructions on the website to complete your order and download your software. Once you have downloaded Edius 7, run the installer and enter your license key when prompted. Enjoy editing your videos with Edius 7!

ddb901b051
-
-
\ No newline at end of file diff --git a/spaces/1line/AutoGPT/autogpt/chat.py b/spaces/1line/AutoGPT/autogpt/chat.py deleted file mode 100644 index 1f6bca96eb216c667656b50f131006b83c681065..0000000000000000000000000000000000000000 --- a/spaces/1line/AutoGPT/autogpt/chat.py +++ /dev/null @@ -1,175 +0,0 @@ -import time - -from openai.error import RateLimitError - -from autogpt import token_counter -from autogpt.config import Config -from autogpt.llm_utils import create_chat_completion -from autogpt.logs import logger - -cfg = Config() - - -def create_chat_message(role, content): - """ - Create a chat message with the given role and content. - - Args: - role (str): The role of the message sender, e.g., "system", "user", or "assistant". - content (str): The content of the message. - - Returns: - dict: A dictionary containing the role and content of the message. - """ - return {"role": role, "content": content} - - -def generate_context(prompt, relevant_memory, full_message_history, model): - current_context = [ - create_chat_message("system", prompt), - create_chat_message( - "system", f"The current time and date is {time.strftime('%c')}" - ), - create_chat_message( - "system", - f"This reminds you of these events from your past:\n{relevant_memory}\n\n", - ), - ] - - # Add messages from the full message history until we reach the token limit - next_message_to_add_index = len(full_message_history) - 1 - insertion_index = len(current_context) - # Count the currently used tokens - current_tokens_used = token_counter.count_message_tokens(current_context, model) - return ( - next_message_to_add_index, - current_tokens_used, - insertion_index, - current_context, - ) - - -# TODO: Change debug from hardcode to argument -def chat_with_ai( - prompt, user_input, full_message_history, permanent_memory, token_limit -): - """Interact with the OpenAI API, sending the prompt, user input, message history, - and permanent memory.""" - while True: - try: - """ - Interact with the OpenAI API, sending the prompt, user input, - message history, and permanent memory. - - Args: - prompt (str): The prompt explaining the rules to the AI. - user_input (str): The input from the user. - full_message_history (list): The list of all messages sent between the - user and the AI. - permanent_memory (Obj): The memory object containing the permanent - memory. - token_limit (int): The maximum number of tokens allowed in the API call. - - Returns: - str: The AI's response. 
- """ - model = cfg.fast_llm_model # TODO: Change model from hardcode to argument - # Reserve 1000 tokens for the response - - logger.debug(f"Token limit: {token_limit}") - send_token_limit = token_limit - 1000 - - relevant_memory = ( - "" - if len(full_message_history) == 0 - else permanent_memory.get_relevant(str(full_message_history[-9:]), 10) - ) - - logger.debug(f"Memory Stats: {permanent_memory.get_stats()}") - - ( - next_message_to_add_index, - current_tokens_used, - insertion_index, - current_context, - ) = generate_context(prompt, relevant_memory, full_message_history, model) - - while current_tokens_used > 2500: - # remove memories until we are under 2500 tokens - relevant_memory = relevant_memory[:-1] - ( - next_message_to_add_index, - current_tokens_used, - insertion_index, - current_context, - ) = generate_context( - prompt, relevant_memory, full_message_history, model - ) - - current_tokens_used += token_counter.count_message_tokens( - [create_chat_message("user", user_input)], model - ) # Account for user input (appended later) - - while next_message_to_add_index >= 0: - # print (f"CURRENT TOKENS USED: {current_tokens_used}") - message_to_add = full_message_history[next_message_to_add_index] - - tokens_to_add = token_counter.count_message_tokens( - [message_to_add], model - ) - if current_tokens_used + tokens_to_add > send_token_limit: - break - - # Add the most recent message to the start of the current context, - # after the two system prompts. - current_context.insert( - insertion_index, full_message_history[next_message_to_add_index] - ) - - # Count the currently used tokens - current_tokens_used += tokens_to_add - - # Move to the next most recent message in the full message history - next_message_to_add_index -= 1 - - # Append user input, the length of this is accounted for above - current_context.extend([create_chat_message("user", user_input)]) - - # Calculate remaining tokens - tokens_remaining = token_limit - current_tokens_used - # assert tokens_remaining >= 0, "Tokens remaining is negative. - # This should never happen, please submit a bug report at - # https://www.github.com/Torantulino/Auto-GPT" - - # Debug print the current context - logger.debug(f"Token limit: {token_limit}") - logger.debug(f"Send Token Count: {current_tokens_used}") - logger.debug(f"Tokens remaining for response: {tokens_remaining}") - logger.debug("------------ CONTEXT SENT TO AI ---------------") - for message in current_context: - # Skip printing the prompt - if message["role"] == "system" and message["content"] == prompt: - continue - logger.debug(f"{message['role'].capitalize()}: {message['content']}") - logger.debug("") - logger.debug("----------- END OF CONTEXT ----------------") - - # TODO: use a model defined elsewhere, so that model can contain - # temperature and other settings we care about - assistant_reply = create_chat_completion( - model=model, - messages=current_context, - max_tokens=tokens_remaining, - ) - - # Update full message history - full_message_history.append(create_chat_message("user", user_input)) - full_message_history.append( - create_chat_message("assistant", assistant_reply) - ) - - return assistant_reply - except RateLimitError: - # TODO: When we switch to langchain, this is built in - print("Error: ", "API Rate Limit Reached. 
Waiting 10 seconds...") - time.sleep(10) diff --git a/spaces/1line/AutoGPT/tests/unit/test_browse_scrape_links.py b/spaces/1line/AutoGPT/tests/unit/test_browse_scrape_links.py deleted file mode 100644 index 0a3340e7397a997da96b8ab9828954230e1a3c20..0000000000000000000000000000000000000000 --- a/spaces/1line/AutoGPT/tests/unit/test_browse_scrape_links.py +++ /dev/null @@ -1,118 +0,0 @@ -# Generated by CodiumAI - -# Dependencies: -# pip install pytest-mock -import pytest - -from autogpt.commands.web_requests import scrape_links - -""" -Code Analysis - -Objective: -The objective of the 'scrape_links' function is to scrape hyperlinks from a -given URL and return them in a formatted way. - -Inputs: -- url: a string representing the URL to be scraped. - -Flow: -1. Send a GET request to the given URL using the requests library and the user agent header from the config file. -2. Check if the response contains an HTTP error. If it does, return "error". -3. Parse the HTML content of the response using the BeautifulSoup library. -4. Remove any script and style tags from the parsed HTML. -5. Extract all hyperlinks from the parsed HTML using the 'extract_hyperlinks' function. -6. Format the extracted hyperlinks using the 'format_hyperlinks' function. -7. Return the formatted hyperlinks. - -Outputs: -- A list of formatted hyperlinks. - -Additional aspects: -- The function uses the 'requests' and 'BeautifulSoup' libraries to send HTTP -requests and parse HTML content, respectively. -- The 'extract_hyperlinks' function is called to extract hyperlinks from the parsed HTML. -- The 'format_hyperlinks' function is called to format the extracted hyperlinks. -- The function checks for HTTP errors and returns "error" if any are found. -""" - - -class TestScrapeLinks: - # Tests that the function returns a list of formatted hyperlinks when - # provided with a valid url that returns a webpage with hyperlinks. - def test_valid_url_with_hyperlinks(self): - url = "https://www.google.com" - result = scrape_links(url) - assert len(result) > 0 - assert isinstance(result, list) - assert isinstance(result[0], str) - - # Tests that the function returns correctly formatted hyperlinks when given a valid url. - def test_valid_url(self, mocker): - # Mock the requests.get() function to return a response with sample HTML containing hyperlinks - mock_response = mocker.Mock() - mock_response.status_code = 200 - mock_response.text = ( - "Google" - ) - mocker.patch("requests.Session.get", return_value=mock_response) - - # Call the function with a valid URL - result = scrape_links("https://www.example.com") - - # Assert that the function returns correctly formatted hyperlinks - assert result == ["Google (https://www.google.com)"] - - # Tests that the function returns "error" when given an invalid url. - def test_invalid_url(self, mocker): - # Mock the requests.get() function to return an HTTP error response - mock_response = mocker.Mock() - mock_response.status_code = 404 - mocker.patch("requests.Session.get", return_value=mock_response) - - # Call the function with an invalid URL - result = scrape_links("https://www.invalidurl.com") - - # Assert that the function returns "error" - assert "Error:" in result - - # Tests that the function returns an empty list when the html contains no hyperlinks. - def test_no_hyperlinks(self, mocker): - # Mock the requests.get() function to return a response with sample HTML containing no hyperlinks - mock_response = mocker.Mock() - mock_response.status_code = 200 - mock_response.text = "

<html><body><p>No hyperlinks here</p></body></html>

" - mocker.patch("requests.Session.get", return_value=mock_response) - - # Call the function with a URL containing no hyperlinks - result = scrape_links("https://www.example.com") - - # Assert that the function returns an empty list - assert result == [] - - # Tests that scrape_links() correctly extracts and formats hyperlinks from - # a sample HTML containing a few hyperlinks. - def test_scrape_links_with_few_hyperlinks(self, mocker): - # Mock the requests.get() function to return a response with a sample HTML containing hyperlinks - mock_response = mocker.Mock() - mock_response.status_code = 200 - mock_response.text = """ - - - -
<a href="https://www.google.com">Google</a>
<a href="https://github.com">GitHub</a>
<a href="https://www.codium.ai">CodiumAI</a>
- - - """ - mocker.patch("requests.Session.get", return_value=mock_response) - - # Call the function being tested - result = scrape_links("https://www.example.com") - - # Assert that the function returns a list of formatted hyperlinks - assert isinstance(result, list) - assert len(result) == 3 - assert result[0] == "Google (https://www.google.com)" - assert result[1] == "GitHub (https://github.com)" - assert result[2] == "CodiumAI (https://www.codium.ai)" diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Arceus X How to Run PC Scripts on Your iOS or Android Phone.md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Arceus X How to Run PC Scripts on Your iOS or Android Phone.md deleted file mode 100644 index 79a5aab8ed30fa73381cdc4d92aead0e166ecadb..0000000000000000000000000000000000000000 --- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Arceus X How to Run PC Scripts on Your iOS or Android Phone.md +++ /dev/null @@ -1,145 +0,0 @@ -
-

Arceus X: How to Download and Play the Ultimate Roblox Mod Menu on iOS

-

If you are a fan of Roblox, you might have heard of Arceus X, a mod menu that allows you to exploit your favorite games with features such as Android LuaU Execution, Infinite Jump, Super Speed, Btools, Script Hub, and more. Arceus X is a first and one of the most widely used Roblox Mod Menu/exploit specially developed for Android. But what if you want to play it on your iOS device? Is it possible to download and install Arceus X on iOS? The answer is yes, and in this article, we will show you how to do it step by step.

-

What is Arceus X?

-

Arceus X is a first Android Roblox Mod Menu/Exploit to improve the gameplay. It allows you to use features such as Android LuaU Execution, Infinite Jump, Super Speed, Btools, Script Hub, More!. Arceus X APK is developed using Node.js, C++, JAVA. It’s an Android application that has floating Menu to execute scripts while you are in the game.

-

arceus x ios download


DOWNLOAD ····· https://urlin.us/2uSXlv



-

Features of Arceus X

-

Some of the features that make Arceus X stand out from other Roblox mod menus are:

- -

Requirements for Arceus X

-

To download and play Arceus X on your iOS device, you will need:

- -

How to Download Arceus X on iOS

-

Now that you know what Arceus X is and what you need to play it on your iOS device, let's get started with the download process. Here are the steps you need to follow:

-

Step 1: Get the Arceus X APK file

-

The first step is to get the Arceus X APK file from a reliable source. You can either use an Android device or an emulator on your PC to do this. Here are some options for getting the APK file:

- -

Once you have the APK file, you need to transfer it to your iOS device. You can use a USB cable, Bluetooth, Wi-Fi, or any other method that works for you. Just make sure you have a file manager app on your iOS device to locate the APK file.

-

Step 2: Install an iOS emulator

-

The next step is to install an iOS emulator app on your iOS device that can run Android apps. An emulator is a software that mimics the behavior of another device or platform. There are many iOS emulators available on the App Store, but not all of them can run Arceus X smoothly. Here are some of the best iOS emulators that we recommend for Arceus X:

- -

Once you have installed an iOS emulator of your choice, you need to launch it and grant it the necessary permissions to access your device's storage, camera, microphone, etc.

-

Step 3: Run the Arceus X APK file on the emulator

-

The final step is to run the Arceus X APK file on the emulator and start playing. Here are the steps you need to follow:

-

arceus x v3 download tutorial
-arceus x apk official
-arceus x roblox mod menu
-arceus x v3.1.0 public beta
-arceus x android roblox exploit
-arceus x ios 16.0.4 install
-arceus x script executor for mobile
-arceus x apk without linkvertise
-arceus x roblox hack android
-arceus x v3 update download
-arceus x mod menu apk
-arceus x roblox cheat ios
-arceus x verification process
-arceus x apk free download
-arceus x roblox exploit ios
-arceus x v3 mod menu tutorial
-arceus x apk latest version
-arceus x roblox script hub
-arceus x verification bypass
-arceus x apk no ads
-arceus x roblox infinite jump
-arceus x v3 install guide
-arceus x apk direct download
-arceus x roblox super speed
-arceus x verification completed
-arceus x apk no verification
-arceus x roblox btools hack
-arceus x v3 download link
-arceus x apk easy download
-arceus x roblox luau execution
-arceus x verification failed fix
-arceus x apk no root
-arceus x roblox android modding
-arceus x v3 features overview
-arceus x apk fast download
-arceus x roblox exploit features
-arceus x verification steps explained
-arceus x apk safe download
-arceus x roblox pc scripts support
-arceus x v3 release date ios
-arceus x apk working download
-arceus x roblox exploit review
-arceus x verification code generator
-arceus x apk virus free download
-arceus x roblox exploit comparison

-
    -
  1. Open the file manager app on your iOS device and locate the Arceus X APK file that you transferred earlier.
  2. -
  3. Tap on the APK file and select the option to open it with the emulator app that you installed.
  4. -
  5. The emulator will launch and install the Arceus X app on its virtual environment.
  6. -
  7. Once the installation is complete, you will see the Arceus X icon on the emulator's home screen.
  8. -
  9. Tap on the icon and log in with your Roblox account credentials.
  10. -
  11. You will see a floating mod menu on your screen with various options to exploit your favorite games.
  12. -
-

Step 4: Enjoy the game

-

Congratulations! You have successfully downloaded and installed Arceus X on your iOS device. Now you can enjoy playing Roblox with unlimited features and fun. You can access the mod menu anytime by tapping on it and selecting the options you want to use. You can also use the script hub to find and execute scripts for different games. Just be careful not to abuse the mod menu or get reported by other players, as you might get banned by Roblox.

-

Tips and Tricks for Arceus X

-

To make the most out of Arceus X, here are some tips and tricks that you should know:

-

How to use the script hub

-

The script hub is a feature that allows you to access a collection of scripts for various games from the mod menu. You can use these scripts to enhance your gameplay or perform certain actions that are not possible otherwise. Here are some steps to use the script hub:

-
    -
  1. Tap on the mod menu and select the script hub option.
  2. -
  3. You will see a list of games that have scripts available for them.
  4. -
  5. Select the game that you want to play and tap on it.
  6. -
  7. You will see a list of scripts that you can use for that game.
  8. -
  9. Select the script that you want to use and tap on it.
  10. -
  11. The script will be executed automatically and you will see its effects in the game.
  12. -
-

How to customize the mod menu

-

The mod menu is a feature that allows you to customize various aspects of Arceus X, such as its appearance, position, size, transparency, etc. You can also enable or disable certain features or change their settings according to your preference. Here are some steps to customize the mod menu:

-
    -
  1. Tap on the mod menu and select the settings option.
  2. -
  3. You will see a list of options that you can change, such as color, size, position, transparency, etc.
  4. -
  5. Select the option that you want to change and adjust it according to your liking.
  6. -
  7. You can also enable or disable certain features or change their settings by tapping on them.
  8. -
  9. Once you are done, tap on the save button to apply the changes.
  10. -
-

How to avoid getting banned

-

While Arceus X is a fun and powerful mod menu, it is also a risky one. If you use it too much or too blatantly, you might get detected and banned by Roblox. To avoid this, here are some tips that you should follow:

- -

Conclusion

-

In this article, we have shown you how to download and play Arceus X on your iOS device. Arceus X is a first Android Roblox Mod Menu/Exploit that allows you to exploit your favorite games with features such as Android LuaU Execution, Infinite Jump, Super Speed, Btools, Script Hub, More!. To play it on your iOS device, you need to get the Arceus X APK file from a reliable source, install an iOS emulator app on your device, run the APK file on the emulator, and enjoy the game. We have also given you some tips and tricks for using Arceus X, such as how to use the script hub, how to customize the mod menu, and how to avoid getting banned. We hope you found this article helpful and informative. If you have any questions or feedback, feel free to leave a comment below.

-

FAQs

-

Here are some of the frequently asked questions about Arceus X:

-
    -
  1. Is Arceus X safe to use?
  2. -

    Arceus X is safe to use as long as you download it from a trusted source and follow the instructions carefully. However, there is always a risk of getting banned by Roblox if you use it too much or too blatantly. To minimize this risk, use the anti-ban feature and a VPN service.

    -
  3. Is Arceus X free to use?
  4. -

    Yes, Arceus X is free to use and does not require any payment or subscription. However, you might need to complete some verification steps or watch some ads before downloading it.

    -
  5. Does Arceus X work on all games?
  6. -

    No, Arceus X does not work on all games. Some games have anti-cheat systems or scripts that prevent Arceus X from working properly. You can check the script hub for the list of games that have scripts available for them.

    -
  7. Can I use Arceus X on other devices?
  8. -

    Yes, you can use Arceus X on other devices besides iOS. You can use it on Android devices directly without any emulator. You can also use it on PC devices with an Android emulator such as BlueStacks or Nox Player.

    -
  9. Where can I get more information about Arceus X?
  10. -

    You can get more information about Arceus X from its official website, its YouTube channel, or its Discord server. You can also contact the developers or other users for support or feedback.

    -

197e85843d
-
-
\ No newline at end of file diff --git a/spaces/1phancelerku/anime-remove-background/Download Blockman Go v2.40.1 APK with Unlimited Money and Gems.md b/spaces/1phancelerku/anime-remove-background/Download Blockman Go v2.40.1 APK with Unlimited Money and Gems.md deleted file mode 100644 index 58e1ca6f6f11f4cffb6eefa34e7c29ddf38adce8..0000000000000000000000000000000000000000 --- a/spaces/1phancelerku/anime-remove-background/Download Blockman Go v2.40.1 APK with Unlimited Money and Gems.md +++ /dev/null @@ -1,120 +0,0 @@ - -

Blockman Go v2.40.1 APK: A Fun and Creative Sandbox Game

-

If you are looking for a game that offers you a variety of fun and creative minigames, as well as a platform to chat and make friends with other players, then you should check out Blockman Go v2.40.1 APK. This is the latest version of the popular sandbox game that has millions of fans around the world.

-

blockman go v2.40.1 apk


Downloadhttps://jinyurl.com/2uNKMD



-

What is Blockman Go?

-

Blockman Go is a free app that lets you play various block style minigames with different themes and genres. You can join the games by a simple tap, or create your own games with your own rules and settings. You can also chat and make friends with other players in the game, and join clans and parties to play together.

-

A free app with various minigames

-

Blockman Go offers you a wide range of minigames to choose from, such as Bed Wars, Egg Wars, Sky Block, Murder Mystery, Survival Games, Capture Flag, Snowball Battle, Bow Spleef, TNT Run, and many more. Each minigame has its own gameplay, objectives, and challenges that will keep you entertained and engaged.

-

A platform to chat and make friends

-

Blockman Go is not just a game, but also a social platform where you can chat and make friends with other players from all over the world. You can use the chat function to communicate and cooperate with your teammates, or to have fun conversations with other players. You can also join clans and parties to play together, or send gifts and messages to your friends.

-

A way to customize your avatar and show your style

-

Blockman Go allows you to customize your avatar with various items and accessories that you can buy with gold or Gcubes (the premium currency of the game). You can change your hair, eyes, clothes, hats, glasses, masks, wings, tails, pets, weapons, vehicles, and more. You can also create your own skins and upload them to the game. With Blockman Go, you can show your unique style and personality to the world.

-

What's New in Blockman Go v2.40.1 APK?

-

Blockman Go v2.40.1 APK is the latest version of the game that was released on June 20th 2023. This version brings some new minigames and features, as well as some improvements and bug fixes.

-

blockman go 2.40.1 apk download
-blockman go 2.40.1 mod apk
-blockman go 2.40.1 apk free download
-blockman go 2.40.1 apk latest version
-blockman go 2.40.1 apk pure
-blockman go 2.40.1 apk for android
-blockman go 2.40.1 apk hack
-blockman go 2.40.1 apk unlimited money
-blockman go 2.40.1 apk old version
-blockman go 2.40.1 apk offline
-blockman go 2.40.1 apk update
-blockman go 2.40.1 apk no ads
-blockman go 2.40.1 apk premium
-blockman go 2.40.1 apk full version
-blockman go 2.40.1 apk cracked
-blockman go 2.40.1 apk mod menu
-blockman go 2.40.1 apk obb
-blockman go 2.40.1 apk revdl
-blockman go 2.40.1 apk rexdl
-blockman go 2.40.1 apk mirror
-blockman go 2.40.1 apk uptodown
-blockman go 2.40.1 apk apkpure.com[^1^]
-blockman go 2.40.1 apk android oyun club
-blockman go 2.40.1 apk happymod
-blockman go 2.40.1 apk modded
-blockman go 2.40.1 apk original
-blockman go 2.40.1 apk file download
-blockman go 2.40.1 apk install
-blockman go 2.40.1 apk direct link
-blockman go 2.40.1 apk google play
-blockman go 2.40.1 apk data download
-blockman go 2.40.1 apk unlocked all features
-blockman go 2.40.1 apk pro version
-blockman go 2.40.1 apk mega mod
-blockman go 2.40.1 apk cheat codes
-blockman go 2.40.1 apk unlimited cubes
-blockman go 2.40.1 apk unlimited gems

-

New minigames and features

-

Some of the new minigames and features that are added in this version are:

- -

Improved game experience and performance

-

Some of the improvements and optimizations that are made in this version are:

- -

Bug fixes and optimizations

-

Some of the bug fixes and optimizations that are done in this version are:

- -

How to Download and Install Blockman Go v2.40.1 APK?

-

If you want to download and install Blockman Go v2.40.1 APK on your device, you can follow these simple steps:

-

Download the APK file from a trusted source

-

The first step is to download the APK file from a trusted source, such as [APKPure] or [Uptodown]. You can use the links below to download the file directly:

- -

The file size is about 150 MB, so make sure you have enough space on your device before downloading it.

-

Enable unknown sources on your device settings

-

The second step is to enable unknown sources in your device settings so that you can install the APK file without any problems. You can do this by following these steps:

- -

This will allow you to install apps that are not from the official app store, such as Blockman Go v2.40.1 APK.

Install the APK file and enjoy the game

-

The third and final step is to install the APK file and enjoy the game. You can do this by following these steps:

- -

Congratulations, you have successfully installed Blockman Go v2.40.1 APK on your device. You can now launch the game and enjoy the new features and minigames.

-

Tips and Tricks to Play Blockman Go Better

-

If you want to play Blockman Go better and have more fun, you can use some of these tips and tricks:

-

Choose the right minigame for your preference and skill level

-

Blockman Go has a lot of minigames to choose from, but not all of them may suit your preference or skill level. You can browse through the categories and genres of the minigames and find the ones that you like and are good at. You can also check the ratings, reviews, and descriptions of the minigames to get an idea of what they are about and how to play them.

-

Use the chat function to communicate and cooperate with other players

-

Blockman Go is a social game where you can chat and make friends with other players. You can use the chat function to communicate and cooperate with your teammates, or to have fun conversations with other players. You can also use emojis, stickers, voice messages, and gifs to express yourself better. You can also join clans and parties to play together, or send gifts and messages to your friends.

-

Earn gold by playing minigames and use it to buy items and accessories

-

Blockman Go allows you to earn gold by playing minigames and use it to buy items and accessories for your avatar. You can also earn Gcubes, which are the premium currency of the game, by completing tasks, watching ads, or buying them with real money. You can use gold and Gcubes to buy various items and accessories that will make your avatar more stylish and unique. You can also create your own skins and upload them to the game.

-

Conclusion

-

Blockman Go v2.40.1 APK is a fun and creative sandbox game that offers you a variety of block style minigames with different themes and genres. You can also chat and make friends with other players in the game, and customize your avatar with various items and accessories. You can download and install Blockman Go v2.40.1 APK on your device by following the simple steps above. You can also use some tips and tricks to play Blockman Go better and have more fun.

-

FAQs

-

Here are some frequently asked questions about Blockman Go v2.40.1 APK:

- - - - - - -
Q: Is Blockman Go v2.40.1 APK safe to download and install?A: Yes, Blockman Go v2.40.1 APK is safe to download and install, as long as you download it from a trusted source, such as [APKPure] or [Uptodown]. You should also enable unknown sources on your device settings before installing it.
Q: What are the requirements to play Blockman Go v2.40.1 APK?A: Blockman Go v2.40.1 APK requires Android 4.4 or higher, as well as a stable internet connection. The game also requires about 150 MB of free space on your device.
Q: How can I update Blockman Go v2.40.1 APK?A: You can update Blockman Go v2.40.1 APK by downloading the latest version from a trusted source, such as [APKPure] or [Uptodown], and installing it over the existing one. You can also check for updates within the game settings.
Q: How can I contact Blockman Go support team?A: You can contact Blockman Go support team by sending an email to blockymods@sandboxol.com, or by visiting their official website at https://www .blockmango.net/.
Q: How can I get more gold and Gcubes in Blockman Go?A: You can get more gold and Gcubes in Blockman Go by playing minigames and completing tasks, watching ads and videos, inviting friends and joining events, or buying them with real money.

197e85843d
-
-
\ No newline at end of file diff --git a/spaces/1phancelerku/anime-remove-background/Enjoy Messenger on Your Desktop with the Official Windows App.md b/spaces/1phancelerku/anime-remove-background/Enjoy Messenger on Your Desktop with the Official Windows App.md deleted file mode 100644 index 2ad786f80d25fa47da9a15d549cf1b2b13005fd3..0000000000000000000000000000000000000000 --- a/spaces/1phancelerku/anime-remove-background/Enjoy Messenger on Your Desktop with the Official Windows App.md +++ /dev/null @@ -1,183 +0,0 @@ - -

How to Download Messenger for Windows

-

Do you want to stay connected with your friends and family on Messenger, but don't want to use your phone or browser? If so, you might be interested in downloading Messenger for Windows, a desktop app that lets you use Messenger on your PC or Mac. In this article, we will show you how to download, install, and use Messenger for Windows, as well as how to troubleshoot some common issues. Let's get started!

-

What is Messenger for Windows?

-

Messenger for Windows is a desktop app that lets you use Messenger on your Windows or Mac computer. It is similar to the mobile app, but with some additional features that make it more convenient and enjoyable to use on a larger screen. With Messenger for Windows, you can:

-

download messenger for windows


Download Filehttps://jinyurl.com/2uNLns



- -

Messenger for Windows is free to download and use. All you need is a Facebook account or a phone number.

-

Why Download Messenger for Windows?

-

There are many benefits of using Messenger for Windows instead of your phone or browser. Here are some of them:

- -

If you want to experience these benefits, read on to learn how to download and install Messenger for Windows.

-

How to Download and Install Messenger for Windows

-

There are three ways to download and install Messenger for Windows: from Messenger.com, from Microsoft Store, or from Mac App Store. We will explain each method below.

-

Download from Messenger.com

-

This is the easiest way to get the app. Here are the steps:

-

  1. Go to Messenger.com.
  2. Click on Download.
  3. The desktop app will automatically download based on the desktop device you are using. If it doesn't start automatically, click on Restart.
  4. Once the download is complete, click on the installer file to run it.
  5. Follow the instructions on the screen to complete the installation.
  6. Launch the app and log in with your Facebook account or phone number.
-

Congratulations, you have successfully downloaded and installed Messenger for Windows from Messenger.com!

-

Download from Microsoft Store

-

This is another way to get the app if you are using a Windows 10 device. Here are the steps:

-
    -
  1. Go to Microsoft Store.
  2. Search for Messenger.
  3. Select the app with the blue logo and the name Messenger for Windows 10.
  4. Click on Get.
  5. The app will start downloading and installing on your device.
  6. Launch the app and log in with your Facebook account or phone number.
-

Congratulations, you have successfully downloaded and installed Messenger for Windows from Microsoft Store!

-

Download from Mac App Store

-

This is the way to get the app if you are using a Mac device. Here are the steps:

-
    -
  1. Go to Mac App Store.
  2. Search for Messenger.
  3. Select the app with the blue logo and the name Messenger for macOS.
  4. Click on Get.
  5. The app will start downloading and installing on your device.
  6. Launch the app and log in with your Facebook account or phone number.
-

Congratulations, you have successfully downloaded and installed Messenger for Windows from Mac App Store!

-

How to Use Messenger for Windows

-

Now that you have downloaded and installed Messenger for Windows, you might be wondering how to use it effectively. Here are some tips and tricks for using the app:

-

How to Log in and Out

-

To log in to Messenger for Windows, you need to enter your Facebook account or phone number and password. If you don't have a Facebook account, you can create one by clicking on Create New Account. You can also choose to stay logged in by checking the box next to Keep me signed in.

-

To log out of Messenger for Windows, you need to click on your profile picture in the top left corner of the app. Then, click on Log Out. You can also switch accounts by clicking on Switch Account.

-

How to Chat and Call

-

To chat with someone on Messenger for Windows, you need to click on their name in the left sidebar of the app. You can also search for someone by typing their name or phone number in the search bar at the top of the app. You can then type your message in the text box at the bottom of the chat window. You can also send photos, videos, GIFs, stickers, emojis, and more by clicking on the icons next to the text box.

-

To make a voice or video call with someone on Messenger for Windows, you need to click on their name in the left sidebar of the app. Then, click on the phone or camera icon at the top right corner of the chat window. You can also join a group call by clicking on the group name in the left sidebar of the app. Then, click on Join Call. You can also create a room where you can invite anyone you want to join by clicking on Create a Room.

-

How to Manage Notifications and Settings

-

To manage your notifications and settings on Messenger for Windows, click on your profile picture in the top left corner of the app, then click on Preferences. From there you can customize your notification preferences and privacy options.

Troubleshooting Messenger for Windows

-

Sometimes, you might encounter some issues while using Messenger for Windows. Here are some common issues and solutions for using the app:

-

How to Update the App

-

To update Messenger for Windows, you need to follow these steps:

-
    -
  1. Go to Messenger.com.
  2. Click on Download.
  3. The latest version of the app will automatically download based on the desktop device you are using. If it doesn't start automatically, click on Restart.
  4. Once the download is complete, click on the installer file to run it.
  5. Follow the instructions on the screen to complete the installation.
  6. Launch the app and log in with your Facebook account or phone number.
-

You can also check for updates manually by clicking on your profile picture in the top left corner of the app. Then, click on About Messenger. If there is an update available, you will see a notification and a button to download it.

-

How to Uninstall the App

-

To uninstall Messenger for Windows, you need to follow these steps:

-
    -
  1. Close the app if it is running.
  2. Go to Control Panel on your Windows device or Finder on your Mac device.
  3. Select Programs and Features on Windows or Applications on Mac.
  4. Find and select Messenger for Windows.
  5. Click on Uninstall on Windows or drag the app to the Trash on Mac.
  6. Follow the instructions on the screen to complete the uninstallation.
-

Note that uninstalling the app will not delete your chats or account. You can still access them on your phone or browser.

-

How to Contact Support

-

If you have any questions or issues that are not covered in this article, you can reach out to the Messenger support team or ask the Messenger community for help.

Conclusion

-

Messenger for Windows is a great way to stay connected with your friends and family on Messenger, without using your phone or browser. It offers many features and benefits that make it more convenient and enjoyable to use on a larger screen. In this article, we showed you how to download, install, and use Messenger for Windows, as well as how to troubleshoot some common issues. We hope you found this article helpful and informative. If you did, please share it with your friends and family who might also be interested in downloading Messenger for Windows. Thank you for reading!

-

FAQs

-

Here are some frequently asked questions and answers about Messenger for Windows:

-

Q: Is Messenger for Windows safe?

-

A: Yes, Messenger for Windows is safe to use. It uses encryption to protect your messages and calls from hackers and third parties. It also lets you control your privacy settings and block or report anyone who bothers you.

-

Q: Is Messenger for Windows compatible with my device?

-

A: Messenger for Windows is compatible with Windows 10 devices and Mac devices running macOS 10.10 or higher. It is not compatible with older versions of Windows or Mac, or other operating systems such as Linux.

-

Q: How much space does Messenger for Windows take up?

-

A: Messenger for Windows takes up about 200 MB of space on your device. This may vary depending on your device model and settings.

-

Q: How do I watch videos together with my friends on Messenger for Windows?

-

A: To watch videos together with your friends on Messenger for Windows, you need to use the Watch Together feature. Here are the steps:

-
    -
  1. Start a video call with one or more friends.
  2. Click on the TV icon at the bottom of the call window.
  3. Select a video from the suggested list or search for one.
  4. Click on Watch Together.
  5. You and your friends can now watch the video together and chat at the same time.
-

Q: How do I play games with my friends on Messenger for Windows?

A: To play games with your friends on Messenger for Windows, you need to use the Games feature. Here are the steps:

-
    -
  1. Click on the game controller icon in the left sidebar of the app.
  2. Select a game from the list or search for one.
  3. Click on Play.
  4. You can play the game solo or challenge your friends to beat your score.
  5. You can also chat with your friends while playing the game.

\ No newline at end of file diff --git a/spaces/44ov41za8i/FreeVC/speaker_encoder/audio.py b/spaces/44ov41za8i/FreeVC/speaker_encoder/audio.py deleted file mode 100644 index 2fcb77ad1d3a85f523e24f84691886736a5686cb..0000000000000000000000000000000000000000 --- a/spaces/44ov41za8i/FreeVC/speaker_encoder/audio.py +++ /dev/null @@ -1,107 +0,0 @@ -from scipy.ndimage.morphology import binary_dilation -from speaker_encoder.params_data import * -from pathlib import Path -from typing import Optional, Union -import numpy as np -import webrtcvad -import librosa -import struct - -int16_max = (2 ** 15) - 1 - - -def preprocess_wav(fpath_or_wav: Union[str, Path, np.ndarray], - source_sr: Optional[int] = None): - """ - Applies the preprocessing operations used in training the Speaker Encoder to a waveform - either on disk or in memory. The waveform will be resampled to match the data hyperparameters. - - :param fpath_or_wav: either a filepath to an audio file (many extensions are supported, not - just .wav), either the waveform as a numpy array of floats. - :param source_sr: if passing an audio waveform, the sampling rate of the waveform before - preprocessing. After preprocessing, the waveform's sampling rate will match the data - hyperparameters. If passing a filepath, the sampling rate will be automatically detected and - this argument will be ignored. - """ - # Load the wav from disk if needed - if isinstance(fpath_or_wav, str) or isinstance(fpath_or_wav, Path): - wav, source_sr = librosa.load(fpath_or_wav, sr=None) - else: - wav = fpath_or_wav - - # Resample the wav if needed - if source_sr is not None and source_sr != sampling_rate: - wav = librosa.resample(wav, source_sr, sampling_rate) - - # Apply the preprocessing: normalize volume and shorten long silences - wav = normalize_volume(wav, audio_norm_target_dBFS, increase_only=True) - wav = trim_long_silences(wav) - - return wav - - -def wav_to_mel_spectrogram(wav): - """ - Derives a mel spectrogram ready to be used by the encoder from a preprocessed audio waveform. - Note: this not a log-mel spectrogram. - """ - frames = librosa.feature.melspectrogram( - y=wav, - sr=sampling_rate, - n_fft=int(sampling_rate * mel_window_length / 1000), - hop_length=int(sampling_rate * mel_window_step / 1000), - n_mels=mel_n_channels - ) - return frames.astype(np.float32).T - - -def trim_long_silences(wav): - """ - Ensures that segments without voice in the waveform remain no longer than a - threshold determined by the VAD parameters in params.py. 
- - :param wav: the raw waveform as a numpy array of floats - :return: the same waveform with silences trimmed away (length <= original wav length) - """ - # Compute the voice detection window size - samples_per_window = (vad_window_length * sampling_rate) // 1000 - - # Trim the end of the audio to have a multiple of the window size - wav = wav[:len(wav) - (len(wav) % samples_per_window)] - - # Convert the float waveform to 16-bit mono PCM - pcm_wave = struct.pack("%dh" % len(wav), *(np.round(wav * int16_max)).astype(np.int16)) - - # Perform voice activation detection - voice_flags = [] - vad = webrtcvad.Vad(mode=3) - for window_start in range(0, len(wav), samples_per_window): - window_end = window_start + samples_per_window - voice_flags.append(vad.is_speech(pcm_wave[window_start * 2:window_end * 2], - sample_rate=sampling_rate)) - voice_flags = np.array(voice_flags) - - # Smooth the voice detection with a moving average - def moving_average(array, width): - array_padded = np.concatenate((np.zeros((width - 1) // 2), array, np.zeros(width // 2))) - ret = np.cumsum(array_padded, dtype=float) - ret[width:] = ret[width:] - ret[:-width] - return ret[width - 1:] / width - - audio_mask = moving_average(voice_flags, vad_moving_average_width) - audio_mask = np.round(audio_mask).astype(np.bool) - - # Dilate the voiced regions - audio_mask = binary_dilation(audio_mask, np.ones(vad_max_silence_length + 1)) - audio_mask = np.repeat(audio_mask, samples_per_window) - - return wav[audio_mask == True] - - -def normalize_volume(wav, target_dBFS, increase_only=False, decrease_only=False): - if increase_only and decrease_only: - raise ValueError("Both increase only and decrease only are set") - dBFS_change = target_dBFS - 10 * np.log10(np.mean(wav ** 2)) - if (dBFS_change < 0 and increase_only) or (dBFS_change > 0 and decrease_only): - return wav - return wav * (10 ** (dBFS_change / 20)) diff --git a/spaces/A666sxr/Genshin_TTS/text/mandarin.py b/spaces/A666sxr/Genshin_TTS/text/mandarin.py deleted file mode 100644 index a9ce0c4b223cd7fbb00e8332d2dd53de4c7cea09..0000000000000000000000000000000000000000 --- a/spaces/A666sxr/Genshin_TTS/text/mandarin.py +++ /dev/null @@ -1,328 +0,0 @@ -import os -import sys -import re -from pypinyin import lazy_pinyin, BOPOMOFO -import jieba -import cn2an - - -# List of (Latin alphabet, bopomofo) pairs: -_latin_to_bopomofo = [(re.compile('%s' % x[0], re.IGNORECASE), x[1]) for x in [ - ('a', 'ㄟˉ'), - ('b', 'ㄅㄧˋ'), - ('c', 'ㄙㄧˉ'), - ('d', 'ㄉㄧˋ'), - ('e', 'ㄧˋ'), - ('f', 'ㄝˊㄈㄨˋ'), - ('g', 'ㄐㄧˋ'), - ('h', 'ㄝˇㄑㄩˋ'), - ('i', 'ㄞˋ'), - ('j', 'ㄐㄟˋ'), - ('k', 'ㄎㄟˋ'), - ('l', 'ㄝˊㄛˋ'), - ('m', 'ㄝˊㄇㄨˋ'), - ('n', 'ㄣˉ'), - ('o', 'ㄡˉ'), - ('p', 'ㄆㄧˉ'), - ('q', 'ㄎㄧㄡˉ'), - ('r', 'ㄚˋ'), - ('s', 'ㄝˊㄙˋ'), - ('t', 'ㄊㄧˋ'), - ('u', 'ㄧㄡˉ'), - ('v', 'ㄨㄧˉ'), - ('w', 'ㄉㄚˋㄅㄨˋㄌㄧㄡˋ'), - ('x', 'ㄝˉㄎㄨˋㄙˋ'), - ('y', 'ㄨㄞˋ'), - ('z', 'ㄗㄟˋ') -]] - -# List of (bopomofo, romaji) pairs: -_bopomofo_to_romaji = [(re.compile('%s' % x[0]), x[1]) for x in [ - ('ㄅㄛ', 'p⁼wo'), - ('ㄆㄛ', 'pʰwo'), - ('ㄇㄛ', 'mwo'), - ('ㄈㄛ', 'fwo'), - ('ㄅ', 'p⁼'), - ('ㄆ', 'pʰ'), - ('ㄇ', 'm'), - ('ㄈ', 'f'), - ('ㄉ', 't⁼'), - ('ㄊ', 'tʰ'), - ('ㄋ', 'n'), - ('ㄌ', 'l'), - ('ㄍ', 'k⁼'), - ('ㄎ', 'kʰ'), - ('ㄏ', 'h'), - ('ㄐ', 'ʧ⁼'), - ('ㄑ', 'ʧʰ'), - ('ㄒ', 'ʃ'), - ('ㄓ', 'ʦ`⁼'), - ('ㄔ', 'ʦ`ʰ'), - ('ㄕ', 's`'), - ('ㄖ', 'ɹ`'), - ('ㄗ', 'ʦ⁼'), - ('ㄘ', 'ʦʰ'), - ('ㄙ', 's'), - ('ㄚ', 'a'), - ('ㄛ', 'o'), - ('ㄜ', 'ə'), - ('ㄝ', 'e'), - ('ㄞ', 'ai'), - ('ㄟ', 'ei'), - ('ㄠ', 'au'), - ('ㄡ', 'ou'), - ('ㄧㄢ', 'yeNN'), - ('ㄢ', 'aNN'), - ('ㄧㄣ', 'iNN'), - ('ㄣ', 'əNN'), - ('ㄤ', 'aNg'), 
- ('ㄧㄥ', 'iNg'), - ('ㄨㄥ', 'uNg'), - ('ㄩㄥ', 'yuNg'), - ('ㄥ', 'əNg'), - ('ㄦ', 'əɻ'), - ('ㄧ', 'i'), - ('ㄨ', 'u'), - ('ㄩ', 'ɥ'), - ('ˉ', '→'), - ('ˊ', '↑'), - ('ˇ', '↓↑'), - ('ˋ', '↓'), - ('˙', ''), - (',', ','), - ('。', '.'), - ('!', '!'), - ('?', '?'), - ('—', '-') -]] - -# List of (romaji, ipa) pairs: -_romaji_to_ipa = [(re.compile('%s' % x[0], re.IGNORECASE), x[1]) for x in [ - ('ʃy', 'ʃ'), - ('ʧʰy', 'ʧʰ'), - ('ʧ⁼y', 'ʧ⁼'), - ('NN', 'n'), - ('Ng', 'ŋ'), - ('y', 'j'), - ('h', 'x') -]] - -# List of (bopomofo, ipa) pairs: -_bopomofo_to_ipa = [(re.compile('%s' % x[0]), x[1]) for x in [ - ('ㄅㄛ', 'p⁼wo'), - ('ㄆㄛ', 'pʰwo'), - ('ㄇㄛ', 'mwo'), - ('ㄈㄛ', 'fwo'), - ('ㄅ', 'p⁼'), - ('ㄆ', 'pʰ'), - ('ㄇ', 'm'), - ('ㄈ', 'f'), - ('ㄉ', 't⁼'), - ('ㄊ', 'tʰ'), - ('ㄋ', 'n'), - ('ㄌ', 'l'), - ('ㄍ', 'k⁼'), - ('ㄎ', 'kʰ'), - ('ㄏ', 'x'), - ('ㄐ', 'tʃ⁼'), - ('ㄑ', 'tʃʰ'), - ('ㄒ', 'ʃ'), - ('ㄓ', 'ts`⁼'), - ('ㄔ', 'ts`ʰ'), - ('ㄕ', 's`'), - ('ㄖ', 'ɹ`'), - ('ㄗ', 'ts⁼'), - ('ㄘ', 'tsʰ'), - ('ㄙ', 's'), - ('ㄚ', 'a'), - ('ㄛ', 'o'), - ('ㄜ', 'ə'), - ('ㄝ', 'ɛ'), - ('ㄞ', 'aɪ'), - ('ㄟ', 'eɪ'), - ('ㄠ', 'ɑʊ'), - ('ㄡ', 'oʊ'), - ('ㄧㄢ', 'jɛn'), - ('ㄩㄢ', 'ɥæn'), - ('ㄢ', 'an'), - ('ㄧㄣ', 'in'), - ('ㄩㄣ', 'ɥn'), - ('ㄣ', 'ən'), - ('ㄤ', 'ɑŋ'), - ('ㄧㄥ', 'iŋ'), - ('ㄨㄥ', 'ʊŋ'), - ('ㄩㄥ', 'jʊŋ'), - ('ㄥ', 'əŋ'), - ('ㄦ', 'əɻ'), - ('ㄧ', 'i'), - ('ㄨ', 'u'), - ('ㄩ', 'ɥ'), - ('ˉ', '→'), - ('ˊ', '↑'), - ('ˇ', '↓↑'), - ('ˋ', '↓'), - ('˙', ''), - (',', ','), - ('。', '.'), - ('!', '!'), - ('?', '?'), - ('—', '-') -]] - -# List of (bopomofo, ipa2) pairs: -_bopomofo_to_ipa2 = [(re.compile('%s' % x[0]), x[1]) for x in [ - ('ㄅㄛ', 'pwo'), - ('ㄆㄛ', 'pʰwo'), - ('ㄇㄛ', 'mwo'), - ('ㄈㄛ', 'fwo'), - ('ㄅ', 'p'), - ('ㄆ', 'pʰ'), - ('ㄇ', 'm'), - ('ㄈ', 'f'), - ('ㄉ', 't'), - ('ㄊ', 'tʰ'), - ('ㄋ', 'n'), - ('ㄌ', 'l'), - ('ㄍ', 'k'), - ('ㄎ', 'kʰ'), - ('ㄏ', 'h'), - ('ㄐ', 'tɕ'), - ('ㄑ', 'tɕʰ'), - ('ㄒ', 'ɕ'), - ('ㄓ', 'tʂ'), - ('ㄔ', 'tʂʰ'), - ('ㄕ', 'ʂ'), - ('ㄖ', 'ɻ'), - ('ㄗ', 'ts'), - ('ㄘ', 'tsʰ'), - ('ㄙ', 's'), - ('ㄚ', 'a'), - ('ㄛ', 'o'), - ('ㄜ', 'ɤ'), - ('ㄝ', 'ɛ'), - ('ㄞ', 'aɪ'), - ('ㄟ', 'eɪ'), - ('ㄠ', 'ɑʊ'), - ('ㄡ', 'oʊ'), - ('ㄧㄢ', 'jɛn'), - ('ㄩㄢ', 'yæn'), - ('ㄢ', 'an'), - ('ㄧㄣ', 'in'), - ('ㄩㄣ', 'yn'), - ('ㄣ', 'ən'), - ('ㄤ', 'ɑŋ'), - ('ㄧㄥ', 'iŋ'), - ('ㄨㄥ', 'ʊŋ'), - ('ㄩㄥ', 'jʊŋ'), - ('ㄥ', 'ɤŋ'), - ('ㄦ', 'əɻ'), - ('ㄧ', 'i'), - ('ㄨ', 'u'), - ('ㄩ', 'y'), - ('ˉ', '˥'), - ('ˊ', '˧˥'), - ('ˇ', '˨˩˦'), - ('ˋ', '˥˩'), - ('˙', ''), - (',', ','), - ('。', '.'), - ('!', '!'), - ('?', '?'), - ('—', '-') -]] - - -def number_to_chinese(text): - numbers = re.findall(r'\d+(?:\.?\d+)?', text) - for number in numbers: - text = text.replace(number, cn2an.an2cn(number), 1) - return text - - -def chinese_to_bopomofo(text, taiwanese=False): - text = text.replace('、', ',').replace(';', ',').replace(':', ',') - words = jieba.lcut(text, cut_all=False) - text = '' - for word in words: - bopomofos = lazy_pinyin(word, BOPOMOFO) - if not re.search('[\u4e00-\u9fff]', word): - text += word - continue - for i in range(len(bopomofos)): - bopomofos[i] = re.sub(r'([\u3105-\u3129])$', r'\1ˉ', bopomofos[i]) - if text != '': - text += ' ' - if taiwanese: - text += '#'+'#'.join(bopomofos) - else: - text += ''.join(bopomofos) - return text - - -def latin_to_bopomofo(text): - for regex, replacement in _latin_to_bopomofo: - text = re.sub(regex, replacement, text) - return text - - -def bopomofo_to_romaji(text): - for regex, replacement in _bopomofo_to_romaji: - text = re.sub(regex, replacement, text) - return text - - -def bopomofo_to_ipa(text): - for regex, replacement in _bopomofo_to_ipa: - text = re.sub(regex, replacement, text) 
- return text - - -def bopomofo_to_ipa2(text): - for regex, replacement in _bopomofo_to_ipa2: - text = re.sub(regex, replacement, text) - return text - - -def chinese_to_romaji(text): - text = number_to_chinese(text) - text = chinese_to_bopomofo(text) - text = latin_to_bopomofo(text) - text = bopomofo_to_romaji(text) - text = re.sub('i([aoe])', r'y\1', text) - text = re.sub('u([aoəe])', r'w\1', text) - text = re.sub('([ʦsɹ]`[⁼ʰ]?)([→↓↑ ]+|$)', - r'\1ɹ`\2', text).replace('ɻ', 'ɹ`') - text = re.sub('([ʦs][⁼ʰ]?)([→↓↑ ]+|$)', r'\1ɹ\2', text) - return text - - -def chinese_to_lazy_ipa(text): - text = chinese_to_romaji(text) - for regex, replacement in _romaji_to_ipa: - text = re.sub(regex, replacement, text) - return text - - -def chinese_to_ipa(text): - text = number_to_chinese(text) - text = chinese_to_bopomofo(text) - text = latin_to_bopomofo(text) - text = bopomofo_to_ipa(text) - text = re.sub('i([aoe])', r'j\1', text) - text = re.sub('u([aoəe])', r'w\1', text) - text = re.sub('([sɹ]`[⁼ʰ]?)([→↓↑ ]+|$)', - r'\1ɹ`\2', text).replace('ɻ', 'ɹ`') - text = re.sub('([s][⁼ʰ]?)([→↓↑ ]+|$)', r'\1ɹ\2', text) - return text - - -def chinese_to_ipa2(text, taiwanese=False): - text = number_to_chinese(text) - text = chinese_to_bopomofo(text, taiwanese) - text = latin_to_bopomofo(text) - text = bopomofo_to_ipa2(text) - text = re.sub(r'i([aoe])', r'j\1', text) - text = re.sub(r'u([aoəe])', r'w\1', text) - text = re.sub(r'([ʂɹ]ʰ?)([˩˨˧˦˥ ]+|$)', r'\1ʅ\2', text) - text = re.sub(r'(sʰ?)([˩˨˧˦˥ ]+|$)', r'\1ɿ\2', text) - return text diff --git a/spaces/AI-Dashboards/HEDIS.Assessment.PHQ9.GADD7.SDoH/README.md b/spaces/AI-Dashboards/HEDIS.Assessment.PHQ9.GADD7.SDoH/README.md deleted file mode 100644 index 658908b1ae58eca835fb7f73086332f3c2173fd0..0000000000000000000000000000000000000000 --- a/spaces/AI-Dashboards/HEDIS.Assessment.PHQ9.GADD7.SDoH/README.md +++ /dev/null @@ -1,10 +0,0 @@ ---- -title: AI.Dashboard.PHQ9.GAD7.SDOH -emoji: 🏢 -colorFrom: red -colorTo: gray -sdk: static -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference \ No newline at end of file diff --git a/spaces/AIWaves/Debate/src/agents/Agent/__init__.py b/spaces/AIWaves/Debate/src/agents/Agent/__init__.py deleted file mode 100644 index 5919811a5cec1b9d44051cdb1e9ac26a21ee3064..0000000000000000000000000000000000000000 --- a/spaces/AIWaves/Debate/src/agents/Agent/__init__.py +++ /dev/null @@ -1 +0,0 @@ -from .Agent import Agent \ No newline at end of file diff --git a/spaces/AIZero2HeroBootcamp/3DHuman/app.py b/spaces/AIZero2HeroBootcamp/3DHuman/app.py deleted file mode 100644 index 06fd1947c7e9be88f0e449f073d510ed754a739b..0000000000000000000000000000000000000000 --- a/spaces/AIZero2HeroBootcamp/3DHuman/app.py +++ /dev/null @@ -1,27 +0,0 @@ -import time -import gradio as gr -import os - - -def load_mesh(mesh_file_name): - time.sleep(2) - return mesh_file_name - -description="3D Virtual Food 🥐🥑🥒🥓🥔🥕🥖🥗🥘🥙🥚🥛🥜🥝🥞🥟🥠🥡🥢🥣🥤🥥🥦🥧🥨🥩🥪🥫🥬🥭🥮🥯" - -inputs = gr.Model3D() -outputs = gr.Model3D(clear_color=[0.8, 0.2, 0.2, 1.0]) - -demo = gr.Interface( - fn=load_mesh, - inputs=inputs, - outputs=outputs, - examples=[ - [os.path.join(os.path.dirname(__file__), "FinalBaseMesh.obj")], - [os.path.join(os.path.dirname(__file__), "BEAR_BLK.OBJ")] - ], - description=description, - cache_examples=True, -) - -demo.launch() \ No newline at end of file diff --git a/spaces/AIatUIUC/CodeLATS/lats/lats_main.py b/spaces/AIatUIUC/CodeLATS/lats/lats_main.py deleted file mode 100644 index 
0cb4c12f36e4556e8a04614daadcf0193a1054d0..0000000000000000000000000000000000000000 --- a/spaces/AIatUIUC/CodeLATS/lats/lats_main.py +++ /dev/null @@ -1,78 +0,0 @@ -import os -import argparse - -from lats import run_lats - - -def get_args(): - parser = argparse.ArgumentParser() - parser.add_argument("--run_name", type=str, help="The name of the run") - parser.add_argument("--root_dir", type=str, - help="The root logging directory", default="root") - parser.add_argument("--dataset_path", type=str, - help="The path to the benchmark dataset", default="root") - parser.add_argument("--strategy", type=str, - help="Strategy: `simple`, `reflexion`") - parser.add_argument("--language", type=str, help="Strategy: `py` or `rs`") - parser.add_argument( - "--model", type=str, help="OpenAI models only for now. For best results, use GPT-4") - parser.add_argument("--pass_at_k", type=int, - help="Pass@k metric", default=1) - parser.add_argument("--max_iters", type=int, - help="The maximum number of self-improvement iterations", default=10) - parser.add_argument("--expansion_factor", type=int, - help="The expansion factor for the reflexion UCS and A* strategy", default=3) - parser.add_argument("--verbose", action='store_true', - help="To print live logs") - parser.add_argument("--instruction", type=str, - help="text string", default="") - parser.add_argument("--n_samples", type=int, - help="The number of nodes added during expansion", default=3) - parser.add_argument("--depth", type=int, - help="Tree depth", default=5) - - # TODO: implement this - # parser.add_argument("--is_resume", action='store_true', help="To resume run") - # parser.add_argument("--resume_dir", type=str, help="If resume, the logging directory", default="") - args = parser.parse_args() - return args - - -def strategy_factory(strategy: str): - def kwargs_wrapper_gen(func, delete_keys=[]): - def kwargs_wrapper(**kwargs): - for key in delete_keys: - del kwargs[key] - return func(**kwargs) - return kwargs_wrapper - - return kwargs_wrapper_gen(run_lats, delete_keys=[]) - - -def lats_main(args): - - # check if the strategy is valid - run_strategy = strategy_factory(args.strategy) - - # start the run - # evaluate with pass@k - x = run_strategy( - model_name=args.model, - language=args.language, - max_iters=args.max_iters, - verbose=args.verbose, - instruction=args.instruction, - n_samples=args.n_samples, - depth=args.depth - ) - - return x - - - -def main(args): - lats_main(args) - -if __name__ == "__main__": - args = get_args() - main(args) diff --git a/spaces/AchyuthGamer/OpenGPT/g4f/Provider/deprecated/AiService.py b/spaces/AchyuthGamer/OpenGPT/g4f/Provider/deprecated/AiService.py deleted file mode 100644 index 9b41e3c82261585d4eb2114665cc2b88354ee45b..0000000000000000000000000000000000000000 --- a/spaces/AchyuthGamer/OpenGPT/g4f/Provider/deprecated/AiService.py +++ /dev/null @@ -1,36 +0,0 @@ -from __future__ import annotations - -import requests - -from ...typing import Any, CreateResult -from ..base_provider import BaseProvider - - -class AiService(BaseProvider): - url = "https://aiservice.vercel.app/" - working = False - supports_gpt_35_turbo = True - - @staticmethod - def create_completion( - model: str, - messages: list[dict[str, str]], - stream: bool, - **kwargs: Any, - ) -> CreateResult: - base = "\n".join(f"{message['role']}: {message['content']}" for message in messages) - base += "\nassistant: " - - headers = { - "accept": "*/*", - "content-type": "text/plain;charset=UTF-8", - "sec-fetch-dest": "empty", - "sec-fetch-mode": "cors", - 
"sec-fetch-site": "same-origin", - "Referer": "https://aiservice.vercel.app/chat", - } - data = {"input": base} - url = "https://aiservice.vercel.app/api/chat/answer" - response = requests.post(url, headers=headers, json=data) - response.raise_for_status() - yield response.json()["data"] diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/circularprogress/Factory.d.ts b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/circularprogress/Factory.d.ts deleted file mode 100644 index b938fd0546c80efdd2f9a971d900de644992259e..0000000000000000000000000000000000000000 --- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/circularprogress/Factory.d.ts +++ /dev/null @@ -1,13 +0,0 @@ -import CircularProgress from './CircularProgress'; - -export default function ( - config?: CircularProgress.IConfig -): CircularProgress; - -export default function ( - x?: number, y?: number, - radius?: number, - barColor?: string | number, - value?: number, - config?: CircularProgress.IConfig -): CircularProgress; \ No newline at end of file diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/dialog/Dialog.js b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/dialog/Dialog.js deleted file mode 100644 index d0731dd19b21505a95c46ce39cf63cdee77f2175..0000000000000000000000000000000000000000 --- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/dialog/Dialog.js +++ /dev/null @@ -1,306 +0,0 @@ -import Sizer from '../sizer/Sizer.js'; -import OverlapSizer from '../overlapsizer/OverlapSizer.js'; -import Buttons from '../buttons/Buttons.js'; -import FixWidthButtons from '../fixwidthbuttons/FixWidthButtons.js'; -import GridButtons from '../gridbuttons/GridButtons.js'; -import Methods from './methods/Methods.js'; - -const GetValue = Phaser.Utils.Objects.GetValue; - -class Dialog extends Sizer { - constructor(scene, config) { - if (config === undefined) { - config = {}; - } - // Create sizer - config.orientation = 1; // Top to bottom - super(scene, config); - this.type = 'rexDialog'; - this.eventEmitter = GetValue(config, 'eventEmitter', this); - - // Add elements - var background = GetValue(config, 'background', undefined); - var title = GetValue(config, 'title', undefined); - var toolbar = GetValue(config, 'toolbar', undefined); - var toolbarBackground = GetValue(config, 'toolbarBackground', undefined); - var leftToolbar = GetValue(config, 'leftToolbar', undefined); - var leftToolbarBackground = GetValue(config, 'leftToolbarBackground', undefined); - var content = GetValue(config, 'content', undefined); - var description = GetValue(config, 'description', undefined); - var choicesSizer; - var choices = GetValue(config, 'choices', undefined); - var choicesBackground = GetValue(config, 'choicesBackground', undefined); - var actionsSizer; - var actions = GetValue(config, 'actions', undefined); - var actionsBackground = GetValue(config, 'actionsBackground', undefined); - var clickConfig = GetValue(config, 'click', undefined); - - if (background) { - this.addBackground(background); - } - - var toolbarSizer; - if (toolbar) { - toolbarSizer = new Buttons(scene, { - groupName: 'toolbar', - background: toolbarBackground, - buttons: toolbar, - orientation: 0, // Left-right - space: { item: GetValue(config, 'space.toolbarItem', 0) }, - click: clickConfig, - eventEmitter: this.eventEmitter, - }); - } - - var leftToolbarSizer; - if (leftToolbar) { - leftToolbarSizer = new Buttons(scene, { - groupName: 
'leftToolbar', - background: leftToolbarBackground, - buttons: leftToolbar, - orientation: 0, // Left-right - space: { item: GetValue(config, 'space.leftToolbarItem', 0) }, - click: clickConfig, - eventEmitter: this.eventEmitter, - }); - } - - // title or toolbar or leftToolbar - if (title || toolbar || leftToolbar) { - var titleExpandWidth = !!title && GetValue(config, 'expand.title', true); - var titleAlign = GetValue(config, 'align.title', 'center'); - var useOverlapSizer = - // Has title, title is not exapnd-width, title align to center - (title && !titleExpandWidth && (titleAlign === 'center')) || - // No title - (!title && (toolbar || leftToolbar)); - var useSizer = !useOverlapSizer; - - var titleSizer; - if (useSizer) { - titleSizer = new Sizer(scene, { orientation: 0 }); - } else { - titleSizer = new OverlapSizer(scene); - } - - var titleChildExpand = (useSizer) ? true : { height: true }; - - // Add leftToolbar - if (leftToolbarSizer) { - titleSizer.add( - leftToolbarSizer, - { align: 'left', expand: titleChildExpand } - ); - } - - // Add title - if (title) { - // Add space if not expand, align to right - if (useSizer && !titleExpandWidth && (titleAlign === 'right')) { - titleSizer.addSpace(); - } - - var padding = { - left: GetValue(config, 'space.titleLeft', 0), - right: GetValue(config, 'space.titleRight', 0) - } - var proportion = (titleExpandWidth) ? 1 : 0; - titleSizer.add( - title, - { align: titleAlign, proportion: proportion, expand: titleChildExpand, padding: padding } - ); - - // Add space if not expand, align to left - if (useSizer && !titleExpandWidth && (titleAlign === 'left')) { - titleSizer.addSpace(); - } - } - - // Add toolbar - if (toolbarSizer) { - // Add space if not title - if (useSizer && !title) { - titleSizer.addSpace(); - } - titleSizer.add( - toolbarSizer, - { align: 'right', expand: titleChildExpand } - ); - } - - // Add sizer to dialog - var titleSpace = GetValue(config, 'space.title', 0); - var padding; - if (content || description || choices || actions) { - padding = { bottom: titleSpace }; - } - var proportion = GetValue(config, 'proportion.title', 0); - this.add( - titleSizer, - { padding: padding, proportion: proportion, expand: true } - ); - } - - if (content) { - var align = GetValue(config, 'align.content', 'center'); - var contentSpace = GetValue(config, 'space.content', 0); - var padding = { - left: GetValue(config, 'space.contentLeft', 0), - right: GetValue(config, 'space.contentRight', 0), - bottom: ((description || choices || actions) ? contentSpace : 0) - } - var proportion = GetValue(config, 'proportion.content', 0); - var expand = GetValue(config, 'expand.content', true); - this.add( - content, - { align: align, padding: padding, proportion: proportion, expand: expand } - ); - } - - if (description) { - var align = GetValue(config, 'align.description', 'center'); - var descriptionSpace = GetValue(config, 'space.description', 0); - var padding = { - left: GetValue(config, 'space.descriptionLeft', 0), - right: GetValue(config, 'space.descriptionRight', 0), - bottom: ((choices || actions) ? descriptionSpace : 0) - } - var proportion = GetValue(config, 'proportion.description', 0); - var expand = GetValue(config, 'expand.description', true); - this.add( - description, - { align: align, padding: padding, proportion: proportion, expand: expand } - ); - } - - if (choices) { - var choicesType = GetValue(config, 'choicesType', '').split('-'); - var ButtonsClass = Contains(choicesType, 'wrap') ? FixWidthButtons : - Contains(choicesType, 'grid') ? 
GridButtons : - Buttons; - var buttonsType = Contains(choicesType, 'radio') ? 'radio' : - Contains(choicesType, 'checkboxes') ? 'checkboxes' : undefined; - - var space = { - left: GetValue(config, 'space.choicesBackgroundLeft', 0), - right: GetValue(config, 'space.choicesBackgroundRight', 0), - top: GetValue(config, 'space.choicesBackgroundTop', 0), - bottom: GetValue(config, 'space.choicesBackgroundBottom', 0), - }; - var itemSpace = GetValue(config, 'space.choice', 0); - if (ButtonsClass === Buttons) { - space.item = itemSpace; - } else if (ButtonsClass === FixWidthButtons) { - space.item = itemSpace; - space.line = GetValue(config, 'space.choiceLine', itemSpace); - } else { // GridButtons - space.column = GetValue(config, 'space.choiceColumn', itemSpace); - space.row = GetValue(config, 'space.choiceRow', itemSpace); - } - - var choicesConfig = { - width: GetValue(config, 'choicesWidth', undefined), - height: GetValue(config, 'choicesHeight', undefined), - groupName: 'choices', - buttonsType: buttonsType, - background: choicesBackground, - buttons: choices, - space: space, - click: clickConfig, - eventEmitter: this.eventEmitter, - setValueCallback: GetValue(config, 'choicesSetValueCallback', undefined), - setValueCallbackScope: GetValue(config, 'choicesSetValueCallbackScope', undefined) - }; - - if (ButtonsClass === Buttons) { - choicesConfig.orientation = Contains(choicesType, 'x') ? 0 : 1; - } - - choicesSizer = new ButtonsClass(scene, choicesConfig); - var choicesSpace = GetValue(config, 'space.choices', 0); - var padding = { - left: GetValue(config, 'space.choicesLeft', 0), - right: GetValue(config, 'space.choicesRight', 0), - bottom: ((actions) ? choicesSpace : 0) - } - var align = GetValue(config, 'align.choices', 'center'); - var proportion = GetValue(config, 'proportion.choices', 0); - var expand = GetValue(config, 'expand.choices', true); - this.add( - choicesSizer, - { align: align, padding: padding, proportion: proportion, expand: expand } - ); - - this.buttonsType = buttonsType; - } - - if (actions) { - actionsSizer = new Buttons(scene, { - groupName: 'actions', - background: actionsBackground, - buttons: actions, - orientation: 0, // Left-right - space: { item: GetValue(config, 'space.action', 0) }, - expand: GetValue(config, 'expand.actions', false), - align: GetValue(config, 'align.actions', 'center'), - click: clickConfig, - eventEmitter: this.eventEmitter, - }) - var padding = { - left: GetValue(config, 'space.actionsLeft', 0), - right: GetValue(config, 'space.actionsRight', 0) - } - var proportion = GetValue(config, 'proportion.action', 0); - this.add( - actionsSizer, - { align: 'center', padding: padding, proportion: proportion, expand: true } - ); - } - - EmitButtonEvent(this, 'click'); - EmitButtonEvent(this, 'over'); - EmitButtonEvent(this, 'out'); - EmitButtonEvent(this, 'enable'); - EmitButtonEvent(this, 'disable'); - - this.addChildrenMap('background', background); - this.addChildrenMap('title', title); - this.addChildrenMap('toolbar', toolbar); - this.addChildrenMap('leftToolbar', leftToolbar); - this.addChildrenMap('content', content); - this.addChildrenMap('description', description); - this.addChildrenMap('choices', (choicesSizer) ? choicesSizer.buttons : undefined); - this.addChildrenMap('actions', (actionsSizer) ? 
actionsSizer.buttons : undefined); - this.addChildrenMap('choicesSizer', choicesSizer); - this.addChildrenMap('actionsSizer', actionsSizer); - this.addChildrenMap('toolbarSizer', toolbarSizer); - this.addChildrenMap('leftToolbarSizer', leftToolbarSizer); - } -} - -var Contains = function (arr, item) { - return arr.indexOf(item) !== -1; -} - -var ButtonsGroupEventNameMap = { - actions: 'action', - choices: 'choice', - toolbar: 'toolbar', - leftToolbar: 'leftToolbar' -} - -var EmitButtonEvent = function (dialog, postEventName) { - dialog.on(`button.${postEventName}`, function (button, groupName, index, pointer, event) { - if (!ButtonsGroupEventNameMap.hasOwnProperty(groupName)) { - return - } - dialog.emit(`${ButtonsGroupEventNameMap[groupName]}.${postEventName}`, button, index, pointer, event); - }) -} - -Object.assign( - Dialog.prototype, - Methods -); - -export default Dialog; \ No newline at end of file diff --git a/spaces/Ahmedmewloud/Depplearnig/Makefile b/spaces/Ahmedmewloud/Depplearnig/Makefile deleted file mode 100644 index f080a464de5241653a9ea1335062dcccb4d681c4..0000000000000000000000000000000000000000 --- a/spaces/Ahmedmewloud/Depplearnig/Makefile +++ /dev/null @@ -1,28 +0,0 @@ -install: - pip install --upgrade pip &&\ - pip install -r requirements.txt - -test: - python -m pytest -vvv --cov=hello --cov=greeting \ - --cov=smath --cov=web tests - python -m pytest --nbval notebook.ipynb #tests our jupyter notebook - #python -m pytest -v tests/test_web.py #if you just want to test web - -debug: - python -m pytest -vv --pdb #Debugger is invoked - -one-test: - python -m pytest -vv tests/test_greeting.py::test_my_name4 - -debugthree: - #not working the way I expect - python -m pytest -vv --pdb --maxfail=4 # drop to PDB for first three failures - -format: - black *.py - -lint: - pylint --disable=R,C *.py - -all: install lint test format - diff --git a/spaces/AiPalsDev/Translate_It/README.md b/spaces/AiPalsDev/Translate_It/README.md deleted file mode 100644 index 92585f9f09bde3105258f48263b447cbe4fd45d1..0000000000000000000000000000000000000000 --- a/spaces/AiPalsDev/Translate_It/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Translate It -emoji: 🔥 -colorFrom: purple -colorTo: pink -sdk: gradio -sdk_version: 3.24.1 -app_file: app.py -pinned: false -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Ajaymaurya1008/meme-identifier/README.md b/spaces/Ajaymaurya1008/meme-identifier/README.md deleted file mode 100644 index 22071664bff479d4614c3922869d55f165005263..0000000000000000000000000000000000000000 --- a/spaces/Ajaymaurya1008/meme-identifier/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Hrishikesh332 Autotrain Meme Classification 42897109437 -emoji: 😻 -colorFrom: pink -colorTo: pink -sdk: streamlit -sdk_version: 1.17.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Alpaca233/SadTalker/src/face3d/options/inference_options.py b/spaces/Alpaca233/SadTalker/src/face3d/options/inference_options.py deleted file mode 100644 index c453965959ab4cfb31acbc424f994db68c3d4df5..0000000000000000000000000000000000000000 --- a/spaces/Alpaca233/SadTalker/src/face3d/options/inference_options.py +++ /dev/null @@ -1,23 +0,0 @@ -from face3d.options.base_options import BaseOptions - - -class InferenceOptions(BaseOptions): - """This class includes test options. 
- - It also includes shared options defined in BaseOptions. - """ - - def initialize(self, parser): - parser = BaseOptions.initialize(self, parser) # define shared options - parser.add_argument('--phase', type=str, default='test', help='train, val, test, etc') - parser.add_argument('--dataset_mode', type=str, default=None, help='chooses how datasets are loaded. [None | flist]') - - parser.add_argument('--input_dir', type=str, help='the folder of the input files') - parser.add_argument('--keypoint_dir', type=str, help='the folder of the keypoint files') - parser.add_argument('--output_dir', type=str, default='mp4', help='the output dir to save the extracted coefficients') - parser.add_argument('--save_split_files', action='store_true', help='save split files or not') - parser.add_argument('--inference_batch_size', type=int, default=8) - - # Dropout and Batchnorm has different behavior during training and test. - self.isTrain = False - return parser diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/examples/research_projects/README.md b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/examples/research_projects/README.md deleted file mode 100644 index ef50d423e68ff5c641e4419bd30f84787aebf839..0000000000000000000000000000000000000000 --- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/examples/research_projects/README.md +++ /dev/null @@ -1,14 +0,0 @@ -# Research projects - -This folder contains various research projects using 🧨 Diffusers. -They are not really maintained by the core maintainers of this library and often require a specific version of Diffusers that is indicated in the requirements file of each folder. -Updating them to the most recent version of the library will require some work. - -To use any of them, just run the command - -``` -pip install -r requirements.txt -``` -inside the folder of your choice. - -If you need help with any of those, please open an issue where you directly ping the author(s), as indicated at the top of the README of each folder. diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/scripts/conversion_ldm_uncond.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/scripts/conversion_ldm_uncond.py deleted file mode 100644 index d2ebb3934b6696fd427c9bf09eb051cf7befe7f4..0000000000000000000000000000000000000000 --- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/scripts/conversion_ldm_uncond.py +++ /dev/null @@ -1,56 +0,0 @@ -import argparse - -import OmegaConf -import torch - -from diffusers import DDIMScheduler, LDMPipeline, UNetLDMModel, VQModel - - -def convert_ldm_original(checkpoint_path, config_path, output_path): - config = OmegaConf.load(config_path) - state_dict = torch.load(checkpoint_path, map_location="cpu")["model"] - keys = list(state_dict.keys()) - - # extract state_dict for VQVAE - first_stage_dict = {} - first_stage_key = "first_stage_model." - for key in keys: - if key.startswith(first_stage_key): - first_stage_dict[key.replace(first_stage_key, "")] = state_dict[key] - - # extract state_dict for UNetLDM - unet_state_dict = {} - unet_key = "model.diffusion_model." 
- for key in keys: - if key.startswith(unet_key): - unet_state_dict[key.replace(unet_key, "")] = state_dict[key] - - vqvae_init_args = config.model.params.first_stage_config.params - unet_init_args = config.model.params.unet_config.params - - vqvae = VQModel(**vqvae_init_args).eval() - vqvae.load_state_dict(first_stage_dict) - - unet = UNetLDMModel(**unet_init_args).eval() - unet.load_state_dict(unet_state_dict) - - noise_scheduler = DDIMScheduler( - timesteps=config.model.params.timesteps, - beta_schedule="scaled_linear", - beta_start=config.model.params.linear_start, - beta_end=config.model.params.linear_end, - clip_sample=False, - ) - - pipeline = LDMPipeline(vqvae, unet, noise_scheduler) - pipeline.save_pretrained(output_path) - - -if __name__ == "__main__": - parser = argparse.ArgumentParser() - parser.add_argument("--checkpoint_path", type=str, required=True) - parser.add_argument("--config_path", type=str, required=True) - parser.add_argument("--output_path", type=str, required=True) - args = parser.parse_args() - - convert_ldm_original(args.checkpoint_path, args.config_path, args.output_path) diff --git a/spaces/Andy1621/uniformer_image_detection/configs/res2net/cascade_mask_rcnn_r2_101_fpn_20e_coco.py b/spaces/Andy1621/uniformer_image_detection/configs/res2net/cascade_mask_rcnn_r2_101_fpn_20e_coco.py deleted file mode 100644 index 50df4e2db500d575eaddd7538b49cc808e30b50e..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_detection/configs/res2net/cascade_mask_rcnn_r2_101_fpn_20e_coco.py +++ /dev/null @@ -1,4 +0,0 @@ -_base_ = '../cascade_rcnn/cascade_mask_rcnn_r50_fpn_20e_coco.py' -model = dict( - pretrained='open-mmlab://res2net101_v1d_26w_4s', - backbone=dict(type='Res2Net', depth=101, scales=4, base_width=26)) diff --git a/spaces/Andy1621/uniformer_image_detection/configs/tridentnet/tridentnet_r50_caffe_1x_coco.py b/spaces/Andy1621/uniformer_image_detection/configs/tridentnet/tridentnet_r50_caffe_1x_coco.py deleted file mode 100644 index a6a668c4e33611e2b69009741558d83558cc9b4f..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_detection/configs/tridentnet/tridentnet_r50_caffe_1x_coco.py +++ /dev/null @@ -1,53 +0,0 @@ -_base_ = [ - '../_base_/models/faster_rcnn_r50_caffe_c4.py', - '../_base_/datasets/coco_detection.py', - '../_base_/schedules/schedule_1x.py', '../_base_/default_runtime.py' -] - -model = dict( - type='TridentFasterRCNN', - pretrained='open-mmlab://detectron2/resnet50_caffe', - backbone=dict( - type='TridentResNet', - trident_dilations=(1, 2, 3), - num_branch=3, - test_branch_idx=1), - roi_head=dict(type='TridentRoIHead', num_branch=3, test_branch_idx=1), - train_cfg=dict( - rpn_proposal=dict(max_per_img=500), - rcnn=dict( - sampler=dict(num=128, pos_fraction=0.5, - add_gt_as_proposals=False)))) - -# use caffe img_norm -img_norm_cfg = dict( - mean=[103.530, 116.280, 123.675], std=[1.0, 1.0, 1.0], to_rgb=False) -train_pipeline = [ - dict(type='LoadImageFromFile'), - dict(type='LoadAnnotations', with_bbox=True), - dict(type='Resize', img_scale=(1333, 800), keep_ratio=True), - dict(type='RandomFlip', flip_ratio=0.5), - dict(type='Normalize', **img_norm_cfg), - dict(type='Pad', size_divisor=32), - dict(type='DefaultFormatBundle'), - dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels']) -] -test_pipeline = [ - dict(type='LoadImageFromFile'), - dict( - type='MultiScaleFlipAug', - img_scale=(1333, 800), - flip=False, - transforms=[ - dict(type='Resize', keep_ratio=True), - dict(type='RandomFlip'), - 
dict(type='Normalize', **img_norm_cfg), - dict(type='Pad', size_divisor=32), - dict(type='ImageToTensor', keys=['img']), - dict(type='Collect', keys=['img']) - ]) -] -data = dict( - train=dict(pipeline=train_pipeline), - val=dict(pipeline=test_pipeline), - test=dict(pipeline=test_pipeline)) diff --git a/spaces/Andy1621/uniformer_image_segmentation/README.md b/spaces/Andy1621/uniformer_image_segmentation/README.md deleted file mode 100644 index e7fc71b41bc1cfe47578010d4116bc4e297fce2b..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_segmentation/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Uniformer_image_segmentation -emoji: ⚡ -colorFrom: blue -colorTo: pink -sdk: gradio -sdk_version: 3.0.4 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_internal/utils/egg_link.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_internal/utils/egg_link.py deleted file mode 100644 index eb57ed1519f82adb79a3d2377e1f286df9d8ef6b..0000000000000000000000000000000000000000 --- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_internal/utils/egg_link.py +++ /dev/null @@ -1,72 +0,0 @@ -import os -import re -import sys -from typing import List, Optional - -from pip._internal.locations import site_packages, user_site -from pip._internal.utils.virtualenv import ( - running_under_virtualenv, - virtualenv_no_global, -) - -__all__ = [ - "egg_link_path_from_sys_path", - "egg_link_path_from_location", -] - - -def _egg_link_name(raw_name: str) -> str: - """ - Convert a Name metadata value to a .egg-link name, by applying - the same substitution as pkg_resources's safe_name function. - Note: we cannot use canonicalize_name because it has a different logic. - """ - return re.sub("[^A-Za-z0-9.]+", "-", raw_name) + ".egg-link" - - -def egg_link_path_from_sys_path(raw_name: str) -> Optional[str]: - """ - Look for a .egg-link file for project name, by walking sys.path. - """ - egg_link_name = _egg_link_name(raw_name) - for path_item in sys.path: - egg_link = os.path.join(path_item, egg_link_name) - if os.path.isfile(egg_link): - return egg_link - return None - - -def egg_link_path_from_location(raw_name: str) -> Optional[str]: - """ - Return the path for the .egg-link file if it exists, otherwise, None. - - There's 3 scenarios: - 1) not in a virtualenv - try to find in site.USER_SITE, then site_packages - 2) in a no-global virtualenv - try to find in site_packages - 3) in a yes-global virtualenv - try to find in site_packages, then site.USER_SITE - (don't look in global location) - - For #1 and #3, there could be odd cases, where there's an egg-link in 2 - locations. - - This method will just return the first one found. 
- """ - sites: List[str] = [] - if running_under_virtualenv(): - sites.append(site_packages) - if not virtualenv_no_global() and user_site: - sites.append(user_site) - else: - if user_site: - sites.append(user_site) - sites.append(site_packages) - - egg_link_name = _egg_link_name(raw_name) - for site in sites: - egglink = os.path.join(site, egg_link_name) - if os.path.isfile(egglink): - return egglink - return None diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/rich/_null_file.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/rich/_null_file.py deleted file mode 100644 index b659673ef3c1d5431e6699898ae4d073b4be764b..0000000000000000000000000000000000000000 --- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/rich/_null_file.py +++ /dev/null @@ -1,69 +0,0 @@ -from types import TracebackType -from typing import IO, Iterable, Iterator, List, Optional, Type - - -class NullFile(IO[str]): - def close(self) -> None: - pass - - def isatty(self) -> bool: - return False - - def read(self, __n: int = 1) -> str: - return "" - - def readable(self) -> bool: - return False - - def readline(self, __limit: int = 1) -> str: - return "" - - def readlines(self, __hint: int = 1) -> List[str]: - return [] - - def seek(self, __offset: int, __whence: int = 1) -> int: - return 0 - - def seekable(self) -> bool: - return False - - def tell(self) -> int: - return 0 - - def truncate(self, __size: Optional[int] = 1) -> int: - return 0 - - def writable(self) -> bool: - return False - - def writelines(self, __lines: Iterable[str]) -> None: - pass - - def __next__(self) -> str: - return "" - - def __iter__(self) -> Iterator[str]: - return iter([""]) - - def __enter__(self) -> IO[str]: - pass - - def __exit__( - self, - __t: Optional[Type[BaseException]], - __value: Optional[BaseException], - __traceback: Optional[TracebackType], - ) -> None: - pass - - def write(self, text: str) -> int: - return 0 - - def flush(self) -> None: - pass - - def fileno(self) -> int: - return -1 - - -NULL_FILE = NullFile() diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/command/build_ext.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/command/build_ext.py deleted file mode 100644 index cbfe3ec1c28529aade613b000d5b051807287deb..0000000000000000000000000000000000000000 --- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/command/build_ext.py +++ /dev/null @@ -1,383 +0,0 @@ -import os -import sys -import itertools -from importlib.machinery import EXTENSION_SUFFIXES -from importlib.util import cache_from_source as _compiled_file_name -from typing import Dict, Iterator, List, Tuple - -from distutils.command.build_ext import build_ext as _du_build_ext -from distutils.ccompiler import new_compiler -from distutils.sysconfig import customize_compiler, get_config_var -from distutils import log - -from setuptools.errors import BaseError -from setuptools.extension import Extension, Library - -try: - # Attempt to use Cython for building extensions, if available - from Cython.Distutils.build_ext import build_ext as _build_ext - # Additionally, assert that the compiler module will load - # also. Ref #1229. 
- __import__('Cython.Compiler.Main') -except ImportError: - _build_ext = _du_build_ext - -# make sure _config_vars is initialized -get_config_var("LDSHARED") -from distutils.sysconfig import _config_vars as _CONFIG_VARS # noqa - - -def _customize_compiler_for_shlib(compiler): - if sys.platform == "darwin": - # building .dylib requires additional compiler flags on OSX; here we - # temporarily substitute the pyconfig.h variables so that distutils' - # 'customize_compiler' uses them before we build the shared libraries. - tmp = _CONFIG_VARS.copy() - try: - # XXX Help! I don't have any idea whether these are right... - _CONFIG_VARS['LDSHARED'] = ( - "gcc -Wl,-x -dynamiclib -undefined dynamic_lookup") - _CONFIG_VARS['CCSHARED'] = " -dynamiclib" - _CONFIG_VARS['SO'] = ".dylib" - customize_compiler(compiler) - finally: - _CONFIG_VARS.clear() - _CONFIG_VARS.update(tmp) - else: - customize_compiler(compiler) - - -have_rtld = False -use_stubs = False -libtype = 'shared' - -if sys.platform == "darwin": - use_stubs = True -elif os.name != 'nt': - try: - import dl - use_stubs = have_rtld = hasattr(dl, 'RTLD_NOW') - except ImportError: - pass - - -def if_dl(s): - return s if have_rtld else '' - - -def get_abi3_suffix(): - """Return the file extension for an abi3-compliant Extension()""" - for suffix in EXTENSION_SUFFIXES: - if '.abi3' in suffix: # Unix - return suffix - elif suffix == '.pyd': # Windows - return suffix - - -class build_ext(_build_ext): - editable_mode: bool = False - inplace: bool = False - - def run(self): - """Build extensions in build directory, then copy if --inplace""" - old_inplace, self.inplace = self.inplace, 0 - _build_ext.run(self) - self.inplace = old_inplace - if old_inplace: - self.copy_extensions_to_source() - - def _get_inplace_equivalent(self, build_py, ext: Extension) -> Tuple[str, str]: - fullname = self.get_ext_fullname(ext.name) - filename = self.get_ext_filename(fullname) - modpath = fullname.split('.') - package = '.'.join(modpath[:-1]) - package_dir = build_py.get_package_dir(package) - inplace_file = os.path.join(package_dir, os.path.basename(filename)) - regular_file = os.path.join(self.build_lib, filename) - return (inplace_file, regular_file) - - def copy_extensions_to_source(self): - build_py = self.get_finalized_command('build_py') - for ext in self.extensions: - inplace_file, regular_file = self._get_inplace_equivalent(build_py, ext) - - # Always copy, even if source is older than destination, to ensure - # that the right extensions for the current Python/platform are - # used. 
- if os.path.exists(regular_file) or not ext.optional: - self.copy_file(regular_file, inplace_file, level=self.verbose) - - if ext._needs_stub: - inplace_stub = self._get_equivalent_stub(ext, inplace_file) - self._write_stub_file(inplace_stub, ext, compile=True) - # Always compile stub and remove the original (leave the cache behind) - # (this behaviour was observed in previous iterations of the code) - - def _get_equivalent_stub(self, ext: Extension, output_file: str) -> str: - dir_ = os.path.dirname(output_file) - _, _, name = ext.name.rpartition(".") - return f"{os.path.join(dir_, name)}.py" - - def _get_output_mapping(self) -> Iterator[Tuple[str, str]]: - if not self.inplace: - return - - build_py = self.get_finalized_command('build_py') - opt = self.get_finalized_command('install_lib').optimize or "" - - for ext in self.extensions: - inplace_file, regular_file = self._get_inplace_equivalent(build_py, ext) - yield (regular_file, inplace_file) - - if ext._needs_stub: - # This version of `build_ext` always builds artifacts in another dir, - # when "inplace=True" is given it just copies them back. - # This is done in the `copy_extensions_to_source` function, which - # always compile stub files via `_compile_and_remove_stub`. - # At the end of the process, a `.pyc` stub file is created without the - # corresponding `.py`. - - inplace_stub = self._get_equivalent_stub(ext, inplace_file) - regular_stub = self._get_equivalent_stub(ext, regular_file) - inplace_cache = _compiled_file_name(inplace_stub, optimization=opt) - output_cache = _compiled_file_name(regular_stub, optimization=opt) - yield (output_cache, inplace_cache) - - def get_ext_filename(self, fullname): - so_ext = os.getenv('SETUPTOOLS_EXT_SUFFIX') - if so_ext: - filename = os.path.join(*fullname.split('.')) + so_ext - else: - filename = _build_ext.get_ext_filename(self, fullname) - so_ext = get_config_var('EXT_SUFFIX') - - if fullname in self.ext_map: - ext = self.ext_map[fullname] - use_abi3 = getattr(ext, 'py_limited_api') and get_abi3_suffix() - if use_abi3: - filename = filename[:-len(so_ext)] - so_ext = get_abi3_suffix() - filename = filename + so_ext - if isinstance(ext, Library): - fn, ext = os.path.splitext(filename) - return self.shlib_compiler.library_filename(fn, libtype) - elif use_stubs and ext._links_to_dynamic: - d, fn = os.path.split(filename) - return os.path.join(d, 'dl-' + fn) - return filename - - def initialize_options(self): - _build_ext.initialize_options(self) - self.shlib_compiler = None - self.shlibs = [] - self.ext_map = {} - self.editable_mode = False - - def finalize_options(self): - _build_ext.finalize_options(self) - self.extensions = self.extensions or [] - self.check_extensions_list(self.extensions) - self.shlibs = [ext for ext in self.extensions - if isinstance(ext, Library)] - if self.shlibs: - self.setup_shlib_compiler() - for ext in self.extensions: - ext._full_name = self.get_ext_fullname(ext.name) - for ext in self.extensions: - fullname = ext._full_name - self.ext_map[fullname] = ext - - # distutils 3.1 will also ask for module names - # XXX what to do with conflicts? 
- self.ext_map[fullname.split('.')[-1]] = ext - - ltd = self.shlibs and self.links_to_dynamic(ext) or False - ns = ltd and use_stubs and not isinstance(ext, Library) - ext._links_to_dynamic = ltd - ext._needs_stub = ns - filename = ext._file_name = self.get_ext_filename(fullname) - libdir = os.path.dirname(os.path.join(self.build_lib, filename)) - if ltd and libdir not in ext.library_dirs: - ext.library_dirs.append(libdir) - if ltd and use_stubs and os.curdir not in ext.runtime_library_dirs: - ext.runtime_library_dirs.append(os.curdir) - - if self.editable_mode: - self.inplace = True - - def setup_shlib_compiler(self): - compiler = self.shlib_compiler = new_compiler( - compiler=self.compiler, dry_run=self.dry_run, force=self.force - ) - _customize_compiler_for_shlib(compiler) - - if self.include_dirs is not None: - compiler.set_include_dirs(self.include_dirs) - if self.define is not None: - # 'define' option is a list of (name,value) tuples - for (name, value) in self.define: - compiler.define_macro(name, value) - if self.undef is not None: - for macro in self.undef: - compiler.undefine_macro(macro) - if self.libraries is not None: - compiler.set_libraries(self.libraries) - if self.library_dirs is not None: - compiler.set_library_dirs(self.library_dirs) - if self.rpath is not None: - compiler.set_runtime_library_dirs(self.rpath) - if self.link_objects is not None: - compiler.set_link_objects(self.link_objects) - - # hack so distutils' build_extension() builds a library instead - compiler.link_shared_object = link_shared_object.__get__(compiler) - - def get_export_symbols(self, ext): - if isinstance(ext, Library): - return ext.export_symbols - return _build_ext.get_export_symbols(self, ext) - - def build_extension(self, ext): - ext._convert_pyx_sources_to_lang() - _compiler = self.compiler - try: - if isinstance(ext, Library): - self.compiler = self.shlib_compiler - _build_ext.build_extension(self, ext) - if ext._needs_stub: - build_lib = self.get_finalized_command('build_py').build_lib - self.write_stub(build_lib, ext) - finally: - self.compiler = _compiler - - def links_to_dynamic(self, ext): - """Return true if 'ext' links to a dynamic lib in the same package""" - # XXX this should check to ensure the lib is actually being built - # XXX as dynamic, and not just using a locally-found version or a - # XXX static-compiled version - libnames = dict.fromkeys([lib._full_name for lib in self.shlibs]) - pkg = '.'.join(ext._full_name.split('.')[:-1] + ['']) - return any(pkg + libname in libnames for libname in ext.libraries) - - def get_outputs(self) -> List[str]: - if self.inplace: - return list(self.get_output_mapping().keys()) - return sorted(_build_ext.get_outputs(self) + self.__get_stubs_outputs()) - - def get_output_mapping(self) -> Dict[str, str]: - """See :class:`setuptools.commands.build.SubCommand`""" - mapping = self._get_output_mapping() - return dict(sorted(mapping, key=lambda x: x[0])) - - def __get_stubs_outputs(self): - # assemble the base name for each extension that needs a stub - ns_ext_bases = ( - os.path.join(self.build_lib, *ext._full_name.split('.')) - for ext in self.extensions - if ext._needs_stub - ) - # pair each base with the extension - pairs = itertools.product(ns_ext_bases, self.__get_output_extensions()) - return list(base + fnext for base, fnext in pairs) - - def __get_output_extensions(self): - yield '.py' - yield '.pyc' - if self.get_finalized_command('build_py').optimize: - yield '.pyo' - - def write_stub(self, output_dir, ext, compile=False): - stub_file = 
os.path.join(output_dir, *ext._full_name.split('.')) + '.py' - self._write_stub_file(stub_file, ext, compile) - - def _write_stub_file(self, stub_file: str, ext: Extension, compile=False): - log.info("writing stub loader for %s to %s", ext._full_name, stub_file) - if compile and os.path.exists(stub_file): - raise BaseError(stub_file + " already exists! Please delete.") - if not self.dry_run: - f = open(stub_file, 'w') - f.write( - '\n'.join([ - "def __bootstrap__():", - " global __bootstrap__, __file__, __loader__", - " import sys, os, pkg_resources, importlib.util" + - if_dl(", dl"), - " __file__ = pkg_resources.resource_filename" - "(__name__,%r)" - % os.path.basename(ext._file_name), - " del __bootstrap__", - " if '__loader__' in globals():", - " del __loader__", - if_dl(" old_flags = sys.getdlopenflags()"), - " old_dir = os.getcwd()", - " try:", - " os.chdir(os.path.dirname(__file__))", - if_dl(" sys.setdlopenflags(dl.RTLD_NOW)"), - " spec = importlib.util.spec_from_file_location(", - " __name__, __file__)", - " mod = importlib.util.module_from_spec(spec)", - " spec.loader.exec_module(mod)", - " finally:", - if_dl(" sys.setdlopenflags(old_flags)"), - " os.chdir(old_dir)", - "__bootstrap__()", - "" # terminal \n - ]) - ) - f.close() - if compile: - self._compile_and_remove_stub(stub_file) - - def _compile_and_remove_stub(self, stub_file: str): - from distutils.util import byte_compile - - byte_compile([stub_file], optimize=0, - force=True, dry_run=self.dry_run) - optimize = self.get_finalized_command('install_lib').optimize - if optimize > 0: - byte_compile([stub_file], optimize=optimize, - force=True, dry_run=self.dry_run) - if os.path.exists(stub_file) and not self.dry_run: - os.unlink(stub_file) - - -if use_stubs or os.name == 'nt': - # Build shared libraries - # - def link_shared_object( - self, objects, output_libname, output_dir=None, libraries=None, - library_dirs=None, runtime_library_dirs=None, export_symbols=None, - debug=0, extra_preargs=None, extra_postargs=None, build_temp=None, - target_lang=None): - self.link( - self.SHARED_LIBRARY, objects, output_libname, - output_dir, libraries, library_dirs, runtime_library_dirs, - export_symbols, debug, extra_preargs, extra_postargs, - build_temp, target_lang - ) -else: - # Build static libraries everywhere else - libtype = 'static' - - def link_shared_object( - self, objects, output_libname, output_dir=None, libraries=None, - library_dirs=None, runtime_library_dirs=None, export_symbols=None, - debug=0, extra_preargs=None, extra_postargs=None, build_temp=None, - target_lang=None): - # XXX we need to either disallow these attrs on Library instances, - # or warn/abort here if set, or something... 
- # libraries=None, library_dirs=None, runtime_library_dirs=None, - # export_symbols=None, extra_preargs=None, extra_postargs=None, - # build_temp=None - - assert output_dir is None # distutils build_ext doesn't pass this - output_dir, filename = os.path.split(output_libname) - basename, ext = os.path.splitext(filename) - if self.library_filename("x").startswith('lib'): - # strip 'lib' prefix; this is kludgy if some platform uses - # a different prefix - basename = basename[3:] - - self.create_static_lib( - objects, basename, output_dir, debug, target_lang - ) diff --git a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/configs/common/optim.py b/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/configs/common/optim.py deleted file mode 100644 index d39d3aaa546c17e831d21d1758b69e8c1609415e..0000000000000000000000000000000000000000 --- a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/configs/common/optim.py +++ /dev/null @@ -1,15 +0,0 @@ -import torch - -from detectron2.config import LazyCall as L -from detectron2.solver.build import get_default_optimizer_params - -SGD = L(torch.optim.SGD)( - params=L(get_default_optimizer_params)( - # params.model is meant to be set to the model object, before instantiating - # the optimizer. - weight_decay_norm=0.0 - ), - lr=0.02, - momentum=0.9, - weight_decay=1e-4, -) diff --git a/spaces/Benson/text-generation/Examples/Captulo 5 Matemticas Clase 12 Pdf.md b/spaces/Benson/text-generation/Examples/Captulo 5 Matemticas Clase 12 Pdf.md deleted file mode 100644 index c6b41e6828662c0444dab8e2edd1fbb9fc6f5734..0000000000000000000000000000000000000000 --- a/spaces/Benson/text-generation/Examples/Captulo 5 Matemticas Clase 12 Pdf.md +++ /dev/null @@ -1,183 +0,0 @@ -
-

Chapter 5 Maths Class 12 PDF Download

Are you looking for a reliable and easy way to prepare for your CBSE Class 12 Maths exam? Do you want access to the best study material for Chapter 5, Continuity and Differentiability? If so, you have come to the right place. In this article, we will show you how to download the Chapter 5 Maths Class 12 PDF and explain why it is useful for your exam preparation. We will also give you the syllabus, important questions, and solutions for Chapter 5 Maths Class 12. So read on and get ready to ace your exam.

chapter 5 maths class 12 pdf

Download: https://bltlly.com/2v6Ms2



-

Introduction

Chapter 5, Continuity and Differentiability, is one of the most important chapters in the CBSE Class 12 Maths syllabus. It covers the concepts of continuity and differentiability of functions, their algebraic properties, derivatives of composite, implicit, inverse trigonometric, exponential, and logarithmic functions, logarithmic differentiation, derivatives of functions in parametric form, second-order derivatives, the Mean Value Theorem, and Rolle's Theorem. The chapter carries a weightage of about 8 marks in the board exam and is also useful for competitive exams such as JEE and NEET.

To master this chapter, you need to understand the theory, practise the exercises, and solve previous years' questions. However, it can be inconvenient to carry all your books and notes everywhere. That is why downloading the Chapter 5 Maths Class 12 PDF is a smart idea: it lets you access the chapter anytime, anywhere, on your device.

-

Why Download the Chapter 5 Maths Class 12 PDF?

There are many reasons why you should download the Chapter 5 Maths Class 12 PDF. Some of them are:

Benefits of the Chapter 5 Maths Class 12 PDF

- -

How to Download the Chapter 5 Maths Class 12 PDF?

-

To download the Chapter 5 Maths Class 12 PDF, follow these simple steps (a short Python sketch of the same download follows the list):

1. Go to the NCERT website or the Vedantu website.
2. Select the class, the subject, and the book name.
3. Click on the chapter name and open it in a new tab.
4. Click the download button or use the save-as option.
5. Choose the location where you want to save the file.
6. Open the file and start studying.
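For readers who prefer to script the download, here is a minimal sketch using Python's requests library (a third-party package, assumed installed). The URL, output file name, and `download_pdf` helper are placeholders for illustration, not actual NCERT or Vedantu endpoints; substitute the direct PDF link you find in step 3.

```python
import requests

# Placeholder URL -- replace with the direct link to the chapter PDF.
PDF_URL = "https://example.org/ncert-class12-maths-chapter5.pdf"
OUTPUT_FILE = "chapter5_continuity_and_differentiability.pdf"


def download_pdf(url: str, path: str) -> None:
    """Stream the PDF to disk so large files are not held fully in memory."""
    response = requests.get(url, stream=True, timeout=30)
    response.raise_for_status()  # stop early on 4xx/5xx responses
    with open(path, "wb") as fh:
        for chunk in response.iter_content(chunk_size=8192):
            fh.write(chunk)


if __name__ == "__main__":
    download_pdf(PDF_URL, OUTPUT_FILE)
    print(f"Saved {OUTPUT_FILE}")
```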
-

Chapter 5 Maths Class 12 Syllabus

Before you start studying Chapter 5, Continuity and Differentiability, you should know the CBSE Class 12 Maths syllabus. The CBSE Class 12 Maths syllabus is divided into six units, namely Relations and Functions, Algebra, Calculus, Vectors and Three-Dimensional Geometry, Linear Programming, and Probability. The board exam carries a total of 100 marks, of which 80 are for the theory paper and 20 are for internal assessment. The duration of the theory paper is three hours.

Overview of the Chapter 5 Maths Class 12 Syllabus

- -

Unit-wise Distribution of Marks

The following table shows the unit-wise distribution of marks for the CBSE Class 12 Maths syllabus:

Unit                                     Marks
Relations and Functions                      8
Algebra                                     10
Calculus                                    35
Vectors and Three-Dimensional Geometry      14
Linear Programming                           5
Probability                                  8
Total                                       80

Topics and Subtopics Covered

The following table shows the topics and subtopics covered in Chapter 5, Continuity and Differentiability:

Topic                  Subtopics
Continuity             Continuity at a point and on an interval; algebra of continuous functions; Intermediate Value Theorem
Differentiability      Differentiability at a point and on an interval; algebra of differentiable functions; derivatives of composite, implicit, inverse trigonometric, exponential, and logarithmic functions; logarithmic differentiation; derivatives of functions in parametric form; second-order derivatives
Mean value theorems    Mean Value Theorem; Rolle's Theorem

Chapter 5 Maths Class 12 Important Questions

- -

What are the important questions for Chapter 5 Maths Class 12?

Important questions for Chapter 5 Maths Class 12 are the questions that test your understanding of the chapter's concepts, formulas, and methods. They can be of different types, such as short answer, long answer, multiple choice, fill in the blanks, true or false, match the following, and so on. They can also vary in difficulty, from easy to moderate to hard.

-

Types of important questions for Chapter 5 Maths Class 12

Some of the types of important questions for Chapter 5 Maths Class 12 are:

- -

Sources of important questions for Chapter 5 Maths Class 12

Some of the sources of important questions for Chapter 5 Maths Class 12 are:

- -

Chapter 5 Maths Class 12 Solutions

Another way to prepare well for your CBSE Class 12 Maths exam is to refer to the solutions for Chapter 5, Continuity and Differentiability. These are step-by-step explanations and answers to the questions and exercises given in the NCERT textbook and other sources. Reading these solutions will help you understand the chapter's concepts, methods, and formulas better. They will also help you check your answers, clear your doubts, and improve your accuracy.

-

What are the solutions for Chapter 5 Maths Class 12?

Solutions for Chapter 5 Maths Class 12 are detailed and accurate solutions to the questions and exercises given in the NCERT textbook and other sources for Chapter 5, Continuity and Differentiability. They are written by expert teachers and subject-matter experts with years of experience teaching CBSE Class 12 Maths. They follow the latest CBSE syllabus and marking scheme and adhere to the CBSE guidelines.

-

Features of the solutions for Chapter 5 Maths Class 12

Some of the features of the solutions for Chapter 5 Maths Class 12 are:

- -

Sources of solutions for Chapter 5 Maths Class 12

- - -

Conclusion

In this article, we have given you all the information you need to download the Chapter 5 Maths Class 12 PDF and prepare for your CBSE Class 12 Maths exam. We have also provided the syllabus, important questions, and solutions for Chapter 5, Continuity and Differentiability. We hope this article has helped you understand the chapter better and boosted your confidence. We wish you all the best for your exam.

Frequently Asked Questions

-

Here are some of the most frequently asked questions about Chapter 5 Maths Class 12 (the key formulas are restated in a short LaTeX sketch right after this list):

1. What is the difference between continuity and differentiability of a function?

A function is continuous at a point if the limit of the function at that point equals the value of the function at that point. A function is differentiable at a point if the derivative of the function at that point exists and is finite. A function can be continuous but not differentiable at a point; however, if a function is differentiable at a point, then it is also continuous at that point.

2. What are the conditions for Rolle's Theorem and the Mean Value Theorem to be applicable?

Both theorems require the function to be continuous on the closed interval [a, b] and differentiable on the open interval (a, b). Rolle's Theorem additionally requires f(a) = f(b), and it then guarantees a point c in (a, b) with f'(c) = 0; the Mean Value Theorem guarantees a point c in (a, b) with f'(c) = (f(b) - f(a))/(b - a).

3. How do you find the derivatives of inverse trigonometric functions?

The derivatives of inverse trigonometric functions can be found by implicit differentiation. For example, to find the derivative of y = sin⁻¹(x), write x = sin(y) and differentiate both sides with respect to x. This gives 1 = cos(y)·dy/dx, so dy/dx = 1/cos(y). Since cos(y) = √(1 − x²), we get dy/dx = 1/√(1 − x²). The derivatives of the other inverse trigonometric functions are found in the same way.

4. How do you use logarithmic differentiation to find the derivatives of functions involving powers, products, or quotients?

Logarithmic differentiation uses the properties of logarithms to simplify the differentiation of functions involving powers, products, or quotients. For example, to find the derivative of y = xˣ, take the natural logarithm of both sides to get ln(y) = x·ln(x). Differentiating both sides with respect to x gives (1/y)·dy/dx = ln(x) + 1, so dy/dx = y·(ln(x) + 1). Since y = xˣ, we get dy/dx = xˣ·(ln(x) + 1). The same method works for other functions involving powers, products, or quotients.

5. How do you find the derivatives of functions in parametric form?

A function in parametric form is expressed in terms of one or more parameters. For example, a curve may be given by x = f(t) and y = g(t), where t is a parameter. By the chain rule, dy/dx = (dy/dt)/(dx/dt). Applying the quotient rule to this expression gives the second derivative, d²y/dx² = [(dx/dt)(d²y/dt²) − (dy/dt)(d²x/dt²)] / (dx/dt)³. The same approach applies to other functions in parametric form.
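As a compact reference, here is a short LaTeX sketch restating the standard results used in the answers above. The absolute-value example illustrating continuity without differentiability is a standard textbook illustration added here, not something specific to this article.

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}

% f(x) = |x| is continuous at 0 but not differentiable there:
% the one-sided difference quotients disagree.
\[
  \lim_{h \to 0^{-}} \frac{|0+h| - |0|}{h} = -1
  \;\neq\;
  1 = \lim_{h \to 0^{+}} \frac{|0+h| - |0|}{h}.
\]

% Inverse sine, obtained by implicit differentiation of x = sin y.
\[
  \frac{d}{dx}\,\sin^{-1} x = \frac{1}{\sqrt{1 - x^{2}}}, \qquad -1 < x < 1.
\]

% Logarithmic differentiation of y = x^x.
\[
  \frac{d}{dx}\, x^{x} = x^{x}\left(\ln x + 1\right), \qquad x > 0.
\]

% Parametric form x = f(t), y = g(t): first and second derivatives.
\[
  \frac{dy}{dx} = \frac{dy/dt}{dx/dt}, \qquad
  \frac{d^{2}y}{dx^{2}}
    = \frac{\dfrac{dx}{dt}\,\dfrac{d^{2}y}{dt^{2}} - \dfrac{dy}{dt}\,\dfrac{d^{2}x}{dt^{2}}}
           {\left(\dfrac{dx}{dt}\right)^{3}}.
\]

\end{document}
```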

    -
    -
    \ No newline at end of file diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/botocore/auth.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/botocore/auth.py deleted file mode 100644 index da9b838e46c67658dfceea2465d92bc08ebf0a23..0000000000000000000000000000000000000000 --- a/spaces/Big-Web/MMSD/env/Lib/site-packages/botocore/auth.py +++ /dev/null @@ -1,990 +0,0 @@ -# Copyright (c) 2012-2013 Mitch Garnaat http://garnaat.org/ -# Copyright 2012-2014 Amazon.com, Inc. or its affiliates. All Rights Reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"). You -# may not use this file except in compliance with the License. A copy of -# the License is located at -# -# http://aws.amazon.com/apache2.0/ -# -# or in the "license" file accompanying this file. This file is -# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF -# ANY KIND, either express or implied. See the License for the specific -# language governing permissions and limitations under the License. -import base64 -import calendar -import datetime -import functools -import hmac -import json -import logging -import time -from collections.abc import Mapping -from email.utils import formatdate -from hashlib import sha1, sha256 -from operator import itemgetter - -from botocore.compat import ( - HAS_CRT, - HTTPHeaders, - encodebytes, - ensure_unicode, - parse_qs, - quote, - unquote, - urlsplit, - urlunsplit, -) -from botocore.exceptions import NoAuthTokenError, NoCredentialsError -from botocore.utils import ( - is_valid_ipv6_endpoint_url, - normalize_url_path, - percent_encode_sequence, -) - -# Imports for backwards compatibility -from botocore.compat import MD5_AVAILABLE # noqa - - -logger = logging.getLogger(__name__) - - -EMPTY_SHA256_HASH = ( - 'e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855' -) -# This is the buffer size used when calculating sha256 checksums. -# Experimenting with various buffer sizes showed that this value generally -# gave the best result (in terms of performance). -PAYLOAD_BUFFER = 1024 * 1024 -ISO8601 = '%Y-%m-%dT%H:%M:%SZ' -SIGV4_TIMESTAMP = '%Y%m%dT%H%M%SZ' -SIGNED_HEADERS_BLACKLIST = [ - 'expect', - 'user-agent', - 'x-amzn-trace-id', -] -UNSIGNED_PAYLOAD = 'UNSIGNED-PAYLOAD' -STREAMING_UNSIGNED_PAYLOAD_TRAILER = 'STREAMING-UNSIGNED-PAYLOAD-TRAILER' - - -def _host_from_url(url): - # Given URL, derive value for host header. Ensure that value: - # 1) is lowercase - # 2) excludes port, if it was the default port - # 3) excludes userinfo - url_parts = urlsplit(url) - host = url_parts.hostname # urlsplit's hostname is always lowercase - if is_valid_ipv6_endpoint_url(url): - host = f'[{host}]' - default_ports = { - 'http': 80, - 'https': 443, - } - if url_parts.port is not None: - if url_parts.port != default_ports.get(url_parts.scheme): - host = '%s:%d' % (host, url_parts.port) - return host - - -def _get_body_as_dict(request): - # For query services, request.data is form-encoded and is already a - # dict, but for other services such as rest-json it could be a json - # string or bytes. In those cases we attempt to load the data as a - # dict. 
- data = request.data - if isinstance(data, bytes): - data = json.loads(data.decode('utf-8')) - elif isinstance(data, str): - data = json.loads(data) - return data - - -class BaseSigner: - REQUIRES_REGION = False - REQUIRES_TOKEN = False - - def add_auth(self, request): - raise NotImplementedError("add_auth") - - -class TokenSigner(BaseSigner): - REQUIRES_TOKEN = True - """ - Signers that expect an authorization token to perform the authorization - """ - - def __init__(self, auth_token): - self.auth_token = auth_token - - -class SigV2Auth(BaseSigner): - """ - Sign a request with Signature V2. - """ - - def __init__(self, credentials): - self.credentials = credentials - - def calc_signature(self, request, params): - logger.debug("Calculating signature using v2 auth.") - split = urlsplit(request.url) - path = split.path - if len(path) == 0: - path = '/' - string_to_sign = f"{request.method}\n{split.netloc}\n{path}\n" - lhmac = hmac.new( - self.credentials.secret_key.encode("utf-8"), digestmod=sha256 - ) - pairs = [] - for key in sorted(params): - # Any previous signature should not be a part of this - # one, so we skip that particular key. This prevents - # issues during retries. - if key == 'Signature': - continue - value = str(params[key]) - quoted_key = quote(key.encode('utf-8'), safe='') - quoted_value = quote(value.encode('utf-8'), safe='-_~') - pairs.append(f'{quoted_key}={quoted_value}') - qs = '&'.join(pairs) - string_to_sign += qs - logger.debug('String to sign: %s', string_to_sign) - lhmac.update(string_to_sign.encode('utf-8')) - b64 = base64.b64encode(lhmac.digest()).strip().decode('utf-8') - return (qs, b64) - - def add_auth(self, request): - # The auth handler is the last thing called in the - # preparation phase of a prepared request. - # Because of this we have to parse the query params - # from the request body so we can update them with - # the sigv2 auth params. - if self.credentials is None: - raise NoCredentialsError() - if request.data: - # POST - params = request.data - else: - # GET - params = request.params - params['AWSAccessKeyId'] = self.credentials.access_key - params['SignatureVersion'] = '2' - params['SignatureMethod'] = 'HmacSHA256' - params['Timestamp'] = time.strftime(ISO8601, time.gmtime()) - if self.credentials.token: - params['SecurityToken'] = self.credentials.token - qs, signature = self.calc_signature(request, params) - params['Signature'] = signature - return request - - -class SigV3Auth(BaseSigner): - def __init__(self, credentials): - self.credentials = credentials - - def add_auth(self, request): - if self.credentials is None: - raise NoCredentialsError() - if 'Date' in request.headers: - del request.headers['Date'] - request.headers['Date'] = formatdate(usegmt=True) - if self.credentials.token: - if 'X-Amz-Security-Token' in request.headers: - del request.headers['X-Amz-Security-Token'] - request.headers['X-Amz-Security-Token'] = self.credentials.token - new_hmac = hmac.new( - self.credentials.secret_key.encode('utf-8'), digestmod=sha256 - ) - new_hmac.update(request.headers['Date'].encode('utf-8')) - encoded_signature = encodebytes(new_hmac.digest()).strip() - signature = ( - f"AWS3-HTTPS AWSAccessKeyId={self.credentials.access_key}," - f"Algorithm=HmacSHA256,Signature={encoded_signature.decode('utf-8')}" - ) - if 'X-Amzn-Authorization' in request.headers: - del request.headers['X-Amzn-Authorization'] - request.headers['X-Amzn-Authorization'] = signature - - -class SigV4Auth(BaseSigner): - """ - Sign a request with Signature V4. 
- """ - - REQUIRES_REGION = True - - def __init__(self, credentials, service_name, region_name): - self.credentials = credentials - # We initialize these value here so the unit tests can have - # valid values. But these will get overriden in ``add_auth`` - # later for real requests. - self._region_name = region_name - self._service_name = service_name - - def _sign(self, key, msg, hex=False): - if hex: - sig = hmac.new(key, msg.encode('utf-8'), sha256).hexdigest() - else: - sig = hmac.new(key, msg.encode('utf-8'), sha256).digest() - return sig - - def headers_to_sign(self, request): - """ - Select the headers from the request that need to be included - in the StringToSign. - """ - header_map = HTTPHeaders() - for name, value in request.headers.items(): - lname = name.lower() - if lname not in SIGNED_HEADERS_BLACKLIST: - header_map[lname] = value - if 'host' not in header_map: - # TODO: We should set the host ourselves, instead of relying on our - # HTTP client to set it for us. - header_map['host'] = _host_from_url(request.url) - return header_map - - def canonical_query_string(self, request): - # The query string can come from two parts. One is the - # params attribute of the request. The other is from the request - # url (in which case we have to re-split the url into its components - # and parse out the query string component). - if request.params: - return self._canonical_query_string_params(request.params) - else: - return self._canonical_query_string_url(urlsplit(request.url)) - - def _canonical_query_string_params(self, params): - # [(key, value), (key2, value2)] - key_val_pairs = [] - if isinstance(params, Mapping): - params = params.items() - for key, value in params: - key_val_pairs.append( - (quote(key, safe='-_.~'), quote(str(value), safe='-_.~')) - ) - sorted_key_vals = [] - # Sort by the URI-encoded key names, and in the case of - # repeated keys, then sort by the value. - for key, value in sorted(key_val_pairs): - sorted_key_vals.append(f'{key}={value}') - canonical_query_string = '&'.join(sorted_key_vals) - return canonical_query_string - - def _canonical_query_string_url(self, parts): - canonical_query_string = '' - if parts.query: - # [(key, value), (key2, value2)] - key_val_pairs = [] - for pair in parts.query.split('&'): - key, _, value = pair.partition('=') - key_val_pairs.append((key, value)) - sorted_key_vals = [] - # Sort by the URI-encoded key names, and in the case of - # repeated keys, then sort by the value. - for key, value in sorted(key_val_pairs): - sorted_key_vals.append(f'{key}={value}') - canonical_query_string = '&'.join(sorted_key_vals) - return canonical_query_string - - def canonical_headers(self, headers_to_sign): - """ - Return the headers that need to be included in the StringToSign - in their canonical form by converting all header keys to lower - case, sorting them in alphabetical order and then joining - them into a string, separated by newlines. - """ - headers = [] - sorted_header_names = sorted(set(headers_to_sign)) - for key in sorted_header_names: - value = ','.join( - self._header_value(v) for v in headers_to_sign.get_all(key) - ) - headers.append(f'{key}:{ensure_unicode(value)}') - return '\n'.join(headers) - - def _header_value(self, value): - # From the sigv4 docs: - # Lowercase(HeaderName) + ':' + Trimall(HeaderValue) - # - # The Trimall function removes excess white space before and after - # values, and converts sequential spaces to a single space. 
- return ' '.join(value.split()) - - def signed_headers(self, headers_to_sign): - headers = sorted(n.lower().strip() for n in set(headers_to_sign)) - return ';'.join(headers) - - def _is_streaming_checksum_payload(self, request): - checksum_context = request.context.get('checksum', {}) - algorithm = checksum_context.get('request_algorithm') - return isinstance(algorithm, dict) and algorithm.get('in') == 'trailer' - - def payload(self, request): - if self._is_streaming_checksum_payload(request): - return STREAMING_UNSIGNED_PAYLOAD_TRAILER - elif not self._should_sha256_sign_payload(request): - # When payload signing is disabled, we use this static string in - # place of the payload checksum. - return UNSIGNED_PAYLOAD - request_body = request.body - if request_body and hasattr(request_body, 'seek'): - position = request_body.tell() - read_chunksize = functools.partial( - request_body.read, PAYLOAD_BUFFER - ) - checksum = sha256() - for chunk in iter(read_chunksize, b''): - checksum.update(chunk) - hex_checksum = checksum.hexdigest() - request_body.seek(position) - return hex_checksum - elif request_body: - # The request serialization has ensured that - # request.body is a bytes() type. - return sha256(request_body).hexdigest() - else: - return EMPTY_SHA256_HASH - - def _should_sha256_sign_payload(self, request): - # Payloads will always be signed over insecure connections. - if not request.url.startswith('https'): - return True - - # Certain operations may have payload signing disabled by default. - # Since we don't have access to the operation model, we pass in this - # bit of metadata through the request context. - return request.context.get('payload_signing_enabled', True) - - def canonical_request(self, request): - cr = [request.method.upper()] - path = self._normalize_url_path(urlsplit(request.url).path) - cr.append(path) - cr.append(self.canonical_query_string(request)) - headers_to_sign = self.headers_to_sign(request) - cr.append(self.canonical_headers(headers_to_sign) + '\n') - cr.append(self.signed_headers(headers_to_sign)) - if 'X-Amz-Content-SHA256' in request.headers: - body_checksum = request.headers['X-Amz-Content-SHA256'] - else: - body_checksum = self.payload(request) - cr.append(body_checksum) - return '\n'.join(cr) - - def _normalize_url_path(self, path): - normalized_path = quote(normalize_url_path(path), safe='/~') - return normalized_path - - def scope(self, request): - scope = [self.credentials.access_key] - scope.append(request.context['timestamp'][0:8]) - scope.append(self._region_name) - scope.append(self._service_name) - scope.append('aws4_request') - return '/'.join(scope) - - def credential_scope(self, request): - scope = [] - scope.append(request.context['timestamp'][0:8]) - scope.append(self._region_name) - scope.append(self._service_name) - scope.append('aws4_request') - return '/'.join(scope) - - def string_to_sign(self, request, canonical_request): - """ - Return the canonical StringToSign as well as a dict - containing the original version of all headers that - were included in the StringToSign. 
- """ - sts = ['AWS4-HMAC-SHA256'] - sts.append(request.context['timestamp']) - sts.append(self.credential_scope(request)) - sts.append(sha256(canonical_request.encode('utf-8')).hexdigest()) - return '\n'.join(sts) - - def signature(self, string_to_sign, request): - key = self.credentials.secret_key - k_date = self._sign( - (f"AWS4{key}").encode(), request.context["timestamp"][0:8] - ) - k_region = self._sign(k_date, self._region_name) - k_service = self._sign(k_region, self._service_name) - k_signing = self._sign(k_service, 'aws4_request') - return self._sign(k_signing, string_to_sign, hex=True) - - def add_auth(self, request): - if self.credentials is None: - raise NoCredentialsError() - datetime_now = datetime.datetime.utcnow() - request.context['timestamp'] = datetime_now.strftime(SIGV4_TIMESTAMP) - # This could be a retry. Make sure the previous - # authorization header is removed first. - self._modify_request_before_signing(request) - canonical_request = self.canonical_request(request) - logger.debug("Calculating signature using v4 auth.") - logger.debug('CanonicalRequest:\n%s', canonical_request) - string_to_sign = self.string_to_sign(request, canonical_request) - logger.debug('StringToSign:\n%s', string_to_sign) - signature = self.signature(string_to_sign, request) - logger.debug('Signature:\n%s', signature) - - self._inject_signature_to_request(request, signature) - - def _inject_signature_to_request(self, request, signature): - auth_str = ['AWS4-HMAC-SHA256 Credential=%s' % self.scope(request)] - headers_to_sign = self.headers_to_sign(request) - auth_str.append( - f"SignedHeaders={self.signed_headers(headers_to_sign)}" - ) - auth_str.append('Signature=%s' % signature) - request.headers['Authorization'] = ', '.join(auth_str) - return request - - def _modify_request_before_signing(self, request): - if 'Authorization' in request.headers: - del request.headers['Authorization'] - self._set_necessary_date_headers(request) - if self.credentials.token: - if 'X-Amz-Security-Token' in request.headers: - del request.headers['X-Amz-Security-Token'] - request.headers['X-Amz-Security-Token'] = self.credentials.token - - if not request.context.get('payload_signing_enabled', True): - if 'X-Amz-Content-SHA256' in request.headers: - del request.headers['X-Amz-Content-SHA256'] - request.headers['X-Amz-Content-SHA256'] = UNSIGNED_PAYLOAD - - def _set_necessary_date_headers(self, request): - # The spec allows for either the Date _or_ the X-Amz-Date value to be - # used so we check both. If there's a Date header, we use the date - # header. Otherwise we use the X-Amz-Date header. 
- if 'Date' in request.headers: - del request.headers['Date'] - datetime_timestamp = datetime.datetime.strptime( - request.context['timestamp'], SIGV4_TIMESTAMP - ) - request.headers['Date'] = formatdate( - int(calendar.timegm(datetime_timestamp.timetuple())) - ) - if 'X-Amz-Date' in request.headers: - del request.headers['X-Amz-Date'] - else: - if 'X-Amz-Date' in request.headers: - del request.headers['X-Amz-Date'] - request.headers['X-Amz-Date'] = request.context['timestamp'] - - -class S3SigV4Auth(SigV4Auth): - def _modify_request_before_signing(self, request): - super()._modify_request_before_signing(request) - if 'X-Amz-Content-SHA256' in request.headers: - del request.headers['X-Amz-Content-SHA256'] - - request.headers['X-Amz-Content-SHA256'] = self.payload(request) - - def _should_sha256_sign_payload(self, request): - # S3 allows optional body signing, so to minimize the performance - # impact, we opt to not SHA256 sign the body on streaming uploads, - # provided that we're on https. - client_config = request.context.get('client_config') - s3_config = getattr(client_config, 's3', None) - - # The config could be None if it isn't set, or if the customer sets it - # to None. - if s3_config is None: - s3_config = {} - - # The explicit configuration takes precedence over any implicit - # configuration. - sign_payload = s3_config.get('payload_signing_enabled', None) - if sign_payload is not None: - return sign_payload - - # We require that both a checksum be present and https be enabled - # to implicitly disable body signing. The combination of TLS and - # a checksum is sufficiently secure and durable for us to be - # confident in the request without body signing. - checksum_header = 'Content-MD5' - checksum_context = request.context.get('checksum', {}) - algorithm = checksum_context.get('request_algorithm') - if isinstance(algorithm, dict) and algorithm.get('in') == 'header': - checksum_header = algorithm['name'] - if ( - not request.url.startswith("https") - or checksum_header not in request.headers - ): - return True - - # If the input is streaming we disable body signing by default. - if request.context.get('has_streaming_input', False): - return False - - # If the S3-specific checks had no results, delegate to the generic - # checks. - return super()._should_sha256_sign_payload(request) - - def _normalize_url_path(self, path): - # For S3, we do not normalize the path. - return path - - -class SigV4QueryAuth(SigV4Auth): - DEFAULT_EXPIRES = 3600 - - def __init__( - self, credentials, service_name, region_name, expires=DEFAULT_EXPIRES - ): - super().__init__(credentials, service_name, region_name) - self._expires = expires - - def _modify_request_before_signing(self, request): - # We automatically set this header, so if it's the auto-set value we - # want to get rid of it since it doesn't make sense for presigned urls. - content_type = request.headers.get('content-type') - blacklisted_content_type = ( - 'application/x-www-form-urlencoded; charset=utf-8' - ) - if content_type == blacklisted_content_type: - del request.headers['content-type'] - - # Note that we're not including X-Amz-Signature. - # From the docs: "The Canonical Query String must include all the query - # parameters from the preceding table except for X-Amz-Signature. 
- signed_headers = self.signed_headers(self.headers_to_sign(request)) - - auth_params = { - 'X-Amz-Algorithm': 'AWS4-HMAC-SHA256', - 'X-Amz-Credential': self.scope(request), - 'X-Amz-Date': request.context['timestamp'], - 'X-Amz-Expires': self._expires, - 'X-Amz-SignedHeaders': signed_headers, - } - if self.credentials.token is not None: - auth_params['X-Amz-Security-Token'] = self.credentials.token - # Now parse the original query string to a dict, inject our new query - # params, and serialize back to a query string. - url_parts = urlsplit(request.url) - # parse_qs makes each value a list, but in our case we know we won't - # have repeated keys so we know we have single element lists which we - # can convert back to scalar values. - query_string_parts = parse_qs(url_parts.query, keep_blank_values=True) - query_dict = {k: v[0] for k, v in query_string_parts.items()} - - if request.params: - query_dict.update(request.params) - request.params = {} - # The spec is particular about this. It *has* to be: - # https://?& - # You can't mix the two types of params together, i.e just keep doing - # new_query_params.update(op_params) - # new_query_params.update(auth_params) - # percent_encode_sequence(new_query_params) - operation_params = '' - if request.data: - # We also need to move the body params into the query string. To - # do this, we first have to convert it to a dict. - query_dict.update(_get_body_as_dict(request)) - request.data = '' - if query_dict: - operation_params = percent_encode_sequence(query_dict) + '&' - new_query_string = ( - f"{operation_params}{percent_encode_sequence(auth_params)}" - ) - # url_parts is a tuple (and therefore immutable) so we need to create - # a new url_parts with the new query string. - # - - # scheme - 0 - # netloc - 1 - # path - 2 - # query - 3 <-- we're replacing this. - # fragment - 4 - p = url_parts - new_url_parts = (p[0], p[1], p[2], new_query_string, p[4]) - request.url = urlunsplit(new_url_parts) - - def _inject_signature_to_request(self, request, signature): - # Rather than calculating an "Authorization" header, for the query - # param quth, we just append an 'X-Amz-Signature' param to the end - # of the query string. - request.url += '&X-Amz-Signature=%s' % signature - - -class S3SigV4QueryAuth(SigV4QueryAuth): - """S3 SigV4 auth using query parameters. - - This signer will sign a request using query parameters and signature - version 4, i.e a "presigned url" signer. - - Based off of: - - http://docs.aws.amazon.com/AmazonS3/latest/API/sigv4-query-string-auth.html - - """ - - def _normalize_url_path(self, path): - # For S3, we do not normalize the path. - return path - - def payload(self, request): - # From the doc link above: - # "You don't include a payload hash in the Canonical Request, because - # when you create a presigned URL, you don't know anything about the - # payload. Instead, you use a constant string "UNSIGNED-PAYLOAD". 
- return UNSIGNED_PAYLOAD - - -class S3SigV4PostAuth(SigV4Auth): - """ - Presigns a s3 post - - Implementation doc here: - http://docs.aws.amazon.com/AmazonS3/latest/API/sigv4-UsingHTTPPOST.html - """ - - def add_auth(self, request): - datetime_now = datetime.datetime.utcnow() - request.context['timestamp'] = datetime_now.strftime(SIGV4_TIMESTAMP) - - fields = {} - if request.context.get('s3-presign-post-fields', None) is not None: - fields = request.context['s3-presign-post-fields'] - - policy = {} - conditions = [] - if request.context.get('s3-presign-post-policy', None) is not None: - policy = request.context['s3-presign-post-policy'] - if policy.get('conditions', None) is not None: - conditions = policy['conditions'] - - policy['conditions'] = conditions - - fields['x-amz-algorithm'] = 'AWS4-HMAC-SHA256' - fields['x-amz-credential'] = self.scope(request) - fields['x-amz-date'] = request.context['timestamp'] - - conditions.append({'x-amz-algorithm': 'AWS4-HMAC-SHA256'}) - conditions.append({'x-amz-credential': self.scope(request)}) - conditions.append({'x-amz-date': request.context['timestamp']}) - - if self.credentials.token is not None: - fields['x-amz-security-token'] = self.credentials.token - conditions.append({'x-amz-security-token': self.credentials.token}) - - # Dump the base64 encoded policy into the fields dictionary. - fields['policy'] = base64.b64encode( - json.dumps(policy).encode('utf-8') - ).decode('utf-8') - - fields['x-amz-signature'] = self.signature(fields['policy'], request) - - request.context['s3-presign-post-fields'] = fields - request.context['s3-presign-post-policy'] = policy - - -class HmacV1Auth(BaseSigner): - - # List of Query String Arguments of Interest - QSAOfInterest = [ - 'accelerate', - 'acl', - 'cors', - 'defaultObjectAcl', - 'location', - 'logging', - 'partNumber', - 'policy', - 'requestPayment', - 'torrent', - 'versioning', - 'versionId', - 'versions', - 'website', - 'uploads', - 'uploadId', - 'response-content-type', - 'response-content-language', - 'response-expires', - 'response-cache-control', - 'response-content-disposition', - 'response-content-encoding', - 'delete', - 'lifecycle', - 'tagging', - 'restore', - 'storageClass', - 'notification', - 'replication', - 'requestPayment', - 'analytics', - 'metrics', - 'inventory', - 'select', - 'select-type', - 'object-lock', - ] - - def __init__(self, credentials, service_name=None, region_name=None): - self.credentials = credentials - - def sign_string(self, string_to_sign): - new_hmac = hmac.new( - self.credentials.secret_key.encode('utf-8'), digestmod=sha1 - ) - new_hmac.update(string_to_sign.encode('utf-8')) - return encodebytes(new_hmac.digest()).strip().decode('utf-8') - - def canonical_standard_headers(self, headers): - interesting_headers = ['content-md5', 'content-type', 'date'] - hoi = [] - if 'Date' in headers: - del headers['Date'] - headers['Date'] = self._get_date() - for ih in interesting_headers: - found = False - for key in headers: - lk = key.lower() - if headers[key] is not None and lk == ih: - hoi.append(headers[key].strip()) - found = True - if not found: - hoi.append('') - return '\n'.join(hoi) - - def canonical_custom_headers(self, headers): - hoi = [] - custom_headers = {} - for key in headers: - lk = key.lower() - if headers[key] is not None: - if lk.startswith('x-amz-'): - custom_headers[lk] = ','.join( - v.strip() for v in headers.get_all(key) - ) - sorted_header_keys = sorted(custom_headers.keys()) - for key in sorted_header_keys: - hoi.append(f"{key}:{custom_headers[key]}") 
- return '\n'.join(hoi) - - def unquote_v(self, nv): - """ - TODO: Do we need this? - """ - if len(nv) == 1: - return nv - else: - return (nv[0], unquote(nv[1])) - - def canonical_resource(self, split, auth_path=None): - # don't include anything after the first ? in the resource... - # unless it is one of the QSA of interest, defined above - # NOTE: - # The path in the canonical resource should always be the - # full path including the bucket name, even for virtual-hosting - # style addressing. The ``auth_path`` keeps track of the full - # path for the canonical resource and would be passed in if - # the client was using virtual-hosting style. - if auth_path is not None: - buf = auth_path - else: - buf = split.path - if split.query: - qsa = split.query.split('&') - qsa = [a.split('=', 1) for a in qsa] - qsa = [ - self.unquote_v(a) for a in qsa if a[0] in self.QSAOfInterest - ] - if len(qsa) > 0: - qsa.sort(key=itemgetter(0)) - qsa = ['='.join(a) for a in qsa] - buf += '?' - buf += '&'.join(qsa) - return buf - - def canonical_string( - self, method, split, headers, expires=None, auth_path=None - ): - cs = method.upper() + '\n' - cs += self.canonical_standard_headers(headers) + '\n' - custom_headers = self.canonical_custom_headers(headers) - if custom_headers: - cs += custom_headers + '\n' - cs += self.canonical_resource(split, auth_path=auth_path) - return cs - - def get_signature( - self, method, split, headers, expires=None, auth_path=None - ): - if self.credentials.token: - del headers['x-amz-security-token'] - headers['x-amz-security-token'] = self.credentials.token - string_to_sign = self.canonical_string( - method, split, headers, auth_path=auth_path - ) - logger.debug('StringToSign:\n%s', string_to_sign) - return self.sign_string(string_to_sign) - - def add_auth(self, request): - if self.credentials is None: - raise NoCredentialsError - logger.debug("Calculating signature using hmacv1 auth.") - split = urlsplit(request.url) - logger.debug('HTTP request method: %s', request.method) - signature = self.get_signature( - request.method, split, request.headers, auth_path=request.auth_path - ) - self._inject_signature(request, signature) - - def _get_date(self): - return formatdate(usegmt=True) - - def _inject_signature(self, request, signature): - if 'Authorization' in request.headers: - # We have to do this because request.headers is not - # normal dictionary. It has the (unintuitive) behavior - # of aggregating repeated setattr calls for the same - # key value. For example: - # headers['foo'] = 'a'; headers['foo'] = 'b' - # list(headers) will print ['foo', 'foo']. - del request.headers['Authorization'] - - auth_header = f"AWS {self.credentials.access_key}:{signature}" - request.headers['Authorization'] = auth_header - - -class HmacV1QueryAuth(HmacV1Auth): - """ - Generates a presigned request for s3. - - Spec from this document: - - http://docs.aws.amazon.com/AmazonS3/latest/dev/RESTAuthentication.html - #RESTAuthenticationQueryStringAuth - - """ - - DEFAULT_EXPIRES = 3600 - - def __init__(self, credentials, expires=DEFAULT_EXPIRES): - self.credentials = credentials - self._expires = expires - - def _get_date(self): - return str(int(time.time() + int(self._expires))) - - def _inject_signature(self, request, signature): - query_dict = {} - query_dict['AWSAccessKeyId'] = self.credentials.access_key - query_dict['Signature'] = signature - - for header_key in request.headers: - lk = header_key.lower() - # For query string requests, Expires is used instead of the - # Date header. 
- if header_key == 'Date': - query_dict['Expires'] = request.headers['Date'] - # We only want to include relevant headers in the query string. - # These can be anything that starts with x-amz, is Content-MD5, - # or is Content-Type. - elif lk.startswith('x-amz-') or lk in ( - 'content-md5', - 'content-type', - ): - query_dict[lk] = request.headers[lk] - # Combine all of the identified headers into an encoded - # query string - new_query_string = percent_encode_sequence(query_dict) - - # Create a new url with the presigned url. - p = urlsplit(request.url) - if p[3]: - # If there was a pre-existing query string, we should - # add that back before injecting the new query string. - new_query_string = f'{p[3]}&{new_query_string}' - new_url_parts = (p[0], p[1], p[2], new_query_string, p[4]) - request.url = urlunsplit(new_url_parts) - - -class HmacV1PostAuth(HmacV1Auth): - """ - Generates a presigned post for s3. - - Spec from this document: - - http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingHTTPPOST.html - """ - - def add_auth(self, request): - fields = {} - if request.context.get('s3-presign-post-fields', None) is not None: - fields = request.context['s3-presign-post-fields'] - - policy = {} - conditions = [] - if request.context.get('s3-presign-post-policy', None) is not None: - policy = request.context['s3-presign-post-policy'] - if policy.get('conditions', None) is not None: - conditions = policy['conditions'] - - policy['conditions'] = conditions - - fields['AWSAccessKeyId'] = self.credentials.access_key - - if self.credentials.token is not None: - fields['x-amz-security-token'] = self.credentials.token - conditions.append({'x-amz-security-token': self.credentials.token}) - - # Dump the base64 encoded policy into the fields dictionary. - fields['policy'] = base64.b64encode( - json.dumps(policy).encode('utf-8') - ).decode('utf-8') - - fields['signature'] = self.sign_string(fields['policy']) - - request.context['s3-presign-post-fields'] = fields - request.context['s3-presign-post-policy'] = policy - - -class BearerAuth(TokenSigner): - """ - Performs bearer token authorization by placing the bearer token in the - Authorization header as specified by Section 2.1 of RFC 6750. 
- - https://datatracker.ietf.org/doc/html/rfc6750#section-2.1 - """ - - def add_auth(self, request): - if self.auth_token is None: - raise NoAuthTokenError() - - auth_header = f'Bearer {self.auth_token.token}' - if 'Authorization' in request.headers: - del request.headers['Authorization'] - request.headers['Authorization'] = auth_header - - -AUTH_TYPE_MAPS = { - 'v2': SigV2Auth, - 'v3': SigV3Auth, - 'v3https': SigV3Auth, - 's3': HmacV1Auth, - 's3-query': HmacV1QueryAuth, - 's3-presign-post': HmacV1PostAuth, - 's3v4-presign-post': S3SigV4PostAuth, - 'bearer': BearerAuth, -} - -# Define v4 signers depending on if CRT is present -if HAS_CRT: - from botocore.crt.auth import CRT_AUTH_TYPE_MAPS - - AUTH_TYPE_MAPS.update(CRT_AUTH_TYPE_MAPS) -else: - AUTH_TYPE_MAPS.update( - { - 'v4': SigV4Auth, - 'v4-query': SigV4QueryAuth, - 's3v4': S3SigV4Auth, - 's3v4-query': S3SigV4QueryAuth, - } - ) diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/rich/palette.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/rich/palette.py deleted file mode 100644 index fa0c4dd40381addf5b42fae4228b6d8fef03abd9..0000000000000000000000000000000000000000 --- a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/rich/palette.py +++ /dev/null @@ -1,100 +0,0 @@ -from math import sqrt -from functools import lru_cache -from typing import Sequence, Tuple, TYPE_CHECKING - -from .color_triplet import ColorTriplet - -if TYPE_CHECKING: - from pip._vendor.rich.table import Table - - -class Palette: - """A palette of available colors.""" - - def __init__(self, colors: Sequence[Tuple[int, int, int]]): - self._colors = colors - - def __getitem__(self, number: int) -> ColorTriplet: - return ColorTriplet(*self._colors[number]) - - def __rich__(self) -> "Table": - from pip._vendor.rich.color import Color - from pip._vendor.rich.style import Style - from pip._vendor.rich.text import Text - from pip._vendor.rich.table import Table - - table = Table( - "index", - "RGB", - "Color", - title="Palette", - caption=f"{len(self._colors)} colors", - highlight=True, - caption_justify="right", - ) - for index, color in enumerate(self._colors): - table.add_row( - str(index), - repr(color), - Text(" " * 16, style=Style(bgcolor=Color.from_rgb(*color))), - ) - return table - - # This is somewhat inefficient and needs caching - @lru_cache(maxsize=1024) - def match(self, color: Tuple[int, int, int]) -> int: - """Find a color from a palette that most closely matches a given color. - - Args: - color (Tuple[int, int, int]): RGB components in range 0 > 255. - - Returns: - int: Index of closes matching color. 
- """ - red1, green1, blue1 = color - _sqrt = sqrt - get_color = self._colors.__getitem__ - - def get_color_distance(index: int) -> float: - """Get the distance to a color.""" - red2, green2, blue2 = get_color(index) - red_mean = (red1 + red2) // 2 - red = red1 - red2 - green = green1 - green2 - blue = blue1 - blue2 - return _sqrt( - (((512 + red_mean) * red * red) >> 8) - + 4 * green * green - + (((767 - red_mean) * blue * blue) >> 8) - ) - - min_index = min(range(len(self._colors)), key=get_color_distance) - return min_index - - -if __name__ == "__main__": # pragma: no cover - import colorsys - from typing import Iterable - from pip._vendor.rich.color import Color - from pip._vendor.rich.console import Console, ConsoleOptions - from pip._vendor.rich.segment import Segment - from pip._vendor.rich.style import Style - - class ColorBox: - def __rich_console__( - self, console: Console, options: ConsoleOptions - ) -> Iterable[Segment]: - height = console.size.height - 3 - for y in range(0, height): - for x in range(options.max_width): - h = x / options.max_width - l = y / (height + 1) - r1, g1, b1 = colorsys.hls_to_rgb(h, l, 1.0) - r2, g2, b2 = colorsys.hls_to_rgb(h, l + (1 / height / 2), 1.0) - bgcolor = Color.from_rgb(r1 * 255, g1 * 255, b1 * 255) - color = Color.from_rgb(r2 * 255, g2 * 255, b2 * 255) - yield Segment("▄", Style(color=color, bgcolor=bgcolor)) - yield Segment.line() - - console = Console() - console.print(ColorBox()) diff --git a/spaces/Bostoncake/ChatAssistant/README.md b/spaces/Bostoncake/ChatAssistant/README.md deleted file mode 100644 index 7eeec313696830e08b4a26a4e6dac3c78eb13edb..0000000000000000000000000000000000000000 --- a/spaces/Bostoncake/ChatAssistant/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: ChatAssistant -emoji: 💩 -colorFrom: red -colorTo: pink -sdk: gradio -sdk_version: 3.22.1 -app_file: app.py -pinned: false -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/detectron2/export/caffe2_export.py b/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/detectron2/export/caffe2_export.py deleted file mode 100644 index ccac809d7bf49ab144b5f0a34f57e00c3534ad60..0000000000000000000000000000000000000000 --- a/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/detectron2/export/caffe2_export.py +++ /dev/null @@ -1,204 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved - -import copy -import io -import logging -import numpy as np -from typing import List -import onnx -import torch -from caffe2.proto import caffe2_pb2 -from caffe2.python import core -from caffe2.python.onnx.backend import Caffe2Backend -from tabulate import tabulate -from termcolor import colored -from torch.onnx import OperatorExportTypes - -from .shared import ( - ScopedWS, - construct_init_net_from_params, - fuse_alias_placeholder, - fuse_copy_between_cpu_and_gpu, - get_params_from_init_net, - group_norm_replace_aten_with_caffe2, - infer_device_type, - remove_dead_end_ops, - remove_reshape_for_fc, - save_graph, -) - -logger = logging.getLogger(__name__) - - -def export_onnx_model(model, inputs): - """ - Trace and export a model to onnx format. 
- - Args: - model (nn.Module): - inputs (tuple[args]): the model will be called by `model(*inputs)` - - Returns: - an onnx model - """ - assert isinstance(model, torch.nn.Module) - - # make sure all modules are in eval mode, onnx may change the training state - # of the module if the states are not consistent - def _check_eval(module): - assert not module.training - - model.apply(_check_eval) - - # Export the model to ONNX - with torch.no_grad(): - with io.BytesIO() as f: - torch.onnx.export( - model, - inputs, - f, - operator_export_type=OperatorExportTypes.ONNX_ATEN_FALLBACK, - # verbose=True, # NOTE: uncomment this for debugging - # export_params=True, - ) - onnx_model = onnx.load_from_string(f.getvalue()) - - # Apply ONNX's Optimization - all_passes = onnx.optimizer.get_available_passes() - passes = ["fuse_bn_into_conv"] - assert all(p in all_passes for p in passes) - onnx_model = onnx.optimizer.optimize(onnx_model, passes) - return onnx_model - - -def _op_stats(net_def): - type_count = {} - for t in [op.type for op in net_def.op]: - type_count[t] = type_count.get(t, 0) + 1 - type_count_list = sorted(type_count.items(), key=lambda kv: kv[0]) # alphabet - type_count_list = sorted(type_count_list, key=lambda kv: -kv[1]) # count - return "\n".join("{:>4}x {}".format(count, name) for name, count in type_count_list) - - -def _assign_device_option( - predict_net: caffe2_pb2.NetDef, init_net: caffe2_pb2.NetDef, tensor_inputs: List[torch.Tensor] -): - """ - ONNX exported network doesn't have concept of device, assign necessary - device option for each op in order to make it runable on GPU runtime. - """ - - def _get_device_type(torch_tensor): - assert torch_tensor.device.type in ["cpu", "cuda"] - assert torch_tensor.device.index == 0 - return torch_tensor.device.type - - def _assign_op_device_option(net_proto, net_ssa, blob_device_types): - for op, ssa_i in zip(net_proto.op, net_ssa): - if op.type in ["CopyCPUToGPU", "CopyGPUToCPU"]: - op.device_option.CopyFrom(core.DeviceOption(caffe2_pb2.CUDA, 0)) - else: - devices = [blob_device_types[b] for b in ssa_i[0] + ssa_i[1]] - assert all(d == devices[0] for d in devices) - if devices[0] == "cuda": - op.device_option.CopyFrom(core.DeviceOption(caffe2_pb2.CUDA, 0)) - - # update ops in predict_net - predict_net_input_device_types = { - (name, 0): _get_device_type(tensor) - for name, tensor in zip(predict_net.external_input, tensor_inputs) - } - predict_net_device_types = infer_device_type( - predict_net, known_status=predict_net_input_device_types, device_name_style="pytorch" - ) - predict_net_ssa, _ = core.get_ssa(predict_net) - _assign_op_device_option(predict_net, predict_net_ssa, predict_net_device_types) - - # update ops in init_net - init_net_ssa, versions = core.get_ssa(init_net) - init_net_output_device_types = { - (name, versions[name]): predict_net_device_types[(name, 0)] - for name in init_net.external_output - } - init_net_device_types = infer_device_type( - init_net, known_status=init_net_output_device_types, device_name_style="pytorch" - ) - _assign_op_device_option(init_net, init_net_ssa, init_net_device_types) - - -def export_caffe2_detection_model(model: torch.nn.Module, tensor_inputs: List[torch.Tensor]): - """ - Export a caffe2-compatible Detectron2 model to caffe2 format via ONNX. - - Arg: - model: a caffe2-compatible version of detectron2 model, defined in caffe2_modeling.py - tensor_inputs: a list of tensors that caffe2 model takes as input. 
- """ - model = copy.deepcopy(model) - assert isinstance(model, torch.nn.Module) - assert hasattr(model, "encode_additional_info") - - # Export via ONNX - logger.info("Exporting a {} model via ONNX ...".format(type(model).__name__)) - onnx_model = export_onnx_model(model, (tensor_inputs,)) - # Convert ONNX model to Caffe2 protobuf - init_net, predict_net = Caffe2Backend.onnx_graph_to_caffe2_net(onnx_model) - ops_table = [[op.type, op.input, op.output] for op in predict_net.op] - table = tabulate(ops_table, headers=["type", "input", "output"], tablefmt="pipe") - logger.info( - "ONNX export Done. Exported predict_net (before optimizations):\n" + colored(table, "cyan") - ) - - # Apply protobuf optimization - fuse_alias_placeholder(predict_net, init_net) - if any(t.device.type != "cpu" for t in tensor_inputs): - fuse_copy_between_cpu_and_gpu(predict_net) - remove_dead_end_ops(init_net) - _assign_device_option(predict_net, init_net, tensor_inputs) - params, device_options = get_params_from_init_net(init_net) - predict_net, params = remove_reshape_for_fc(predict_net, params) - init_net = construct_init_net_from_params(params, device_options) - group_norm_replace_aten_with_caffe2(predict_net) - - # Record necessary information for running the pb model in Detectron2 system. - model.encode_additional_info(predict_net, init_net) - - logger.info("Operators used in predict_net: \n{}".format(_op_stats(predict_net))) - logger.info("Operators used in init_net: \n{}".format(_op_stats(init_net))) - - return predict_net, init_net - - -def run_and_save_graph(predict_net, init_net, tensor_inputs, graph_save_path): - """ - Run the caffe2 model on given inputs, recording the shape and draw the graph. - - predict_net/init_net: caffe2 model. - tensor_inputs: a list of tensors that caffe2 model takes as input. - graph_save_path: path for saving graph of exported model. 
- """ - - logger.info("Saving graph of ONNX exported model to {} ...".format(graph_save_path)) - save_graph(predict_net, graph_save_path, op_only=False) - - # Run the exported Caffe2 net - logger.info("Running ONNX exported model ...") - with ScopedWS("__ws_tmp__", True) as ws: - ws.RunNetOnce(init_net) - initialized_blobs = set(ws.Blobs()) - uninitialized = [inp for inp in predict_net.external_input if inp not in initialized_blobs] - for name, blob in zip(uninitialized, tensor_inputs): - ws.FeedBlob(name, blob) - - try: - ws.RunNetOnce(predict_net) - except RuntimeError as e: - logger.warning("Encountered RuntimeError: \n{}".format(str(e))) - - ws_blobs = {b: ws.FetchBlob(b) for b in ws.Blobs()} - blob_sizes = {b: ws_blobs[b].shape for b in ws_blobs if isinstance(ws_blobs[b], np.ndarray)} - - logger.info("Saving graph with blob shapes to {} ...".format(graph_save_path)) - save_graph(predict_net, graph_save_path, op_only=False, blob_sizes=blob_sizes) - - return ws_blobs diff --git a/spaces/CVPR/LIVE/pybind11/tests/test_stl.py b/spaces/CVPR/LIVE/pybind11/tests/test_stl.py deleted file mode 100644 index 141b3e8492c7400e4d0980dd9bc6347f5229f80a..0000000000000000000000000000000000000000 --- a/spaces/CVPR/LIVE/pybind11/tests/test_stl.py +++ /dev/null @@ -1,252 +0,0 @@ -# -*- coding: utf-8 -*- -import pytest - -from pybind11_tests import stl as m -from pybind11_tests import UserType -from pybind11_tests import ConstructorStats - - -def test_vector(doc): - """std::vector <-> list""" - lst = m.cast_vector() - assert lst == [1] - lst.append(2) - assert m.load_vector(lst) - assert m.load_vector(tuple(lst)) - - assert m.cast_bool_vector() == [True, False] - assert m.load_bool_vector([True, False]) - - assert doc(m.cast_vector) == "cast_vector() -> List[int]" - assert doc(m.load_vector) == "load_vector(arg0: List[int]) -> bool" - - # Test regression caused by 936: pointers to stl containers weren't castable - assert m.cast_ptr_vector() == ["lvalue", "lvalue"] - - -def test_deque(doc): - """std::deque <-> list""" - lst = m.cast_deque() - assert lst == [1] - lst.append(2) - assert m.load_deque(lst) - assert m.load_deque(tuple(lst)) - - -def test_array(doc): - """std::array <-> list""" - lst = m.cast_array() - assert lst == [1, 2] - assert m.load_array(lst) - - assert doc(m.cast_array) == "cast_array() -> List[int[2]]" - assert doc(m.load_array) == "load_array(arg0: List[int[2]]) -> bool" - - -def test_valarray(doc): - """std::valarray <-> list""" - lst = m.cast_valarray() - assert lst == [1, 4, 9] - assert m.load_valarray(lst) - - assert doc(m.cast_valarray) == "cast_valarray() -> List[int]" - assert doc(m.load_valarray) == "load_valarray(arg0: List[int]) -> bool" - - -def test_map(doc): - """std::map <-> dict""" - d = m.cast_map() - assert d == {"key": "value"} - assert "key" in d - d["key2"] = "value2" - assert "key2" in d - assert m.load_map(d) - - assert doc(m.cast_map) == "cast_map() -> Dict[str, str]" - assert doc(m.load_map) == "load_map(arg0: Dict[str, str]) -> bool" - - -def test_set(doc): - """std::set <-> set""" - s = m.cast_set() - assert s == {"key1", "key2"} - s.add("key3") - assert m.load_set(s) - - assert doc(m.cast_set) == "cast_set() -> Set[str]" - assert doc(m.load_set) == "load_set(arg0: Set[str]) -> bool" - - -def test_recursive_casting(): - """Tests that stl casters preserve lvalue/rvalue context for container values""" - assert m.cast_rv_vector() == ["rvalue", "rvalue"] - assert m.cast_lv_vector() == ["lvalue", "lvalue"] - assert m.cast_rv_array() == ["rvalue", "rvalue", "rvalue"] 
- assert m.cast_lv_array() == ["lvalue", "lvalue"] - assert m.cast_rv_map() == {"a": "rvalue"} - assert m.cast_lv_map() == {"a": "lvalue", "b": "lvalue"} - assert m.cast_rv_nested() == [[[{"b": "rvalue", "c": "rvalue"}], [{"a": "rvalue"}]]] - assert m.cast_lv_nested() == { - "a": [[["lvalue", "lvalue"]], [["lvalue", "lvalue"]]], - "b": [[["lvalue", "lvalue"], ["lvalue", "lvalue"]]] - } - - # Issue #853 test case: - z = m.cast_unique_ptr_vector() - assert z[0].value == 7 and z[1].value == 42 - - -def test_move_out_container(): - """Properties use the `reference_internal` policy by default. If the underlying function - returns an rvalue, the policy is automatically changed to `move` to avoid referencing - a temporary. In case the return value is a container of user-defined types, the policy - also needs to be applied to the elements, not just the container.""" - c = m.MoveOutContainer() - moved_out_list = c.move_list - assert [x.value for x in moved_out_list] == [0, 1, 2] - - -@pytest.mark.skipif(not hasattr(m, "has_optional"), reason='no ') -def test_optional(): - assert m.double_or_zero(None) == 0 - assert m.double_or_zero(42) == 84 - pytest.raises(TypeError, m.double_or_zero, 'foo') - - assert m.half_or_none(0) is None - assert m.half_or_none(42) == 21 - pytest.raises(TypeError, m.half_or_none, 'foo') - - assert m.test_nullopt() == 42 - assert m.test_nullopt(None) == 42 - assert m.test_nullopt(42) == 42 - assert m.test_nullopt(43) == 43 - - assert m.test_no_assign() == 42 - assert m.test_no_assign(None) == 42 - assert m.test_no_assign(m.NoAssign(43)) == 43 - pytest.raises(TypeError, m.test_no_assign, 43) - - assert m.nodefer_none_optional(None) - - holder = m.OptionalHolder() - mvalue = holder.member - assert mvalue.initialized - assert holder.member_initialized() - - -@pytest.mark.skipif(not hasattr(m, "has_exp_optional"), reason='no ') -def test_exp_optional(): - assert m.double_or_zero_exp(None) == 0 - assert m.double_or_zero_exp(42) == 84 - pytest.raises(TypeError, m.double_or_zero_exp, 'foo') - - assert m.half_or_none_exp(0) is None - assert m.half_or_none_exp(42) == 21 - pytest.raises(TypeError, m.half_or_none_exp, 'foo') - - assert m.test_nullopt_exp() == 42 - assert m.test_nullopt_exp(None) == 42 - assert m.test_nullopt_exp(42) == 42 - assert m.test_nullopt_exp(43) == 43 - - assert m.test_no_assign_exp() == 42 - assert m.test_no_assign_exp(None) == 42 - assert m.test_no_assign_exp(m.NoAssign(43)) == 43 - pytest.raises(TypeError, m.test_no_assign_exp, 43) - - holder = m.OptionalExpHolder() - mvalue = holder.member - assert mvalue.initialized - assert holder.member_initialized() - - -@pytest.mark.skipif(not hasattr(m, "load_variant"), reason='no ') -def test_variant(doc): - assert m.load_variant(1) == "int" - assert m.load_variant("1") == "std::string" - assert m.load_variant(1.0) == "double" - assert m.load_variant(None) == "std::nullptr_t" - - assert m.load_variant_2pass(1) == "int" - assert m.load_variant_2pass(1.0) == "double" - - assert m.cast_variant() == (5, "Hello") - - assert doc(m.load_variant) == "load_variant(arg0: Union[int, str, float, None]) -> str" - - -def test_vec_of_reference_wrapper(): - """#171: Can't return reference wrappers (or STL structures containing them)""" - assert str(m.return_vec_of_reference_wrapper(UserType(4))) == \ - "[UserType(1), UserType(2), UserType(3), UserType(4)]" - - -def test_stl_pass_by_pointer(msg): - """Passing nullptr or None to an STL container pointer is not expected to work""" - with pytest.raises(TypeError) as excinfo: - 
m.stl_pass_by_pointer() # default value is `nullptr` - assert msg(excinfo.value) == """ - stl_pass_by_pointer(): incompatible function arguments. The following argument types are supported: - 1. (v: List[int] = None) -> List[int] - - Invoked with: - """ # noqa: E501 line too long - - with pytest.raises(TypeError) as excinfo: - m.stl_pass_by_pointer(None) - assert msg(excinfo.value) == """ - stl_pass_by_pointer(): incompatible function arguments. The following argument types are supported: - 1. (v: List[int] = None) -> List[int] - - Invoked with: None - """ # noqa: E501 line too long - - assert m.stl_pass_by_pointer([1, 2, 3]) == [1, 2, 3] - - -def test_missing_header_message(): - """Trying convert `list` to a `std::vector`, or vice versa, without including - should result in a helpful suggestion in the error message""" - import pybind11_cross_module_tests as cm - - expected_message = ("Did you forget to `#include `? Or ,\n" - ", , etc. Some automatic\n" - "conversions are optional and require extra headers to be included\n" - "when compiling your pybind11 module.") - - with pytest.raises(TypeError) as excinfo: - cm.missing_header_arg([1.0, 2.0, 3.0]) - assert expected_message in str(excinfo.value) - - with pytest.raises(TypeError) as excinfo: - cm.missing_header_return() - assert expected_message in str(excinfo.value) - - -def test_function_with_string_and_vector_string_arg(): - """Check if a string is NOT implicitly converted to a list, which was the - behavior before fix of issue #1258""" - assert m.func_with_string_or_vector_string_arg_overload(('A', 'B', )) == 2 - assert m.func_with_string_or_vector_string_arg_overload(['A', 'B']) == 2 - assert m.func_with_string_or_vector_string_arg_overload('A') == 3 - - -def test_stl_ownership(): - cstats = ConstructorStats.get(m.Placeholder) - assert cstats.alive() == 0 - r = m.test_stl_ownership() - assert len(r) == 1 - del r - assert cstats.alive() == 0 - - -def test_array_cast_sequence(): - assert m.array_cast_sequence((1, 2, 3)) == [1, 2, 3] - - -def test_issue_1561(): - """ check fix for issue #1561 """ - bar = m.Issue1561Outer() - bar.list = [m.Issue1561Inner('bar')] - bar.list - assert bar.list[0].data == 'bar' diff --git a/spaces/CVPR/LIVE/thrust/thrust/detail/functional/placeholder.h b/spaces/CVPR/LIVE/thrust/thrust/detail/functional/placeholder.h deleted file mode 100644 index d0832cfecb1c70dd28d78c44349f0ee5ad78c0fa..0000000000000000000000000000000000000000 --- a/spaces/CVPR/LIVE/thrust/thrust/detail/functional/placeholder.h +++ /dev/null @@ -1,39 +0,0 @@ -/* - * Copyright 2008-2013 NVIDIA Corporation - * - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. 
- */ - -#pragma once - -#include -#include -#include - -namespace thrust -{ -namespace detail -{ -namespace functional -{ - -template - struct placeholder -{ - typedef actor > type; -}; - -} // end functional -} // end detail -} // end thrust - diff --git a/spaces/CVPR/LIVE/thrust/thrust/scatter.h b/spaces/CVPR/LIVE/thrust/thrust/scatter.h deleted file mode 100644 index baaf1e63b1e28fbe8b071ca0fb6666145bfe7c1f..0000000000000000000000000000000000000000 --- a/spaces/CVPR/LIVE/thrust/thrust/scatter.h +++ /dev/null @@ -1,423 +0,0 @@ -/* - * Copyright 2008-2013 NVIDIA Corporation - * - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - - -/*! \file scatter.h - * \brief Irregular copying to a destination range - */ - -#pragma once - -#include -#include - -namespace thrust -{ - - -/*! \addtogroup scattering - * \ingroup copying - * \{ - */ - - -/*! \p scatter copies elements from a source range into an output array - * according to a map. For each iterator \c i in the range [\p first, \p last), - * the value \c *i is assigned to output[*(map + (i - first))]. The - * output iterator must permit random access. If the same index - * appears more than once in the range [map, map + (last - first)), - * the result is undefined. - * - * The algorithm's execution is parallelized as determined by \p exec. - * - * \param exec The execution policy to use for parallelization. - * \param first Beginning of the sequence of values to scatter. - * \param last End of the sequence of values to scatter. - * \param map Beginning of the sequence of output indices. - * \param result Destination of the source elements. - * - * \tparam DerivedPolicy The name of the derived execution policy. - * \tparam InputIterator1 must be a model of Input Iterator and \c InputIterator1's \c value_type must be convertible to \c RandomAccessIterator's \c value_type. - * \tparam InputIterator2 must be a model of Input Iterator and \c InputIterator2's \c value_type must be convertible to \c RandomAccessIterator's \c difference_type. - * \tparam RandomAccessIterator must be a model of Random Access iterator. - * - * \pre The iterator `result + i` shall not refer to any element referenced by any iterator `j` in the range `[first,last)` for all iterators `i` in the range `[map,map + (last - first))`. - * - * \pre The iterator `result + i` shall not refer to any element referenced by any iterator `j` in the range `[map,map + (last - first))` for all iterators `i` in the range `[map,map + (last - first))`. - * - * \pre The expression `result[*i]` shall be valid for all iterators in the range `[map,map + (last - first))`. - * - * The following code snippet demonstrates how to use \p scatter to - * reorder a range using the \p thrust::device execution policy for parallelization: - * - * \code - * #include - * #include - * #include - * ... 
- * // mark even indices with a 1; odd indices with a 0 - * int values[10] = {1, 0, 1, 0, 1, 0, 1, 0, 1, 0}; - * thrust::device_vector d_values(values, values + 10); - * - * // scatter all even indices into the first half of the - * // range, and odd indices vice versa - * int map[10] = {0, 5, 1, 6, 2, 7, 3, 8, 4, 9}; - * thrust::device_vector d_map(map, map + 10); - * - * thrust::device_vector d_output(10); - * thrust::scatter(thrust::device, - * d_values.begin(), d_values.end(), - * d_map.begin(), d_output.begin()); - * // d_output is now {1, 1, 1, 1, 1, 0, 0, 0, 0, 0} - * \endcode - * - * \note \p scatter is the inverse of thrust::gather. - */ -template -__host__ __device__ - void scatter(const thrust::detail::execution_policy_base &exec, - InputIterator1 first, - InputIterator1 last, - InputIterator2 map, - RandomAccessIterator result); - - -/*! \p scatter copies elements from a source range into an output array - * according to a map. For each iterator \c i in the range [\p first, \p last), - * the value \c *i is assigned to output[*(map + (i - first))]. The - * output iterator must permit random access. If the same index - * appears more than once in the range [map, map + (last - first)), - * the result is undefined. - * - * \param first Beginning of the sequence of values to scatter. - * \param last End of the sequence of values to scatter. - * \param map Beginning of the sequence of output indices. - * \param result Destination of the source elements. - * - * \tparam InputIterator1 must be a model of Input Iterator and \c InputIterator1's \c value_type must be convertible to \c RandomAccessIterator's \c value_type. - * \tparam InputIterator2 must be a model of Input Iterator and \c InputIterator2's \c value_type must be convertible to \c RandomAccessIterator's \c difference_type. - * \tparam RandomAccessIterator must be a model of Random Access iterator. - * - * \pre The iterator `result + i` shall not refer to any element referenced by any iterator `j` in the range `[first,last)` for all iterators `i` in the range `[map,map + (last - first))`. - * - * \pre The iterator `result + i` shall not refer to any element referenced by any iterator `j` in the range `[map,map + (last - first))` for all iterators `i` in the range `[map,map + (last - first))`. - * - * \pre The expression `result[*i]` shall be valid for all iterators in the range `[map,map + (last - first))`. - * - * The following code snippet demonstrates how to use \p scatter to - * reorder a range. - * - * \code - * #include - * #include - * ... - * // mark even indices with a 1; odd indices with a 0 - * int values[10] = {1, 0, 1, 0, 1, 0, 1, 0, 1, 0}; - * thrust::device_vector d_values(values, values + 10); - * - * // scatter all even indices into the first half of the - * // range, and odd indices vice versa - * int map[10] = {0, 5, 1, 6, 2, 7, 3, 8, 4, 9}; - * thrust::device_vector d_map(map, map + 10); - * - * thrust::device_vector d_output(10); - * thrust::scatter(d_values.begin(), d_values.end(), - * d_map.begin(), d_output.begin()); - * // d_output is now {1, 1, 1, 1, 1, 0, 0, 0, 0, 0} - * \endcode - * - * \note \p scatter is the inverse of thrust::gather. - */ -template - void scatter(InputIterator1 first, - InputIterator1 last, - InputIterator2 map, - RandomAccessIterator result); - - -/*! \p scatter_if conditionally copies elements from a source range into an - * output array according to a map. 
For each iterator \c i in the - * range [first, last) such that *(stencil + (i - first)) is - * true, the value \c *i is assigned to output[*(map + (i - first))]. - * The output iterator must permit random access. If the same index - * appears more than once in the range [map, map + (last - first)) - * the result is undefined. - * - * The algorithm's execution is parallelized as determined by \p exec. - * - * \param exec The execution policy to use for parallelization. - * \param first Beginning of the sequence of values to scatter. - * \param last End of the sequence of values to scatter. - * \param map Beginning of the sequence of output indices. - * \param stencil Beginning of the sequence of predicate values. - * \param output Beginning of the destination range. - * - * \tparam DerivedPolicy The name of the derived execution policy. - * \tparam InputIterator1 must be a model of Input Iterator and \c InputIterator1's \c value_type must be convertible to \c RandomAccessIterator's \c value_type. - * \tparam InputIterator2 must be a model of Input Iterator and \c InputIterator2's \c value_type must be convertible to \c RandomAccessIterator's \c difference_type. - * \tparam InputIterator3 must be a model of Input Iterator and \c InputIterator3's \c value_type must be convertible to \c bool. - * \tparam RandomAccessIterator must be a model of Random Access iterator. - * - * \pre The iterator `result + i` shall not refer to any element referenced by any iterator `j` in the range `[first,last)` for all iterators `i` in the range `[map,map + (last - first))`. - * - * \pre The iterator `result + i` shall not refer to any element referenced by any iterator `j` in the range `[map,map + (last - first))` for all iterators `i` in the range `[map,map + (last - first))`. - * - * \pre The iterator `result + i` shall not refer to any element referenced by any iterator `j` in the range `[stencil,stencil + (last - first))` for all iterators `i` in the range `[map,map + (last - first))`. - * - * \pre The expression `result[*i]` shall be valid for all iterators `i` in the range `[map,map + (last - first))` for which the following condition holds: `*(stencil + i) != false`. - * - * \code - * #include - * #include - * ... - * int V[8] = {10, 20, 30, 40, 50, 60, 70, 80}; - * int M[8] = {0, 5, 1, 6, 2, 7, 3, 4}; - * int S[8] = {1, 0, 1, 0, 1, 0, 1, 0}; - * int D[8] = {0, 0, 0, 0, 0, 0, 0, 0}; - * - * thrust::scatter_if(thrust::host, V, V + 8, M, S, D); - * - * // D contains [10, 30, 50, 70, 0, 0, 0, 0]; - * \endcode - * - * \note \p scatter_if is the inverse of thrust::gather_if. - */ -template -__host__ __device__ - void scatter_if(const thrust::detail::execution_policy_base &exec, - InputIterator1 first, - InputIterator1 last, - InputIterator2 map, - InputIterator3 stencil, - RandomAccessIterator output); - - -/*! \p scatter_if conditionally copies elements from a source range into an - * output array according to a map. For each iterator \c i in the - * range [first, last) such that *(stencil + (i - first)) is - * true, the value \c *i is assigned to output[*(map + (i - first))]. - * The output iterator must permit random access. If the same index - * appears more than once in the range [map, map + (last - first)) - * the result is undefined. - * - * \param first Beginning of the sequence of values to scatter. - * \param last End of the sequence of values to scatter. - * \param map Beginning of the sequence of output indices. - * \param stencil Beginning of the sequence of predicate values. 
- * \param output Beginning of the destination range. - * - * \tparam InputIterator1 must be a model of Input Iterator and \c InputIterator1's \c value_type must be convertible to \c RandomAccessIterator's \c value_type. - * \tparam InputIterator2 must be a model of Input Iterator and \c InputIterator2's \c value_type must be convertible to \c RandomAccessIterator's \c difference_type. - * \tparam InputIterator3 must be a model of Input Iterator and \c InputIterator3's \c value_type must be convertible to \c bool. - * \tparam RandomAccessIterator must be a model of Random Access iterator. - * - * \pre The iterator `result + i` shall not refer to any element referenced by any iterator `j` in the range `[first,last)` for all iterators `i` in the range `[map,map + (last - first))`. - * - * \pre The iterator `result + i` shall not refer to any element referenced by any iterator `j` in the range `[map,map + (last - first))` for all iterators `i` in the range `[map,map + (last - first))`. - * - * \pre The iterator `result + i` shall not refer to any element referenced by any iterator `j` in the range `[stencil,stencil + (last - first))` for all iterators `i` in the range `[map,map + (last - first))`. - * - * \pre The expression `result[*i]` shall be valid for all iterators `i` in the range `[map,map + (last - first))` for which the following condition holds: `*(stencil + i) != false`. - * - * \code - * #include - * ... - * int V[8] = {10, 20, 30, 40, 50, 60, 70, 80}; - * int M[8] = {0, 5, 1, 6, 2, 7, 3, 4}; - * int S[8] = {1, 0, 1, 0, 1, 0, 1, 0}; - * int D[8] = {0, 0, 0, 0, 0, 0, 0, 0}; - * - * thrust::scatter_if(V, V + 8, M, S, D); - * - * // D contains [10, 30, 50, 70, 0, 0, 0, 0]; - * \endcode - * - * \note \p scatter_if is the inverse of thrust::gather_if. - */ -template - void scatter_if(InputIterator1 first, - InputIterator1 last, - InputIterator2 map, - InputIterator3 stencil, - RandomAccessIterator output); - - -/*! \p scatter_if conditionally copies elements from a source range into an - * output array according to a map. For each iterator \c i in the - * range [first, last) such that pred(*(stencil + (i - first))) is - * \c true, the value \c *i is assigned to output[*(map + (i - first))]. - * The output iterator must permit random access. If the same index - * appears more than once in the range [map, map + (last - first)) - * the result is undefined. - * - * The algorithm's execution is parallelized as determined by \p exec. - * - * \param exec The execution policy to use for parallelization. - * \param first Beginning of the sequence of values to scatter. - * \param last End of the sequence of values to scatter. - * \param map Beginning of the sequence of output indices. - * \param stencil Beginning of the sequence of predicate values. - * \param output Beginning of the destination range. - * \param pred Predicate to apply to the stencil values. - * - * \tparam DerivedPolicy The name of the derived execution policy. - * \tparam InputIterator1 must be a model of Input Iterator and \c InputIterator1's \c value_type must be convertible to \c RandomAccessIterator's \c value_type. - * \tparam InputIterator2 must be a model of Input Iterator and \c InputIterator2's \c value_type must be convertible to \c RandomAccessIterator's \c difference_type. - * \tparam InputIterator3 must be a model of Input Iterator and \c InputIterator3's \c value_type must be convertible to \c Predicate's \c argument_type. - * \tparam RandomAccessIterator must be a model of Random Access iterator. 
- * \tparam Predicate must be a model of Predicate. - * - * \pre The iterator `result + i` shall not refer to any element referenced by any iterator `j` in the range `[first,last)` for all iterators `i` in the range `[map,map + (last - first))`. - * - * \pre The iterator `result + i` shall not refer to any element referenced by any iterator `j` in the range `[map,map + (last - first))` for all iterators `i` in the range `[map,map + (last - first))`. - * - * \pre The iterator `result + i` shall not refer to any element referenced by any iterator `j` in the range `[stencil,stencil + (last - first))` for all iterators `i` in the range `[map,map + (last - first))`. - * - * \pre The expression `result[*i]` shall be valid for all iterators `i` in the range `[map,map + (last - first))` for which the following condition holds: `pred(*(stencil + i)) != false`. - * - * \code - * #include - * #include - * - * struct is_even - * { - * __host__ __device__ - * bool operator()(int x) - * { - * return (x % 2) == 0; - * } - * }; - * - * ... - * - * int V[8] = {10, 20, 30, 40, 50, 60, 70, 80}; - * int M[8] = {0, 5, 1, 6, 2, 7, 3, 4}; - * int S[8] = {2, 1, 2, 1, 2, 1, 2, 1}; - * int D[8] = {0, 0, 0, 0, 0, 0, 0, 0}; - * - * is_even pred; - * thrust::scatter_if(thrust::host, V, V + 8, M, S, D, pred); - * - * // D contains [10, 30, 50, 70, 0, 0, 0, 0]; - * \endcode - * - * \note \p scatter_if is the inverse of thrust::gather_if. - */ -template -__host__ __device__ - void scatter_if(const thrust::detail::execution_policy_base &exec, - InputIterator1 first, - InputIterator1 last, - InputIterator2 map, - InputIterator3 stencil, - RandomAccessIterator output, - Predicate pred); - - -/*! \p scatter_if conditionally copies elements from a source range into an - * output array according to a map. For each iterator \c i in the - * range [first, last) such that pred(*(stencil + (i - first))) is - * \c true, the value \c *i is assigned to output[*(map + (i - first))]. - * The output iterator must permit random access. If the same index - * appears more than once in the range [map, map + (last - first)) - * the result is undefined. - * - * \param first Beginning of the sequence of values to scatter. - * \param last End of the sequence of values to scatter. - * \param map Beginning of the sequence of output indices. - * \param stencil Beginning of the sequence of predicate values. - * \param output Beginning of the destination range. - * \param pred Predicate to apply to the stencil values. - * - * \tparam InputIterator1 must be a model of Input Iterator and \c InputIterator1's \c value_type must be convertible to \c RandomAccessIterator's \c value_type. - * \tparam InputIterator2 must be a model of Input Iterator and \c InputIterator2's \c value_type must be convertible to \c RandomAccessIterator's \c difference_type. - * \tparam InputIterator3 must be a model of Input Iterator and \c InputIterator3's \c value_type must be convertible to \c Predicate's \c argument_type. - * \tparam RandomAccessIterator must be a model of Random Access iterator. - * \tparam Predicate must be a model of Predicate. - * - * \pre The iterator `result + i` shall not refer to any element referenced by any iterator `j` in the range `[first,last)` for all iterators `i` in the range `[map,map + (last - first))`. - * - * \pre The iterator `result + i` shall not refer to any element referenced by any iterator `j` in the range `[map,map + (last - first))` for all iterators `i` in the range `[map,map + (last - first))`. 
- * - * \pre The iterator `result + i` shall not refer to any element referenced by any iterator `j` in the range `[stencil,stencil + (last - first))` for all iterators `i` in the range `[map,map + (last - first))`. - * - * \pre The expression `result[*i]` shall be valid for all iterators `i` in the range `[map,map + (last - first))` for which the following condition holds: `pred(*(stencil + i)) != false`. - * - * \code - * #include - * - * struct is_even - * { - * __host__ __device__ - * bool operator()(int x) - * { - * return (x % 2) == 0; - * } - * }; - * - * ... - * - * int V[8] = {10, 20, 30, 40, 50, 60, 70, 80}; - * int M[8] = {0, 5, 1, 6, 2, 7, 3, 4}; - * int S[8] = {2, 1, 2, 1, 2, 1, 2, 1}; - * int D[8] = {0, 0, 0, 0, 0, 0, 0, 0}; - * - * is_even pred; - * thrust::scatter_if(V, V + 8, M, S, D, pred); - * - * // D contains [10, 30, 50, 70, 0, 0, 0, 0]; - * \endcode - * - * \note \p scatter_if is the inverse of thrust::gather_if. - */ -template - void scatter_if(InputIterator1 first, - InputIterator1 last, - InputIterator2 map, - InputIterator3 stencil, - RandomAccessIterator output, - Predicate pred); - - -/*! \} // end scattering - */ - - -} // end namespace thrust - -#include - diff --git a/spaces/CVPR/LIVE/thrust/thrust/system/detail/generic/advance.h b/spaces/CVPR/LIVE/thrust/thrust/system/detail/generic/advance.h deleted file mode 100644 index f9cab587b374b9349ee7bfff8128a42462ad17ab..0000000000000000000000000000000000000000 --- a/spaces/CVPR/LIVE/thrust/thrust/system/detail/generic/advance.h +++ /dev/null @@ -1,41 +0,0 @@ -/* - * Copyright 2008-2013 NVIDIA Corporation - * - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. 
- */ - - -#pragma once - -#include - -namespace thrust -{ -namespace system -{ -namespace detail -{ -namespace generic -{ - -template -__host__ __device__ -void advance(InputIterator& i, Distance n); - -} // end namespace generic -} // end namespace detail -} // end namespace system -} // end namespace thrust - -#include - diff --git a/spaces/Cpp4App/Cpp4App/CDM/detect_text/text_detection.py b/spaces/Cpp4App/Cpp4App/CDM/detect_text/text_detection.py deleted file mode 100644 index 3d7a92c993a5ae3544dd20b6b17e02b37bc9aaf9..0000000000000000000000000000000000000000 --- a/spaces/Cpp4App/Cpp4App/CDM/detect_text/text_detection.py +++ /dev/null @@ -1,289 +0,0 @@ -import CDM.detect_text.ocr as ocr -from CDM.detect_text.Text import Text -import numpy as np -import cv2 -import json -import time -import os -from os.path import join as pjoin -# from paddleocr import PaddleOCR -import pytesseract - -# paddle_model = PaddleOCR(use_angle_cls=True, lang="en") #'ch' for chinese and english, 'en' for english - - -def save_detection_json(file_path, texts, img_shape): - f_out = open(file_path, 'w') - output = {'img_shape': img_shape, 'texts': []} - for text in texts: - c = {'id': text.id, 'content': text.content} - loc = text.location - c['column_min'], c['row_min'], c['column_max'], c['row_max'] = loc['left'], loc['top'], loc['right'], loc['bottom'] - c['width'] = text.width - c['height'] = text.height - output['texts'].append(c) - json.dump(output, f_out, indent=4) - - -def visualize_texts(org_img, texts, shown_resize_height=None, show=False, write_path=None): - img = org_img.copy() - for text in texts: - text.visualize_element(img, line=2) - - img_resize = img - if shown_resize_height is not None: - img_resize = cv2.resize(img, (int(shown_resize_height * (img.shape[1]/img.shape[0])), shown_resize_height)) - - if show: - cv2.imshow('texts', img_resize) - cv2.waitKey(0) - cv2.destroyWindow('texts') - if write_path is not None: - cv2.imwrite(write_path, img) - - -def text_sentences_recognition(texts): - ''' - Merge separate words detected by Google ocr into a sentence - ''' - changed = True - while changed: - changed = False - temp_set = [] - for text_a in texts: - merged = False - for text_b in temp_set: - if text_a.is_on_same_line(text_b, 'h', bias_justify=0.2 * min(text_a.height, text_b.height), bias_gap=2 * max(text_a.word_width, text_b.word_width)): - text_b.merge_text(text_a) - merged = True - changed = True - break - if not merged: - temp_set.append(text_a) - texts = temp_set.copy() - - for i, text in enumerate(texts): - text.id = i - return texts - - -def merge_intersected_texts(texts): - ''' - Merge intersected texts (sentences or words) - ''' - changed = True - while changed: - changed = False - temp_set = [] - for text_a in texts: - merged = False - for text_b in temp_set: - if text_a.is_intersected(text_b, bias=2): - text_b.merge_text(text_a) - merged = True - changed = True - break - if not merged: - temp_set.append(text_a) - texts = temp_set.copy() - return texts - - -def text_cvt_orc_format(ocr_result): - texts = [] - if ocr_result is not None: - for i, result in enumerate(ocr_result): - error = False - x_coordinates = [] - y_coordinates = [] - text_location = result['boundingPoly']['vertices'] - content = result['description'] - for loc in text_location: - if 'x' not in loc or 'y' not in loc: - error = True - break - x_coordinates.append(loc['x']) - y_coordinates.append(loc['y']) - if error: continue - location = {'left': min(x_coordinates), 'top': min(y_coordinates), - 'right': 
max(x_coordinates), 'bottom': max(y_coordinates)} - texts.append(Text(i, content, location)) - return texts - - -def text_cvt_orc_format_paddle(paddle_result): - texts = [] - for i, line in enumerate(paddle_result): - points = np.array(line[0]) - # points = points * 5 - location = {'left': int(min(points[:, 0])), 'top': int(min(points[:, 1])), 'right': int(max(points[:, 0])), - 'bottom': int(max(points[:, 1]))} - content = line[1][0] - texts.append(Text(i, content, location)) - return texts - - -def text_cvt_orc_format_tesseract(tesseract_result): - # texts = [] - # i_real = 0 - # for i, line in enumerate(tesseract_result['text']): - # content = line.strip() - # location = { - # 'left': int(tesseract_result['left'][i]), - # 'top': int(tesseract_result['top'][i]), - # 'right': int(tesseract_result['left'][i]) + int(tesseract_result['width'][i]), - # 'bottom': int(tesseract_result['top'][i]) + int(tesseract_result['height'][i]) - # } - # if len(content) > 0: - # texts.append(Text(i_real, content, location)) - # i_real = i_real + 1 - - # Extract line boxes - texts = [] - i_real = 0 - line_boxes = [] - n_boxes = len(tesseract_result['level']) - for i in range(n_boxes): - if tesseract_result['level'][i] == 4 and len(tesseract_result['text'][i].strip()) > 0: - # (x, y, w, h) = (tesseract_result['left'][i], tesseract_result['top'][i], tesseract_result['width'][i], tesseract_result['height'][i]) - content = tesseract_result['text'][i].strip() - location = { - 'left': int(tesseract_result['left'][i]), - 'top': int(tesseract_result['top'][i]), - 'right': int(tesseract_result['left'][i]) + int(tesseract_result['width'][i]), - 'bottom': int(tesseract_result['top'][i]) + int(tesseract_result['height'][i]) - } - texts.append(Text(i_real, content, location)) - i_real = i_real + 1 - # print("ocr result: ", texts) - - return texts - -def text_cvt_orc_format_tesseract_by_line(data): - - # line_data = [] - line_num = None - line_text = [] - line_box = [0, 0, 0, 0] - texts = [] - i_real = 0 - - for i in range(len(data['level'])): - # check if the level is word - if data['level'][i] == 5: - if line_num != data['line_num'][i]: - if line_num is not None: # append the previous line data to line_data - content = ' '.join(line_text) - location = { - 'left': line_box[0], - 'top': line_box[1], - 'right': line_box[2], - 'bottom': line_box[3] - } - texts.append(Text(i_real, content, location)) - i_real = i_real + 1 - - # start a new line - line_num = data['line_num'][i] - line_text = [data['text'][i]] - line_box = [ - data['left'][i], - data['top'][i], - data['left'][i] + data['width'][i], - data['top'][i] + data['height'][i], - ] - else: # add a word to the current line - line_text.append(data['text'][i]) - line_box[2] = max(line_box[2], data['left'][i] + data['width'][i]) - line_box[3] = max(line_box[3], data['top'][i] + data['height'][i]) - - # append the last line data to line_data - if line_text: - content = ' '.join(line_text) - location = { - 'left': line_box[0], - 'top': line_box[1], - 'right': line_box[2], - 'bottom': line_box[3] - } - texts.append(Text(i_real, content, location)) - i_real = i_real + 1 - - return texts - - -def text_filter_noise(texts): - valid_texts = [] - for text in texts: - if len(text.content) <= 1 and text.content.lower() not in ['a', ',', '.', '!', '?', '$', '%', ':', '&', '+']: - continue - valid_texts.append(text) - return valid_texts - - -def text_detection(input_file='../data/input/30800.jpg', output_file='../data/output', show=False, method='google', paddle_model=None): - ''' - 
:param method: google or paddle - :param paddle_model: the preload paddle model for paddle ocr - ''' - start = time.process_time() - name = input_file.split('/')[-1][:-4] - ocr_root = pjoin(output_file, 'ocr') - img = cv2.imread(input_file) - if img is None: - print("imread nothing!") - - # resize the img to speed up the ocr - # img = cv2.resize(img, (int(img.shape[1]/5), int(img.shape[0]/5))) - # cv2.imshow("img", img) - # cv2.waitKey(0) - - if method == 'google': - print('*** Detect Text through Google OCR ***') - ocr_result = ocr.ocr_detection_google(input_file) - texts = text_cvt_orc_format(ocr_result) - texts = merge_intersected_texts(texts) - texts = text_filter_noise(texts) - texts = text_sentences_recognition(texts) - ocr_time_cost = time.process_time() - start - elif method == 'paddle': - # The import of the paddle ocr can be separate to the beginning of the program if you decide to use this method - # from paddleocr import PaddleOCR - print('*** Detect Text through Paddle OCR ***') - # if paddle_model is None: - # paddle_model = PaddleOCR(use_angle_cls=True, lang="en") #'ch' for chinese and english, 'en' for english - # None - result = paddle_model.ocr(input_file, cls=True) - ocr_time_cost = time.process_time() - start - texts = text_cvt_orc_format_paddle(result) - - elif method == 'pytesseract': - - img_rgb = cv2.cvtColor(img, cv2.COLOR_BGR2RGB) - - # Perform OCR using Tesseract - result = pytesseract.image_to_data(img_rgb, output_type=pytesseract.Output.DICT) - print("ocr result: ", result) - - ocr_time_cost = time.process_time() - start - - # Convert the Tesseract result to the desired format - texts = text_cvt_orc_format_tesseract_by_line(result) - print("texts: ", texts) - else: - raise ValueError('Method has to be "google" or "paddle" or "pytesseract"') - - visualize_texts(img, texts, shown_resize_height=800, show=show, write_path=pjoin(ocr_root, name+'.png')) - save_detection_json(pjoin(ocr_root, name+'.json'), texts, img.shape) - # ocr_time_cost = time.process_time() - start - print("[Text Detection Completed in %.3f s] Input: %s Output: %s" % (ocr_time_cost, input_file, pjoin(ocr_root, name+'.json'))) - - # print("!!! 
detected content !!!") - # for text in texts: - # print(text.content) - - return ocr_time_cost - - -# text_detection() - diff --git a/spaces/DHEIVER/Segmento_de_Angio_Coronariana_v5/preprocess.py b/spaces/DHEIVER/Segmento_de_Angio_Coronariana_v5/preprocess.py deleted file mode 100644 index 4020532614a2fa0c501e59585d2f2b52a79f0184..0000000000000000000000000000000000000000 --- a/spaces/DHEIVER/Segmento_de_Angio_Coronariana_v5/preprocess.py +++ /dev/null @@ -1,13 +0,0 @@ -import cv2 -import numpy as np - -def unsharp_masking(img, kernel_size=5, threshold=2.0): - if kernel_size % 2 == 0: - kernel_size += 1 # Ensure the kernel size is odd - gaussian = cv2.GaussianBlur(img, (kernel_size, kernel_size), 2.0) - unsharp_mask = cv2.addWeighted(img, threshold, gaussian, -1.0, 0) - # Clip the pixel values to the valid range [0, 255] - unsharp_mask = np.clip(unsharp_mask, 0, 255) - # Normalize the image to bring pixel values back to [0, 255] - cv2.normalize(unsharp_mask, unsharp_mask, 0, 255, cv2.NORM_MINMAX) - return unsharp_mask diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/varLib/stat.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/varLib/stat.py deleted file mode 100644 index 46c9498dc720e7c23b278ae31b65dbf55f2ad8be..0000000000000000000000000000000000000000 --- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/varLib/stat.py +++ /dev/null @@ -1,142 +0,0 @@ -"""Extra methods for DesignSpaceDocument to generate its STAT table data.""" - -from __future__ import annotations - -from typing import Dict, List, Union - -import fontTools.otlLib.builder -from fontTools.designspaceLib import ( - AxisLabelDescriptor, - DesignSpaceDocument, - DesignSpaceDocumentError, - LocationLabelDescriptor, -) -from fontTools.designspaceLib.types import Region, getVFUserRegion, locationInRegion -from fontTools.ttLib import TTFont - - -def buildVFStatTable(ttFont: TTFont, doc: DesignSpaceDocument, vfName: str) -> None: - """Build the STAT table for the variable font identified by its name in - the given document. - - Knowing which variable we're building STAT data for is needed to subset - the STAT locations to only include what the variable font actually ships. - - .. versionadded:: 5.0 - - .. seealso:: - - :func:`getStatAxes()` - - :func:`getStatLocations()` - - :func:`fontTools.otlLib.builder.buildStatTable()` - """ - for vf in doc.getVariableFonts(): - if vf.name == vfName: - break - else: - raise DesignSpaceDocumentError( - f"Cannot find the variable font by name {vfName}" - ) - - region = getVFUserRegion(doc, vf) - - return fontTools.otlLib.builder.buildStatTable( - ttFont, - getStatAxes(doc, region), - getStatLocations(doc, region), - doc.elidedFallbackName if doc.elidedFallbackName is not None else 2, - ) - - -def getStatAxes(doc: DesignSpaceDocument, userRegion: Region) -> List[Dict]: - """Return a list of axis dicts suitable for use as the ``axes`` - argument to :func:`fontTools.otlLib.builder.buildStatTable()`. - - .. versionadded:: 5.0 - """ - # First, get the axis labels with explicit ordering - # then append the others in the order they appear. 
- maxOrdering = max( - (axis.axisOrdering for axis in doc.axes if axis.axisOrdering is not None), - default=-1, - ) - axisOrderings = [] - for axis in doc.axes: - if axis.axisOrdering is not None: - axisOrderings.append(axis.axisOrdering) - else: - maxOrdering += 1 - axisOrderings.append(maxOrdering) - return [ - dict( - tag=axis.tag, - name={"en": axis.name, **axis.labelNames}, - ordering=ordering, - values=[ - _axisLabelToStatLocation(label) - for label in axis.axisLabels - if locationInRegion({axis.name: label.userValue}, userRegion) - ], - ) - for axis, ordering in zip(doc.axes, axisOrderings) - ] - - -def getStatLocations(doc: DesignSpaceDocument, userRegion: Region) -> List[Dict]: - """Return a list of location dicts suitable for use as the ``locations`` - argument to :func:`fontTools.otlLib.builder.buildStatTable()`. - - .. versionadded:: 5.0 - """ - axesByName = {axis.name: axis for axis in doc.axes} - return [ - dict( - name={"en": label.name, **label.labelNames}, - # Location in the designspace is keyed by axis name - # Location in buildStatTable by axis tag - location={ - axesByName[name].tag: value - for name, value in label.getFullUserLocation(doc).items() - }, - flags=_labelToFlags(label), - ) - for label in doc.locationLabels - if locationInRegion(label.getFullUserLocation(doc), userRegion) - ] - - -def _labelToFlags(label: Union[AxisLabelDescriptor, LocationLabelDescriptor]) -> int: - flags = 0 - if label.olderSibling: - flags |= 1 - if label.elidable: - flags |= 2 - return flags - - -def _axisLabelToStatLocation( - label: AxisLabelDescriptor, -) -> Dict: - label_format = label.getFormat() - name = {"en": label.name, **label.labelNames} - flags = _labelToFlags(label) - if label_format == 1: - return dict(name=name, value=label.userValue, flags=flags) - if label_format == 3: - return dict( - name=name, - value=label.userValue, - linkedValue=label.linkedUserValue, - flags=flags, - ) - if label_format == 2: - res = dict( - name=name, - nominalValue=label.userValue, - flags=flags, - ) - if label.userMinimum is not None: - res["rangeMinValue"] = label.userMinimum - if label.userMaximum is not None: - res["rangeMaxValue"] = label.userMaximum - return res - raise NotImplementedError("Unknown STAT label format") diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/httpcore/_async/connection_pool.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/httpcore/_async/connection_pool.py deleted file mode 100644 index ddc0510e60e7b744b177394dba49f7541c81b803..0000000000000000000000000000000000000000 --- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/httpcore/_async/connection_pool.py +++ /dev/null @@ -1,356 +0,0 @@ -import ssl -import sys -from types import TracebackType -from typing import AsyncIterable, AsyncIterator, Iterable, List, Optional, Type - -from .._backends.auto import AutoBackend -from .._backends.base import SOCKET_OPTION, AsyncNetworkBackend -from .._exceptions import ConnectionNotAvailable, UnsupportedProtocol -from .._models import Origin, Request, Response -from .._synchronization import AsyncEvent, AsyncLock, AsyncShieldCancellation -from .connection import AsyncHTTPConnection -from .interfaces import AsyncConnectionInterface, AsyncRequestInterface - - -class RequestStatus: - def __init__(self, request: Request): - self.request = request - self.connection: Optional[AsyncConnectionInterface] = None - self._connection_acquired = AsyncEvent() - - def set_connection(self, connection: AsyncConnectionInterface) -> None: - assert 
self.connection is None - self.connection = connection - self._connection_acquired.set() - - def unset_connection(self) -> None: - assert self.connection is not None - self.connection = None - self._connection_acquired = AsyncEvent() - - async def wait_for_connection( - self, timeout: Optional[float] = None - ) -> AsyncConnectionInterface: - if self.connection is None: - await self._connection_acquired.wait(timeout=timeout) - assert self.connection is not None - return self.connection - - -class AsyncConnectionPool(AsyncRequestInterface): - """ - A connection pool for making HTTP requests. - """ - - def __init__( - self, - ssl_context: Optional[ssl.SSLContext] = None, - max_connections: Optional[int] = 10, - max_keepalive_connections: Optional[int] = None, - keepalive_expiry: Optional[float] = None, - http1: bool = True, - http2: bool = False, - retries: int = 0, - local_address: Optional[str] = None, - uds: Optional[str] = None, - network_backend: Optional[AsyncNetworkBackend] = None, - socket_options: Optional[Iterable[SOCKET_OPTION]] = None, - ) -> None: - """ - A connection pool for making HTTP requests. - - Parameters: - ssl_context: An SSL context to use for verifying connections. - If not specified, the default `httpcore.default_ssl_context()` - will be used. - max_connections: The maximum number of concurrent HTTP connections that - the pool should allow. Any attempt to send a request on a pool that - would exceed this amount will block until a connection is available. - max_keepalive_connections: The maximum number of idle HTTP connections - that will be maintained in the pool. - keepalive_expiry: The duration in seconds that an idle HTTP connection - may be maintained for before being expired from the pool. - http1: A boolean indicating if HTTP/1.1 requests should be supported - by the connection pool. Defaults to True. - http2: A boolean indicating if HTTP/2 requests should be supported by - the connection pool. Defaults to False. - retries: The maximum number of retries when trying to establish a - connection. - local_address: Local address to connect from. Can also be used to connect - using a particular address family. Using `local_address="0.0.0.0"` - will connect using an `AF_INET` address (IPv4), while using - `local_address="::"` will connect using an `AF_INET6` address (IPv6). - uds: Path to a Unix Domain Socket to use instead of TCP sockets. - network_backend: A backend instance to use for handling network I/O. - socket_options: Socket options that have to be included - in the TCP socket when the connection was established. 
- """ - self._ssl_context = ssl_context - - self._max_connections = ( - sys.maxsize if max_connections is None else max_connections - ) - self._max_keepalive_connections = ( - sys.maxsize - if max_keepalive_connections is None - else max_keepalive_connections - ) - self._max_keepalive_connections = min( - self._max_connections, self._max_keepalive_connections - ) - - self._keepalive_expiry = keepalive_expiry - self._http1 = http1 - self._http2 = http2 - self._retries = retries - self._local_address = local_address - self._uds = uds - - self._pool: List[AsyncConnectionInterface] = [] - self._requests: List[RequestStatus] = [] - self._pool_lock = AsyncLock() - self._network_backend = ( - AutoBackend() if network_backend is None else network_backend - ) - self._socket_options = socket_options - - def create_connection(self, origin: Origin) -> AsyncConnectionInterface: - return AsyncHTTPConnection( - origin=origin, - ssl_context=self._ssl_context, - keepalive_expiry=self._keepalive_expiry, - http1=self._http1, - http2=self._http2, - retries=self._retries, - local_address=self._local_address, - uds=self._uds, - network_backend=self._network_backend, - socket_options=self._socket_options, - ) - - @property - def connections(self) -> List[AsyncConnectionInterface]: - """ - Return a list of the connections currently in the pool. - - For example: - - ```python - >>> pool.connections - [ - , - , - , - ] - ``` - """ - return list(self._pool) - - async def _attempt_to_acquire_connection(self, status: RequestStatus) -> bool: - """ - Attempt to provide a connection that can handle the given origin. - """ - origin = status.request.url.origin - - # If there are queued requests in front of us, then don't acquire a - # connection. We handle requests strictly in order. - waiting = [s for s in self._requests if s.connection is None] - if waiting and waiting[0] is not status: - return False - - # Reuse an existing connection if one is currently available. - for idx, connection in enumerate(self._pool): - if connection.can_handle_request(origin) and connection.is_available(): - self._pool.pop(idx) - self._pool.insert(0, connection) - status.set_connection(connection) - return True - - # If the pool is currently full, attempt to close one idle connection. - if len(self._pool) >= self._max_connections: - for idx, connection in reversed(list(enumerate(self._pool))): - if connection.is_idle(): - await connection.aclose() - self._pool.pop(idx) - break - - # If the pool is still full, then we cannot acquire a connection. - if len(self._pool) >= self._max_connections: - return False - - # Otherwise create a new connection. - connection = self.create_connection(origin) - self._pool.insert(0, connection) - status.set_connection(connection) - return True - - async def _close_expired_connections(self) -> None: - """ - Clean up the connection pool by closing off any connections that have expired. - """ - # Close any connections that have expired their keep-alive time. - for idx, connection in reversed(list(enumerate(self._pool))): - if connection.has_expired(): - await connection.aclose() - self._pool.pop(idx) - - # If the pool size exceeds the maximum number of allowed keep-alive connections, - # then close off idle connections as required. 
- pool_size = len(self._pool) - for idx, connection in reversed(list(enumerate(self._pool))): - if connection.is_idle() and pool_size > self._max_keepalive_connections: - await connection.aclose() - self._pool.pop(idx) - pool_size -= 1 - - async def handle_async_request(self, request: Request) -> Response: - """ - Send an HTTP request, and return an HTTP response. - - This is the core implementation that is called into by `.request()` or `.stream()`. - """ - scheme = request.url.scheme.decode() - if scheme == "": - raise UnsupportedProtocol( - "Request URL is missing an 'http://' or 'https://' protocol." - ) - if scheme not in ("http", "https", "ws", "wss"): - raise UnsupportedProtocol( - f"Request URL has an unsupported protocol '{scheme}://'." - ) - - status = RequestStatus(request) - - async with self._pool_lock: - self._requests.append(status) - await self._close_expired_connections() - await self._attempt_to_acquire_connection(status) - - while True: - timeouts = request.extensions.get("timeout", {}) - timeout = timeouts.get("pool", None) - try: - connection = await status.wait_for_connection(timeout=timeout) - except BaseException as exc: - # If we timeout here, or if the task is cancelled, then make - # sure to remove the request from the queue before bubbling - # up the exception. - async with self._pool_lock: - # Ensure only remove when task exists. - if status in self._requests: - self._requests.remove(status) - raise exc - - try: - response = await connection.handle_async_request(request) - except ConnectionNotAvailable: - # The ConnectionNotAvailable exception is a special case, that - # indicates we need to retry the request on a new connection. - # - # The most common case where this can occur is when multiple - # requests are queued waiting for a single connection, which - # might end up as an HTTP/2 connection, but which actually ends - # up as HTTP/1.1. - async with self._pool_lock: - # Maintain our position in the request queue, but reset the - # status so that the request becomes queued again. - status.unset_connection() - await self._attempt_to_acquire_connection(status) - except BaseException as exc: - with AsyncShieldCancellation(): - await self.response_closed(status) - raise exc - else: - break - - # When we return the response, we wrap the stream in a special class - # that handles notifying the connection pool once the response - # has been released. - assert isinstance(response.stream, AsyncIterable) - return Response( - status=response.status, - headers=response.headers, - content=ConnectionPoolByteStream(response.stream, self, status), - extensions=response.extensions, - ) - - async def response_closed(self, status: RequestStatus) -> None: - """ - This method acts as a callback once the request/response cycle is complete. - - It is called into from the `ConnectionPoolByteStream.aclose()` method. - """ - assert status.connection is not None - connection = status.connection - - async with self._pool_lock: - # Update the state of the connection pool. - if status in self._requests: - self._requests.remove(status) - - if connection.is_closed() and connection in self._pool: - self._pool.remove(connection) - - # Since we've had a response closed, it's possible we'll now be able - # to service one or more requests that are currently pending. 
- for status in self._requests: - if status.connection is None: - acquired = await self._attempt_to_acquire_connection(status) - # If we could not acquire a connection for a queued request - # then we don't need to check anymore requests that are - # queued later behind it. - if not acquired: - break - - # Housekeeping. - await self._close_expired_connections() - - async def aclose(self) -> None: - """ - Close any connections in the pool. - """ - async with self._pool_lock: - for connection in self._pool: - await connection.aclose() - self._pool = [] - self._requests = [] - - async def __aenter__(self) -> "AsyncConnectionPool": - return self - - async def __aexit__( - self, - exc_type: Optional[Type[BaseException]] = None, - exc_value: Optional[BaseException] = None, - traceback: Optional[TracebackType] = None, - ) -> None: - await self.aclose() - - -class ConnectionPoolByteStream: - """ - A wrapper around the response byte stream, that additionally handles - notifying the connection pool when the response has been closed. - """ - - def __init__( - self, - stream: AsyncIterable[bytes], - pool: AsyncConnectionPool, - status: RequestStatus, - ) -> None: - self._stream = stream - self._pool = pool - self._status = status - - async def __aiter__(self) -> AsyncIterator[bytes]: - async for part in self._stream: - yield part - - async def aclose(self) -> None: - try: - if hasattr(self._stream, "aclose"): - await self._stream.aclose() - finally: - with AsyncShieldCancellation(): - await self._pool.response_closed(self._status) diff --git a/spaces/DaFujaTyping/hf-Chat-ui/PRIVACY.md b/spaces/DaFujaTyping/hf-Chat-ui/PRIVACY.md deleted file mode 100644 index 462692780d6c4617948b39f20ad1a8a32f4f3af9..0000000000000000000000000000000000000000 --- a/spaces/DaFujaTyping/hf-Chat-ui/PRIVACY.md +++ /dev/null @@ -1,35 +0,0 @@ -## Privacy - -> Last updated: May 2nd, 2023 - -In this `v0.1` of HuggingChat, users are not authenticated in any way, i.e. this app doesn't have access to your HF user account even if you're logged in to huggingface.co. The app is only using an anonymous session cookie. ❗️ Warning ❗️ this means if you switch browsers or clear cookies, you will currently lose your conversations. - -By default, your conversations are shared with the model's authors (for the `v0.1` model, to Open Assistant) to improve their training data and model over time. Model authors are the custodians of the data collected by their model, even if it's hosted on our platform. - -If you disable data sharing in your settings, your conversations will not be used for any downstream usage (including for research or model training purposes), and they will only be stored to let you access past conversations. You can click on the Delete icon to delete any past conversation at any moment. - -🗓 Please also consult huggingface.co's main privacy policy at https://huggingface.co/privacy. To exercise any of your legal privacy rights, please send an email to privacy@huggingface.co. - -## About available LLMs - -The goal of this app is to showcase that it is now (April 2023) possible to build an open source alternative to ChatGPT. 💪 - -For now, it's running OpenAssistant's [latest LLaMA based model](https://huggingface.co/OpenAssistant/oasst-sft-6-llama-30b-xor) (which is one of the current best open source chat models), but the plan in the longer-term is to expose all good-quality chat models from the Hub. 
- -We are not affiliated with Open Assistant, but if you want to contribute to the training data for the next generation of open models, please consider contributing to https://open-assistant.io/ ❤️ - -## Technical details - -This app is running in a [Space](https://huggingface.co/docs/hub/spaces-overview), which entails that the code for this UI is open source: https://huggingface.co/spaces/huggingchat/chat-ui/tree/main. -The inference backend is running [text-generation-inference](https://github.com/huggingface/text-generation-inference) on HuggingFace's Inference API infrastructure. - -It is therefore possible to deploy a copy of this app to a Space and customize it (swap model, add some UI elements, or store user messages according to your own Terms and conditions) - -We welcome any feedback on this app: please participate to the public discussion at https://huggingface.co/spaces/huggingchat/chat-ui/discussions - - - -## Coming soon - -- LLM watermarking -- User setting to share conversations with model authors (done ✅) diff --git a/spaces/Dantra1/CeliaSensei/mel_processing.py b/spaces/Dantra1/CeliaSensei/mel_processing.py deleted file mode 100644 index 3e252e76320522a8a4195a60665168f22769aec2..0000000000000000000000000000000000000000 --- a/spaces/Dantra1/CeliaSensei/mel_processing.py +++ /dev/null @@ -1,101 +0,0 @@ -import torch -import torch.utils.data -from librosa.filters import mel as librosa_mel_fn - -MAX_WAV_VALUE = 32768.0 - - -def dynamic_range_compression_torch(x, C=1, clip_val=1e-5): - """ - PARAMS - ------ - C: compression factor - """ - return torch.log(torch.clamp(x, min=clip_val) * C) - - -def dynamic_range_decompression_torch(x, C=1): - """ - PARAMS - ------ - C: compression factor used to compress - """ - return torch.exp(x) / C - - -def spectral_normalize_torch(magnitudes): - output = dynamic_range_compression_torch(magnitudes) - return output - - -def spectral_de_normalize_torch(magnitudes): - output = dynamic_range_decompression_torch(magnitudes) - return output - - -mel_basis = {} -hann_window = {} - - -def spectrogram_torch(y, n_fft, sampling_rate, hop_size, win_size, center=False): - if torch.min(y) < -1.: - print('min value is ', torch.min(y)) - if torch.max(y) > 1.: - print('max value is ', torch.max(y)) - - global hann_window - dtype_device = str(y.dtype) + '_' + str(y.device) - wnsize_dtype_device = str(win_size) + '_' + dtype_device - if wnsize_dtype_device not in hann_window: - hann_window[wnsize_dtype_device] = torch.hann_window(win_size).to(dtype=y.dtype, device=y.device) - - y = torch.nn.functional.pad(y.unsqueeze(1), (int((n_fft-hop_size)/2), int((n_fft-hop_size)/2)), mode='reflect') - y = y.squeeze(1) - - spec = torch.stft(y, n_fft, hop_length=hop_size, win_length=win_size, window=hann_window[wnsize_dtype_device], - center=center, pad_mode='reflect', normalized=False, onesided=True, return_complex=False) - - spec = torch.sqrt(spec.pow(2).sum(-1) + 1e-6) - return spec - - -def spec_to_mel_torch(spec, n_fft, num_mels, sampling_rate, fmin, fmax): - global mel_basis - dtype_device = str(spec.dtype) + '_' + str(spec.device) - fmax_dtype_device = str(fmax) + '_' + dtype_device - if fmax_dtype_device not in mel_basis: - mel = librosa_mel_fn(sampling_rate, n_fft, num_mels, fmin, fmax) - mel_basis[fmax_dtype_device] = torch.from_numpy(mel).to(dtype=spec.dtype, device=spec.device) - spec = torch.matmul(mel_basis[fmax_dtype_device], spec) - spec = spectral_normalize_torch(spec) - return spec - - -def mel_spectrogram_torch(y, n_fft, num_mels, sampling_rate, 
hop_size, win_size, fmin, fmax, center=False): - if torch.min(y) < -1.: - print('min value is ', torch.min(y)) - if torch.max(y) > 1.: - print('max value is ', torch.max(y)) - - global mel_basis, hann_window - dtype_device = str(y.dtype) + '_' + str(y.device) - fmax_dtype_device = str(fmax) + '_' + dtype_device - wnsize_dtype_device = str(win_size) + '_' + dtype_device - if fmax_dtype_device not in mel_basis: - mel = librosa_mel_fn(sampling_rate, n_fft, num_mels, fmin, fmax) - mel_basis[fmax_dtype_device] = torch.from_numpy(mel).to(dtype=y.dtype, device=y.device) - if wnsize_dtype_device not in hann_window: - hann_window[wnsize_dtype_device] = torch.hann_window(win_size).to(dtype=y.dtype, device=y.device) - - y = torch.nn.functional.pad(y.unsqueeze(1), (int((n_fft-hop_size)/2), int((n_fft-hop_size)/2)), mode='reflect') - y = y.squeeze(1) - - spec = torch.stft(y, n_fft, hop_length=hop_size, win_length=win_size, window=hann_window[wnsize_dtype_device], - center=center, pad_mode='reflect', normalized=False, onesided=True) - - spec = torch.sqrt(spec.pow(2).sum(-1) + 1e-6) - - spec = torch.matmul(mel_basis[fmax_dtype_device], spec) - spec = spectral_normalize_torch(spec) - - return spec diff --git a/spaces/DragGan/DragGan-Inversion/torch_utils/ops/upfirdn2d.cpp b/spaces/DragGan/DragGan-Inversion/torch_utils/ops/upfirdn2d.cpp deleted file mode 100644 index 44fa337d8d4c34dfa010a59cd27d86857db671aa..0000000000000000000000000000000000000000 --- a/spaces/DragGan/DragGan-Inversion/torch_utils/ops/upfirdn2d.cpp +++ /dev/null @@ -1,107 +0,0 @@ -// Copyright (c) 2021, NVIDIA CORPORATION & AFFILIATES. All rights reserved. -// -// NVIDIA CORPORATION and its licensors retain all intellectual property -// and proprietary rights in and to this software, related documentation -// and any modifications thereto. Any use, reproduction, disclosure or -// distribution of this software and related documentation without an express -// license agreement from NVIDIA CORPORATION is strictly prohibited. - -#include -#include -#include -#include "upfirdn2d.h" - -//------------------------------------------------------------------------ - -static torch::Tensor upfirdn2d(torch::Tensor x, torch::Tensor f, int upx, int upy, int downx, int downy, int padx0, int padx1, int pady0, int pady1, bool flip, float gain) -{ - // Validate arguments. - TORCH_CHECK(x.is_cuda(), "x must reside on CUDA device"); - TORCH_CHECK(f.device() == x.device(), "f must reside on the same device as x"); - TORCH_CHECK(f.dtype() == torch::kFloat, "f must be float32"); - TORCH_CHECK(x.numel() <= INT_MAX, "x is too large"); - TORCH_CHECK(f.numel() <= INT_MAX, "f is too large"); - TORCH_CHECK(x.numel() > 0, "x has zero size"); - TORCH_CHECK(f.numel() > 0, "f has zero size"); - TORCH_CHECK(x.dim() == 4, "x must be rank 4"); - TORCH_CHECK(f.dim() == 2, "f must be rank 2"); - TORCH_CHECK((x.size(0)-1)*x.stride(0) + (x.size(1)-1)*x.stride(1) + (x.size(2)-1)*x.stride(2) + (x.size(3)-1)*x.stride(3) <= INT_MAX, "x memory footprint is too large"); - TORCH_CHECK(f.size(0) >= 1 && f.size(1) >= 1, "f must be at least 1x1"); - TORCH_CHECK(upx >= 1 && upy >= 1, "upsampling factor must be at least 1"); - TORCH_CHECK(downx >= 1 && downy >= 1, "downsampling factor must be at least 1"); - - // Create output tensor. 
- const at::cuda::OptionalCUDAGuard device_guard(device_of(x)); - int outW = ((int)x.size(3) * upx + padx0 + padx1 - (int)f.size(1) + downx) / downx; - int outH = ((int)x.size(2) * upy + pady0 + pady1 - (int)f.size(0) + downy) / downy; - TORCH_CHECK(outW >= 1 && outH >= 1, "output must be at least 1x1"); - torch::Tensor y = torch::empty({x.size(0), x.size(1), outH, outW}, x.options(), x.suggest_memory_format()); - TORCH_CHECK(y.numel() <= INT_MAX, "output is too large"); - TORCH_CHECK((y.size(0)-1)*y.stride(0) + (y.size(1)-1)*y.stride(1) + (y.size(2)-1)*y.stride(2) + (y.size(3)-1)*y.stride(3) <= INT_MAX, "output memory footprint is too large"); - - // Initialize CUDA kernel parameters. - upfirdn2d_kernel_params p; - p.x = x.data_ptr(); - p.f = f.data_ptr(); - p.y = y.data_ptr(); - p.up = make_int2(upx, upy); - p.down = make_int2(downx, downy); - p.pad0 = make_int2(padx0, pady0); - p.flip = (flip) ? 1 : 0; - p.gain = gain; - p.inSize = make_int4((int)x.size(3), (int)x.size(2), (int)x.size(1), (int)x.size(0)); - p.inStride = make_int4((int)x.stride(3), (int)x.stride(2), (int)x.stride(1), (int)x.stride(0)); - p.filterSize = make_int2((int)f.size(1), (int)f.size(0)); - p.filterStride = make_int2((int)f.stride(1), (int)f.stride(0)); - p.outSize = make_int4((int)y.size(3), (int)y.size(2), (int)y.size(1), (int)y.size(0)); - p.outStride = make_int4((int)y.stride(3), (int)y.stride(2), (int)y.stride(1), (int)y.stride(0)); - p.sizeMajor = (p.inStride.z == 1) ? p.inSize.w : p.inSize.w * p.inSize.z; - p.sizeMinor = (p.inStride.z == 1) ? p.inSize.z : 1; - - // Choose CUDA kernel. - upfirdn2d_kernel_spec spec; - AT_DISPATCH_FLOATING_TYPES_AND_HALF(x.scalar_type(), "upfirdn2d_cuda", [&] - { - spec = choose_upfirdn2d_kernel(p); - }); - - // Set looping options. - p.loopMajor = (p.sizeMajor - 1) / 16384 + 1; - p.loopMinor = spec.loopMinor; - p.loopX = spec.loopX; - p.launchMinor = (p.sizeMinor - 1) / p.loopMinor + 1; - p.launchMajor = (p.sizeMajor - 1) / p.loopMajor + 1; - - // Compute grid size. - dim3 blockSize, gridSize; - if (spec.tileOutW < 0) // large - { - blockSize = dim3(4, 32, 1); - gridSize = dim3( - ((p.outSize.y - 1) / blockSize.x + 1) * p.launchMinor, - (p.outSize.x - 1) / (blockSize.y * p.loopX) + 1, - p.launchMajor); - } - else // small - { - blockSize = dim3(256, 1, 1); - gridSize = dim3( - ((p.outSize.y - 1) / spec.tileOutH + 1) * p.launchMinor, - (p.outSize.x - 1) / (spec.tileOutW * p.loopX) + 1, - p.launchMajor); - } - - // Launch CUDA kernel. 
- void* args[] = {&p}; - AT_CUDA_CHECK(cudaLaunchKernel(spec.kernel, gridSize, blockSize, args, 0, at::cuda::getCurrentCUDAStream())); - return y; -} - -//------------------------------------------------------------------------ - -PYBIND11_MODULE(TORCH_EXTENSION_NAME, m) -{ - m.def("upfirdn2d", &upfirdn2d); -} - -//------------------------------------------------------------------------ diff --git a/spaces/EronSamez/RVC_HFmeu/lib/globals/globals.py b/spaces/EronSamez/RVC_HFmeu/lib/globals/globals.py deleted file mode 100644 index d0da59d56e8c2e482bcda5eeae7cf797b830560e..0000000000000000000000000000000000000000 --- a/spaces/EronSamez/RVC_HFmeu/lib/globals/globals.py +++ /dev/null @@ -1,5 +0,0 @@ -DoFormant: bool = False -Quefrency: float = 8.0 -Timbre: float = 1.2 - -NotesOrHertz: bool = False \ No newline at end of file diff --git a/spaces/EsoCode/text-generation-webui/css/main.js b/spaces/EsoCode/text-generation-webui/css/main.js deleted file mode 100644 index 32820ebe15ddb80ca5fbcd2c4f88cc7c244cf3c5..0000000000000000000000000000000000000000 --- a/spaces/EsoCode/text-generation-webui/css/main.js +++ /dev/null @@ -1,18 +0,0 @@ -document.getElementById("main").parentNode.childNodes[0].classList.add("header_bar"); -document.getElementById("main").parentNode.style = "padding: 0; margin: 0"; -document.getElementById("main").parentNode.parentNode.parentNode.style = "padding: 0"; - -// Get references to the elements -let main = document.getElementById('main'); -let main_parent = main.parentNode; -let extensions = document.getElementById('extensions'); - -// Add an event listener to the main element -main_parent.addEventListener('click', function(e) { - // Check if the main element is visible - if (main.offsetHeight > 0 && main.offsetWidth > 0) { - extensions.style.display = 'flex'; - } else { - extensions.style.display = 'none'; - } -}); diff --git a/spaces/EuroPython2022/BayesCap/src/networks_SRGAN.py b/spaces/EuroPython2022/BayesCap/src/networks_SRGAN.py deleted file mode 100644 index cd8a30dd8deecde53f527fb81c91b78409abc390..0000000000000000000000000000000000000000 --- a/spaces/EuroPython2022/BayesCap/src/networks_SRGAN.py +++ /dev/null @@ -1,347 +0,0 @@ -import torch -import torch.nn as nn -import torch.nn.functional as F -import torchvision.models as models -from torch import Tensor - -# __all__ = [ -# "ResidualConvBlock", -# "Discriminator", "Generator", -# ] - - -class ResidualConvBlock(nn.Module): - """Implements residual conv function. - - Args: - channels (int): Number of channels in the input image. - """ - - def __init__(self, channels: int) -> None: - super(ResidualConvBlock, self).__init__() - self.rcb = nn.Sequential( - nn.Conv2d(channels, channels, (3, 3), (1, 1), (1, 1), bias=False), - nn.BatchNorm2d(channels), - nn.PReLU(), - nn.Conv2d(channels, channels, (3, 3), (1, 1), (1, 1), bias=False), - nn.BatchNorm2d(channels), - ) - - def forward(self, x: Tensor) -> Tensor: - identity = x - - out = self.rcb(x) - out = torch.add(out, identity) - - return out - - -class Discriminator(nn.Module): - def __init__(self) -> None: - super(Discriminator, self).__init__() - self.features = nn.Sequential( - # input size. (3) x 96 x 96 - nn.Conv2d(3, 64, (3, 3), (1, 1), (1, 1), bias=False), - nn.LeakyReLU(0.2, True), - # state size. (64) x 48 x 48 - nn.Conv2d(64, 64, (3, 3), (2, 2), (1, 1), bias=False), - nn.BatchNorm2d(64), - nn.LeakyReLU(0.2, True), - nn.Conv2d(64, 128, (3, 3), (1, 1), (1, 1), bias=False), - nn.BatchNorm2d(128), - nn.LeakyReLU(0.2, True), - # state size. 
(128) x 24 x 24 - nn.Conv2d(128, 128, (3, 3), (2, 2), (1, 1), bias=False), - nn.BatchNorm2d(128), - nn.LeakyReLU(0.2, True), - nn.Conv2d(128, 256, (3, 3), (1, 1), (1, 1), bias=False), - nn.BatchNorm2d(256), - nn.LeakyReLU(0.2, True), - # state size. (256) x 12 x 12 - nn.Conv2d(256, 256, (3, 3), (2, 2), (1, 1), bias=False), - nn.BatchNorm2d(256), - nn.LeakyReLU(0.2, True), - nn.Conv2d(256, 512, (3, 3), (1, 1), (1, 1), bias=False), - nn.BatchNorm2d(512), - nn.LeakyReLU(0.2, True), - # state size. (512) x 6 x 6 - nn.Conv2d(512, 512, (3, 3), (2, 2), (1, 1), bias=False), - nn.BatchNorm2d(512), - nn.LeakyReLU(0.2, True), - ) - - self.classifier = nn.Sequential( - nn.Linear(512 * 6 * 6, 1024), - nn.LeakyReLU(0.2, True), - nn.Linear(1024, 1), - ) - - def forward(self, x: Tensor) -> Tensor: - out = self.features(x) - out = torch.flatten(out, 1) - out = self.classifier(out) - - return out - - -class Generator(nn.Module): - def __init__(self) -> None: - super(Generator, self).__init__() - # First conv layer. - self.conv_block1 = nn.Sequential( - nn.Conv2d(3, 64, (9, 9), (1, 1), (4, 4)), - nn.PReLU(), - ) - - # Features trunk blocks. - trunk = [] - for _ in range(16): - trunk.append(ResidualConvBlock(64)) - self.trunk = nn.Sequential(*trunk) - - # Second conv layer. - self.conv_block2 = nn.Sequential( - nn.Conv2d(64, 64, (3, 3), (1, 1), (1, 1), bias=False), - nn.BatchNorm2d(64), - ) - - # Upscale conv block. - self.upsampling = nn.Sequential( - nn.Conv2d(64, 256, (3, 3), (1, 1), (1, 1)), - nn.PixelShuffle(2), - nn.PReLU(), - nn.Conv2d(64, 256, (3, 3), (1, 1), (1, 1)), - nn.PixelShuffle(2), - nn.PReLU(), - ) - - # Output layer. - self.conv_block3 = nn.Conv2d(64, 3, (9, 9), (1, 1), (4, 4)) - - # Initialize neural network weights. - self._initialize_weights() - - def forward(self, x: Tensor, dop=None) -> Tensor: - if not dop: - return self._forward_impl(x) - else: - return self._forward_w_dop_impl(x, dop) - - # Support torch.script function. - def _forward_impl(self, x: Tensor) -> Tensor: - out1 = self.conv_block1(x) - out = self.trunk(out1) - out2 = self.conv_block2(out) - out = torch.add(out1, out2) - out = self.upsampling(out) - out = self.conv_block3(out) - - return out - - def _forward_w_dop_impl(self, x: Tensor, dop) -> Tensor: - out1 = self.conv_block1(x) - out = self.trunk(out1) - out2 = F.dropout2d(self.conv_block2(out), p=dop) - out = torch.add(out1, out2) - out = self.upsampling(out) - out = self.conv_block3(out) - - return out - - def _initialize_weights(self) -> None: - for module in self.modules(): - if isinstance(module, nn.Conv2d): - nn.init.kaiming_normal_(module.weight) - if module.bias is not None: - nn.init.constant_(module.bias, 0) - elif isinstance(module, nn.BatchNorm2d): - nn.init.constant_(module.weight, 1) - - -#### BayesCap -class BayesCap(nn.Module): - def __init__(self, in_channels=3, out_channels=3) -> None: - super(BayesCap, self).__init__() - # First conv layer. - self.conv_block1 = nn.Sequential( - nn.Conv2d( - in_channels, 64, - kernel_size=9, stride=1, padding=4 - ), - nn.PReLU(), - ) - - # Features trunk blocks. - trunk = [] - for _ in range(16): - trunk.append(ResidualConvBlock(64)) - self.trunk = nn.Sequential(*trunk) - - # Second conv layer. - self.conv_block2 = nn.Sequential( - nn.Conv2d( - 64, 64, - kernel_size=3, stride=1, padding=1, bias=False - ), - nn.BatchNorm2d(64), - ) - - # Output layer. 
- self.conv_block3_mu = nn.Conv2d( - 64, out_channels=out_channels, - kernel_size=9, stride=1, padding=4 - ) - self.conv_block3_alpha = nn.Sequential( - nn.Conv2d( - 64, 64, - kernel_size=9, stride=1, padding=4 - ), - nn.PReLU(), - nn.Conv2d( - 64, 64, - kernel_size=9, stride=1, padding=4 - ), - nn.PReLU(), - nn.Conv2d( - 64, 1, - kernel_size=9, stride=1, padding=4 - ), - nn.ReLU(), - ) - self.conv_block3_beta = nn.Sequential( - nn.Conv2d( - 64, 64, - kernel_size=9, stride=1, padding=4 - ), - nn.PReLU(), - nn.Conv2d( - 64, 64, - kernel_size=9, stride=1, padding=4 - ), - nn.PReLU(), - nn.Conv2d( - 64, 1, - kernel_size=9, stride=1, padding=4 - ), - nn.ReLU(), - ) - - # Initialize neural network weights. - self._initialize_weights() - - def forward(self, x: Tensor) -> Tensor: - return self._forward_impl(x) - - # Support torch.script function. - def _forward_impl(self, x: Tensor) -> Tensor: - out1 = self.conv_block1(x) - out = self.trunk(out1) - out2 = self.conv_block2(out) - out = out1 + out2 - out_mu = self.conv_block3_mu(out) - out_alpha = self.conv_block3_alpha(out) - out_beta = self.conv_block3_beta(out) - return out_mu, out_alpha, out_beta - - def _initialize_weights(self) -> None: - for module in self.modules(): - if isinstance(module, nn.Conv2d): - nn.init.kaiming_normal_(module.weight) - if module.bias is not None: - nn.init.constant_(module.bias, 0) - elif isinstance(module, nn.BatchNorm2d): - nn.init.constant_(module.weight, 1) - - -class BayesCap_noID(nn.Module): - def __init__(self, in_channels=3, out_channels=3) -> None: - super(BayesCap_noID, self).__init__() - # First conv layer. - self.conv_block1 = nn.Sequential( - nn.Conv2d( - in_channels, 64, - kernel_size=9, stride=1, padding=4 - ), - nn.PReLU(), - ) - - # Features trunk blocks. - trunk = [] - for _ in range(16): - trunk.append(ResidualConvBlock(64)) - self.trunk = nn.Sequential(*trunk) - - # Second conv layer. - self.conv_block2 = nn.Sequential( - nn.Conv2d( - 64, 64, - kernel_size=3, stride=1, padding=1, bias=False - ), - nn.BatchNorm2d(64), - ) - - # Output layer. - # self.conv_block3_mu = nn.Conv2d( - # 64, out_channels=out_channels, - # kernel_size=9, stride=1, padding=4 - # ) - self.conv_block3_alpha = nn.Sequential( - nn.Conv2d( - 64, 64, - kernel_size=9, stride=1, padding=4 - ), - nn.PReLU(), - nn.Conv2d( - 64, 64, - kernel_size=9, stride=1, padding=4 - ), - nn.PReLU(), - nn.Conv2d( - 64, 1, - kernel_size=9, stride=1, padding=4 - ), - nn.ReLU(), - ) - self.conv_block3_beta = nn.Sequential( - nn.Conv2d( - 64, 64, - kernel_size=9, stride=1, padding=4 - ), - nn.PReLU(), - nn.Conv2d( - 64, 64, - kernel_size=9, stride=1, padding=4 - ), - nn.PReLU(), - nn.Conv2d( - 64, 1, - kernel_size=9, stride=1, padding=4 - ), - nn.ReLU(), - ) - - # Initialize neural network weights. - self._initialize_weights() - - def forward(self, x: Tensor) -> Tensor: - return self._forward_impl(x) - - # Support torch.script function. 
- def _forward_impl(self, x: Tensor) -> Tensor: - out1 = self.conv_block1(x) - out = self.trunk(out1) - out2 = self.conv_block2(out) - out = out1 + out2 - # out_mu = self.conv_block3_mu(out) - out_alpha = self.conv_block3_alpha(out) - out_beta = self.conv_block3_beta(out) - return out_alpha, out_beta - - def _initialize_weights(self) -> None: - for module in self.modules(): - if isinstance(module, nn.Conv2d): - nn.init.kaiming_normal_(module.weight) - if module.bias is not None: - nn.init.constant_(module.bias, 0) - elif isinstance(module, nn.BatchNorm2d): - nn.init.constant_(module.weight, 1) \ No newline at end of file diff --git a/spaces/FacundoSander/PdfQA/static/style.css b/spaces/FacundoSander/PdfQA/static/style.css deleted file mode 100644 index d0269ce416827e9821a7a989a89ae3e8a8b38f81..0000000000000000000000000000000000000000 --- a/spaces/FacundoSander/PdfQA/static/style.css +++ /dev/null @@ -1,179 +0,0 @@ -body { - font-family: 'Roboto', sans-serif; -} - -.main-page { - display: flex; - flex-direction: column; - align-items: center; - justify-content: center; - position: absolute; - top: 0; - left: 0; - width: 100%; - height: 100%; - background: linear-gradient(45deg, #3a6186, #89253e); - z-index: 1000; - opacity: 1; - visibility: visible; - transition: opacity 0.5s ease-in-out, visibility 0.5s ease-in-out; -} - -.main-content { - max-width: 80%; -} - -.hidden { - opacity: 0; - visibility: hidden; -} - -.btn-outline-primary { - border-color: #ffffff; - color: #ffffff; - transition: background-color 0.3s, color 0.3s; -} - -.btn-outline-primary:hover { - background-color: #ffffff; - color: #89253e; -} - -.chat-container { - height: 100vh; - display: flex; - flex-direction: column; - background-color: #f8f9fa; -} - -#messages { - flex-grow: 1; - overflow-y: auto; - padding: 1rem; -} - -.user-message { - text-align: right; - margin-bottom: 1rem; - background-color: #007bff; - padding: 10px; - border-radius: 5px; - color: white; -} - -.response-message { - text-align: left; - margin-bottom: 1rem; - background-color: #e9ecef; - padding: 10px; - border-radius: 5px; -} - -.input-group { - box-shadow: 0 2px 4px rgba(0, 0, 0, 0.1); - border-radius: 5px; -} - -#toggle-sidebar { - transition: all 0.3s ease; -} - -#toggle-sidebar:hover { - background-color: #343a40; -} - -.mb-3 { - box-shadow: 0 2px 4px rgba(0, 0, 0, 0.1); - border-radius: 5px; - background-color: white; - padding: 10px; - margin-bottom: 15px; -} - - -@keyframes spin { - 0% { - transform: rotate(0deg); - } - 100% { - transform: rotate(360deg); - } -} - -.typing-indicator { - display: inline-block; - width: 1rem; - height: 1rem; - border: 2px solid #0d6efd; - border-top-color: transparent; - border-radius: 50%; - animation: spin 1s linear infinite; -} - - -.dark-theme { - background-color: #343a40; - color: #f8f9fa; -} - -.dark-theme .response-message { - background-color: #495057; - color: #f8f9fa; -} - -.dark-theme .user-message { - background-color: #007bff; - color: #f8f9fa; -} - -.dark-theme .input-group { - background-color: #495057; -} - -.dark-theme .form-control, -.dark-theme .form-select { - background-color: #495057; - color: #f8f9fa; -} - -.dark-theme .form-label { - color: #f8f9fa; -} - -.dark-theme #toggle-sidebar { - background-color: #adb5bd; -} - -/* Tema claro */ -.light-theme { - background-color: #f8f9fa; - color: #343a40; -} - -.light-theme .response-message { - background-color: #e9ecef; - color: #343a40; -} - -.light-theme .user-message { - background-color: #007bff; - color: #f8f9fa; -} - -.light-theme 
.input-group { - background-color: #f8f9fa; -} - -.light-theme .form-control, -.light-theme .form-select { - background-color: #f8f9fa; - color: #343a40; -} - -.light-theme .form-label { - color: #343a40; -} - -.light-theme #toggle-sidebar { - background-color: #343a40; -} diff --git a/spaces/Faridmaruf/RVCV2MODEL/app.py b/spaces/Faridmaruf/RVCV2MODEL/app.py deleted file mode 100644 index 8323578e050c19032d933082dc5fa3b138008565..0000000000000000000000000000000000000000 --- a/spaces/Faridmaruf/RVCV2MODEL/app.py +++ /dev/null @@ -1,680 +0,0 @@ -import os -import glob -import json -import traceback -import logging -import gradio as gr -import numpy as np -import librosa -import torch -import asyncio -import edge_tts -import yt_dlp -import ffmpeg -import subprocess -import sys -import io -import wave -from datetime import datetime -from fairseq import checkpoint_utils -from lib.infer_pack.models import ( - SynthesizerTrnMs256NSFsid, - SynthesizerTrnMs256NSFsid_nono, - SynthesizerTrnMs768NSFsid, - SynthesizerTrnMs768NSFsid_nono, -) -from vc_infer_pipeline import VC -from config import Config -config = Config() -logging.getLogger("numba").setLevel(logging.WARNING) -spaces = os.getenv("SYSTEM") == "spaces" -force_support = None -if config.unsupported is False: - if config.device == "mps" or config.device == "cpu": - force_support = False -else: - force_support = True - -audio_mode = [] -f0method_mode = [] -f0method_info = "" - -if force_support is False or spaces is True: - if spaces is True: - audio_mode = ["Upload audio", "TTS Audio"] - else: - audio_mode = ["Input path", "Upload audio", "TTS Audio"] - f0method_mode = ["pm", "harvest"] - f0method_info = "PM is fast, Harvest is good but extremely slow, Rvmpe is alternative to harvest (might be better). (Default: PM)" -else: - audio_mode = ["Input path", "Upload audio", "Youtube", "TTS Audio"] - f0method_mode = ["pm", "harvest", "crepe"] - f0method_info = "PM is fast, Harvest is good but extremely slow, Rvmpe is alternative to harvest (might be better), and Crepe effect is good but requires GPU (Default: PM)" - -if os.path.isfile("rmvpe.pt"): - f0method_mode.insert(2, "rmvpe") - -def create_vc_fn(model_name, tgt_sr, net_g, vc, if_f0, version, file_index): - def vc_fn( - vc_audio_mode, - vc_input, - vc_upload, - tts_text, - tts_voice, - f0_up_key, - f0_method, - index_rate, - filter_radius, - resample_sr, - rms_mix_rate, - protect, - ): - try: - logs = [] - print(f"Converting using {model_name}...") - logs.append(f"Converting using {model_name}...") - yield "\n".join(logs), None - if vc_audio_mode == "Input path" or "Youtube" and vc_input != "": - audio, sr = librosa.load(vc_input, sr=16000, mono=True) - elif vc_audio_mode == "Upload audio": - if vc_upload is None: - return "You need to upload an audio", None - sampling_rate, audio = vc_upload - duration = audio.shape[0] / sampling_rate - if duration > 20 and spaces: - return "Please upload an audio file that is less than 20 seconds. 
If you need to generate a longer audio file, please use Colab.", None - audio = (audio / np.iinfo(audio.dtype).max).astype(np.float32) - if len(audio.shape) > 1: - audio = librosa.to_mono(audio.transpose(1, 0)) - if sampling_rate != 16000: - audio = librosa.resample(audio, orig_sr=sampling_rate, target_sr=16000) - elif vc_audio_mode == "TTS Audio": - if len(tts_text) > 100 and spaces: - return "Text is too long", None - if tts_text is None or tts_voice is None: - return "You need to enter text and select a voice", None - asyncio.run(edge_tts.Communicate(tts_text, "-".join(tts_voice.split('-')[:-1])).save("tts.mp3")) - audio, sr = librosa.load("tts.mp3", sr=16000, mono=True) - vc_input = "tts.mp3" - times = [0, 0, 0] - f0_up_key = int(f0_up_key) - audio_opt = vc.pipeline( - hubert_model, - net_g, - 0, - audio, - vc_input, - times, - f0_up_key, - f0_method, - file_index, - # file_big_npy, - index_rate, - if_f0, - filter_radius, - tgt_sr, - resample_sr, - rms_mix_rate, - version, - protect, - f0_file=None, - ) - info = f"[{datetime.now().strftime('%Y-%m-%d %H:%M')}]: npy: {times[0]}, f0: {times[1]}s, infer: {times[2]}s" - print(f"{model_name} | {info}") - logs.append(f"Successfully Convert {model_name}\n{info}") - yield "\n".join(logs), (tgt_sr, audio_opt) - except: - info = traceback.format_exc() - print(info) - yield info, None - return vc_fn - -def load_model(): - categories = [] - if os.path.isfile("weights/folder_info.json"): - with open("weights/folder_info.json", "r", encoding="utf-8") as f: - folder_info = json.load(f) - for category_name, category_info in folder_info.items(): - if not category_info['enable']: - continue - category_title = category_info['title'] - category_folder = category_info['folder_path'] - description = category_info['description'] - models = [] - with open(f"weights/{category_folder}/model_info.json", "r", encoding="utf-8") as f: - models_info = json.load(f) - for character_name, info in models_info.items(): - if not info['enable']: - continue - model_title = info['title'] - model_name = info['model_path'] - model_author = info.get("author", None) - model_cover = f"weights/{category_folder}/{character_name}/{info['cover']}" - model_index = f"weights/{category_folder}/{character_name}/{info['feature_retrieval_library']}" - cpt = torch.load(f"weights/{category_folder}/{character_name}/{model_name}", map_location="cpu") - tgt_sr = cpt["config"][-1] - cpt["config"][-3] = cpt["weight"]["emb_g.weight"].shape[0] # n_spk - if_f0 = cpt.get("f0", 1) - version = cpt.get("version", "v1") - if version == "v1": - if if_f0 == 1: - net_g = SynthesizerTrnMs256NSFsid(*cpt["config"], is_half=config.is_half) - else: - net_g = SynthesizerTrnMs256NSFsid_nono(*cpt["config"]) - model_version = "V1" - elif version == "v2": - if if_f0 == 1: - net_g = SynthesizerTrnMs768NSFsid(*cpt["config"], is_half=config.is_half) - else: - net_g = SynthesizerTrnMs768NSFsid_nono(*cpt["config"]) - model_version = "V2" - del net_g.enc_q - print(net_g.load_state_dict(cpt["weight"], strict=False)) - net_g.eval().to(config.device) - if config.is_half: - net_g = net_g.half() - else: - net_g = net_g.float() - vc = VC(tgt_sr, config) - print(f"Model loaded: {character_name} / {info['feature_retrieval_library']} | ({model_version})") - models.append((character_name, model_title, model_author, model_cover, model_version, create_vc_fn(model_name, tgt_sr, net_g, vc, if_f0, version, model_index))) - categories.append([category_title, category_folder, description, models]) - else: - categories = [] - return 
categories - -def download_audio(url, audio_provider): - logs = [] - if url == "": - raise gr.Error("URL Required!") - return "URL Required" - if not os.path.exists("dl_audio"): - os.mkdir("dl_audio") - if audio_provider == "Youtube": - logs.append("Downloading the audio...") - yield None, "\n".join(logs) - ydl_opts = { - 'noplaylist': True, - 'format': 'bestaudio/best', - 'postprocessors': [{ - 'key': 'FFmpegExtractAudio', - 'preferredcodec': 'wav', - }], - "outtmpl": 'dl_audio/audio', - } - audio_path = "dl_audio/audio.wav" - with yt_dlp.YoutubeDL(ydl_opts) as ydl: - ydl.download([url]) - logs.append("Download Complete.") - yield audio_path, "\n".join(logs) - -def cut_vocal_and_inst(split_model): - logs = [] - logs.append("Starting the audio splitting process...") - yield "\n".join(logs), None, None, None, None - command = f"demucs --two-stems=vocals -n {split_model} dl_audio/audio.wav -o output" - result = subprocess.Popen(command.split(), stdout=subprocess.PIPE, text=True) - for line in result.stdout: - logs.append(line) - yield "\n".join(logs), None, None, None, None - print(result.stdout) - vocal = f"output/{split_model}/audio/vocals.wav" - inst = f"output/{split_model}/audio/no_vocals.wav" - logs.append("Audio splitting complete.") - yield "\n".join(logs), vocal, inst, vocal - -def combine_vocal_and_inst(audio_data, vocal_volume, inst_volume, split_model): - if not os.path.exists("output/result"): - os.mkdir("output/result") - vocal_path = "output/result/output.wav" - output_path = "output/result/combine.mp3" - inst_path = f"output/{split_model}/audio/no_vocals.wav" - with wave.open(vocal_path, "w") as wave_file: - wave_file.setnchannels(1) - wave_file.setsampwidth(2) - wave_file.setframerate(audio_data[0]) - wave_file.writeframes(audio_data[1].tobytes()) - command = f'ffmpeg -y -i {inst_path} -i {vocal_path} -filter_complex [0:a]volume={inst_volume}[i];[1:a]volume={vocal_volume}[v];[i][v]amix=inputs=2:duration=longest[a] -map [a] -b:a 320k -c:a libmp3lame {output_path}' - result = subprocess.run(command.split(), stdout=subprocess.PIPE) - print(result.stdout.decode()) - return output_path - -def load_hubert(): - global hubert_model - models, _, _ = checkpoint_utils.load_model_ensemble_and_task( - ["hubert_base.pt"], - suffix="", - ) - hubert_model = models[0] - hubert_model = hubert_model.to(config.device) - if config.is_half: - hubert_model = hubert_model.half() - else: - hubert_model = hubert_model.float() - hubert_model.eval() - -def change_audio_mode(vc_audio_mode): - if vc_audio_mode == "Input path": - return ( - # Input & Upload - gr.Textbox.update(visible=True), - gr.Checkbox.update(visible=False), - gr.Audio.update(visible=False), - # Youtube - gr.Dropdown.update(visible=False), - gr.Textbox.update(visible=False), - gr.Textbox.update(visible=False), - gr.Button.update(visible=False), - # Splitter - gr.Dropdown.update(visible=False), - gr.Textbox.update(visible=False), - gr.Button.update(visible=False), - gr.Audio.update(visible=False), - gr.Audio.update(visible=False), - gr.Audio.update(visible=False), - gr.Slider.update(visible=False), - gr.Slider.update(visible=False), - gr.Audio.update(visible=False), - gr.Button.update(visible=False), - # TTS - gr.Textbox.update(visible=False), - gr.Dropdown.update(visible=False) - ) - elif vc_audio_mode == "Upload audio": - return ( - # Input & Upload - gr.Textbox.update(visible=False), - gr.Checkbox.update(visible=True), - gr.Audio.update(visible=True), - # Youtube - gr.Dropdown.update(visible=False), - 
gr.Textbox.update(visible=False), - gr.Textbox.update(visible=False), - gr.Button.update(visible=False), - # Splitter - gr.Dropdown.update(visible=False), - gr.Textbox.update(visible=False), - gr.Button.update(visible=False), - gr.Audio.update(visible=False), - gr.Audio.update(visible=False), - gr.Audio.update(visible=False), - gr.Slider.update(visible=False), - gr.Slider.update(visible=False), - gr.Audio.update(visible=False), - gr.Button.update(visible=False), - # TTS - gr.Textbox.update(visible=False), - gr.Dropdown.update(visible=False) - ) - elif vc_audio_mode == "Youtube": - return ( - # Input & Upload - gr.Textbox.update(visible=False), - gr.Checkbox.update(visible=False), - gr.Audio.update(visible=False), - # Youtube - gr.Dropdown.update(visible=True), - gr.Textbox.update(visible=True), - gr.Textbox.update(visible=True), - gr.Button.update(visible=True), - # Splitter - gr.Dropdown.update(visible=True), - gr.Textbox.update(visible=True), - gr.Button.update(visible=True), - gr.Audio.update(visible=True), - gr.Audio.update(visible=True), - gr.Audio.update(visible=True), - gr.Slider.update(visible=True), - gr.Slider.update(visible=True), - gr.Audio.update(visible=True), - gr.Button.update(visible=True), - # TTS - gr.Textbox.update(visible=False), - gr.Dropdown.update(visible=False) - ) - elif vc_audio_mode == "TTS Audio": - return ( - # Input & Upload - gr.Textbox.update(visible=False), - gr.Checkbox.update(visible=False), - gr.Audio.update(visible=False), - # Youtube - gr.Dropdown.update(visible=False), - gr.Textbox.update(visible=False), - gr.Textbox.update(visible=False), - gr.Button.update(visible=False), - # Splitter - gr.Dropdown.update(visible=False), - gr.Textbox.update(visible=False), - gr.Button.update(visible=False), - gr.Audio.update(visible=False), - gr.Audio.update(visible=False), - gr.Audio.update(visible=False), - gr.Slider.update(visible=False), - gr.Slider.update(visible=False), - gr.Audio.update(visible=False), - gr.Button.update(visible=False), - # TTS - gr.Textbox.update(visible=True), - gr.Dropdown.update(visible=True) - ) - -def use_microphone(microphone): - if microphone == True: - return gr.Audio.update(source="microphone") - else: - return gr.Audio.update(source="upload") - -if __name__ == '__main__': - load_hubert() - categories = load_model() - tts_voice_list = asyncio.new_event_loop().run_until_complete(edge_tts.list_voices()) - voices = [f"{v['ShortName']}-{v['Gender']}" for v in tts_voice_list] - with gr.Blocks() as app: - gr.Markdown( - "
    \n\n"+ - "# RVC V2 MODELS GENSHIN IMPACT\n\n"+ - "### Recommended to use Google Colab to use other character and feature.\n\n"+ - "#### All of this voice samples are taken from the game Genshin Impact, and all voice credits belong to hoyoverse.\n\n"+ - "##### NO COLAB! IM DONE WITH THAT SH*T!. \n\n"+ - "[![Google collab]](https://colab.research.google.com/drive/1KcR2BO1VGdZR7ZF2luvH7lo1QWujHi-Q?usp=sharing) - "[![Repository](https://img.shields.io/badge/Github-Multi%20Model%20RVC%20Inference-blue?style=for-the-badge&logo=github)](https://github.com/ArkanDash/Multi-Model-RVC-Inference)\n\n"+ - "
    " - ) - if categories == []: - gr.Markdown( - "
    \n\n"+ - "## No model found, please add the model into weights folder\n\n"+ - "
    " - ) - for (folder_title, folder, description, models) in categories: - with gr.TabItem(folder_title): - if description: - gr.Markdown(f"###
    {description}") - with gr.Tabs(): - if not models: - gr.Markdown("#
    No Model Loaded.") - gr.Markdown("##
    Please add the model or fix your model path.") - continue - for (name, title, author, cover, model_version, vc_fn) in models: - with gr.TabItem(name): - with gr.Row(): - gr.Markdown( - '
    ' - f'
    {title}
    \n'+ - f'
    RVC {model_version} Model
    \n'+ - (f'
    Model author: {author}
    ' if author else "")+ - (f'' if cover else "")+ - '
    ' - ) - with gr.Row(): - if spaces is False: - with gr.TabItem("Input"): - with gr.Row(): - with gr.Column(): - vc_audio_mode = gr.Dropdown(label="Input voice", choices=audio_mode, allow_custom_value=False, value="Upload audio") - # Input - vc_input = gr.Textbox(label="Input audio path", visible=False) - # Upload - vc_microphone_mode = gr.Checkbox(label="Use Microphone", value=False, visible=True, interactive=True) - vc_upload = gr.Audio(label="Upload audio file", source="upload", visible=True, interactive=True) - # Youtube - vc_download_audio = gr.Dropdown(label="Provider", choices=["Youtube"], allow_custom_value=False, visible=False, value="Youtube", info="Select provider (Default: Youtube)") - vc_link = gr.Textbox(label="Youtube URL", visible=False, info="Example: https://www.youtube.com/watch?v=Nc0sB1Bmf-A", placeholder="https://www.youtube.com/watch?v=...") - vc_log_yt = gr.Textbox(label="Output Information", visible=False, interactive=False) - vc_download_button = gr.Button("Download Audio", variant="primary", visible=False) - vc_audio_preview = gr.Audio(label="Audio Preview", visible=False) - # TTS - tts_text = gr.Textbox(label="TTS text", info="Text to speech input", visible=False) - tts_voice = gr.Dropdown(label="Edge-tts speaker", choices=voices, visible=False, allow_custom_value=False, value="en-US-AnaNeural-Female") - with gr.Column(): - vc_split_model = gr.Dropdown(label="Splitter Model", choices=["hdemucs_mmi", "htdemucs", "htdemucs_ft", "mdx", "mdx_q", "mdx_extra_q"], allow_custom_value=False, visible=False, value="htdemucs", info="Select the splitter model (Default: htdemucs)") - vc_split_log = gr.Textbox(label="Output Information", visible=False, interactive=False) - vc_split = gr.Button("Split Audio", variant="primary", visible=False) - vc_vocal_preview = gr.Audio(label="Vocal Preview", visible=False) - vc_inst_preview = gr.Audio(label="Instrumental Preview", visible=False) - with gr.TabItem("Convert"): - with gr.Row(): - with gr.Column(): - vc_transform0 = gr.Number(label="Transpose", value=0, info='Type "12" to change from male to female voice. Type "-12" to change female to male voice') - f0method0 = gr.Radio( - label="Pitch extraction algorithm", - info=f0method_info, - choices=f0method_mode, - value="pm", - interactive=True - ) - index_rate1 = gr.Slider( - minimum=0, - maximum=1, - label="Retrieval feature ratio", - info="(Default: 0.7)", - value=0.7, - interactive=True, - ) - filter_radius0 = gr.Slider( - minimum=0, - maximum=7, - label="Apply Median Filtering", - info="The value represents the filter radius and can reduce breathiness.", - value=3, - step=1, - interactive=True, - ) - resample_sr0 = gr.Slider( - minimum=0, - maximum=48000, - label="Resample the output audio", - info="Resample the output audio in post-processing to the final sample rate. Set to 0 for no resampling", - value=0, - step=1, - interactive=True, - ) - rms_mix_rate0 = gr.Slider( - minimum=0, - maximum=1, - label="Volume Envelope", - info="Use the volume envelope of the input to replace or mix with the volume envelope of the output. The closer the ratio is to 1, the more the output envelope is used", - value=1, - interactive=True, - ) - protect0 = gr.Slider( - minimum=0, - maximum=0.5, - label="Voice Protection", - info="Protect voiceless consonants and breath sounds to prevent artifacts such as tearing in electronic music. Set to 0.5 to disable. 
Decrease the value to increase protection, but it may reduce indexing accuracy", - value=0.5, - step=0.01, - interactive=True, - ) - with gr.Column(): - vc_log = gr.Textbox(label="Output Information", interactive=False) - vc_output = gr.Audio(label="Output Audio", interactive=False) - vc_convert = gr.Button("Convert", variant="primary") - vc_vocal_volume = gr.Slider( - minimum=0, - maximum=10, - label="Vocal volume", - value=1, - interactive=True, - step=1, - info="Adjust vocal volume (Default: 1}", - visible=False - ) - vc_inst_volume = gr.Slider( - minimum=0, - maximum=10, - label="Instrument volume", - value=1, - interactive=True, - step=1, - info="Adjust instrument volume (Default: 1}", - visible=False - ) - vc_combined_output = gr.Audio(label="Output Combined Audio", visible=False) - vc_combine = gr.Button("Combine",variant="primary", visible=False) - else: - with gr.Column(): - vc_audio_mode = gr.Dropdown(label="Input voice", choices=audio_mode, allow_custom_value=False, value="Upload audio") - # Input - vc_input = gr.Textbox(label="Input audio path", visible=False) - # Upload - vc_microphone_mode = gr.Checkbox(label="Use Microphone", value=False, visible=True, interactive=True) - vc_upload = gr.Audio(label="Upload audio file", source="upload", visible=True, interactive=True) - # Youtube - vc_download_audio = gr.Dropdown(label="Provider", choices=["Youtube"], allow_custom_value=False, visible=False, value="Youtube", info="Select provider (Default: Youtube)") - vc_link = gr.Textbox(label="Youtube URL", visible=False, info="Example: https://www.youtube.com/watch?v=Nc0sB1Bmf-A", placeholder="https://www.youtube.com/watch?v=...") - vc_log_yt = gr.Textbox(label="Output Information", visible=False, interactive=False) - vc_download_button = gr.Button("Download Audio", variant="primary", visible=False) - vc_audio_preview = gr.Audio(label="Audio Preview", visible=False) - # Splitter - vc_split_model = gr.Dropdown(label="Splitter Model", choices=["hdemucs_mmi", "htdemucs", "htdemucs_ft", "mdx", "mdx_q", "mdx_extra_q"], allow_custom_value=False, visible=False, value="htdemucs", info="Select the splitter model (Default: htdemucs)") - vc_split_log = gr.Textbox(label="Output Information", visible=False, interactive=False) - vc_split = gr.Button("Split Audio", variant="primary", visible=False) - vc_vocal_preview = gr.Audio(label="Vocal Preview", visible=False) - vc_inst_preview = gr.Audio(label="Instrumental Preview", visible=False) - # TTS - tts_text = gr.Textbox(label="TTS text", info="Text to speech input", visible=False) - tts_voice = gr.Dropdown(label="Edge-tts speaker", choices=voices, visible=False, allow_custom_value=False, value="en-US-AnaNeural-Female") - with gr.Column(): - vc_transform0 = gr.Number(label="Transpose", value=0, info='Type "12" to change from male to female voice. 
Type "-12" to change female to male voice') - f0method0 = gr.Radio( - label="Pitch extraction algorithm", - info=f0method_info, - choices=f0method_mode, - value="pm", - interactive=True - ) - index_rate1 = gr.Slider( - minimum=0, - maximum=1, - label="Retrieval feature ratio", - info="(Default: 0.7)", - value=0.7, - interactive=True, - ) - filter_radius0 = gr.Slider( - minimum=0, - maximum=7, - label="Apply Median Filtering", - info="The value represents the filter radius and can reduce breathiness.", - value=3, - step=1, - interactive=True, - ) - resample_sr0 = gr.Slider( - minimum=0, - maximum=48000, - label="Resample the output audio", - info="Resample the output audio in post-processing to the final sample rate. Set to 0 for no resampling", - value=0, - step=1, - interactive=True, - ) - rms_mix_rate0 = gr.Slider( - minimum=0, - maximum=1, - label="Volume Envelope", - info="Use the volume envelope of the input to replace or mix with the volume envelope of the output. The closer the ratio is to 1, the more the output envelope is used", - value=1, - interactive=True, - ) - protect0 = gr.Slider( - minimum=0, - maximum=0.5, - label="Voice Protection", - info="Protect voiceless consonants and breath sounds to prevent artifacts such as tearing in electronic music. Set to 0.5 to disable. Decrease the value to increase protection, but it may reduce indexing accuracy", - value=0.5, - step=0.01, - interactive=True, - ) - with gr.Column(): - vc_log = gr.Textbox(label="Output Information", interactive=False) - vc_output = gr.Audio(label="Output Audio", interactive=False) - vc_convert = gr.Button("Convert", variant="primary") - vc_vocal_volume = gr.Slider( - minimum=0, - maximum=10, - label="Vocal volume", - value=1, - interactive=True, - step=1, - info="Adjust vocal volume (Default: 1}", - visible=False - ) - vc_inst_volume = gr.Slider( - minimum=0, - maximum=10, - label="Instrument volume", - value=1, - interactive=True, - step=1, - info="Adjust instrument volume (Default: 1}", - visible=False - ) - vc_combined_output = gr.Audio(label="Output Combined Audio", visible=False) - vc_combine = gr.Button("Combine",variant="primary", visible=False) - vc_convert.click( - fn=vc_fn, - inputs=[ - vc_audio_mode, - vc_input, - vc_upload, - tts_text, - tts_voice, - vc_transform0, - f0method0, - index_rate1, - filter_radius0, - resample_sr0, - rms_mix_rate0, - protect0, - ], - outputs=[vc_log ,vc_output] - ) - vc_download_button.click( - fn=download_audio, - inputs=[vc_link, vc_download_audio], - outputs=[vc_audio_preview, vc_log_yt] - ) - vc_split.click( - fn=cut_vocal_and_inst, - inputs=[vc_split_model], - outputs=[vc_split_log, vc_vocal_preview, vc_inst_preview, vc_input] - ) - vc_combine.click( - fn=combine_vocal_and_inst, - inputs=[vc_output, vc_vocal_volume, vc_inst_volume, vc_split_model], - outputs=[vc_combined_output] - ) - vc_microphone_mode.change( - fn=use_microphone, - inputs=vc_microphone_mode, - outputs=vc_upload - ) - vc_audio_mode.change( - fn=change_audio_mode, - inputs=[vc_audio_mode], - outputs=[ - vc_input, - vc_microphone_mode, - vc_upload, - vc_download_audio, - vc_link, - vc_log_yt, - vc_download_button, - vc_split_model, - vc_split_log, - vc_split, - vc_audio_preview, - vc_vocal_preview, - vc_inst_preview, - vc_vocal_volume, - vc_inst_volume, - vc_combined_output, - vc_combine, - tts_text, - tts_voice - ] - ) - app.queue(concurrency_count=1, max_size=20, api_open=config.api).launch(share=config.colab) \ No newline at end of file diff --git 
a/spaces/Felix123456/bingo/src/components/chat.tsx b/spaces/Felix123456/bingo/src/components/chat.tsx deleted file mode 100644 index a37ab1cc96ca2e6bfd9acbe313a8d946bfd5c3d4..0000000000000000000000000000000000000000 --- a/spaces/Felix123456/bingo/src/components/chat.tsx +++ /dev/null @@ -1,93 +0,0 @@ -'use client' - -import { useCallback, useEffect, useMemo, useState } from 'react' -import { useAtom } from 'jotai' -import Image from 'next/image' -import { cn } from '@/lib/utils' -import { ChatList } from '@/components/chat-list' -import { ChatPanel } from '@/components/chat-panel' -import { WelcomeScreen } from '@/components/welcome-screen' -import { ChatScrollAnchor } from '@/components/chat-scroll-anchor' -import { ToneSelector } from './tone-selector' -import { ChatHeader } from './chat-header' -import { ChatSuggestions } from './chat-suggestions' -import { bingConversationStyleAtom } from '@/state' -import { ButtonScrollToBottom } from '@/components/button-scroll-to-bottom' -import StopIcon from '@/assets/images/stop.svg' -import { useBing } from '@/lib/hooks/use-bing' -import { ChatMessageModel } from '@/lib/bots/bing/types' -import { ChatNotification } from './chat-notification' -import { Settings } from './settings' -import { ChatHistory } from './chat-history' - -export type ChatProps = React.ComponentProps<'div'> & { initialMessages?: ChatMessageModel[] } - -export default function Chat({ className }: ChatProps) { - - const [bingStyle, setBingStyle] = useAtom(bingConversationStyleAtom) - const { - messages, - sendMessage, - resetConversation, - stopGenerating, - setInput, - bot, - input, - generating, - isSpeaking, - uploadImage, - attachmentList, - setAttachmentList, - } = useBing() - - useEffect(() => { - window.scrollTo({ - top: document.body.offsetHeight, - behavior: 'smooth' - }) - }, []) - - return ( -
    - -
    - - - - {messages.length ? ( - <> - - - - - - {generating ? ( -
    - -
    - ) : null} - - ) : null} -
    - - -
    - ) -} diff --git a/spaces/Fia/StableDiffusionCPU/README.md b/spaces/Fia/StableDiffusionCPU/README.md deleted file mode 100644 index f730b8f2d42bf867108be4fd317e846b8866758a..0000000000000000000000000000000000000000 --- a/spaces/Fia/StableDiffusionCPU/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Stable Diffusion -emoji: 🏃 -colorFrom: red -colorTo: red -sdk: gradio -sdk_version: 3.1.7 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Flux9665/IMS-Toucan/reference_audios/__init__.py b/spaces/Flux9665/IMS-Toucan/reference_audios/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/Fox1997/vits-uma-genshin-honkai/app.py b/spaces/Fox1997/vits-uma-genshin-honkai/app.py deleted file mode 100644 index 92ddafdcd240434f58569b0e6964ef331a971dcf..0000000000000000000000000000000000000000 --- a/spaces/Fox1997/vits-uma-genshin-honkai/app.py +++ /dev/null @@ -1,124 +0,0 @@ -import time -import gradio as gr -import utils -import commons -from models import SynthesizerTrn -from text import text_to_sequence -from torch import no_grad, LongTensor -import torch - -hps_ms = utils.get_hparams_from_file(r'./model/config.json') -device = torch.device("cuda" if torch.cuda.is_available() else "cpu") -net_g_ms = SynthesizerTrn( - len(hps_ms.symbols), - hps_ms.data.filter_length // 2 + 1, - hps_ms.train.segment_size // hps_ms.data.hop_length, - n_speakers=hps_ms.data.n_speakers, - **hps_ms.model).to(device) -_ = net_g_ms.eval() -speakers = hps_ms.speakers -model, optimizer, learning_rate, epochs = utils.load_checkpoint(r'./model/G_953000.pth', net_g_ms, None) - -def get_text(text, hps): - text_norm, clean_text = text_to_sequence(text, hps.symbols, hps.data.text_cleaners) - if hps.data.add_blank: - text_norm = commons.intersperse(text_norm, 0) - text_norm = LongTensor(text_norm) - return text_norm, clean_text - -def vits(text, language, speaker_id, noise_scale, noise_scale_w, length_scale): - start = time.perf_counter() - if not len(text): - return "输入文本不能为空!", None, None - text = text.replace('\n', ' ').replace('\r', '').replace(" ", "") - if len(text) > 500: - return f"输入文字过长!{len(text)}>100", None, None - if language == 0: - text = f"[ZH]{text}[ZH]" - elif language == 1: - text = f"[JA]{text}[JA]" - else: - text = f"{text}" - stn_tst, clean_text = get_text(text, hps_ms) - with no_grad(): - x_tst = stn_tst.unsqueeze(0) - x_tst_lengths = LongTensor([stn_tst.size(0)]) - speaker_id = LongTensor([speaker_id]) - audio = net_g_ms.infer(x_tst, x_tst_lengths, sid=speaker_id, noise_scale=noise_scale, noise_scale_w=noise_scale_w, - length_scale=length_scale)[0][0, 0].data.cpu().float().numpy() - - return "生成成功!", (22050, audio), f"生成耗时 {round(time.perf_counter()-start, 2)} s" - -def search_speaker(search_value): - for s in speakers: - if search_value == s: - return s - for s in speakers: - if search_value in s: - return s - -def change_lang(language): - if language == 0: - return 0.6, 0.668, 1.2 - else: - return 0.6, 0.668, 1.1 - -download_audio_js = """ -() =>{{ - let root = document.querySelector("body > gradio-app"); - if (root.shadowRoot != null) - root = root.shadowRoot; - let audio = root.querySelector("#tts-audio").querySelector("audio"); - let text = root.querySelector("#input-text").querySelector("textarea"); - if (audio == undefined) - return; - text = text.value; - if (text == undefined) - 
text = Math.floor(Math.random()*100000000); - audio = audio.src; - let oA = document.createElement("a"); - oA.download = text.substr(0, 20)+'.wav'; - oA.href = audio; - document.body.appendChild(oA); - oA.click(); - oA.remove(); -}} -""" - -if __name__ == '__main__': - with gr.Blocks() as app: - gr.Markdown( - "#
    VITS语音在线合成demo\n" - "
    主要有赛马娘,原神中文,原神日语,崩坏3的音色
    " - '' - '' - ) - - with gr.Tabs(): - with gr.TabItem("vits"): - with gr.Row(): - with gr.Column(): - input_text = gr.Textbox(label="Text (100 words limitation)", lines=5, value="今天晚上吃啥好呢。", elem_id=f"input-text") - lang = gr.Dropdown(label="Language", choices=["中文", "日语", "中日混合(中文用[ZH][ZH]包裹起来,日文用[JA][JA]包裹起来)"], - type="index", value="中文") - btn = gr.Button(value="Submit") - with gr.Row(): - search = gr.Textbox(label="Search Speaker", lines=1) - btn2 = gr.Button(value="Search") - sid = gr.Dropdown(label="Speaker", choices=speakers, type="index", value=speakers[228]) - with gr.Row(): - ns = gr.Slider(label="noise_scale(控制感情变化程度)", minimum=0.1, maximum=1.0, step=0.1, value=0.6, interactive=True) - nsw = gr.Slider(label="noise_scale_w(控制音素发音长度)", minimum=0.1, maximum=1.0, step=0.1, value=0.668, interactive=True) - ls = gr.Slider(label="length_scale(控制整体语速)", minimum=0.1, maximum=2.0, step=0.1, value=1.2, interactive=True) - with gr.Column(): - o1 = gr.Textbox(label="Output Message") - o2 = gr.Audio(label="Output Audio", elem_id=f"tts-audio") - o3 = gr.Textbox(label="Extra Info") - download = gr.Button("Download Audio") - btn.click(vits, inputs=[input_text, lang, sid, ns, nsw, ls], outputs=[o1, o2, o3], api_name="generate") - download.click(None, [], [], _js=download_audio_js.format()) - btn2.click(search_speaker, inputs=[search], outputs=[sid]) - lang.change(change_lang, inputs=[lang], outputs=[ns, nsw, ls]) - with gr.TabItem("可用人物一览"): - gr.Radio(label="Speaker", choices=speakers, interactive=False, type="index") - app.queue(concurrency_count=1).launch() \ No newline at end of file diff --git a/spaces/Froleptan/stablediffusion-infinity/postprocess.py b/spaces/Froleptan/stablediffusion-infinity/postprocess.py deleted file mode 100644 index 90c7f535c568fa46b6433390459d82e7967bb1fd..0000000000000000000000000000000000000000 --- a/spaces/Froleptan/stablediffusion-infinity/postprocess.py +++ /dev/null @@ -1,249 +0,0 @@ -""" -https://github.com/Trinkle23897/Fast-Poisson-Image-Editing -MIT License - -Copyright (c) 2022 Jiayi Weng - -Permission is hereby granted, free of charge, to any person obtaining a copy -of this software and associated documentation files (the "Software"), to deal -in the Software without restriction, including without limitation the rights -to use, copy, modify, merge, publish, distribute, sublicense, and/or sell -copies of the Software, and to permit persons to whom the Software is -furnished to do so, subject to the following conditions: - -The above copyright notice and this permission notice shall be included in all -copies or substantial portions of the Software. - -THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR -IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, -FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE -AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER -LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, -OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE -SOFTWARE. 
-""" - -import time -import argparse -import os -import fpie -from process import ALL_BACKEND, CPU_COUNT, DEFAULT_BACKEND -from fpie.io import read_images, write_image -from process import BaseProcessor, EquProcessor, GridProcessor - -from PIL import Image -import numpy as np -import skimage -import skimage.measure -import scipy -import scipy.signal - - -class PhotometricCorrection: - def __init__(self,quite=False): - self.get_parser("cli") - args=self.parser.parse_args(["--method","grid","-g","src","-s","a","-t","a","-o","a"]) - args.mpi_sync_interval = getattr(args, "mpi_sync_interval", 0) - self.backend=args.backend - self.args=args - self.quite=quite - proc: BaseProcessor - proc = GridProcessor( - args.gradient, - args.backend, - args.cpu, - args.mpi_sync_interval, - args.block_size, - args.grid_x, - args.grid_y, - ) - print( - f"[PIE]Successfully initialize PIE {args.method} solver " - f"with {args.backend} backend" - ) - self.proc=proc - - def run(self, original_image, inpainted_image, mode="mask_mode"): - print(f"[PIE] start") - if mode=="disabled": - return inpainted_image - input_arr=np.array(original_image) - if input_arr[:,:,-1].sum()<1: - return inpainted_image - output_arr=np.array(inpainted_image) - mask=input_arr[:,:,-1] - mask=255-mask - if mask.sum()<1 and mode=="mask_mode": - mode="" - if mode=="mask_mode": - mask = skimage.measure.block_reduce(mask, (8, 8), np.max) - mask = mask.repeat(8, axis=0).repeat(8, axis=1) - else: - mask[8:-9,8:-9]=255 - mask = mask[:,:,np.newaxis].repeat(3,axis=2) - nmask=mask.copy() - output_arr2=output_arr[:,:,0:3].copy() - input_arr2=input_arr[:,:,0:3].copy() - output_arr2[nmask<128]=0 - input_arr2[nmask>=128]=0 - output_arr2+=input_arr2 - src = output_arr2[:,:,0:3] - tgt = src.copy() - proc=self.proc - args=self.args - if proc.root: - n = proc.reset(src, mask, tgt, (args.h0, args.w0), (args.h1, args.w1)) - proc.sync() - if proc.root: - result = tgt - t = time.time() - if args.p == 0: - args.p = args.n - - for i in range(0, args.n, args.p): - if proc.root: - result, err = proc.step(args.p) # type: ignore - print(f"[PIE] Iter {i + args.p}, abs_err {err}") - else: - proc.step(args.p) - - if proc.root: - dt = time.time() - t - print(f"[PIE] Time elapsed: {dt:.4f}s") - # make sure consistent with dummy process - return Image.fromarray(result) - - - def get_parser(self,gen_type: str) -> argparse.Namespace: - parser = argparse.ArgumentParser() - parser.add_argument( - "-v", "--version", action="store_true", help="show the version and exit" - ) - parser.add_argument( - "--check-backend", action="store_true", help="print all available backends" - ) - if gen_type == "gui" and "mpi" in ALL_BACKEND: - # gui doesn't support MPI backend - ALL_BACKEND.remove("mpi") - parser.add_argument( - "-b", - "--backend", - type=str, - choices=ALL_BACKEND, - default=DEFAULT_BACKEND, - help="backend choice", - ) - parser.add_argument( - "-c", - "--cpu", - type=int, - default=CPU_COUNT, - help="number of CPU used", - ) - parser.add_argument( - "-z", - "--block-size", - type=int, - default=1024, - help="cuda block size (only for equ solver)", - ) - parser.add_argument( - "--method", - type=str, - choices=["equ", "grid"], - default="equ", - help="how to parallelize computation", - ) - parser.add_argument("-s", "--source", type=str, help="source image filename") - if gen_type == "cli": - parser.add_argument( - "-m", - "--mask", - type=str, - help="mask image filename (default is to use the whole source image)", - default="", - ) - parser.add_argument("-t", "--target", 
type=str, help="target image filename") - parser.add_argument("-o", "--output", type=str, help="output image filename") - if gen_type == "cli": - parser.add_argument( - "-h0", type=int, help="mask position (height) on source image", default=0 - ) - parser.add_argument( - "-w0", type=int, help="mask position (width) on source image", default=0 - ) - parser.add_argument( - "-h1", type=int, help="mask position (height) on target image", default=0 - ) - parser.add_argument( - "-w1", type=int, help="mask position (width) on target image", default=0 - ) - parser.add_argument( - "-g", - "--gradient", - type=str, - choices=["max", "src", "avg"], - default="max", - help="how to calculate gradient for PIE", - ) - parser.add_argument( - "-n", - type=int, - help="how many iteration would you perfer, the more the better", - default=5000, - ) - if gen_type == "cli": - parser.add_argument( - "-p", type=int, help="output result every P iteration", default=0 - ) - if "mpi" in ALL_BACKEND: - parser.add_argument( - "--mpi-sync-interval", - type=int, - help="MPI sync iteration interval", - default=100, - ) - parser.add_argument( - "--grid-x", type=int, help="x axis stride for grid solver", default=8 - ) - parser.add_argument( - "--grid-y", type=int, help="y axis stride for grid solver", default=8 - ) - self.parser=parser - -if __name__ =="__main__": - import sys - import io - import base64 - from PIL import Image - def base64_to_pil(base64_str): - data = base64.b64decode(str(base64_str)) - pil = Image.open(io.BytesIO(data)) - return pil - - def pil_to_base64(out_pil): - out_buffer = io.BytesIO() - out_pil.save(out_buffer, format="PNG") - out_buffer.seek(0) - base64_bytes = base64.b64encode(out_buffer.read()) - base64_str = base64_bytes.decode("ascii") - return base64_str - correction_func=PhotometricCorrection(quite=True) - while True: - buffer = sys.stdin.readline() - print(f"[PIE] suprocess {len(buffer)} {type(buffer)} ") - if len(buffer)==0: - break - if isinstance(buffer,str): - lst=buffer.strip().split(",") - else: - lst=buffer.decode("ascii").strip().split(",") - img0=base64_to_pil(lst[0]) - img1=base64_to_pil(lst[1]) - ret=correction_func.run(img0,img1,mode=lst[2]) - ret_base64=pil_to_base64(ret) - if isinstance(buffer,str): - sys.stdout.write(f"{ret_base64}\n") - else: - sys.stdout.write(f"{ret_base64}\n".encode()) - sys.stdout.flush() \ No newline at end of file diff --git a/spaces/GAITOR/MLMondayDemo-Week1/app.py b/spaces/GAITOR/MLMondayDemo-Week1/app.py deleted file mode 100644 index 17190745e693d470632da94d5fcdfc21c65ede33..0000000000000000000000000000000000000000 --- a/spaces/GAITOR/MLMondayDemo-Week1/app.py +++ /dev/null @@ -1,68 +0,0 @@ -import tensorflow as tf -import matplotlib.pyplot as plt -from PIL import Image, ImageOps -from tensorflow.keras.utils import img_to_array - -from streamlit_drawable_canvas import st_canvas -import streamlit as st - -# st.set_page_config(layout="wide") - -st.write('# MNIST Digit Recognition') -st.write('## Using trained CNN `Keras` model') -st.write('To view how this model was trained go to the `Files and Versions` tab and download the `Week1.ipynb` notebook') - -# Import Pre-trained Model -model = tf.keras.models.load_model('mnist.h5') -tf.device('/cpu:0') -plt.rcParams.update({'font.size': 18}) - -# Create a sidebar to hold the settings -stroke_width = st.sidebar.slider("Stroke width: ", 1, 25, 9) -realtime_update = st.sidebar.checkbox("Update in realtime", True) - - -canvas_result = st_canvas( - fill_color="rgba(255, 165, 0, 0.3)", # Fixed fill color with 
some opacity - stroke_width=stroke_width, - stroke_color='#FFFFFF', - background_color='#000000', - #background_image=Image.open(bg_image) if bg_image else None, - update_streamlit=realtime_update, - height=28*9, - width=28*9, - drawing_mode='freedraw', - key="canvas", -) - -if canvas_result.image_data is not None: - - # Get image data from canvas - im = ImageOps.grayscale(Image.fromarray(canvas_result.image_data.astype( - 'uint8'), mode="RGBA")).resize((28, 28)) - - # Convert image to array and reshape - data = img_to_array(im) - data = data / 255 - data = data.reshape(1, 28, 28, 1) - data = data.astype('float32') - - # Predict digit - st.write('### Predicted Digit') - prediction = model.predict(data) - - # Plot prediction - result = plt.figure(figsize=(12, 3)) - plt.bar(range(10), prediction[0]) - plt.xticks(range(10)) - plt.xlabel('Digit') - plt.ylabel('Probability') - plt.title('Drawing Prediction') - plt.ylim(0, 1) - st.write(result) - - # Show resized image - with st.expander('Show Resized Image'): - st.write( - "The image needs to be resized, because it can only input 28x28 images") - st.image(im, caption='Resized Image', width=28*9) diff --git a/spaces/GastonMazzei/escher-inpaint-project/glide_text2im/tokenizer/simple_tokenizer.py b/spaces/GastonMazzei/escher-inpaint-project/glide_text2im/tokenizer/simple_tokenizer.py deleted file mode 100644 index c84cc8fb3adff99225d3e3a75b2a3d81564adcef..0000000000000000000000000000000000000000 --- a/spaces/GastonMazzei/escher-inpaint-project/glide_text2im/tokenizer/simple_tokenizer.py +++ /dev/null @@ -1,163 +0,0 @@ -""" -Copied from: https://github.com/openai/CLIP/blob/573315e83f07b53a61ff5098757e8fc885f1703e/clip/simple_tokenizer.py -""" - -import gzip -import html -import os -from functools import lru_cache -from typing import List, Tuple - -import ftfy -import regex as re - - -@lru_cache() -def default_bpe(): - return os.path.join(os.path.dirname(os.path.abspath(__file__)), "bpe_simple_vocab_16e6.txt.gz") - - -@lru_cache() -def bytes_to_unicode(): - """ - Returns list of utf-8 byte and a corresponding list of unicode strings. - The reversible bpe codes work on unicode strings. - This means you need a large # of unicode characters in your vocab if you want to avoid UNKs. - When you're at something like a 10B token dataset you end up needing around 5K for decent coverage. - This is a signficant percentage of your normal, say, 32K bpe vocab. - To avoid that, we want lookup tables between utf-8 bytes and unicode strings. - And avoids mapping to whitespace/control characters the bpe code barfs on. - """ - bs = ( - list(range(ord("!"), ord("~") + 1)) - + list(range(ord("¡"), ord("¬") + 1)) - + list(range(ord("®"), ord("ÿ") + 1)) - ) - cs = bs[:] - n = 0 - for b in range(2 ** 8): - if b not in bs: - bs.append(b) - cs.append(2 ** 8 + n) - n += 1 - cs = [chr(n) for n in cs] - return dict(zip(bs, cs)) - - -def get_pairs(word): - """Return set of symbol pairs in a word. - Word is represented as tuple of symbols (symbols being variable-length strings). 
- """ - pairs = set() - prev_char = word[0] - for char in word[1:]: - pairs.add((prev_char, char)) - prev_char = char - return pairs - - -def basic_clean(text): - text = ftfy.fix_text(text) - text = html.unescape(html.unescape(text)) - return text.strip() - - -def whitespace_clean(text): - text = re.sub(r"\s+", " ", text) - text = text.strip() - return text - - -class SimpleTokenizer(object): - def __init__(self, bpe_path: str = default_bpe()): - self.byte_encoder = bytes_to_unicode() - self.byte_decoder = {v: k for k, v in self.byte_encoder.items()} - merges = gzip.open(bpe_path).read().decode("utf-8").split("\n") - merges = merges[1 : 49152 - 256 - 2 + 1] - merges = [tuple(merge.split()) for merge in merges] - vocab = list(bytes_to_unicode().values()) - vocab = vocab + [v + "" for v in vocab] - for merge in merges: - vocab.append("".join(merge)) - vocab.extend(["<|startoftext|>", "<|endoftext|>"]) - self.encoder = dict(zip(vocab, range(len(vocab)))) - self.decoder = {v: k for k, v in self.encoder.items()} - self.bpe_ranks = dict(zip(merges, range(len(merges)))) - self.cache = {"<|startoftext|>": "<|startoftext|>", "<|endoftext|>": "<|endoftext|>"} - self.pat = re.compile( - r"""<\|startoftext\|>|<\|endoftext\|>|'s|'t|'re|'ve|'m|'ll|'d|[\p{L}]+|[\p{N}]|[^\s\p{L}\p{N}]+""", - re.IGNORECASE, - ) - - @property - def start_token(self): - return self.encoder["<|startoftext|>"] - - @property - def end_token(self): - return self.encoder["<|endoftext|>"] - - def padded_tokens_and_len(self, tokens: List[int], text_ctx: int) -> Tuple[List[int], int]: - tokens = [self.start_token] + tokens[: text_ctx - 2] + [self.end_token] - text_len = len(tokens) - padding = text_ctx - len(tokens) - padded_tokens = tokens + [0] * padding - return padded_tokens, text_len - - def bpe(self, token): - if token in self.cache: - return self.cache[token] - word = tuple(token[:-1]) + (token[-1] + "",) - pairs = get_pairs(word) - - if not pairs: - return token + "" - - while True: - bigram = min(pairs, key=lambda pair: self.bpe_ranks.get(pair, float("inf"))) - if bigram not in self.bpe_ranks: - break - first, second = bigram - new_word = [] - i = 0 - while i < len(word): - try: - j = word.index(first, i) - new_word.extend(word[i:j]) - i = j - except: # pylint: disable=bare-except - new_word.extend(word[i:]) - break - - if word[i] == first and i < len(word) - 1 and word[i + 1] == second: - new_word.append(first + second) - i += 2 - else: - new_word.append(word[i]) - i += 1 - new_word = tuple(new_word) - word = new_word - if len(word) == 1: - break - else: - pairs = get_pairs(word) - word = " ".join(word) - self.cache[token] = word - return word - - def encode(self, text): - bpe_tokens = [] - text = whitespace_clean(basic_clean(text)).lower() - for token in re.findall(self.pat, text): - token = "".join(self.byte_encoder[b] for b in token.encode("utf-8")) - bpe_tokens.extend(self.encoder[bpe_token] for bpe_token in self.bpe(token).split(" ")) - return bpe_tokens - - def decode(self, tokens): - text = "".join([self.decoder[token] for token in tokens]) - text = ( - bytearray([self.byte_decoder[c] for c in text]) - .decode("utf-8", errors="replace") - .replace("", " ") - ) - return text diff --git a/spaces/Godrose0728/sound-link/text/shanghainese.py b/spaces/Godrose0728/sound-link/text/shanghainese.py deleted file mode 100644 index 1c28c17d0dc0d920fd222c909a53d703c95e043b..0000000000000000000000000000000000000000 --- a/spaces/Godrose0728/sound-link/text/shanghainese.py +++ /dev/null @@ -1,64 +0,0 @@ -import re -import cn2an 
-import opencc - - -converter = opencc.OpenCC('chinese_dialect_lexicons/zaonhe') - -# List of (Latin alphabet, ipa) pairs: -_latin_to_ipa = [(re.compile('%s' % x[0]), x[1]) for x in [ - ('A', 'ᴇ'), - ('B', 'bi'), - ('C', 'si'), - ('D', 'di'), - ('E', 'i'), - ('F', 'ᴇf'), - ('G', 'dʑi'), - ('H', 'ᴇtɕʰ'), - ('I', 'ᴀi'), - ('J', 'dʑᴇ'), - ('K', 'kʰᴇ'), - ('L', 'ᴇl'), - ('M', 'ᴇm'), - ('N', 'ᴇn'), - ('O', 'o'), - ('P', 'pʰi'), - ('Q', 'kʰiu'), - ('R', 'ᴀl'), - ('S', 'ᴇs'), - ('T', 'tʰi'), - ('U', 'ɦiu'), - ('V', 'vi'), - ('W', 'dᴀbɤliu'), - ('X', 'ᴇks'), - ('Y', 'uᴀi'), - ('Z', 'zᴇ') -]] - - -def _number_to_shanghainese(num): - num = cn2an.an2cn(num).replace('一十','十').replace('二十', '廿').replace('二', '两') - return re.sub(r'((?:^|[^三四五六七八九])十|廿)两', r'\1二', num) - - -def number_to_shanghainese(text): - return re.sub(r'\d+(?:\.?\d+)?', lambda x: _number_to_shanghainese(x.group()), text) - - -def latin_to_ipa(text): - for regex, replacement in _latin_to_ipa: - text = re.sub(regex, replacement, text) - return text - - -def shanghainese_to_ipa(text): - text = number_to_shanghainese(text.upper()) - text = converter.convert(text).replace('-','').replace('$',' ') - text = re.sub(r'[A-Z]', lambda x: latin_to_ipa(x.group())+' ', text) - text = re.sub(r'[、;:]', ',', text) - text = re.sub(r'\s*,\s*', ', ', text) - text = re.sub(r'\s*。\s*', '. ', text) - text = re.sub(r'\s*?\s*', '? ', text) - text = re.sub(r'\s*!\s*', '! ', text) - text = re.sub(r'\s*$', '', text) - return text diff --git a/spaces/Gradio-Blocks/protGPT2_gradioFold/alphafold/alphafold/data/tools/utils.py b/spaces/Gradio-Blocks/protGPT2_gradioFold/alphafold/alphafold/data/tools/utils.py deleted file mode 100644 index e65b8824d3f240e869ca073a8264f32cb224813c..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/protGPT2_gradioFold/alphafold/alphafold/data/tools/utils.py +++ /dev/null @@ -1,40 +0,0 @@ -# Copyright 2021 DeepMind Technologies Limited -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
-"""Common utilities for data pipeline tools.""" -import contextlib -import shutil -import tempfile -import time -from typing import Optional - -from absl import logging - - -@contextlib.contextmanager -def tmpdir_manager(base_dir: Optional[str] = None): - """Context manager that deletes a temporary directory on exit.""" - tmpdir = tempfile.mkdtemp(dir=base_dir) - try: - yield tmpdir - finally: - shutil.rmtree(tmpdir, ignore_errors=True) - - -@contextlib.contextmanager -def timing(msg: str): - logging.info('Started %s', msg) - tic = time.time() - yield - toc = time.time() - logging.info('Finished %s in %.3f seconds', msg, toc - tic) diff --git a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/gcnet/gcnet_r50-d8_512x512_20k_voc12aug.py b/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/gcnet/gcnet_r50-d8_512x512_20k_voc12aug.py deleted file mode 100644 index a496204bdb061d975c40cb7ef2aaada40e020a13..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/gcnet/gcnet_r50-d8_512x512_20k_voc12aug.py +++ /dev/null @@ -1,7 +0,0 @@ -_base_ = [ - '../_base_/models/gcnet_r50-d8.py', - '../_base_/datasets/pascal_voc12_aug.py', '../_base_/default_runtime.py', - '../_base_/schedules/schedule_20k.py' -] -model = dict( - decode_head=dict(num_classes=21), auxiliary_head=dict(num_classes=21)) diff --git a/spaces/H0n3y/Honeystesting/Dockerfile b/spaces/H0n3y/Honeystesting/Dockerfile deleted file mode 100644 index 6c01c09373883afcb4ea34ae2d316cd596e1737b..0000000000000000000000000000000000000000 --- a/spaces/H0n3y/Honeystesting/Dockerfile +++ /dev/null @@ -1,21 +0,0 @@ -FROM node:18-bullseye-slim - -RUN apt-get update && \ - -apt-get install -y git - -RUN git clone https://gitgud.io/khanon/oai-reverse-proxy.git /app - -WORKDIR /app - -RUN npm install - -COPY Dockerfile greeting.md* .env* ./ - -RUN npm run build - -EXPOSE 7860 - -ENV NODE_ENV=production - -CMD [ "npm", "start" ] \ No newline at end of file diff --git a/spaces/Hallucinate/demo/midas/backbones/utils.py b/spaces/Hallucinate/demo/midas/backbones/utils.py deleted file mode 100644 index 0558899dddcfccec5f01a764d4f21738eb612149..0000000000000000000000000000000000000000 --- a/spaces/Hallucinate/demo/midas/backbones/utils.py +++ /dev/null @@ -1,249 +0,0 @@ -import torch - -import torch.nn as nn - - -class Slice(nn.Module): - def __init__(self, start_index=1): - super(Slice, self).__init__() - self.start_index = start_index - - def forward(self, x): - return x[:, self.start_index:] - - -class AddReadout(nn.Module): - def __init__(self, start_index=1): - super(AddReadout, self).__init__() - self.start_index = start_index - - def forward(self, x): - if self.start_index == 2: - readout = (x[:, 0] + x[:, 1]) / 2 - else: - readout = x[:, 0] - return x[:, self.start_index:] + readout.unsqueeze(1) - - -class ProjectReadout(nn.Module): - def __init__(self, in_features, start_index=1): - super(ProjectReadout, self).__init__() - self.start_index = start_index - - self.project = nn.Sequential(nn.Linear(2 * in_features, in_features), nn.GELU()) - - def forward(self, x): - readout = x[:, 0].unsqueeze(1).expand_as(x[:, self.start_index:]) - features = torch.cat((x[:, self.start_index:], readout), -1) - - return self.project(features) - - -class Transpose(nn.Module): - def __init__(self, dim0, dim1): - super(Transpose, self).__init__() - self.dim0 = dim0 - self.dim1 = dim1 - - def forward(self, x): - x = x.transpose(self.dim0, self.dim1) - return x - - -activations = {} - - -def 
get_activation(name): - def hook(model, input, output): - activations[name] = output - - return hook - - -def forward_default(pretrained, x, function_name="forward_features"): - exec(f"pretrained.model.{function_name}(x)") - - layer_1 = pretrained.activations["1"] - layer_2 = pretrained.activations["2"] - layer_3 = pretrained.activations["3"] - layer_4 = pretrained.activations["4"] - - if hasattr(pretrained, "act_postprocess1"): - layer_1 = pretrained.act_postprocess1(layer_1) - if hasattr(pretrained, "act_postprocess2"): - layer_2 = pretrained.act_postprocess2(layer_2) - if hasattr(pretrained, "act_postprocess3"): - layer_3 = pretrained.act_postprocess3(layer_3) - if hasattr(pretrained, "act_postprocess4"): - layer_4 = pretrained.act_postprocess4(layer_4) - - return layer_1, layer_2, layer_3, layer_4 - - -def forward_adapted_unflatten(pretrained, x, function_name="forward_features"): - b, c, h, w = x.shape - - exec(f"glob = pretrained.model.{function_name}(x)") - - layer_1 = pretrained.activations["1"] - layer_2 = pretrained.activations["2"] - layer_3 = pretrained.activations["3"] - layer_4 = pretrained.activations["4"] - - layer_1 = pretrained.act_postprocess1[0:2](layer_1) - layer_2 = pretrained.act_postprocess2[0:2](layer_2) - layer_3 = pretrained.act_postprocess3[0:2](layer_3) - layer_4 = pretrained.act_postprocess4[0:2](layer_4) - - unflatten = nn.Sequential( - nn.Unflatten( - 2, - torch.Size( - [ - h // pretrained.model.patch_size[1], - w // pretrained.model.patch_size[0], - ] - ), - ) - ) - - if layer_1.ndim == 3: - layer_1 = unflatten(layer_1) - if layer_2.ndim == 3: - layer_2 = unflatten(layer_2) - if layer_3.ndim == 3: - layer_3 = unflatten(layer_3) - if layer_4.ndim == 3: - layer_4 = unflatten(layer_4) - - layer_1 = pretrained.act_postprocess1[3: len(pretrained.act_postprocess1)](layer_1) - layer_2 = pretrained.act_postprocess2[3: len(pretrained.act_postprocess2)](layer_2) - layer_3 = pretrained.act_postprocess3[3: len(pretrained.act_postprocess3)](layer_3) - layer_4 = pretrained.act_postprocess4[3: len(pretrained.act_postprocess4)](layer_4) - - return layer_1, layer_2, layer_3, layer_4 - - -def get_readout_oper(vit_features, features, use_readout, start_index=1): - if use_readout == "ignore": - readout_oper = [Slice(start_index)] * len(features) - elif use_readout == "add": - readout_oper = [AddReadout(start_index)] * len(features) - elif use_readout == "project": - readout_oper = [ - ProjectReadout(vit_features, start_index) for out_feat in features - ] - else: - assert ( - False - ), "wrong operation for readout token, use_readout can be 'ignore', 'add', or 'project'" - - return readout_oper - - -def make_backbone_default( - model, - features=[96, 192, 384, 768], - size=[384, 384], - hooks=[2, 5, 8, 11], - vit_features=768, - use_readout="ignore", - start_index=1, - start_index_readout=1, -): - pretrained = nn.Module() - - pretrained.model = model - pretrained.model.blocks[hooks[0]].register_forward_hook(get_activation("1")) - pretrained.model.blocks[hooks[1]].register_forward_hook(get_activation("2")) - pretrained.model.blocks[hooks[2]].register_forward_hook(get_activation("3")) - pretrained.model.blocks[hooks[3]].register_forward_hook(get_activation("4")) - - pretrained.activations = activations - - readout_oper = get_readout_oper(vit_features, features, use_readout, start_index_readout) - - # 32, 48, 136, 384 - pretrained.act_postprocess1 = nn.Sequential( - readout_oper[0], - Transpose(1, 2), - nn.Unflatten(2, torch.Size([size[0] // 16, size[1] // 16])), - nn.Conv2d( - 
in_channels=vit_features, - out_channels=features[0], - kernel_size=1, - stride=1, - padding=0, - ), - nn.ConvTranspose2d( - in_channels=features[0], - out_channels=features[0], - kernel_size=4, - stride=4, - padding=0, - bias=True, - dilation=1, - groups=1, - ), - ) - - pretrained.act_postprocess2 = nn.Sequential( - readout_oper[1], - Transpose(1, 2), - nn.Unflatten(2, torch.Size([size[0] // 16, size[1] // 16])), - nn.Conv2d( - in_channels=vit_features, - out_channels=features[1], - kernel_size=1, - stride=1, - padding=0, - ), - nn.ConvTranspose2d( - in_channels=features[1], - out_channels=features[1], - kernel_size=2, - stride=2, - padding=0, - bias=True, - dilation=1, - groups=1, - ), - ) - - pretrained.act_postprocess3 = nn.Sequential( - readout_oper[2], - Transpose(1, 2), - nn.Unflatten(2, torch.Size([size[0] // 16, size[1] // 16])), - nn.Conv2d( - in_channels=vit_features, - out_channels=features[2], - kernel_size=1, - stride=1, - padding=0, - ), - ) - - pretrained.act_postprocess4 = nn.Sequential( - readout_oper[3], - Transpose(1, 2), - nn.Unflatten(2, torch.Size([size[0] // 16, size[1] // 16])), - nn.Conv2d( - in_channels=vit_features, - out_channels=features[3], - kernel_size=1, - stride=1, - padding=0, - ), - nn.Conv2d( - in_channels=features[3], - out_channels=features[3], - kernel_size=3, - stride=2, - padding=1, - ), - ) - - pretrained.model.start_index = start_index - pretrained.model.patch_size = [16, 16] - - return pretrained diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/data/prepend_token_dataset.py b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/data/prepend_token_dataset.py deleted file mode 100644 index fd1331f4c44c1595eb9bb78baa0cf5cf3bcce9ad..0000000000000000000000000000000000000000 --- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/data/prepend_token_dataset.py +++ /dev/null @@ -1,41 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import numpy as np -import torch - -from . import BaseWrapperDataset - - -class PrependTokenDataset(BaseWrapperDataset): - def __init__(self, dataset, token=None): - super().__init__(dataset) - self.token = token - if token is not None: - self._sizes = np.array(dataset.sizes) + 1 - else: - self._sizes = dataset.sizes - - def __getitem__(self, idx): - item = self.dataset[idx] - if self.token is not None: - item = torch.cat([item.new([self.token]), item]) - return item - - @property - def sizes(self): - return self._sizes - - def num_tokens(self, index): - n = self.dataset.num_tokens(index) - if self.token is not None: - n += 1 - return n - - def size(self, index): - n = self.dataset.size(index) - if self.token is not None: - n += 1 - return n diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/optim/lr_scheduler/reduce_lr_on_plateau.py b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/optim/lr_scheduler/reduce_lr_on_plateau.py deleted file mode 100644 index 5ee9c1be4a59ad3d072412827ab4e9b62dc7434e..0000000000000000000000000000000000000000 --- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/optim/lr_scheduler/reduce_lr_on_plateau.py +++ /dev/null @@ -1,143 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
- -from dataclasses import dataclass, field -from typing import List - -import torch.optim.lr_scheduler -from omegaconf import II - -from fairseq.dataclass import FairseqDataclass -from fairseq.optim.lr_scheduler import FairseqLRScheduler, register_lr_scheduler - - -@dataclass -class ReduceLROnPlateauLRScheduleConfig(FairseqDataclass): - lr_shrink: float = field( - default=0.1, metadata={"help": "shrink factor for annealing"} - ) - lr_threshold: float = field( - default=1e-4, - metadata={ - "help": ( - "threshold for measuring the new optimum, to only focus on " - "significant changes" - ) - }, - ) - lr_patience: int = field( - default=0, - metadata={ - "help": ( - "number of epochs with no improvement after which learning rate will " - "be reduced" - ) - }, - ) - warmup_updates: int = field( - default=0, - metadata={"help": "warmup the learning rate linearly for the first N updates"}, - ) - warmup_init_lr: float = field( - default=-1, - metadata={ - "help": "initial learning rate during warmup phase; default is cfg.lr" - }, - ) - lr: List[float] = II("optimization.lr") - maximize_best_checkpoint_metric: bool = II( - "checkpoint.maximize_best_checkpoint_metric" - ) - - -@register_lr_scheduler( - "reduce_lr_on_plateau", dataclass=ReduceLROnPlateauLRScheduleConfig -) -class ReduceLROnPlateauLRSchedule(FairseqLRScheduler): - """ - Decay the LR by a factor every time the validation loss plateaus. - Also comes with optional warmup phase, where we linearly increase - the learning rate from some initial learning rate - (``--warmup-init-lr``) until the configured learning rate - (``--lr``). Thereafter the lr is adjusted according to original - reduce_on_plateau scheme. - - During warmup:: - - lrs = torch.linspace( - cfg.warmup_init_lr, cfg.lr, cfg.warmup_updates - ) - lr = lrs[update_num] - """ - - def __init__(self, cfg: ReduceLROnPlateauLRScheduleConfig, optimizer): - super().__init__(cfg, optimizer) - if len(cfg.lr) > 1: - raise ValueError( - "Cannot use a fixed learning rate schedule with reduce_lr_on_plateau." - " Consider --lr-scheduler=fixed instead." 
- ) - self.lr_scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau( - self.optimizer.optimizer, - patience=cfg.lr_patience, - factor=cfg.lr_shrink, - mode="max" if cfg.maximize_best_checkpoint_metric else "min", - threshold=cfg.lr_threshold, - ) - warmup_end_lr = cfg.lr[0] - # if no warm up, sets initial lr to be cfg.lr[0] - if cfg.warmup_init_lr < 0: - cfg.warmup_init_lr = 0 if cfg.warmup_updates > 0 else warmup_end_lr - - # linearly warmup for the first cfg.warmup_updates - if cfg.warmup_updates > 0: - self.lr_step = (warmup_end_lr - cfg.warmup_init_lr) / cfg.warmup_updates - - # this flag is either set from arg when no warm up, or set by - # step_update() when warmup finishes - self.warmup_end = True if cfg.warmup_updates <= 0 else False - - # initial learning rate - # this self.lr is used only during init and/or warm up period - self.lr = warmup_end_lr if self.warmup_end else cfg.warmup_init_lr - self.optimizer.set_lr(self.lr) - - def state_dict(self): - """Return the LR scheduler state dict.""" - return { - "best": self.lr_scheduler.best, - "last_epoch": self.lr_scheduler.last_epoch, - } - - def load_state_dict(self, state_dict): - """Load an LR scheduler state dict.""" - self.lr_scheduler.best = state_dict["best"] - if "last_epoch" in state_dict: - self.lr_scheduler.last_epoch = state_dict["last_epoch"] - - def step(self, epoch, val_loss=None): - """ - Update the learning rate at the end of the given epoch if warmup - finishes otherwise no update of lr on epoch boundaries - """ - if val_loss is not None and self.warmup_end is True: - self.lr_scheduler.step(val_loss) - else: - self.lr_scheduler.last_epoch = epoch - return self.optimizer.get_lr() - - def step_update(self, num_updates): - """ - Update the learning rate after each update.""" - # if there is warmup - if self.cfg.warmup_updates > 0: - if num_updates <= self.cfg.warmup_updates: - self.lr = self.cfg.warmup_init_lr + num_updates * self.lr_step - self.optimizer.set_lr(self.lr) - else: - if self.warmup_end is False: - self.warmup_end = True - # else do nothing - return self.optimizer.get_lr() diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/setup.py b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/setup.py deleted file mode 100644 index 4379b2c31f593134fb027cf01da5fcd706a64e00..0000000000000000000000000000000000000000 --- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/setup.py +++ /dev/null @@ -1,284 +0,0 @@ -#!/usr/bin/env python3 -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
- -import os -import subprocess -import sys - -from setuptools import Extension, find_packages, setup - -if sys.version_info < (3, 6): - sys.exit("Sorry, Python >= 3.6 is required for fairseq.") - - -def write_version_py(): - with open(os.path.join("fairseq", "version.txt")) as f: - version = f.read().strip() - - # append latest commit hash to version string - try: - sha = ( - subprocess.check_output(["git", "rev-parse", "HEAD"]) - .decode("ascii") - .strip() - ) - version += "+" + sha[:7] - except Exception: - pass - - # write version info to fairseq/version.py - with open(os.path.join("fairseq", "version.py"), "w") as f: - f.write('__version__ = "{}"\n'.format(version)) - return version - - -version = write_version_py() - - -with open("README.md") as f: - readme = f.read() - - -if sys.platform == "darwin": - extra_compile_args = ["-stdlib=libc++", "-O3"] -else: - extra_compile_args = ["-std=c++11", "-O3"] - - -class NumpyExtension(Extension): - """Source: https://stackoverflow.com/a/54128391""" - - def __init__(self, *args, **kwargs): - self.__include_dirs = [] - super().__init__(*args, **kwargs) - - @property - def include_dirs(self): - import numpy - - return self.__include_dirs + [numpy.get_include()] - - @include_dirs.setter - def include_dirs(self, dirs): - self.__include_dirs = dirs - - -extensions = [ - Extension( - "fairseq.libbleu", - sources=[ - "fairseq/clib/libbleu/libbleu.cpp", - "fairseq/clib/libbleu/module.cpp", - ], - extra_compile_args=extra_compile_args, - ), - NumpyExtension( - "fairseq.data.data_utils_fast", - sources=["fairseq/data/data_utils_fast.pyx"], - language="c++", - extra_compile_args=extra_compile_args, - ), - NumpyExtension( - "fairseq.data.token_block_utils_fast", - sources=["fairseq/data/token_block_utils_fast.pyx"], - language="c++", - extra_compile_args=extra_compile_args, - ), -] - - -cmdclass = {} - - -try: - # torch is not available when generating docs - from torch.utils import cpp_extension - - extensions.extend( - [ - cpp_extension.CppExtension( - "fairseq.libbase", - sources=[ - "fairseq/clib/libbase/balanced_assignment.cpp", - ], - ) - ] - ) - - extensions.extend( - [ - cpp_extension.CppExtension( - "fairseq.libnat", - sources=[ - "fairseq/clib/libnat/edit_dist.cpp", - ], - ), - cpp_extension.CppExtension( - "alignment_train_cpu_binding", - sources=[ - "examples/operators/alignment_train_cpu.cpp", - ], - ), - ] - ) - if "CUDA_HOME" in os.environ: - extensions.extend( - [ - cpp_extension.CppExtension( - "fairseq.libnat_cuda", - sources=[ - "fairseq/clib/libnat_cuda/edit_dist.cu", - "fairseq/clib/libnat_cuda/binding.cpp", - ], - ), - cpp_extension.CppExtension( - "fairseq.ngram_repeat_block_cuda", - sources=[ - "fairseq/clib/cuda/ngram_repeat_block_cuda.cpp", - "fairseq/clib/cuda/ngram_repeat_block_cuda_kernel.cu", - ], - ), - cpp_extension.CppExtension( - "alignment_train_cuda_binding", - sources=[ - "examples/operators/alignment_train_kernel.cu", - "examples/operators/alignment_train_cuda.cpp", - ], - ), - ] - ) - cmdclass["build_ext"] = cpp_extension.BuildExtension - -except ImportError: - pass - - -if "READTHEDOCS" in os.environ: - # don't build extensions when generating docs - extensions = [] - if "build_ext" in cmdclass: - del cmdclass["build_ext"] - - # use CPU build of PyTorch - dependency_links = [ - "https://download.pytorch.org/whl/cpu/torch-1.7.0%2Bcpu-cp36-cp36m-linux_x86_64.whl" - ] -else: - dependency_links = [] - - -if "clean" in sys.argv[1:]: - # Source: https://bit.ly/2NLVsgE - print("deleting Cython files...") - import 
subprocess - - subprocess.run( - ["rm -f fairseq/*.so fairseq/**/*.so fairseq/*.pyd fairseq/**/*.pyd"], - shell=True, - ) - - -extra_packages = [] -if os.path.exists(os.path.join("fairseq", "model_parallel", "megatron", "mpu")): - extra_packages.append("fairseq.model_parallel.megatron.mpu") - - -def do_setup(package_data): - setup( - name="fairseq", - version=version, - description="Facebook AI Research Sequence-to-Sequence Toolkit", - url="https://github.com/pytorch/fairseq", - classifiers=[ - "Intended Audience :: Science/Research", - "License :: OSI Approved :: MIT License", - "Programming Language :: Python :: 3.6", - "Programming Language :: Python :: 3.7", - "Programming Language :: Python :: 3.8", - "Topic :: Scientific/Engineering :: Artificial Intelligence", - ], - long_description=readme, - long_description_content_type="text/markdown", - setup_requires=[ - "cython", - 'numpy<1.20.0; python_version<"3.7"', - 'numpy; python_version>="3.7"', - "setuptools>=18.0", - ], - install_requires=[ - "cffi", - "cython", - 'dataclasses; python_version<"3.7"', - "hydra-core>=1.0.7,<1.1", - "omegaconf<2.1", - 'numpy<1.20.0; python_version<"3.7"', - 'numpy; python_version>="3.7"', - "regex", - "sacrebleu>=1.4.12", - # "torch", - "tqdm", - "bitarray", - # "torchaudio>=0.8.0", - ], - dependency_links=dependency_links, - packages=find_packages( - exclude=[ - "examples", - "examples.*", - "scripts", - "scripts.*", - "tests", - "tests.*", - ] - ) - + extra_packages, - package_data=package_data, - ext_modules=extensions, - test_suite="tests", - entry_points={ - "console_scripts": [ - "fairseq-eval-lm = fairseq_cli.eval_lm:cli_main", - "fairseq-generate = fairseq_cli.generate:cli_main", - "fairseq-hydra-train = fairseq_cli.hydra_train:cli_main", - "fairseq-interactive = fairseq_cli.interactive:cli_main", - "fairseq-preprocess = fairseq_cli.preprocess:cli_main", - "fairseq-score = fairseq_cli.score:cli_main", - "fairseq-train = fairseq_cli.train:cli_main", - "fairseq-validate = fairseq_cli.validate:cli_main", - ], - }, - cmdclass=cmdclass, - zip_safe=False, - ) - - -def get_files(path, relative_to="fairseq"): - all_files = [] - for root, _dirs, files in os.walk(path, followlinks=True): - root = os.path.relpath(root, relative_to) - for file in files: - if file.endswith(".pyc"): - continue - all_files.append(os.path.join(root, file)) - return all_files - - -if __name__ == "__main__": - try: - # symlink examples into fairseq package so package_data accepts them - fairseq_examples = os.path.join("fairseq", "examples") - if "build_ext" not in sys.argv[1:] and not os.path.exists(fairseq_examples): - os.symlink(os.path.join("..", "examples"), fairseq_examples) - - package_data = { - "fairseq": ( - get_files(fairseq_examples) + get_files(os.path.join("fairseq", "config")) - ) - } - do_setup(package_data) - finally: - if "build_ext" not in sys.argv[1:] and os.path.islink(fairseq_examples): - os.unlink(fairseq_examples) diff --git a/spaces/ICML2022/OFA/fairseq/examples/roberta/README.pretraining.md b/spaces/ICML2022/OFA/fairseq/examples/roberta/README.pretraining.md deleted file mode 100644 index a4e7453529111fdd198be637d911d1764cb96c0e..0000000000000000000000000000000000000000 --- a/spaces/ICML2022/OFA/fairseq/examples/roberta/README.pretraining.md +++ /dev/null @@ -1,84 +0,0 @@ -# Pretraining RoBERTa using your own data - -This tutorial will walk you through pretraining RoBERTa over your own data. 
- -### 1) Preprocess the data - -Data should be preprocessed following the [language modeling format](/examples/language_model), i.e. each document should be separated by an empty line (only useful with `--sample-break-mode complete_doc`). Lines will be concatenated as a 1D text stream during training. - -We'll use the [WikiText-103 dataset](https://www.salesforce.com/products/einstein/ai-research/the-wikitext-dependency-language-modeling-dataset/) -to demonstrate how to preprocess raw text data with the GPT-2 BPE. Of course -this dataset is quite small, so the resulting pretrained model will perform -poorly, but it gives the general idea. - -First download the dataset: -```bash -wget https://s3.amazonaws.com/research.metamind.io/wikitext/wikitext-103-raw-v1.zip -unzip wikitext-103-raw-v1.zip -``` - -Next encode it with the GPT-2 BPE: -```bash -mkdir -p gpt2_bpe -wget -O gpt2_bpe/encoder.json https://dl.fbaipublicfiles.com/fairseq/gpt2_bpe/encoder.json -wget -O gpt2_bpe/vocab.bpe https://dl.fbaipublicfiles.com/fairseq/gpt2_bpe/vocab.bpe -for SPLIT in train valid test; do \ - python -m examples.roberta.multiprocessing_bpe_encoder \ - --encoder-json gpt2_bpe/encoder.json \ - --vocab-bpe gpt2_bpe/vocab.bpe \ - --inputs wikitext-103-raw/wiki.${SPLIT}.raw \ - --outputs wikitext-103-raw/wiki.${SPLIT}.bpe \ - --keep-empty \ - --workers 60; \ -done -``` - -Finally preprocess/binarize the data using the GPT-2 fairseq dictionary: -```bash -wget -O gpt2_bpe/dict.txt https://dl.fbaipublicfiles.com/fairseq/gpt2_bpe/dict.txt -fairseq-preprocess \ - --only-source \ - --srcdict gpt2_bpe/dict.txt \ - --trainpref wikitext-103-raw/wiki.train.bpe \ - --validpref wikitext-103-raw/wiki.valid.bpe \ - --testpref wikitext-103-raw/wiki.test.bpe \ - --destdir data-bin/wikitext-103 \ - --workers 60 -``` - -### 2) Train RoBERTa base -```bash -DATA_DIR=data-bin/wikitext-103 - -fairseq-hydra-train -m --config-dir examples/roberta/config/pretraining \ ---config-name base task.data=$DATA_DIR -``` - -**Note:** You can optionally resume training the released RoBERTa base model by -adding `checkpoint.restore_file=/path/to/roberta.base/model.pt`. - -**Note:** The above command assumes training on 8x32GB V100 GPUs. Each GPU uses -a batch size of 16 sequences (`dataset.batch_size`) and accumulates gradients to -further increase the batch size by 16x (`optimization.update_freq`), for a total batch size -of 2048 sequences. If you have fewer GPUs or GPUs with less memory you may need -to reduce `dataset.batch_size` and increase dataset.update_freq to compensate. -Alternatively if you have more GPUs you can decrease `dataset.update_freq` accordingly -to increase training speed. - -**Note:** The learning rate and batch size are tightly connected and need to be -adjusted together. 
We generally recommend increasing the learning rate as you -increase the batch size according to the following table (although it's also -dataset dependent, so don't rely on the following values too closely): - -batch size | peak learning rate ----|--- -256 | 0.0001 -2048 | 0.0005 -8192 | 0.0007 - -### 3) Load your pretrained model -```python -from fairseq.models.roberta import RobertaModel -roberta = RobertaModel.from_pretrained('checkpoints', 'checkpoint_best.pt', 'path/to/data') -assert isinstance(roberta.model, torch.nn.Module) -``` diff --git a/spaces/ICML2022/OFA/fairseq/fairseq/modules/cross_entropy.py b/spaces/ICML2022/OFA/fairseq/fairseq/modules/cross_entropy.py deleted file mode 100644 index 6f33c24cb56e25f91595009af38e63784c2263a0..0000000000000000000000000000000000000000 --- a/spaces/ICML2022/OFA/fairseq/fairseq/modules/cross_entropy.py +++ /dev/null @@ -1,61 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import logging - -import torch -import torch.nn.functional as F - - -logger = logging.getLogger(__name__) - - -def _cross_entropy_pytorch(logits, target, ignore_index=None, reduction="mean"): - lprobs = F.log_softmax(logits, dim=-1, dtype=torch.float32) - return F.nll_loss( - lprobs, - target, - ignore_index=ignore_index, - reduction=reduction, - ) - - -try: - import xentropy_cuda - from apex.contrib import xentropy - - def cross_entropy(logits, target, ignore_index=-100, reduction="mean"): - if logits.device == torch.device("cpu"): - return _cross_entropy_pytorch(logits, target, ignore_index, reduction) - else: - if not getattr(cross_entropy, "_has_logged_once", False): - logger.info("using fused cross entropy") - cross_entropy._has_logged_once = True - - half_to_float = logits.dtype == torch.half - losses = xentropy.SoftmaxCrossEntropyLoss.apply( - logits, - target, - 0.0, - ignore_index, - half_to_float, - ) - if reduction == "sum": - return losses.sum() - elif reduction == "mean": - if ignore_index >= 0: - return losses.sum() / target.ne(ignore_index).sum() - else: - return losses.mean() - elif reduction == "none": - return losses - else: - raise NotImplementedError - - -except ImportError: - - def cross_entropy(logits, target, ignore_index=-100, reduction="mean"): - return _cross_entropy_pytorch(logits, target, ignore_index, reduction) diff --git a/spaces/ICML2022/PointCloudC/util/__init__.py b/spaces/ICML2022/PointCloudC/util/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/IPN/FirstSpaceTEST_Gradio/app.py b/spaces/IPN/FirstSpaceTEST_Gradio/app.py deleted file mode 100644 index 5b2c473a206ff74e146b3bfd14775b79d23b011e..0000000000000000000000000000000000000000 --- a/spaces/IPN/FirstSpaceTEST_Gradio/app.py +++ /dev/null @@ -1,4 +0,0 @@ -import gradio as gr - -gr.Interface.load("huggingface/mrm8488/bert-mini-finetuned-age_news-classification").launch(); -print("he importado un modelo"); \ No newline at end of file diff --git a/spaces/Ibtehaj10/cheating-detection/person_detection_video.py b/spaces/Ibtehaj10/cheating-detection/person_detection_video.py deleted file mode 100644 index fbd6f742afca23acd2debe99679f8c70f9153adb..0000000000000000000000000000000000000000 --- a/spaces/Ibtehaj10/cheating-detection/person_detection_video.py +++ /dev/null @@ -1,71 +0,0 @@ -import cv2 -import datetime -import imutils -import numpy as np - -protopath 
= "MobileNetSSD_deploy.prototxt" -modelpath = "MobileNetSSD_deploy.caffemodel" -detector = cv2.dnn.readNetFromCaffe(prototxt=protopath, caffeModel=modelpath) -# Only enable it if you are using OpenVino environment -# detector.setPreferableBackend(cv2.dnn.DNN_BACKEND_INFERENCE_ENGINE) -# detector.setPreferableTarget(cv2.dnn.DNN_TARGET_CPU) - - -CLASSES = ["background", "aeroplane", "bicycle", "bird", "boat", - "bottle", "bus", "car", "cat", "chair", "cow", "diningtable", - "dog", "horse", "motorbike", "person", "pottedplant", "sheep", - "sofa", "train", "tvmonitor"] - - -def main(): - cap = cv2.VideoCapture('test_video.mp4') - - fps_start_time = datetime.datetime.now() - fps = 0 - total_frames = 0 - - while True: - ret, frame = cap.read() - frame = imutils.resize(frame, width=600) - total_frames = total_frames + 1 - - (H, W) = frame.shape[:2] - - blob = cv2.dnn.blobFromImage(frame, 0.007843, (W, H), 127.5) - - detector.setInput(blob) - person_detections = detector.forward() - - for i in np.arange(0, person_detections.shape[2]): - confidence = person_detections[0, 0, i, 2] - if confidence > 0.5: - idx = int(person_detections[0, 0, i, 1]) - - if CLASSES[idx] != "person": - continue - - person_box = person_detections[0, 0, i, 3:7] * np.array([W, H, W, H]) - (startX, startY, endX, endY) = person_box.astype("int") - - cv2.rectangle(frame, (startX, startY), (endX, endY), (0, 0, 255), 2) - - fps_end_time = datetime.datetime.now() - time_diff = fps_end_time - fps_start_time - if time_diff.seconds == 0: - fps = 0.0 - else: - fps = (total_frames / time_diff.seconds) - - fps_text = "FPS: {:.2f}".format(fps) - - cv2.putText(frame, fps_text, (5, 30), cv2.FONT_HERSHEY_COMPLEX_SMALL, 1, (0, 0, 255), 1) - - cv2.imshow("Application", frame) - key = cv2.waitKey(1) - if key == ord('q'): - break - - cv2.destroyAllWindows() - - -main() diff --git a/spaces/Ifeanyi/tellme.ai/app.py b/spaces/Ifeanyi/tellme.ai/app.py deleted file mode 100644 index d72aa32d6adc2ff2b97fbea6bc79a11fc7648f08..0000000000000000000000000000000000000000 --- a/spaces/Ifeanyi/tellme.ai/app.py +++ /dev/null @@ -1,13 +0,0 @@ -# import required libraries -from transformers import pipeline -import gradio as gr -import timm - -# build gradio interface -model = pipeline("image-classification") -examples = ["birdA.jpg", "birdB.jpg", "birdC.jpg"] -gr.Interface.from_pipeline(model, - title = "tellme.ai", - examples = examples, - theme = gr.themes.Soft(), - css=".gradio-container {background: url('file=blue.jpg')}").launch() \ No newline at end of file diff --git a/spaces/Ikaros521/VITS-fast-fine-tuning_nymph/text/mandarin.py b/spaces/Ikaros521/VITS-fast-fine-tuning_nymph/text/mandarin.py deleted file mode 100644 index 162e1b912dabec4b448ccd3d00d56306f82ce076..0000000000000000000000000000000000000000 --- a/spaces/Ikaros521/VITS-fast-fine-tuning_nymph/text/mandarin.py +++ /dev/null @@ -1,326 +0,0 @@ -import os -import sys -import re -from pypinyin import lazy_pinyin, BOPOMOFO -import jieba -import cn2an -import logging - - -# List of (Latin alphabet, bopomofo) pairs: -_latin_to_bopomofo = [(re.compile('%s' % x[0], re.IGNORECASE), x[1]) for x in [ - ('a', 'ㄟˉ'), - ('b', 'ㄅㄧˋ'), - ('c', 'ㄙㄧˉ'), - ('d', 'ㄉㄧˋ'), - ('e', 'ㄧˋ'), - ('f', 'ㄝˊㄈㄨˋ'), - ('g', 'ㄐㄧˋ'), - ('h', 'ㄝˇㄑㄩˋ'), - ('i', 'ㄞˋ'), - ('j', 'ㄐㄟˋ'), - ('k', 'ㄎㄟˋ'), - ('l', 'ㄝˊㄛˋ'), - ('m', 'ㄝˊㄇㄨˋ'), - ('n', 'ㄣˉ'), - ('o', 'ㄡˉ'), - ('p', 'ㄆㄧˉ'), - ('q', 'ㄎㄧㄡˉ'), - ('r', 'ㄚˋ'), - ('s', 'ㄝˊㄙˋ'), - ('t', 'ㄊㄧˋ'), - ('u', 'ㄧㄡˉ'), - ('v', 'ㄨㄧˉ'), - ('w', 'ㄉㄚˋㄅㄨˋㄌㄧㄡˋ'), - ('x', 'ㄝˉㄎㄨˋㄙˋ'), - ('y', 'ㄨㄞˋ'), 
- ('z', 'ㄗㄟˋ') -]] - -# List of (bopomofo, romaji) pairs: -_bopomofo_to_romaji = [(re.compile('%s' % x[0]), x[1]) for x in [ - ('ㄅㄛ', 'p⁼wo'), - ('ㄆㄛ', 'pʰwo'), - ('ㄇㄛ', 'mwo'), - ('ㄈㄛ', 'fwo'), - ('ㄅ', 'p⁼'), - ('ㄆ', 'pʰ'), - ('ㄇ', 'm'), - ('ㄈ', 'f'), - ('ㄉ', 't⁼'), - ('ㄊ', 'tʰ'), - ('ㄋ', 'n'), - ('ㄌ', 'l'), - ('ㄍ', 'k⁼'), - ('ㄎ', 'kʰ'), - ('ㄏ', 'h'), - ('ㄐ', 'ʧ⁼'), - ('ㄑ', 'ʧʰ'), - ('ㄒ', 'ʃ'), - ('ㄓ', 'ʦ`⁼'), - ('ㄔ', 'ʦ`ʰ'), - ('ㄕ', 's`'), - ('ㄖ', 'ɹ`'), - ('ㄗ', 'ʦ⁼'), - ('ㄘ', 'ʦʰ'), - ('ㄙ', 's'), - ('ㄚ', 'a'), - ('ㄛ', 'o'), - ('ㄜ', 'ə'), - ('ㄝ', 'e'), - ('ㄞ', 'ai'), - ('ㄟ', 'ei'), - ('ㄠ', 'au'), - ('ㄡ', 'ou'), - ('ㄧㄢ', 'yeNN'), - ('ㄢ', 'aNN'), - ('ㄧㄣ', 'iNN'), - ('ㄣ', 'əNN'), - ('ㄤ', 'aNg'), - ('ㄧㄥ', 'iNg'), - ('ㄨㄥ', 'uNg'), - ('ㄩㄥ', 'yuNg'), - ('ㄥ', 'əNg'), - ('ㄦ', 'əɻ'), - ('ㄧ', 'i'), - ('ㄨ', 'u'), - ('ㄩ', 'ɥ'), - ('ˉ', '→'), - ('ˊ', '↑'), - ('ˇ', '↓↑'), - ('ˋ', '↓'), - ('˙', ''), - (',', ','), - ('。', '.'), - ('!', '!'), - ('?', '?'), - ('—', '-') -]] - -# List of (romaji, ipa) pairs: -_romaji_to_ipa = [(re.compile('%s' % x[0], re.IGNORECASE), x[1]) for x in [ - ('ʃy', 'ʃ'), - ('ʧʰy', 'ʧʰ'), - ('ʧ⁼y', 'ʧ⁼'), - ('NN', 'n'), - ('Ng', 'ŋ'), - ('y', 'j'), - ('h', 'x') -]] - -# List of (bopomofo, ipa) pairs: -_bopomofo_to_ipa = [(re.compile('%s' % x[0]), x[1]) for x in [ - ('ㄅㄛ', 'p⁼wo'), - ('ㄆㄛ', 'pʰwo'), - ('ㄇㄛ', 'mwo'), - ('ㄈㄛ', 'fwo'), - ('ㄅ', 'p⁼'), - ('ㄆ', 'pʰ'), - ('ㄇ', 'm'), - ('ㄈ', 'f'), - ('ㄉ', 't⁼'), - ('ㄊ', 'tʰ'), - ('ㄋ', 'n'), - ('ㄌ', 'l'), - ('ㄍ', 'k⁼'), - ('ㄎ', 'kʰ'), - ('ㄏ', 'x'), - ('ㄐ', 'tʃ⁼'), - ('ㄑ', 'tʃʰ'), - ('ㄒ', 'ʃ'), - ('ㄓ', 'ts`⁼'), - ('ㄔ', 'ts`ʰ'), - ('ㄕ', 's`'), - ('ㄖ', 'ɹ`'), - ('ㄗ', 'ts⁼'), - ('ㄘ', 'tsʰ'), - ('ㄙ', 's'), - ('ㄚ', 'a'), - ('ㄛ', 'o'), - ('ㄜ', 'ə'), - ('ㄝ', 'ɛ'), - ('ㄞ', 'aɪ'), - ('ㄟ', 'eɪ'), - ('ㄠ', 'ɑʊ'), - ('ㄡ', 'oʊ'), - ('ㄧㄢ', 'jɛn'), - ('ㄩㄢ', 'ɥæn'), - ('ㄢ', 'an'), - ('ㄧㄣ', 'in'), - ('ㄩㄣ', 'ɥn'), - ('ㄣ', 'ən'), - ('ㄤ', 'ɑŋ'), - ('ㄧㄥ', 'iŋ'), - ('ㄨㄥ', 'ʊŋ'), - ('ㄩㄥ', 'jʊŋ'), - ('ㄥ', 'əŋ'), - ('ㄦ', 'əɻ'), - ('ㄧ', 'i'), - ('ㄨ', 'u'), - ('ㄩ', 'ɥ'), - ('ˉ', '→'), - ('ˊ', '↑'), - ('ˇ', '↓↑'), - ('ˋ', '↓'), - ('˙', ''), - (',', ','), - ('。', '.'), - ('!', '!'), - ('?', '?'), - ('—', '-') -]] - -# List of (bopomofo, ipa2) pairs: -_bopomofo_to_ipa2 = [(re.compile('%s' % x[0]), x[1]) for x in [ - ('ㄅㄛ', 'pwo'), - ('ㄆㄛ', 'pʰwo'), - ('ㄇㄛ', 'mwo'), - ('ㄈㄛ', 'fwo'), - ('ㄅ', 'p'), - ('ㄆ', 'pʰ'), - ('ㄇ', 'm'), - ('ㄈ', 'f'), - ('ㄉ', 't'), - ('ㄊ', 'tʰ'), - ('ㄋ', 'n'), - ('ㄌ', 'l'), - ('ㄍ', 'k'), - ('ㄎ', 'kʰ'), - ('ㄏ', 'h'), - ('ㄐ', 'tɕ'), - ('ㄑ', 'tɕʰ'), - ('ㄒ', 'ɕ'), - ('ㄓ', 'tʂ'), - ('ㄔ', 'tʂʰ'), - ('ㄕ', 'ʂ'), - ('ㄖ', 'ɻ'), - ('ㄗ', 'ts'), - ('ㄘ', 'tsʰ'), - ('ㄙ', 's'), - ('ㄚ', 'a'), - ('ㄛ', 'o'), - ('ㄜ', 'ɤ'), - ('ㄝ', 'ɛ'), - ('ㄞ', 'aɪ'), - ('ㄟ', 'eɪ'), - ('ㄠ', 'ɑʊ'), - ('ㄡ', 'oʊ'), - ('ㄧㄢ', 'jɛn'), - ('ㄩㄢ', 'yæn'), - ('ㄢ', 'an'), - ('ㄧㄣ', 'in'), - ('ㄩㄣ', 'yn'), - ('ㄣ', 'ən'), - ('ㄤ', 'ɑŋ'), - ('ㄧㄥ', 'iŋ'), - ('ㄨㄥ', 'ʊŋ'), - ('ㄩㄥ', 'jʊŋ'), - ('ㄥ', 'ɤŋ'), - ('ㄦ', 'əɻ'), - ('ㄧ', 'i'), - ('ㄨ', 'u'), - ('ㄩ', 'y'), - ('ˉ', '˥'), - ('ˊ', '˧˥'), - ('ˇ', '˨˩˦'), - ('ˋ', '˥˩'), - ('˙', ''), - (',', ','), - ('。', '.'), - ('!', '!'), - ('?', '?'), - ('—', '-') -]] - - -def number_to_chinese(text): - numbers = re.findall(r'\d+(?:\.?\d+)?', text) - for number in numbers: - text = text.replace(number, cn2an.an2cn(number), 1) - return text - - -def chinese_to_bopomofo(text): - text = text.replace('、', ',').replace(';', ',').replace(':', ',') - words = jieba.lcut(text, cut_all=False) - text = '' - for word in words: - bopomofos = lazy_pinyin(word, BOPOMOFO) - if not 
re.search('[\u4e00-\u9fff]', word): - text += word - continue - for i in range(len(bopomofos)): - bopomofos[i] = re.sub(r'([\u3105-\u3129])$', r'\1ˉ', bopomofos[i]) - if text != '': - text += ' ' - text += ''.join(bopomofos) - return text - - -def latin_to_bopomofo(text): - for regex, replacement in _latin_to_bopomofo: - text = re.sub(regex, replacement, text) - return text - - -def bopomofo_to_romaji(text): - for regex, replacement in _bopomofo_to_romaji: - text = re.sub(regex, replacement, text) - return text - - -def bopomofo_to_ipa(text): - for regex, replacement in _bopomofo_to_ipa: - text = re.sub(regex, replacement, text) - return text - - -def bopomofo_to_ipa2(text): - for regex, replacement in _bopomofo_to_ipa2: - text = re.sub(regex, replacement, text) - return text - - -def chinese_to_romaji(text): - text = number_to_chinese(text) - text = chinese_to_bopomofo(text) - text = latin_to_bopomofo(text) - text = bopomofo_to_romaji(text) - text = re.sub('i([aoe])', r'y\1', text) - text = re.sub('u([aoəe])', r'w\1', text) - text = re.sub('([ʦsɹ]`[⁼ʰ]?)([→↓↑ ]+|$)', - r'\1ɹ`\2', text).replace('ɻ', 'ɹ`') - text = re.sub('([ʦs][⁼ʰ]?)([→↓↑ ]+|$)', r'\1ɹ\2', text) - return text - - -def chinese_to_lazy_ipa(text): - text = chinese_to_romaji(text) - for regex, replacement in _romaji_to_ipa: - text = re.sub(regex, replacement, text) - return text - - -def chinese_to_ipa(text): - text = number_to_chinese(text) - text = chinese_to_bopomofo(text) - text = latin_to_bopomofo(text) - text = bopomofo_to_ipa(text) - text = re.sub('i([aoe])', r'j\1', text) - text = re.sub('u([aoəe])', r'w\1', text) - text = re.sub('([sɹ]`[⁼ʰ]?)([→↓↑ ]+|$)', - r'\1ɹ`\2', text).replace('ɻ', 'ɹ`') - text = re.sub('([s][⁼ʰ]?)([→↓↑ ]+|$)', r'\1ɹ\2', text) - return text - - -def chinese_to_ipa2(text): - text = number_to_chinese(text) - text = chinese_to_bopomofo(text) - text = latin_to_bopomofo(text) - text = bopomofo_to_ipa2(text) - text = re.sub(r'i([aoe])', r'j\1', text) - text = re.sub(r'u([aoəe])', r'w\1', text) - text = re.sub(r'([ʂɹ]ʰ?)([˩˨˧˦˥ ]+|$)', r'\1ʅ\2', text) - text = re.sub(r'(sʰ?)([˩˨˧˦˥ ]+|$)', r'\1ɿ\2', text) - return text diff --git a/spaces/Jackflack09/diffuse-custom/diffusers/pipelines/score_sde_ve/__init__.py b/spaces/Jackflack09/diffuse-custom/diffusers/pipelines/score_sde_ve/__init__.py deleted file mode 100644 index 000d61f6e9b183728cb6fc137e7180cac3a616df..0000000000000000000000000000000000000000 --- a/spaces/Jackflack09/diffuse-custom/diffusers/pipelines/score_sde_ve/__init__.py +++ /dev/null @@ -1,2 +0,0 @@ -# flake8: noqa -from .pipeline_score_sde_ve import ScoreSdeVePipeline diff --git a/spaces/Jacks2003/3D_Photo_Inpainting/README.md b/spaces/Jacks2003/3D_Photo_Inpainting/README.md deleted file mode 100644 index be64a526ee278adf28a4bd0a1fa61c84b2f0d87a..0000000000000000000000000000000000000000 --- a/spaces/Jacks2003/3D_Photo_Inpainting/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: 3D_Photo_Inpainting -emoji: 👁 -colorFrom: purple -colorTo: red -sdk: gradio -sdk_version: 3.1.4 -app_file: app.py -pinned: false -duplicated_from: doevent/3D_Photo_Inpainting ---- - -# Configuration diff --git a/spaces/JosephusCheung/ACertainsStrategyTalk/11.html b/spaces/JosephusCheung/ACertainsStrategyTalk/11.html deleted file mode 100644 index 03671b57fec06b1f3ab389e0025abf4ccd625f85..0000000000000000000000000000000000000000 --- a/spaces/JosephusCheung/ACertainsStrategyTalk/11.html +++ /dev/null @@ -1,123 +0,0 @@ - - - - - - - - - -
Problems and Proposed Solutions
3. Merged Models?
Some merged models have good performance, such as AnythingV3. Should I continue to merge? This is not scientifically sound and will ultimately result in a model that is overfitted in some cases and normal in others. Such a model looks good at first, but you will find that it is not faithful to the input prompt and suffers from the language drift problem mentioned above.
Solution: We can use the method mentioned above to train two models together with Dreambooth, using a word frequency list. We add or replace training data with images generated by the model we want to merge, according to the calculated ratio, and maintain a dynamic training dataset during training to prevent overfitting, as mentioned above. This yields a balanced model that does not overfit in particular directions. Then choose a checkpoint that is close to overfitting, but not yet overfitted, as the final version. Models of this type, such as the CertainThing, are popular in the community because they produce good output even from poorly written prompts.
    - - diff --git a/spaces/Juliojuse/human_health_gradio/code/contrast_phys/PhysNetModel.py b/spaces/Juliojuse/human_health_gradio/code/contrast_phys/PhysNetModel.py deleted file mode 100644 index 085419f06af60432dcb0f8abed82f91ebe78f22c..0000000000000000000000000000000000000000 --- a/spaces/Juliojuse/human_health_gradio/code/contrast_phys/PhysNetModel.py +++ /dev/null @@ -1,115 +0,0 @@ -import numpy as np -import torch -import torch.nn as nn -import torch.nn.functional as F - - -# ------------------------------------------------------------------------------------------------------------------- -# PhysNet model -# -# the output is an ST-rPPG block rather than a rPPG signal. -# ------------------------------------------------------------------------------------------------------------------- -class PhysNet(nn.Module): - def __init__(self, S=2, in_ch=3): - super().__init__() - - self.S = S # S is the spatial dimension of ST-rPPG block - - self.start = nn.Sequential( - nn.Conv3d(in_channels=in_ch, out_channels=32, kernel_size=(1, 5, 5), stride=1, padding=(0, 2, 2)), - nn.BatchNorm3d(32), - nn.ELU() - ) - - # 1x - self.loop1 = nn.Sequential( - nn.AvgPool3d(kernel_size=(1, 2, 2), stride=(1, 2, 2), padding=0), - nn.Conv3d(in_channels=32, out_channels=64, kernel_size=(3, 3, 3), stride=1, padding=(1, 1, 1)), - nn.BatchNorm3d(64), - nn.ELU(), - nn.Conv3d(in_channels=64, out_channels=64, kernel_size=(3, 3, 3), stride=1, padding=(1, 1, 1)), - nn.BatchNorm3d(64), - nn.ELU() - ) - - # encoder - self.encoder1 = nn.Sequential( - nn.AvgPool3d(kernel_size=(2, 2, 2), stride=(2, 2, 2), padding=0), - nn.Conv3d(in_channels=64, out_channels=64, kernel_size=(3, 3, 3), stride=1, padding=(1, 1, 1)), - nn.BatchNorm3d(64), - nn.ELU(), - nn.Conv3d(in_channels=64, out_channels=64, kernel_size=(3, 3, 3), stride=1, padding=(1, 1, 1)), - nn.BatchNorm3d(64), - nn.ELU(), - ) - self.encoder2 = nn.Sequential( - nn.AvgPool3d(kernel_size=(2, 2, 2), stride=(2, 2, 2), padding=0), - nn.Conv3d(in_channels=64, out_channels=64, kernel_size=(3, 3, 3), stride=1, padding=(1, 1, 1)), - nn.BatchNorm3d(64), - nn.ELU(), - nn.Conv3d(in_channels=64, out_channels=64, kernel_size=(3, 3, 3), stride=1, padding=(1, 1, 1)), - nn.BatchNorm3d(64), - nn.ELU() - ) - - # - self.loop4 = nn.Sequential( - nn.AvgPool3d(kernel_size=(1, 2, 2), stride=(1, 2, 2), padding=0), - nn.Conv3d(in_channels=64, out_channels=64, kernel_size=(3, 3, 3), stride=1, padding=(1, 1, 1)), - nn.BatchNorm3d(64), - nn.ELU(), - nn.Conv3d(in_channels=64, out_channels=64, kernel_size=(3, 3, 3), stride=1, padding=(1, 1, 1)), - nn.BatchNorm3d(64), - nn.ELU() - ) - - # decoder to reach back initial temporal length - self.decoder1 = nn.Sequential( - nn.Conv3d(in_channels=64, out_channels=64, kernel_size=(3, 1, 1), stride=1, padding=(1, 0, 0)), - nn.BatchNorm3d(64), - nn.ELU(), - ) - self.decoder2 = nn.Sequential( - nn.Conv3d(in_channels=64, out_channels=64, kernel_size=(3, 1, 1), stride=1, padding=(1, 0, 0)), - nn.BatchNorm3d(64), - nn.ELU() - ) - - - self.end = nn.Sequential( - nn.AdaptiveAvgPool3d((None, S, S)), - nn.Conv3d(in_channels=64, out_channels=1, kernel_size=(1, 1, 1), stride=1, padding=(0, 0, 0)) - ) - - def forward(self, x): - print("physet shape = ====================",x.shape) - means = torch.mean(x, dim=(2, 3, 4), keepdim=True) - stds = torch.std(x, dim=(2, 3, 4), keepdim=True) - x = (x - means) / stds # (B, C, T, 128, 128) - - parity = [] - x = self.start(x) # (B, C, T, 128, 128) - x = self.loop1(x) # (B, 64, T, 64, 64) - parity.append(x.size(2) % 2) - x = 
self.encoder1(x) # (B, 64, T/2, 32, 32) - parity.append(x.size(2) % 2) - x = self.encoder2(x) # (B, 64, T/4, 16, 16) - x = self.loop4(x) # (B, 64, T/4, 8, 8) - - x = F.interpolate(x, scale_factor=(2, 1, 1)) # (B, 64, T/2, 8, 8) - x = self.decoder1(x) # (B, 64, T/2, 8, 8) - x = F.pad(x, (0,0,0,0,0,parity[-1]), mode='replicate') - x = F.interpolate(x, scale_factor=(2, 1, 1)) # (B, 64, T, 8, 8) - x = self.decoder2(x) # (B, 64, T, 8, 8) - x = F.pad(x, (0,0,0,0,0,parity[-2]), mode='replicate') - x = self.end(x) # (B, 1, T, S, S), ST-rPPG block - - x_list = [] - for a in range(self.S): - for b in range(self.S): - x_list.append(x[:,:,:,a,b]) # (B, 1, T) - - x = sum(x_list)/(self.S*self.S) # (B, 1, T) - X = torch.cat(x_list+[x], 1) # (B, N, T), flatten all spatial signals to the second dimension - print("physet shape output = ====================",X.shape) - return X \ No newline at end of file diff --git a/spaces/Juno360219/albert-base-v2/index.html b/spaces/Juno360219/albert-base-v2/index.html deleted file mode 100644 index 58275de3b1c343a98420342baa076b9baaafa157..0000000000000000000000000000000000000000 --- a/spaces/Juno360219/albert-base-v2/index.html +++ /dev/null @@ -1,19 +0,0 @@ - - - - - - My static Space - - - -
    -

    Welcome to your static Space!

    -

    You can modify this app directly by editing index.html in the Files and versions tab.

    -

    - Also don't forget to check the - Spaces documentation. -

    -
    - - diff --git a/spaces/Kangarroar/ApplioRVC-Inference/lib/uvr5_pack/lib_v5/model_param_init.py b/spaces/Kangarroar/ApplioRVC-Inference/lib/uvr5_pack/lib_v5/model_param_init.py deleted file mode 100644 index b995c0bfb1194746187692e2ab1c2a6dbaaaec6c..0000000000000000000000000000000000000000 --- a/spaces/Kangarroar/ApplioRVC-Inference/lib/uvr5_pack/lib_v5/model_param_init.py +++ /dev/null @@ -1,69 +0,0 @@ -import json -import os -import pathlib - -default_param = {} -default_param["bins"] = 768 -default_param["unstable_bins"] = 9 # training only -default_param["reduction_bins"] = 762 # training only -default_param["sr"] = 44100 -default_param["pre_filter_start"] = 757 -default_param["pre_filter_stop"] = 768 -default_param["band"] = {} - - -default_param["band"][1] = { - "sr": 11025, - "hl": 128, - "n_fft": 960, - "crop_start": 0, - "crop_stop": 245, - "lpf_start": 61, # inference only - "res_type": "polyphase", -} - -default_param["band"][2] = { - "sr": 44100, - "hl": 512, - "n_fft": 1536, - "crop_start": 24, - "crop_stop": 547, - "hpf_start": 81, # inference only - "res_type": "sinc_best", -} - - -def int_keys(d): - r = {} - for k, v in d: - if k.isdigit(): - k = int(k) - r[k] = v - return r - - -class ModelParameters(object): - def __init__(self, config_path=""): - if ".pth" == pathlib.Path(config_path).suffix: - import zipfile - - with zipfile.ZipFile(config_path, "r") as zip: - self.param = json.loads( - zip.read("param.json"), object_pairs_hook=int_keys - ) - elif ".json" == pathlib.Path(config_path).suffix: - with open(config_path, "r") as f: - self.param = json.loads(f.read(), object_pairs_hook=int_keys) - else: - self.param = default_param - - for k in [ - "mid_side", - "mid_side_b", - "mid_side_b2", - "stereo_w", - "stereo_n", - "reverse", - ]: - if not k in self.param: - self.param[k] = False diff --git a/spaces/KarmKarma/rvc-models-genshinimpact/README.md b/spaces/KarmKarma/rvc-models-genshinimpact/README.md deleted file mode 100644 index f077cd85340c26ebfcb0857816d0f1f511408242..0000000000000000000000000000000000000000 --- a/spaces/KarmKarma/rvc-models-genshinimpact/README.md +++ /dev/null @@ -1,14 +0,0 @@ ---- -title: Rvc Models -emoji: 🎤 -colorFrom: red -colorTo: blue -sdk: gradio -sdk_version: 3.27.0 -app_file: app.py -pinned: false -license: mit -duplicated_from: ardha27/rvc-models ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Kevin676/AutoGPT/autogpt/cli.py b/spaces/Kevin676/AutoGPT/autogpt/cli.py deleted file mode 100644 index a2e99cb421cad005528cb160e948ce59ccfcdb66..0000000000000000000000000000000000000000 --- a/spaces/Kevin676/AutoGPT/autogpt/cli.py +++ /dev/null @@ -1,145 +0,0 @@ -"""Main script for the autogpt package.""" -import click - - -@click.group(invoke_without_command=True) -@click.option("-c", "--continuous", is_flag=True, help="Enable Continuous Mode") -@click.option( - "--skip-reprompt", - "-y", - is_flag=True, - help="Skips the re-prompting messages at the beginning of the script", -) -@click.option( - "--ai-settings", - "-C", - help="Specifies which ai_settings.yaml file to use, will also automatically skip the re-prompt.", -) -@click.option( - "-l", - "--continuous-limit", - type=int, - help="Defines the number of times to run in continuous mode", -) -@click.option("--speak", is_flag=True, help="Enable Speak Mode") -@click.option("--debug", is_flag=True, help="Enable Debug Mode") -@click.option("--gpt3only", is_flag=True, help="Enable GPT3.5 Only Mode") 
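# Note: the click options above and below all attach to the single `main` command
# group defined further down in this file. As a sketch (assuming the package is
# run as a module; the flag combination is illustrative only):
#   python -m autogpt --continuous --continuous-limit 5 --gpt3only --skip-news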
-@click.option("--gpt4only", is_flag=True, help="Enable GPT4 Only Mode") -@click.option( - "--use-memory", - "-m", - "memory_type", - type=str, - help="Defines which Memory backend to use", -) -@click.option( - "-b", - "--browser-name", - help="Specifies which web-browser to use when using selenium to scrape the web.", -) -@click.option( - "--allow-downloads", - is_flag=True, - help="Dangerous: Allows Auto-GPT to download files natively.", -) -@click.option( - "--skip-news", - is_flag=True, - help="Specifies whether to suppress the output of latest news on startup.", -) -@click.pass_context -def main( - ctx: click.Context, - continuous: bool, - continuous_limit: int, - ai_settings: str, - skip_reprompt: bool, - speak: bool, - debug: bool, - gpt3only: bool, - gpt4only: bool, - memory_type: str, - browser_name: str, - allow_downloads: bool, - skip_news: bool, -) -> None: - """ - Welcome to AutoGPT an experimental open-source application showcasing the capabilities of the GPT-4 pushing the boundaries of AI. - - Start an Auto-GPT assistant. - """ - # Put imports inside function to avoid importing everything when starting the CLI - import logging - - from colorama import Fore - - from autogpt.agent.agent import Agent - from autogpt.config import Config, check_openai_api_key - from autogpt.configurator import create_config - from autogpt.logs import logger - from autogpt.memory import get_memory - from autogpt.prompt import construct_prompt - from autogpt.utils import get_current_git_branch, get_latest_bulletin - - if ctx.invoked_subcommand is None: - cfg = Config() - # TODO: fill in llm values here - check_openai_api_key() - create_config( - continuous, - continuous_limit, - ai_settings, - skip_reprompt, - speak, - debug, - gpt3only, - gpt4only, - memory_type, - browser_name, - allow_downloads, - skip_news, - ) - logger.set_level(logging.DEBUG if cfg.debug_mode else logging.INFO) - ai_name = "" - if not cfg.skip_news: - motd = get_latest_bulletin() - if motd: - logger.typewriter_log("NEWS: ", Fore.GREEN, motd) - git_branch = get_current_git_branch() - if git_branch and git_branch != "stable": - logger.typewriter_log( - "WARNING: ", - Fore.RED, - f"You are running on `{git_branch}` branch " - "- this is not a supported branch.", - ) - system_prompt = construct_prompt() - # print(prompt) - # Initialize variables - full_message_history = [] - next_action_count = 0 - # Make a constant: - triggering_prompt = ( - "Determine which next command to use, and respond using the" - " format specified above:" - ) - # Initialize memory and make sure it is empty. 
- # this is particularly important for indexing and referencing pinecone memory - memory = get_memory(cfg, init=True) - logger.typewriter_log( - "Using memory of type:", Fore.GREEN, f"{memory.__class__.__name__}" - ) - logger.typewriter_log("Using Browser:", Fore.GREEN, cfg.selenium_web_browser) - agent = Agent( - ai_name=ai_name, - memory=memory, - full_message_history=full_message_history, - next_action_count=next_action_count, - system_prompt=system_prompt, - triggering_prompt=triggering_prompt, - ) - agent.start_interaction_loop() - - -if __name__ == "__main__": - main() diff --git a/spaces/LanguageBind/LanguageBind/scripts/video_language/eval.sh b/spaces/LanguageBind/LanguageBind/scripts/video_language/eval.sh deleted file mode 100644 index ea634982ae30b0aee97df55e3ed0b33a0c278b87..0000000000000000000000000000000000000000 --- a/spaces/LanguageBind/LanguageBind/scripts/video_language/eval.sh +++ /dev/null @@ -1,23 +0,0 @@ - -CACHE_DIR="path/to/pretrained/weight" -RESUME="video_language.pt" -TRAIN_DATA="path/to/data" -# this script is for 768 total batch_size (n(16) GPUs * batch_size(24) * accum_freq(2)) -cd /path/to/LanguageBind -TORCH_DISTRIBUTED_DEBUG=DETAIL HF_DATASETS_OFFLINE=1 TRANSFORMERS_OFFLINE=1 torchrun --nnodes=$HOST_NUM --node_rank=$INDEX --nproc_per_node $HOST_GPU_NUM --master_addr $CHIEF_IP \ - -m main \ - --train-data ${TRAIN_DATA} \ - --train-num-samples 3020000 \ - --clip-type "vl" \ - --lock-text --lock-image --text-type "polish_mplug" \ - --init-temp 0.07 --learn-temp \ - --model "ViT-L-14" --cache-dir ${CACHE_DIR} \ - --convert_to_lora --lora_r 64 \ - --lr 1e-4 --coef-lr 1e-3 \ - --beta1 0.9 --beta2 0.98 --wd 0.2 --eps 1e-6 \ - --num-frames 8 --force-patch-dropout 0.5 \ - --epochs 1 --batch-size 24 --accum-freq 2 --warmup 200 \ - --precision "amp" --workers 10 --video-decode-backend "imgs" \ - --save-frequency 1 --log-every-n-steps 20 --report-to "tensorboard" --resume ${RESUME} \ - --do_eval \ - --val_vl_ret_data "msrvtt" "msvd" "activity" "didemo" \ No newline at end of file diff --git a/spaces/Lianjd/stock_dashboard/backtrader/observers/buysell.py b/spaces/Lianjd/stock_dashboard/backtrader/observers/buysell.py deleted file mode 100644 index b78637fae687c36c93543fb72ecf455a501444ec..0000000000000000000000000000000000000000 --- a/spaces/Lianjd/stock_dashboard/backtrader/observers/buysell.py +++ /dev/null @@ -1,118 +0,0 @@ -#!/usr/bin/env python -# -*- coding: utf-8; py-indent-offset:4 -*- -############################################################################### -# -# Copyright (C) 2015-2020 Daniel Rodriguez -# -# This program is free software: you can redistribute it and/or modify -# it under the terms of the GNU General Public License as published by -# the Free Software Foundation, either version 3 of the License, or -# (at your option) any later version. -# -# This program is distributed in the hope that it will be useful, -# but WITHOUT ANY WARRANTY; without even the implied warranty of -# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the -# GNU General Public License for more details. -# -# You should have received a copy of the GNU General Public License -# along with this program. If not, see . 
-# -############################################################################### -from __future__ import (absolute_import, division, print_function, - unicode_literals) - -import math - -from ..observer import Observer - - -class BuySell(Observer): - ''' - This observer keeps track of the individual buy/sell orders (individual - executions) and will plot them on the chart along the data around the - execution price level - - Params: - - ``barplot`` (default: ``False``) Plot buy signals below the minimum and - sell signals above the maximum. - - If ``False`` it will plot on the average price of executions during a - bar - - - ``bardist`` (default: ``0.015`` 1.5%) Distance to max/min when - ``barplot`` is ``True`` - ''' - lines = ('buy', 'sell',) - - plotinfo = dict(plot=True, subplot=False, plotlinelabels=True) - plotlines = dict( - buy=dict(marker='^', markersize=8.0, color='lime', - fillstyle='full', ls=''), - sell=dict(marker='v', markersize=8.0, color='red', - fillstyle='full', ls='') - ) - - params = ( - ('barplot', False), # plot above/below max/min for clarity in bar plot - ('bardist', 0.015), # distance to max/min in absolute perc - ) - - def next(self): - buy = list() - sell = list() - - for order in self._owner._orderspending: - if order.data is not self.data or not order.executed.size: - continue - - if order.isbuy(): - buy.append(order.executed.price) - else: - sell.append(order.executed.price) - - # Take into account replay ... something could already be in there - # Write down the average buy/sell price - - # BUY - curbuy = self.lines.buy[0] - if curbuy != curbuy: # NaN - curbuy = 0.0 - self.curbuylen = curbuylen = 0 - else: - curbuylen = self.curbuylen - - buyops = (curbuy + math.fsum(buy)) - buylen = curbuylen + len(buy) - - value = buyops / float(buylen or 'NaN') - if not self.p.barplot: - self.lines.buy[0] = value - elif value == value: # Not NaN - pbuy = self.data.low[0] * (1 - self.p.bardist) - self.lines.buy[0] = pbuy - - # Update buylen values - curbuy = buyops - self.curbuylen = buylen - - # SELL - cursell = self.lines.sell[0] - if cursell != cursell: # NaN - cursell = 0.0 - self.curselllen = curselllen = 0 - else: - curselllen = self.curselllen - - sellops = (cursell + math.fsum(sell)) - selllen = curselllen + len(sell) - - value = sellops / float(selllen or 'NaN') - if not self.p.barplot: - self.lines.sell[0] = value - elif value == value: # Not NaN - psell = self.data.high[0] * (1 + self.p.bardist) - self.lines.sell[0] = psell - - # Update selllen values - cursell = sellops - self.curselllen = selllen diff --git a/spaces/LinoyTsaban/edit_friendly_ddpm_inversion/app.py b/spaces/LinoyTsaban/edit_friendly_ddpm_inversion/app.py deleted file mode 100644 index 4186bbda3a8b207b355639a27644137e56596dec..0000000000000000000000000000000000000000 --- a/spaces/LinoyTsaban/edit_friendly_ddpm_inversion/app.py +++ /dev/null @@ -1,246 +0,0 @@ -import gradio as gr -import torch -import random -import requests -from io import BytesIO -from diffusers import StableDiffusionPipeline -from diffusers import DDIMScheduler -from utils import * -from inversion_utils import * -from torch import autocast, inference_mode -import re - -def randomize_seed_fn(seed, randomize_seed): - if randomize_seed: - seed = random.randint(0, np.iinfo(np.int32).max) - torch.manual_seed(seed) - return seed - -def invert(x0, prompt_src="", num_diffusion_steps=100, cfg_scale_src = 3.5, eta = 1): - - # inverts a real image according to Algorihm 1 in https://arxiv.org/pdf/2304.06140.pdf, - # based on the code in 
https://github.com/inbarhub/DDPM_inversion - - # returns wt, zs, wts: - # wt - inverted latent - # wts - intermediate inverted latents - # zs - noise maps - - sd_pipe.scheduler.set_timesteps(num_diffusion_steps) - - # vae encode image - with autocast("cuda"), inference_mode(): - w0 = (sd_pipe.vae.encode(x0).latent_dist.mode() * 0.18215).float() - - # find Zs and wts - forward process - wt, zs, wts = inversion_forward_process(sd_pipe, w0, etas=eta, prompt=prompt_src, cfg_scale=cfg_scale_src, prog_bar=False, num_inference_steps=num_diffusion_steps) - return zs, wts - - - -def sample(zs, wts, prompt_tar="", skip=36, cfg_scale_tar=15, eta = 1): - - # reverse process (via Zs and wT) - w0, _ = inversion_reverse_process(sd_pipe, xT=wts[skip], etas=eta, prompts=[prompt_tar], cfg_scales=[cfg_scale_tar], prog_bar=False, zs=zs[skip:]) - - # vae decode image - with autocast("cuda"), inference_mode(): - x0_dec = sd_pipe.vae.decode(1 / 0.18215 * w0).sample - if x0_dec.dim()<4: - x0_dec = x0_dec[None,:,:,:] - img = image_grid(x0_dec) - return img - -# load pipelines -sd_model_id = "runwayml/stable-diffusion-v1-5" -device = torch.device("cuda" if torch.cuda.is_available() else "cpu") -sd_pipe = StableDiffusionPipeline.from_pretrained(sd_model_id).to(device) -sd_pipe.scheduler = DDIMScheduler.from_config(sd_model_id, subfolder = "scheduler") - - - -def get_example(): - case = [ - [ - 'Examples/gnochi_mirror.jpeg', - 'Watercolor painting of a cat sitting next to a mirror', - 'Examples/gnochi_mirror_watercolor_painting.png', - '', - 100, - 3.5, - 36, - 15, - - ], - [ - 'Examples/source_an_old_man.png', - 'A bronze statue of an old man', - 'Examples/ddpm_a_bronze_statue_of_an_old_man.png', - '', - 100, - 3.5, - 36, - 15, - - ], - [ - 'Examples/source_a_ceramic_vase_with_yellow_flowers.jpeg', - 'A pink ceramic vase with a wheat bouquet', - 'Examples/ddpm_a_pink_ceramic_vase_with_a_wheat_bouquet.png', - '', - 100, - 3.5, - 36, - 15, - - ], - - [ - 'Examples/source_a_model_on_a_runway.jpeg', - 'A zebra on the runway', - 'Examples/ddpm_a_zebra_on_the_run_way.png', - '', - 100, - 3.5, - 36, - 15, - - ] - - - ] - return case - - - - - - - -######## -# demo # -######## - -intro = """ -

    - Edit Friendly DDPM Inversion -

    -

    -Based on the work introduced in: -An Edit Friendly DDPM Noise Space: -Inversion and Manipulations -

    -

    -For faster inference without waiting in queue, you may duplicate the space and upgrade to GPU in settings. - -Duplicate Space -

    """ -with gr.Blocks(css='style.css') as demo: - - def reset_do_inversion(): - do_inversion = True - return do_inversion - - - def edit(input_image, - do_inversion, - wts, zs, - src_prompt ="", - tar_prompt="", - steps=100, - cfg_scale_src = 3.5, - cfg_scale_tar = 15, - skip=36, - seed = 0, - randomize_seed = True): - - x0 = load_512(input_image, device=device) - - if do_inversion or randomize_seed: - zs_tensor, wts_tensor = invert(x0 =x0 , prompt_src=src_prompt, num_diffusion_steps=steps, cfg_scale_src=cfg_scale_src) - wts = gr.State(value=wts_tensor) - zs = gr.State(value=zs_tensor) - do_inversion = False - - output = sample(zs.value, wts.value, prompt_tar=tar_prompt, skip=skip, cfg_scale_tar=cfg_scale_tar) - return output, wts, zs, do_inversion - - gr.HTML(intro) - wts = gr.State() - zs = gr.State() - do_inversion = gr.State(value=True) - with gr.Row(): - input_image = gr.Image(label="Input Image", interactive=True) - input_image.style(height=365, width=365) - output_image = gr.Image(label=f"Edited Image", interactive=False) - output_image.style(height=365, width=365) - - with gr.Row(): - tar_prompt = gr.Textbox(lines=1, label="Describe your desired edited output", interactive=True) - - with gr.Row(): - with gr.Column(scale=1, min_width=100): - edit_button = gr.Button("Run") - - - - with gr.Accordion("Advanced Options", open=False): - with gr.Row(): - with gr.Column(): - #inversion - src_prompt = gr.Textbox(lines=1, label="Source Prompt", interactive=True, placeholder="describe the original image") - steps = gr.Number(value=100, precision=0, label="Num Diffusion Steps", interactive=True) - cfg_scale_src = gr.Slider(minimum=1, maximum=15, value=3.5, label=f"Source Guidance Scale", interactive=True) - with gr.Column(): - # reconstruction - skip = gr.Slider(minimum=0, maximum=60, value=36, step = 1, label="Skip Steps", interactive=True) - cfg_scale_tar = gr.Slider(minimum=7, maximum=18,value=15, label=f"Target Guidance Scale", interactive=True) - seed = gr.Number(value=0, precision=0, label="Seed", interactive=True) - randomize_seed = gr.Checkbox(label='Randomize seed', value=False) - - - edit_button.click( - fn = randomize_seed_fn, - inputs = [seed, randomize_seed], - outputs = [seed], queue = False).then( - fn=edit, - inputs=[input_image, - do_inversion, wts, zs, - src_prompt, - tar_prompt, - steps, - cfg_scale_src, - cfg_scale_tar, - skip, - seed,randomize_seed - ], - outputs=[output_image, wts, zs, do_inversion], - ) - - input_image.change( - fn = reset_do_inversion, - outputs = [do_inversion] - ) - - src_prompt.change( - fn = reset_do_inversion, - outputs = [do_inversion] - ) - - - gr.Examples( - label='Examples', - examples=get_example(), - inputs=[input_image, tar_prompt,output_image, src_prompt,steps, - cfg_scale_tar, - skip, - cfg_scale_tar - - ], - outputs=[output_image ], - ) - - - -demo.queue() -demo.launch(share=False) \ No newline at end of file diff --git "a/spaces/LuxOAI/ChatGpt-Web/.github/ISSUE_TEMPLATE/\345\212\237\350\203\275\345\273\272\350\256\256.md" "b/spaces/LuxOAI/ChatGpt-Web/.github/ISSUE_TEMPLATE/\345\212\237\350\203\275\345\273\272\350\256\256.md" deleted file mode 100644 index 9ed1c845d53f067265724359c8149284a22deddf..0000000000000000000000000000000000000000 --- "a/spaces/LuxOAI/ChatGpt-Web/.github/ISSUE_TEMPLATE/\345\212\237\350\203\275\345\273\272\350\256\256.md" +++ /dev/null @@ -1,20 +0,0 @@ ---- -name: 功能建议 -about: 请告诉我们你的灵光一闪 -title: "[Feature] " -labels: '' -assignees: '' - ---- - -**这个功能与现有的问题有关吗?** -如果有关,请在此列出链接或者描述问题。 - -**你想要什么功能或者有什么建议?** 
-尽管告诉我们。 - -**有没有可以参考的同类竞品?** -可以给出参考产品的链接或者截图。 - -**其他信息** -可以说说你的其他考虑。 diff --git a/spaces/MRiwu/Collection/attentions.py b/spaces/MRiwu/Collection/attentions.py deleted file mode 100644 index 86bc73b5fe98cc7b443e9078553920346c996707..0000000000000000000000000000000000000000 --- a/spaces/MRiwu/Collection/attentions.py +++ /dev/null @@ -1,300 +0,0 @@ -import math -import torch -from torch import nn -from torch.nn import functional as F - -import commons -from modules import LayerNorm - - -class Encoder(nn.Module): - def __init__(self, hidden_channels, filter_channels, n_heads, n_layers, kernel_size=1, p_dropout=0., window_size=4, **kwargs): - super().__init__() - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.window_size = window_size - - self.drop = nn.Dropout(p_dropout) - self.attn_layers = nn.ModuleList() - self.norm_layers_1 = nn.ModuleList() - self.ffn_layers = nn.ModuleList() - self.norm_layers_2 = nn.ModuleList() - for i in range(self.n_layers): - self.attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout, window_size=window_size)) - self.norm_layers_1.append(LayerNorm(hidden_channels)) - self.ffn_layers.append(FFN(hidden_channels, hidden_channels, filter_channels, kernel_size, p_dropout=p_dropout)) - self.norm_layers_2.append(LayerNorm(hidden_channels)) - - def forward(self, x, x_mask): - attn_mask = x_mask.unsqueeze(2) * x_mask.unsqueeze(-1) - x = x * x_mask - for i in range(self.n_layers): - y = self.attn_layers[i](x, x, attn_mask) - y = self.drop(y) - x = self.norm_layers_1[i](x + y) - - y = self.ffn_layers[i](x, x_mask) - y = self.drop(y) - x = self.norm_layers_2[i](x + y) - x = x * x_mask - return x - - -class Decoder(nn.Module): - def __init__(self, hidden_channels, filter_channels, n_heads, n_layers, kernel_size=1, p_dropout=0., proximal_bias=False, proximal_init=True, **kwargs): - super().__init__() - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.proximal_bias = proximal_bias - self.proximal_init = proximal_init - - self.drop = nn.Dropout(p_dropout) - self.self_attn_layers = nn.ModuleList() - self.norm_layers_0 = nn.ModuleList() - self.encdec_attn_layers = nn.ModuleList() - self.norm_layers_1 = nn.ModuleList() - self.ffn_layers = nn.ModuleList() - self.norm_layers_2 = nn.ModuleList() - for i in range(self.n_layers): - self.self_attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout, proximal_bias=proximal_bias, proximal_init=proximal_init)) - self.norm_layers_0.append(LayerNorm(hidden_channels)) - self.encdec_attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout)) - self.norm_layers_1.append(LayerNorm(hidden_channels)) - self.ffn_layers.append(FFN(hidden_channels, hidden_channels, filter_channels, kernel_size, p_dropout=p_dropout, causal=True)) - self.norm_layers_2.append(LayerNorm(hidden_channels)) - - def forward(self, x, x_mask, h, h_mask): - """ - x: decoder input - h: encoder output - """ - self_attn_mask = commons.subsequent_mask(x_mask.size(2)).to(device=x.device, dtype=x.dtype) - encdec_attn_mask = h_mask.unsqueeze(2) * x_mask.unsqueeze(-1) - x = x * x_mask - for i in range(self.n_layers): - y = self.self_attn_layers[i](x, 
x, self_attn_mask) - y = self.drop(y) - x = self.norm_layers_0[i](x + y) - - y = self.encdec_attn_layers[i](x, h, encdec_attn_mask) - y = self.drop(y) - x = self.norm_layers_1[i](x + y) - - y = self.ffn_layers[i](x, x_mask) - y = self.drop(y) - x = self.norm_layers_2[i](x + y) - x = x * x_mask - return x - - -class MultiHeadAttention(nn.Module): - def __init__(self, channels, out_channels, n_heads, p_dropout=0., window_size=None, heads_share=True, block_length=None, proximal_bias=False, proximal_init=False): - super().__init__() - assert channels % n_heads == 0 - - self.channels = channels - self.out_channels = out_channels - self.n_heads = n_heads - self.p_dropout = p_dropout - self.window_size = window_size - self.heads_share = heads_share - self.block_length = block_length - self.proximal_bias = proximal_bias - self.proximal_init = proximal_init - self.attn = None - - self.k_channels = channels // n_heads - self.conv_q = nn.Conv1d(channels, channels, 1) - self.conv_k = nn.Conv1d(channels, channels, 1) - self.conv_v = nn.Conv1d(channels, channels, 1) - self.conv_o = nn.Conv1d(channels, out_channels, 1) - self.drop = nn.Dropout(p_dropout) - - if window_size is not None: - n_heads_rel = 1 if heads_share else n_heads - rel_stddev = self.k_channels**-0.5 - self.emb_rel_k = nn.Parameter(torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) * rel_stddev) - self.emb_rel_v = nn.Parameter(torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) * rel_stddev) - - nn.init.xavier_uniform_(self.conv_q.weight) - nn.init.xavier_uniform_(self.conv_k.weight) - nn.init.xavier_uniform_(self.conv_v.weight) - if proximal_init: - with torch.no_grad(): - self.conv_k.weight.copy_(self.conv_q.weight) - self.conv_k.bias.copy_(self.conv_q.bias) - - def forward(self, x, c, attn_mask=None): - q = self.conv_q(x) - k = self.conv_k(c) - v = self.conv_v(c) - - x, self.attn = self.attention(q, k, v, mask=attn_mask) - - x = self.conv_o(x) - return x - - def attention(self, query, key, value, mask=None): - # reshape [b, d, t] -> [b, n_h, t, d_k] - b, d, t_s, t_t = (*key.size(), query.size(2)) - query = query.view(b, self.n_heads, self.k_channels, t_t).transpose(2, 3) - key = key.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3) - value = value.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3) - - scores = torch.matmul(query / math.sqrt(self.k_channels), key.transpose(-2, -1)) - if self.window_size is not None: - assert t_s == t_t, "Relative attention is only available for self-attention." - key_relative_embeddings = self._get_relative_embeddings(self.emb_rel_k, t_s) - rel_logits = self._matmul_with_relative_keys(query /math.sqrt(self.k_channels), key_relative_embeddings) - scores_local = self._relative_position_to_absolute_position(rel_logits) - scores = scores + scores_local - if self.proximal_bias: - assert t_s == t_t, "Proximal bias is only available for self-attention." - scores = scores + self._attention_bias_proximal(t_s).to(device=scores.device, dtype=scores.dtype) - if mask is not None: - scores = scores.masked_fill(mask == 0, -1e4) - if self.block_length is not None: - assert t_s == t_t, "Local attention is only available for self-attention." 
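# Local (block) attention: the mask below keeps only scores inside a band of
# width 2 * block_length + 1 around the diagonal (|query_pos - key_pos| <=
# block_length) and fills everything outside the band with a large negative
# value before the softmax, so each position attends only to its local neighbourhood.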
- block_mask = torch.ones_like(scores).triu(-self.block_length).tril(self.block_length) - scores = scores.masked_fill(block_mask == 0, -1e4) - p_attn = F.softmax(scores, dim=-1) # [b, n_h, t_t, t_s] - p_attn = self.drop(p_attn) - output = torch.matmul(p_attn, value) - if self.window_size is not None: - relative_weights = self._absolute_position_to_relative_position(p_attn) - value_relative_embeddings = self._get_relative_embeddings(self.emb_rel_v, t_s) - output = output + self._matmul_with_relative_values(relative_weights, value_relative_embeddings) - output = output.transpose(2, 3).contiguous().view(b, d, t_t) # [b, n_h, t_t, d_k] -> [b, d, t_t] - return output, p_attn - - def _matmul_with_relative_values(self, x, y): - """ - x: [b, h, l, m] - y: [h or 1, m, d] - ret: [b, h, l, d] - """ - ret = torch.matmul(x, y.unsqueeze(0)) - return ret - - def _matmul_with_relative_keys(self, x, y): - """ - x: [b, h, l, d] - y: [h or 1, m, d] - ret: [b, h, l, m] - """ - ret = torch.matmul(x, y.unsqueeze(0).transpose(-2, -1)) - return ret - - def _get_relative_embeddings(self, relative_embeddings, length): - max_relative_position = 2 * self.window_size + 1 - # Pad first before slice to avoid using cond ops. - pad_length = max(length - (self.window_size + 1), 0) - slice_start_position = max((self.window_size + 1) - length, 0) - slice_end_position = slice_start_position + 2 * length - 1 - if pad_length > 0: - padded_relative_embeddings = F.pad( - relative_embeddings, - commons.convert_pad_shape([[0, 0], [pad_length, pad_length], [0, 0]])) - else: - padded_relative_embeddings = relative_embeddings - used_relative_embeddings = padded_relative_embeddings[:,slice_start_position:slice_end_position] - return used_relative_embeddings - - def _relative_position_to_absolute_position(self, x): - """ - x: [b, h, l, 2*l-1] - ret: [b, h, l, l] - """ - batch, heads, length, _ = x.size() - # Concat columns of pad to shift from relative to absolute indexing. - x = F.pad(x, commons.convert_pad_shape([[0,0],[0,0],[0,0],[0,1]])) - - # Concat extra elements so to add up to shape (len+1, 2*len-1). - x_flat = x.view([batch, heads, length * 2 * length]) - x_flat = F.pad(x_flat, commons.convert_pad_shape([[0,0],[0,0],[0,length-1]])) - - # Reshape and slice out the padded elements. - x_final = x_flat.view([batch, heads, length+1, 2*length-1])[:, :, :length, length-1:] - return x_final - - def _absolute_position_to_relative_position(self, x): - """ - x: [b, h, l, l] - ret: [b, h, l, 2*l-1] - """ - batch, heads, length, _ = x.size() - # padd along column - x = F.pad(x, commons.convert_pad_shape([[0, 0], [0, 0], [0, 0], [0, length-1]])) - x_flat = x.view([batch, heads, length**2 + length*(length -1)]) - # add 0's in the beginning that will skew the elements after reshape - x_flat = F.pad(x_flat, commons.convert_pad_shape([[0, 0], [0, 0], [length, 0]])) - x_final = x_flat.view([batch, heads, length, 2*length])[:,:,:,1:] - return x_final - - def _attention_bias_proximal(self, length): - """Bias for self-attention to encourage attention to close positions. - Args: - length: an integer scalar. 
- Returns: - a Tensor with shape [1, 1, length, length] - """ - r = torch.arange(length, dtype=torch.float32) - diff = torch.unsqueeze(r, 0) - torch.unsqueeze(r, 1) - return torch.unsqueeze(torch.unsqueeze(-torch.log1p(torch.abs(diff)), 0), 0) - - -class FFN(nn.Module): - def __init__(self, in_channels, out_channels, filter_channels, kernel_size, p_dropout=0., activation=None, causal=False): - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.activation = activation - self.causal = causal - - if causal: - self.padding = self._causal_padding - else: - self.padding = self._same_padding - - self.conv_1 = nn.Conv1d(in_channels, filter_channels, kernel_size) - self.conv_2 = nn.Conv1d(filter_channels, out_channels, kernel_size) - self.drop = nn.Dropout(p_dropout) - - def forward(self, x, x_mask): - x = self.conv_1(self.padding(x * x_mask)) - if self.activation == "gelu": - x = x * torch.sigmoid(1.702 * x) - else: - x = torch.relu(x) - x = self.drop(x) - x = self.conv_2(self.padding(x * x_mask)) - return x * x_mask - - def _causal_padding(self, x): - if self.kernel_size == 1: - return x - pad_l = self.kernel_size - 1 - pad_r = 0 - padding = [[0, 0], [0, 0], [pad_l, pad_r]] - x = F.pad(x, commons.convert_pad_shape(padding)) - return x - - def _same_padding(self, x): - if self.kernel_size == 1: - return x - pad_l = (self.kernel_size - 1) // 2 - pad_r = self.kernel_size // 2 - padding = [[0, 0], [0, 0], [pad_l, pad_r]] - x = F.pad(x, commons.convert_pad_shape(padding)) - return x diff --git a/spaces/Marne/MockingBird/app.py b/spaces/Marne/MockingBird/app.py deleted file mode 100644 index 224162a824377e751663b85ec7e415f05998253b..0000000000000000000000000000000000000000 --- a/spaces/Marne/MockingBird/app.py +++ /dev/null @@ -1,86 +0,0 @@ -import os -import httpx -import torch -import gradio as gr -from tempfile import NamedTemporaryFile -from pathlib import Path - -from mockingbirdforuse import MockingBird - - -mockingbird = MockingBird() -mockingbird_path = Path(os.path.dirname(__file__)) / "data" -base_url = "https://al.smoe.top/d/Home/source/mockingbird/" - -for sy in ["encoder.pt", "g_hifigan.pt", "wavernn.pt"]: - if not os.path.exists(os.path.join(mockingbird_path, sy)): - torch.hub.download_url_to_file(f"{base_url}/{sy}", mockingbird_path / sy) - -for model in ["azusa", "nanmei", "ltyai", "tianyi"]: - model_path = mockingbird_path / model - model_path.mkdir(parents=True, exist_ok=True) - for file_name in ["record.wav", f"{model}.pt"]: - if not os.path.exists(os.path.join(model_path, file_name)): - torch.hub.download_url_to_file( - f"{base_url}/{model}/{file_name}", model_path / file_name - ) - -mockingbird.load_model( - Path(os.path.join(mockingbird_path, "encoder.pt")), - Path(os.path.join(mockingbird_path, "g_hifigan.pt")), - Path(os.path.join(mockingbird_path, "wavernn.pt")), -) - - -def inference( - text: str, - model_name: str, - vocoder_type: str = "HifiGan", - style_idx: int = 0, - min_stop_token: int = 9, - steps: int = 2000, -): - model_path = mockingbird_path / model_name - mockingbird.set_synthesizer(Path(os.path.join(model_path, f"{model_name}.pt"))) - fd = NamedTemporaryFile(suffix=".wav", delete=False) - record = mockingbird.synthesize( - text=str(text), - input_wav=model_path / "record.wav", - vocoder_type=vocoder_type, - style_idx=style_idx, - min_stop_token=min_stop_token, - steps=steps, - ) - with open(fd.name, "wb") as file: - 
file.write(record.getvalue()) - return fd.name - - -title = "MockingBird" -description = "🚀AI拟声: 5秒内克隆您的声音并生成任意语音内容 Clone a voice in 5 seconds to generate arbitrary speech in real-time" -article = "Github Repo

    " - -gr.Interface( - inference, - [ - gr.Textbox(label="Input"), - gr.Radio( - ["azusa", "nanmei", "ltyai", "tianyi"], - label="model type", - value="azusa", - ), - gr.Radio( - ["HifiGan", "WaveRNN"], - label="Vocoder type", - value="HifiGan", - ), - gr.Slider(minimum=-1, maximum=9, step=1, label="style idx", value=0), - gr.Slider(minimum=3, maximum=9, label="min stop token", value=9), - gr.Slider(minimum=200, maximum=2000, label="steps", value=2000), - ], - gr.Audio(type="filepath", label="Output"), - title=title, - description=description, - article=article, - examples=[["阿梓不是你的电子播放器", "azusa", "HifiGan", 0, 9, 2000], ["不是", "nanmei", "HifiGan", 0, 9, 2000]], -).launch() diff --git a/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/mmcv/video/__init__.py b/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/mmcv/video/__init__.py deleted file mode 100644 index 73199b01dec52820dc6ca0139903536344d5a1eb..0000000000000000000000000000000000000000 --- a/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/mmcv/video/__init__.py +++ /dev/null @@ -1,11 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from .io import Cache, VideoReader, frames2video -from .optflow import (dequantize_flow, flow_from_bytes, flow_warp, flowread, - flowwrite, quantize_flow, sparse_flow_from_bytes) -from .processing import concat_video, convert_video, cut_video, resize_video - -__all__ = [ - 'Cache', 'VideoReader', 'frames2video', 'convert_video', 'resize_video', - 'cut_video', 'concat_video', 'flowread', 'flowwrite', 'quantize_flow', - 'dequantize_flow', 'flow_warp', 'flow_from_bytes', 'sparse_flow_from_bytes' -] diff --git a/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/mmcv/visualization/color.py b/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/mmcv/visualization/color.py deleted file mode 100644 index 9041e0e6b7581c3356795d6a3c5e84667c88f025..0000000000000000000000000000000000000000 --- a/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/mmcv/visualization/color.py +++ /dev/null @@ -1,51 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from enum import Enum - -import numpy as np - -from annotator.uniformer.mmcv.utils import is_str - - -class Color(Enum): - """An enum that defines common colors. - - Contains red, green, blue, cyan, yellow, magenta, white and black. - """ - red = (0, 0, 255) - green = (0, 255, 0) - blue = (255, 0, 0) - cyan = (255, 255, 0) - yellow = (0, 255, 255) - magenta = (255, 0, 255) - white = (255, 255, 255) - black = (0, 0, 0) - - -def color_val(color): - """Convert various input to color tuples. - - Args: - color (:obj:`Color`/str/tuple/int/ndarray): Color inputs - - Returns: - tuple[int]: A tuple of 3 integers indicating BGR channels. 
- """ - if is_str(color): - return Color[color].value - elif isinstance(color, Color): - return color.value - elif isinstance(color, tuple): - assert len(color) == 3 - for channel in color: - assert 0 <= channel <= 255 - return color - elif isinstance(color, int): - assert 0 <= color <= 255 - return color, color, color - elif isinstance(color, np.ndarray): - assert color.ndim == 1 and color.size == 3 - assert np.all((color >= 0) & (color <= 255)) - color = color.astype(np.uint8) - return tuple(color) - else: - raise TypeError(f'Invalid type for color: {type(color)}') diff --git a/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/mmseg/datasets/dataset_wrappers.py b/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/mmseg/datasets/dataset_wrappers.py deleted file mode 100644 index d6a5e957ec3b44465432617cf6e8f0b86a8a5efa..0000000000000000000000000000000000000000 --- a/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/mmseg/datasets/dataset_wrappers.py +++ /dev/null @@ -1,50 +0,0 @@ -from torch.utils.data.dataset import ConcatDataset as _ConcatDataset - -from .builder import DATASETS - - -@DATASETS.register_module() -class ConcatDataset(_ConcatDataset): - """A wrapper of concatenated dataset. - - Same as :obj:`torch.utils.data.dataset.ConcatDataset`, but - concat the group flag for image aspect ratio. - - Args: - datasets (list[:obj:`Dataset`]): A list of datasets. - """ - - def __init__(self, datasets): - super(ConcatDataset, self).__init__(datasets) - self.CLASSES = datasets[0].CLASSES - self.PALETTE = datasets[0].PALETTE - - -@DATASETS.register_module() -class RepeatDataset(object): - """A wrapper of repeated dataset. - - The length of repeated dataset will be `times` larger than the original - dataset. This is useful when the data loading time is long but the dataset - is small. Using RepeatDataset can reduce the data loading time between - epochs. - - Args: - dataset (:obj:`Dataset`): The dataset to be repeated. - times (int): Repeat times. 
- """ - - def __init__(self, dataset, times): - self.dataset = dataset - self.times = times - self.CLASSES = dataset.CLASSES - self.PALETTE = dataset.PALETTE - self._ori_len = len(self.dataset) - - def __getitem__(self, idx): - """Get item from original dataset.""" - return self.dataset[idx % self._ori_len] - - def __len__(self): - """The length is multiplied by ``times``""" - return self.times * self._ori_len diff --git a/spaces/MirageML/sjc/adapt_gddpm.py b/spaces/MirageML/sjc/adapt_gddpm.py deleted file mode 100644 index f71db9e6f8e3dff6906f690046dec4e33a2e5ea2..0000000000000000000000000000000000000000 --- a/spaces/MirageML/sjc/adapt_gddpm.py +++ /dev/null @@ -1,562 +0,0 @@ -from pathlib import Path -from math import sin, pi, sqrt -from functools import partial - -import numpy as np -import torch -import torch.nn as nn -import torch.nn.functional as F - -from easydict import EasyDict -from guided_diffusion.script_util import ( - create_model_and_diffusion, - model_and_diffusion_defaults, - - NUM_CLASSES, - create_classifier, - classifier_defaults, - - sr_create_model_and_diffusion, - sr_model_and_diffusion_defaults, -) - -from adapt import ScoreAdapter - -from my.registry import Registry - -PRETRAINED_REGISTRY = Registry("pretrained") - - -device = torch.device("cuda") - - -def load_ckpt(path, **kwargs): - # with bf.BlobFile(path, "rb") as f: - # data = f.read() - return torch.load(path, **kwargs) - - -def pick_out_cfgs(src, target_ks): - return {k: src[k] for k in target_ks} - - -@PRETRAINED_REGISTRY.register() -def m_imgnet_64(): - return dict( - attention_resolutions="32,16,8", - class_cond=True, - diffusion_steps=1000, - dropout=0.1, - image_size=64, - learn_sigma=True, - noise_schedule="cosine", - num_channels=192, - num_head_channels=64, - num_res_blocks=3, - resblock_updown=True, - use_new_attention_order=True, - use_fp16=True, - use_scale_shift_norm=True, - - classifier_depth=4, - - classifier_scale=1.0, - model_path="models/64x64_diffusion.pt", - classifier_path="models/64x64_classifier.pt", - ) - - -@PRETRAINED_REGISTRY.register() -def m_imgnet_128(): - return dict( - attention_resolutions="32,16,8", - class_cond=True, - diffusion_steps=1000, - image_size=128, - learn_sigma=True, - noise_schedule="linear", - num_channels=256, - num_heads=4, - num_res_blocks=2, - resblock_updown=True, - use_fp16=True, - use_scale_shift_norm=True, - - classifier_scale=0.5, - model_path="models/128x128_diffusion.pt", - classifier_path="models/128x128_classifier.pt", - ) - - -@PRETRAINED_REGISTRY.register() -def m_imgnet_256(): - return dict( - attention_resolutions="32,16,8", - class_cond=True, - diffusion_steps=1000, - image_size=256, - learn_sigma=True, - noise_schedule="linear", - num_channels=256, - num_head_channels=64, - num_res_blocks=2, - resblock_updown=True, - use_fp16=True, - use_scale_shift_norm=True, - - classifier_scale=1.0, - model_path="models/256x256_diffusion.pt", - classifier_path="models/256x256_classifier.pt" - ) - - -@PRETRAINED_REGISTRY.register() -def m_imgnet_256_uncond(): - return dict( - attention_resolutions="32,16,8", - class_cond=False, - diffusion_steps=1000, - image_size=256, - learn_sigma=True, - noise_schedule="linear", - num_channels=256, - num_head_channels=64, - num_res_blocks=2, - resblock_updown=True, - use_fp16=True, - use_scale_shift_norm=True, - - classifier_scale=10.0, - model_path="models/256x256_diffusion_uncond.pt", - classifier_path="models/256x256_classifier.pt", - ) - - -@PRETRAINED_REGISTRY.register() -def m_imgnet_512(): - return dict( - 
attention_resolutions="32,16,8", - class_cond=True, - diffusion_steps=1000, - image_size=512, - learn_sigma=True, - noise_schedule="linear", - num_channels=256, - num_head_channels=64, - num_res_blocks=2, - resblock_updown=True, - use_fp16=False, - use_scale_shift_norm=True, - - classifier_scale=4.0, - model_path="models/512x512_diffusion.pt", - classifier_path="models/512x512_classifier.pt" - ) - - -@PRETRAINED_REGISTRY.register() -def m_imgnet_64_256(base_samples="64_samples.npz"): - return dict( - attention_resolutions="32,16,8", - class_cond=True, - diffusion_steps=1000, - large_size=256, - small_size=64, - learn_sigma=True, - noise_schedule="linear", - num_channels=192, - num_heads=4, - num_res_blocks=2, - resblock_updown=True, - use_fp16=True, - use_scale_shift_norm=True, - - model_path="models/64_256_upsampler.pt", - - base_samples=base_samples, - ) - - -@PRETRAINED_REGISTRY.register() -def m_imgnet_128_512(base_samples="128_samples.npz",): - return dict( - attention_resolutions="32,16", - class_cond=True, - diffusion_steps=1000, - large_size=512, - small_size=128, - learn_sigma=True, - noise_schedule="linear", - num_channels=192, - num_head_channels=64, - num_res_blocks=2, - resblock_updown=True, - use_fp16=True, - use_scale_shift_norm=True, - - model_path="models/128_512_upsampler.pt", - - base_samples=base_samples, - ) - - -@PRETRAINED_REGISTRY.register() -def m_lsun_256(category="bedroom"): - return dict( - attention_resolutions="32,16,8", - class_cond=False, - diffusion_steps=1000, - dropout=0.1, - image_size=256, - learn_sigma=True, - noise_schedule="linear", - num_channels=256, - num_head_channels=64, - num_res_blocks=2, - resblock_updown=True, - use_fp16=True, - use_scale_shift_norm=True, - - model_path=f"models/lsun_{category}.pt" - ) - - -def img_gen(specific_cfgs, num_samples=16, batch_size=16, load_only=False, ckpt_root=Path("")): - cfgs = EasyDict( - clip_denoised=True, - num_samples=num_samples, - batch_size=batch_size, - use_ddim=False, - model_path="", - classifier_path="", - classifier_scale=1.0, - ) - cfgs.update(model_and_diffusion_defaults()) - cfgs.update(classifier_defaults()) - cfgs.update(specific_cfgs) - - use_classifier_guidance = bool(cfgs.classifier_path) - class_aware = cfgs.class_cond or use_classifier_guidance - - model, diffusion = create_model_and_diffusion( - **pick_out_cfgs(cfgs, model_and_diffusion_defaults().keys()) - ) - model.load_state_dict( - load_ckpt(str(ckpt_root / cfgs.model_path), map_location="cpu") - ) - model.to(device) - if cfgs.use_fp16: - model.convert_to_fp16() - model.eval() - - def model_fn(x, t, y=None): - return model(x, t, y if cfgs.class_cond else None) - - classifier = None - cond_fn = None - if use_classifier_guidance: - classifier = create_classifier( - **pick_out_cfgs(cfgs, classifier_defaults().keys()) - ) - classifier.load_state_dict( - load_ckpt(str(ckpt_root / cfgs.classifier_path), map_location="cpu") - ) - classifier.to(device) - if cfgs.classifier_use_fp16: - classifier.convert_to_fp16() - classifier.eval() - - def cond_fn(x, t, y=None): - assert y is not None - with torch.enable_grad(): - x_in = x.detach().requires_grad_(True) - logits = classifier(x_in, t) - log_probs = F.log_softmax(logits, dim=-1) - selected = log_probs[range(len(logits)), y.view(-1)] - return torch.autograd.grad(selected.sum(), x_in)[0] * cfgs.classifier_scale - - if load_only: - return model, classifier - - all_images = [] - all_labels = [] - - while len(all_images) * cfgs.batch_size < cfgs.num_samples: - model_kwargs = {} - - if 
class_aware: - classes = torch.randint( - low=0, high=NUM_CLASSES, size=(cfgs.batch_size,), device=device - ) - model_kwargs["y"] = classes - - sample_fn = ( - diffusion.p_sample_loop if not cfgs.use_ddim else diffusion.ddim_sample_loop - ) - sample = sample_fn( - model_fn, - (cfgs.batch_size, 3, cfgs.image_size, cfgs.image_size), - clip_denoised=cfgs.clip_denoised, - model_kwargs=model_kwargs, - cond_fn=cond_fn, - device=device, - progress=True - ) - sample = ((sample + 1) * 127.5).clamp(0, 255).to(torch.uint8) - sample = sample.permute(0, 2, 3, 1) - sample = sample.contiguous() - - all_images.append(sample.cpu().numpy()) - if class_aware: - all_labels.append(classes.cpu().numpy()) - - arr = np.concatenate(all_images, axis=0) - arr = arr[:cfgs.num_samples] - - if class_aware: - all_labels = np.concatenate(all_labels, axis=0) - all_labels = all_labels[:cfgs.num_samples] - - shape_str = "x".join([str(x) for x in arr.shape]) - out_path = Path("./out") / f"samples_{shape_str}.npz" - np.savez(out_path, arr, all_labels) - - -def img_upsamp(specific_cfgs, num_samples=16, batch_size=16, load_only=False): - """note that here the ckpt root is not configured properly; will break but easy fix""" - cfgs = EasyDict( - clip_denoised=True, - num_samples=num_samples, - batch_size=batch_size, - use_ddim=False, - base_samples="", - model_path="", - ) - cfgs.update(sr_model_and_diffusion_defaults()) - cfgs.update(specific_cfgs) - - model, diffusion = sr_create_model_and_diffusion( - **pick_out_cfgs(cfgs, sr_model_and_diffusion_defaults().keys()) - ) - model.load_state_dict(load_ckpt(cfgs.model_path, map_location="cpu")) - model.to(device) - if cfgs.use_fp16: - model.convert_to_fp16() - model.eval() - - if load_only: - return model - - data = load_low_res_samples( - cfgs.base_samples, cfgs.batch_size, cfgs.class_cond - ) - - all_images = [] - while len(all_images) * cfgs.batch_size < cfgs.num_samples: - model_kwargs = next(data) - model_kwargs = {k: v.to(device) for k, v in model_kwargs.items()} - samples = diffusion.p_sample_loop( - model, - (cfgs.batch_size, 3, cfgs.large_size, cfgs.large_size), - clip_denoised=cfgs.clip_denoised, - model_kwargs=model_kwargs, - progress=True - ) - samples = ((samples + 1) * 127.5).clamp(0, 255).to(torch.uint8) - samples = samples.permute(0, 2, 3, 1) - samples = samples.contiguous() - - all_images.append(samples.cpu().numpy()) - - arr = np.concatenate(all_images, axis=0) - arr = arr[: cfgs.num_samples] - - shape_str = "x".join([str(x) for x in arr.shape]) - out_path = Path("./out") / f"samples_{shape_str}.npz" - np.savez(out_path, arr) - - -def load_low_res_samples(base_samples, batch_size, class_cond): - obj = np.load(base_samples) - image_arr = obj["arr_0"] - if class_cond: - label_arr = obj["arr_1"] - - buffer = [] - label_buffer = [] - while True: - for i in range(len(image_arr)): - buffer.append(image_arr[i]) - if class_cond: - label_buffer.append(label_arr[i]) - - if len(buffer) == batch_size: - batch = torch.from_numpy(np.stack(buffer)).float() - batch = batch / 127.5 - 1.0 - batch = batch.permute(0, 3, 1, 2) - res = {} - res["low_res"] = batch - if class_cond: - res["y"] = torch.from_numpy(np.stack(label_buffer)) - yield res - buffer, label_buffer = [], [] - - -def class_cond_info(imgnet_cat): - - def rand_cond_fn(batch_size): - cats = torch.randint( - low=0, high=NUM_CLASSES, size=(batch_size,), device=device - ) - return {"y": cats} - - def class_specific_cond(batch_size): - cats = torch.tensor([imgnet_cat, ] * batch_size, device=device) - return {"y": cats} - - if 
imgnet_cat == -1: - return rand_cond_fn - else: - return class_specific_cond - - -def _sqrt(x): - if isinstance(x, float): - return sqrt(x) - else: - assert isinstance(x, torch.Tensor) - return torch.sqrt(x) - - -class GuidedDDPM(ScoreAdapter): - def __init__(self, model, lsun_cat, imgnet_cat): - print(PRETRAINED_REGISTRY) - cfgs = PRETRAINED_REGISTRY.get(model)( - **({"category": lsun_cat} if model.startswith("m_lsun") else {}) - ) - - self.unet, self.classifier = img_gen( - cfgs, load_only=True, ckpt_root=self.checkpoint_root() / "guided_ddpm" - ) - - H, W = cfgs['image_size'], cfgs['image_size'] - self._data_shape = (3, H, W) - - if cfgs['class_cond'] or (self.classifier is not None): - cond_func = class_cond_info(imgnet_cat) - else: - cond_func = lambda *args, **kwargs: {} - self.cond_func = cond_func - - self._unet_is_cond = bool(cfgs['class_cond']) - - noise_schedule = cfgs['noise_schedule'] - assert noise_schedule in ("linear", "cosine") - self.M = 1000 - if noise_schedule == "linear": - self.us = self.linear_us(self.M) - self._σ_min = 0.01 - else: - self.us = self.cosine_us(self.M) - self._σ_min = 0.0064 - self.noise_schedule = noise_schedule - - self._device = next(self.unet.parameters()).device - - def data_shape(self): - return self._data_shape - - @property - def σ_max(self): - return self.us[0] - - @property - def σ_min(self): - return self.us[-1] - - @torch.no_grad() - def denoise(self, xs, σ, **model_kwargs): - N = xs.shape[0] - cond_t, σ = self.time_cond_vec(N, σ) - output = self.unet( - xs / _sqrt(1 + σ**2), cond_t, **model_kwargs - ) - # not using the var pred - n_hat = torch.split(output, xs.shape[1], dim=1)[0] - Ds = xs - σ * n_hat - return Ds - - def cond_info(self, batch_size): - return self.cond_func(batch_size) - - def unet_is_cond(self): - return self._unet_is_cond - - def use_cls_guidance(self): - return (self.classifier is not None) - - @torch.no_grad() - def classifier_grad(self, xs, σ, ys): - N = xs.shape[0] - cond_t, σ = self.time_cond_vec(N, σ) - with torch.enable_grad(): - x_in = xs.detach().requires_grad_(True) - logits = self.classifier(x_in, cond_t) - log_probs = F.log_softmax(logits, dim=-1) - selected = log_probs[range(len(logits)), ys.view(-1)] - grad = torch.autograd.grad(selected.sum(), x_in)[0] - - grad = grad * (1 / sqrt(1 + σ**2)) - return grad - - def snap_t_to_nearest_tick(self, t): - j = np.abs(t - self.us).argmin() - return self.us[j], j - - def time_cond_vec(self, N, σ): - if isinstance(σ, float): - σ, j = self.snap_t_to_nearest_tick(σ) # σ might change due to snapping - cond_t = (self.M - 1) - j - cond_t = torch.tensor([cond_t] * N, device=self.device) - return cond_t, σ - else: - assert isinstance(σ, torch.Tensor) - σ = σ.reshape(-1).cpu().numpy() - σs = [] - js = [] - for elem in σ: - _σ, _j = self.snap_t_to_nearest_tick(elem) - σs.append(_σ) - js.append((self.M - 1) - _j) - - cond_t = torch.tensor(js, device=self.device) - σs = torch.tensor(σs, device=self.device, dtype=torch.float32).reshape(-1, 1, 1, 1) - return cond_t, σs - - @staticmethod - def cosine_us(M=1000): - assert M == 1000 - - def α_bar(j): - return sin(pi / 2 * j / (M * (0.008 + 1))) ** 2 - - us = [0, ] - for j in reversed(range(0, M)): # [M-1, 0], inclusive - u_j = sqrt(((us[-1] ** 2) + 1) / (max(α_bar(j) / α_bar(j+1), 0.001)) - 1) - us.append(u_j) - - us = np.array(us) - us = us[1:] - us = us[::-1] - return us - - @staticmethod - def linear_us(M=1000): - assert M == 1000 - β_start = 0.0001 - β_end = 0.02 - βs = np.linspace(β_start, β_end, M, dtype=np.float64) - αs = 
np.cumprod(1 - βs) - us = np.sqrt((1 - αs) / αs) - us = us[::-1] - return us diff --git a/spaces/Mountchicken/MAERec-Gradio/mmocr/evaluation/evaluator/multi_datasets_evaluator.py b/spaces/Mountchicken/MAERec-Gradio/mmocr/evaluation/evaluator/multi_datasets_evaluator.py deleted file mode 100644 index f01aa70f645d5a9f61fe02386ff214dc72bcffb4..0000000000000000000000000000000000000000 --- a/spaces/Mountchicken/MAERec-Gradio/mmocr/evaluation/evaluator/multi_datasets_evaluator.py +++ /dev/null @@ -1,100 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import warnings -from collections import OrderedDict -from typing import Sequence, Union - -from mmengine.dist import (broadcast_object_list, collect_results, - is_main_process) -from mmengine.evaluator import BaseMetric, Evaluator -from mmengine.evaluator.metric import _to_cpu - -from mmocr.registry import EVALUATOR -from mmocr.utils.typing_utils import ConfigType - - -@EVALUATOR.register_module() -class MultiDatasetsEvaluator(Evaluator): - """Wrapper class to compose class: `ConcatDataset` and multiple - :class:`BaseMetric` instances. - The metrics will be evaluated on each dataset slice separately. The name of - the each metric is the concatenation of the dataset prefix, the metric - prefix and the key of metric - e.g. - `dataset_prefix/metric_prefix/accuracy`. - - Args: - metrics (dict or BaseMetric or Sequence): The config of metrics. - dataset_prefixes (Sequence[str]): The prefix of each dataset. The - length of this sequence should be the same as the length of the - datasets. - """ - - def __init__(self, metrics: Union[ConfigType, BaseMetric, Sequence], - dataset_prefixes: Sequence[str]) -> None: - super().__init__(metrics) - self.dataset_prefixes = dataset_prefixes - - def evaluate(self, size: int) -> dict: - """Invoke ``evaluate`` method of each metric and collect the metrics - dictionary. - - Args: - size (int): Length of the entire validation dataset. When batch - size > 1, the dataloader may pad some data samples to make - sure all ranks have the same length of dataset slice. The - ``collect_results`` function will drop the padded data based on - this size. - - Returns: - dict: Evaluation results of all metrics. The keys are the names - of the metrics, and the values are corresponding results. - """ - metrics_results = OrderedDict() - dataset_slices = self.dataset_meta.get('cumulative_sizes', [size]) - assert len(dataset_slices) == len(self.dataset_prefixes) - for metric in self.metrics: - if len(metric.results) == 0: - warnings.warn( - f'{metric.__class__.__name__} got empty `self.results`.' - 'Please ensure that the processed results are properly ' - 'added into `self.results` in `process` method.') - - results = collect_results(metric.results, size, - metric.collect_device) - - if is_main_process(): - # cast all tensors in results list to cpu - results = _to_cpu(results) - for start, end, dataset_prefix in zip([0] + - dataset_slices[:-1], - dataset_slices, - self.dataset_prefixes): - metric_results = metric.compute_metrics( - results[start:end]) # type: ignore - # Add prefix to metric names - - if metric.prefix: - final_prefix = '/'.join( - (dataset_prefix, metric.prefix)) - else: - final_prefix = dataset_prefix - metric_results = { - '/'.join((final_prefix, k)): v - for k, v in metric_results.items() - } - - # Check metric name conflicts - for name in metric_results.keys(): - if name in metrics_results: - raise ValueError( - 'There are multiple evaluation results with ' - f'the same metric name {name}. 
Please make ' - 'sure all metrics have different prefixes.') - metrics_results.update(metric_results) - metric.results.clear() - if is_main_process(): - metrics_results = [metrics_results] - else: - metrics_results = [None] # type: ignore - broadcast_object_list(metrics_results) - - return metrics_results[0] diff --git a/spaces/Mountchicken/MAERec-Gradio/mmocr/models/textrecog/recognizers/abinet.py b/spaces/Mountchicken/MAERec-Gradio/mmocr/models/textrecog/recognizers/abinet.py deleted file mode 100644 index f8ee3a5cafd021d6072d33b1648a9722a91bcf10..0000000000000000000000000000000000000000 --- a/spaces/Mountchicken/MAERec-Gradio/mmocr/models/textrecog/recognizers/abinet.py +++ /dev/null @@ -1,12 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from mmocr.registry import MODELS -from .encoder_decoder_recognizer import EncoderDecoderRecognizer - - -@MODELS.register_module() -class ABINet(EncoderDecoderRecognizer): - """Implementation of `Read Like Humans: Autonomous, Bidirectional and - Iterative LanguageModeling for Scene Text Recognition. - - `_ - """ diff --git a/spaces/Mountchicken/MAERec-Gradio/tools/dataset_converters/kie/closeset_to_openset.py b/spaces/Mountchicken/MAERec-Gradio/tools/dataset_converters/kie/closeset_to_openset.py deleted file mode 100644 index 2057e9797bd0586fd8820ef3ae161486bea22d32..0000000000000000000000000000000000000000 --- a/spaces/Mountchicken/MAERec-Gradio/tools/dataset_converters/kie/closeset_to_openset.py +++ /dev/null @@ -1,122 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import argparse -import json -from functools import partial - -import mmengine - -from mmocr.utils import list_from_file, list_to_file - - -def convert(closeset_line, merge_bg_others=False, ignore_idx=0, others_idx=25): - """Convert line-json str of closeset to line-json str of openset. Note that - this function is designed for closeset-wildreceipt to openset-wildreceipt. - It may not be suitable to your own dataset. - - Args: - closeset_line (str): The string to be deserialized to - the closeset dictionary object. - merge_bg_others (bool): If True, give the same label to "background" - class and "others" class. - ignore_idx (int): Index for ``ignore`` class. - others_idx (int): Index for ``others`` class. - """ - # Two labels at the same index of the following two lists - # make up a key-value pair. For example, in wildreceipt, - # closeset_key_inds[0] maps to "Store_name_key" - # and closeset_value_inds[0] maps to "Store_addr_value". 
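# For example, with the default others_idx = 25 the two lists below become
#   closeset_key_inds   = [2, 4, 6, ..., 24]
#   closeset_value_inds = [1, 3, 5, ..., 23]
# so key label 2*k is paired with value label 2*k - 1 (2 with 1, 4 with 3, ...).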
- closeset_key_inds = list(range(2, others_idx, 2)) - closeset_value_inds = list(range(1, others_idx, 2)) - - openset_node_label_mapping = {'bg': 0, 'key': 1, 'value': 2, 'others': 3} - if merge_bg_others: - openset_node_label_mapping['others'] = openset_node_label_mapping['bg'] - - closeset_obj = json.loads(closeset_line) - openset_obj = { - 'file_name': closeset_obj['file_name'], - 'height': closeset_obj['height'], - 'width': closeset_obj['width'], - 'annotations': [] - } - - edge_idx = 1 - label_to_edge = {} - for anno in closeset_obj['annotations']: - label = anno['label'] - if label == ignore_idx: - anno['label'] = openset_node_label_mapping['bg'] - anno['edge'] = edge_idx - edge_idx += 1 - elif label == others_idx: - anno['label'] = openset_node_label_mapping['others'] - anno['edge'] = edge_idx - edge_idx += 1 - else: - edge = label_to_edge.get(label, None) - if edge is not None: - anno['edge'] = edge - if label in closeset_key_inds: - anno['label'] = openset_node_label_mapping['key'] - elif label in closeset_value_inds: - anno['label'] = openset_node_label_mapping['value'] - else: - tmp_key = 'key' - if label in closeset_key_inds: - label_with_same_edge = closeset_value_inds[ - closeset_key_inds.index(label)] - elif label in closeset_value_inds: - label_with_same_edge = closeset_key_inds[ - closeset_value_inds.index(label)] - tmp_key = 'value' - edge_counterpart = label_to_edge.get(label_with_same_edge, - None) - if edge_counterpart is not None: - anno['edge'] = edge_counterpart - else: - anno['edge'] = edge_idx - edge_idx += 1 - anno['label'] = openset_node_label_mapping[tmp_key] - label_to_edge[label] = anno['edge'] - - openset_obj['annotations'] = closeset_obj['annotations'] - - return json.dumps(openset_obj, ensure_ascii=False) - - -def process(closeset_file, openset_file, merge_bg_others=False, n_proc=10): - closeset_lines = list_from_file(closeset_file) - - convert_func = partial(convert, merge_bg_others=merge_bg_others) - - openset_lines = mmengine.track_parallel_progress( - convert_func, closeset_lines, nproc=n_proc) - - list_to_file(openset_file, openset_lines) - - -def parse_args(): - parser = argparse.ArgumentParser() - parser.add_argument('in_file', help='Annotation file for closeset.') - parser.add_argument('out_file', help='Annotation file for openset.') - parser.add_argument( - '--merge', - action='store_true', - help='Merge two classes: "background" and "others" in closeset ' - 'to one class in openset.') - parser.add_argument( - '--n_proc', type=int, default=10, help='Number of process.') - args = parser.parse_args() - return args - - -def main(): - args = parse_args() - - process(args.in_file, args.out_file, args.merge, args.n_proc) - - print('finish') - - -if __name__ == '__main__': - main() diff --git a/spaces/MuGeminorum/insecta/khandy/boxes/boxes_convert.py b/spaces/MuGeminorum/insecta/khandy/boxes/boxes_convert.py deleted file mode 100644 index 6d1f6a955cf1aeadbe1c829220e8bb887ae850a1..0000000000000000000000000000000000000000 --- a/spaces/MuGeminorum/insecta/khandy/boxes/boxes_convert.py +++ /dev/null @@ -1,101 +0,0 @@ -import numpy as np - - -def convert_xyxy_to_xywh(boxes, copy=True): - """Convert [x_min, y_min, x_max, y_max] format to [x_min, y_min, width, height] format. - """ - if copy: - boxes = boxes.copy() - boxes[..., 2:4] -= boxes[..., 0:2] - return boxes - - -def convert_xywh_to_xyxy(boxes, copy=True): - """Convert [x_min, y_min, width, height] format to [x_min, y_min, x_max, y_max] format. 
- """ - if copy: - boxes = boxes.copy() - boxes[..., 2:4] += boxes[..., 0:2] - return boxes - - -def convert_xywh_to_cxcywh(boxes, copy=True): - """Convert [x_min, y_min, width, height] format to [cx, cy, width, height] format. - """ - if copy: - boxes = boxes.copy() - boxes[..., 0:2] += boxes[..., 2:4] * 0.5 - return boxes - - -def convert_cxcywh_to_xywh(boxes, copy=True): - """Convert [cx, cy, width, height] format to [x_min, y_min, width, height] format. - """ - if copy: - boxes = boxes.copy() - boxes[..., 0:2] -= boxes[..., 2:4] * 0.5 - return boxes - - -def convert_xyxy_to_cxcywh(boxes, copy=True): - """Convert [x_min, y_min, x_max, y_max] format to [cx, cy, width, height] format. - """ - if copy: - boxes = boxes.copy() - boxes[..., 2:4] -= boxes[..., 0:2] - boxes[..., 0:2] += boxes[..., 2:4] * 0.5 - return boxes - - -def convert_cxcywh_to_xyxy(boxes, copy=True): - """Convert [cx, cy, width, height] format to [x_min, y_min, x_max, y_max] format. - """ - if copy: - boxes = boxes.copy() - boxes[..., 0:2] -= boxes[..., 2:4] * 0.5 - boxes[..., 2:4] += boxes[..., 0:2] - return boxes - - -def convert_boxes_format(boxes, in_fmt, out_fmt, copy=True): - """Converts boxes from given in_fmt to out_fmt. - - Supported in_fmt and out_fmt are: - 'xyxy': boxes are represented via corners, x1, y1 being top left and x2, y2 being bottom right. - 'xywh' : boxes are represented via corner, width and height, x1, y2 being top left, w, h being width and height. - 'cxcywh' : boxes are represented via centre, width and height, cx, cy being center of box, w, h - being width and height. - - Args: - boxes: boxes which will be converted. - in_fmt (str): Input format of given boxes. Supported formats are ['xyxy', 'xywh', 'cxcywh']. - out_fmt (str): Output format of given boxes. Supported formats are ['xyxy', 'xywh', 'cxcywh'] - - Returns: - boxes: Boxes into converted format. - - References: - torchvision.ops.box_convert - """ - allowed_fmts = ("xyxy", "xywh", "cxcywh") - if in_fmt not in allowed_fmts or out_fmt not in allowed_fmts: - raise ValueError("Unsupported Bounding Box Conversions for given in_fmt and out_fmt") - if copy: - boxes = boxes.copy() - if in_fmt == out_fmt: - return boxes - - if (in_fmt, out_fmt) == ("xyxy", "xywh"): - boxes = convert_xyxy_to_xywh(boxes, copy=False) - elif (in_fmt, out_fmt) == ("xywh", "xyxy"): - boxes = convert_xywh_to_xyxy(boxes, copy=False) - elif (in_fmt, out_fmt) == ("xywh", "cxcywh"): - boxes = convert_xywh_to_cxcywh(boxes, copy=False) - elif (in_fmt, out_fmt) == ("cxcywh", "xywh"): - boxes = convert_cxcywh_to_xywh(boxes, copy=False) - elif (in_fmt, out_fmt) == ("xyxy", "cxcywh"): - boxes = convert_xyxy_to_cxcywh(boxes, copy=False) - elif (in_fmt, out_fmt) == ("cxcywh", "xyxy"): - boxes = convert_cxcywh_to_xyxy(boxes, copy=False) - return boxes - \ No newline at end of file diff --git a/spaces/MuGeminorum/insecta/khandy/image/misc.py b/spaces/MuGeminorum/insecta/khandy/image/misc.py deleted file mode 100644 index 8d6cc6e17cdf1ed4856368a6d588da498e758ea9..0000000000000000000000000000000000000000 --- a/spaces/MuGeminorum/insecta/khandy/image/misc.py +++ /dev/null @@ -1,329 +0,0 @@ -import os -import imghdr -import numbers -import warnings -from io import BytesIO - -import cv2 -import khandy -import numpy as np -from PIL import Image - - -def imread(file_or_buffer, flags=-1): - """Improvement on cv2.imread, make it support filename including chinese character. 
- """ - try: - if isinstance(file_or_buffer, bytes): - return cv2.imdecode(np.frombuffer(file_or_buffer, dtype=np.uint8), flags) - else: - # support type: file or str or Path - return cv2.imdecode(np.fromfile(file_or_buffer, dtype=np.uint8), flags) - except Exception as e: - print(e) - return None - - -def imread_cv(file_or_buffer, flags=-1): - warnings.warn('khandy.imread_cv will be deprecated, use khandy.imread instead!') - return imread(file_or_buffer, flags) - - -def imwrite(filename, image, params=None): - """Improvement on cv2.imwrite, make it support filename including chinese character. - """ - cv2.imencode(os.path.splitext(filename)[-1], image, params)[1].tofile(filename) - - -def imwrite_cv(filename, image, params=None): - warnings.warn('khandy.imwrite_cv will be deprecated, use khandy.imwrite instead!') - return imwrite(filename, image, params) - - -def imread_pil(file_or_buffer, to_mode=None): - """Improvement on Image.open to avoid ResourceWarning. - """ - try: - if isinstance(file_or_buffer, bytes): - buffer = BytesIO() - buffer.write(file_or_buffer) - buffer.seek(0) - file_or_buffer = buffer - - if hasattr(file_or_buffer, 'read'): - image = Image.open(file_or_buffer) - if to_mode is not None: - image = image.convert(to_mode) - else: - # open path as file to avoid ResourceWarning (https://github.com/python-pillow/Pillow/issues/835) - with open(file_or_buffer, 'rb') as f: - image = Image.open(f) - # If convert outside with statement, will raise "seek of closed file" as - # https://github.com/microsoft/Swin-Transformer/issues/66 - if to_mode is not None: - image = image.convert(to_mode) - return image - except Exception as e: - print(e) - return None - - -def imwrite_bytes(filename, image_bytes: bytes, update_extension: bool = True): - """Write image bytes to file. - - Args: - filename: str - filename which image_bytes is written into. - image_bytes: bytes - image content to be written. - update_extension: bool - whether update extension according to image_bytes or not. - the cost of update extension is smaller than update image format. - """ - extension = imghdr.what('', image_bytes) - file_extension = khandy.get_path_extension(filename) - # imghdr.what fails to determine image format sometimes! - # so when its return value is None, never update extension. - if extension is None: - image = cv2.imdecode(np.frombuffer(image_bytes, np.uint8), -1) - image_bytes = cv2.imencode(file_extension, image)[1] - elif (extension.lower() != file_extension.lower()[1:]): - if update_extension: - filename = khandy.replace_path_extension(filename, extension) - else: - image = cv2.imdecode(np.frombuffer(image_bytes, np.uint8), -1) - image_bytes = cv2.imencode(file_extension, image)[1] - - with open(filename, "wb") as f: - f.write(image_bytes) - return filename - - -def rescale_image(image: np.ndarray, rescale_factor='auto', dst_dtype=np.float32): - """Rescale image by rescale_factor. - - Args: - img (ndarray): Image to be rescaled. - rescale_factor (str, int or float, *optional*, defaults to `'auto'`): - rescale the image by the specified scale factor. When is `'auto'`, - rescale the image to [0, 1). - dtype (np.dtype, *optional*, defaults to `np.float32`): - The dtype of the output image. Defaults to `np.float32`. - - Returns: - ndarray: The rescaled image. - """ - if rescale_factor == 'auto': - if np.issubdtype(image.dtype, np.unsignedinteger): - rescale_factor = 1. 
/ np.iinfo(image.dtype).max - else: - raise TypeError(f'Only support uint dtype ndarray when `rescale_factor` is `auto`, got {image.dtype}') - elif issubclass(rescale_factor, (int, float)): - pass - else: - raise TypeError('rescale_factor must be "auto", int or float') - image = image.astype(dst_dtype, copy=True) - image *= rescale_factor - image = image.astype(dst_dtype) - return image - - -def normalize_image_value(image: np.ndarray, mean, std, rescale_factor=None): - """Normalize an image with mean and std, rescale optionally. - - Args: - image (ndarray): Image to be normalized. - mean (int, float, Sequence[int], Sequence[float], ndarray): The mean to be used for normalize. - std (int, float, Sequence[int], Sequence[float], ndarray): The std to be used for normalize. - rescale_factor (None, 'auto', int or float, *optional*, defaults to `None`): - rescale the image by the specified scale factor. When is `'auto'`, - rescale the image to [0, 1); When is `None`, do not rescale. - - Returns: - ndarray: The normalized image which dtype is np.float32. - """ - dst_dtype = np.float32 - mean = np.array(mean, dtype=dst_dtype).flatten() - std = np.array(std, dtype=dst_dtype).flatten() - if rescale_factor == 'auto': - if np.issubdtype(image.dtype, np.unsignedinteger): - mean *= np.iinfo(image.dtype).max - std *= np.iinfo(image.dtype).max - else: - raise TypeError(f'Only support uint dtype ndarray when `rescale_factor` is `auto`, got {image.dtype}') - elif isinstance(rescale_factor, (int, float)): - mean *= rescale_factor - std *= rescale_factor - image = image.astype(dst_dtype, copy=True) - image -= mean - image /= std - return image - - -def normalize_image_dtype(image, keep_num_channels=False): - """Normalize image dtype to uint8 (usually for visualization). - - Args: - image : ndarray - Input image. - keep_num_channels : bool, optional - If this is set to True, the result is an array which has - the same shape as input image, otherwise the result is - an array whose channels number is 3. - - Returns: - out: ndarray - Image whose dtype is np.uint8. - """ - assert (image.ndim == 3 and image.shape[-1] in [1, 3]) or (image.ndim == 2) - - image = image.astype(np.float32) - image = khandy.minmax_normalize(image, axis=None, copy=False) - image = np.array(image * 255, dtype=np.uint8) - - if not keep_num_channels: - if image.ndim == 2: - image = np.expand_dims(image, -1) - if image.shape[-1] == 1: - image = np.tile(image, (1,1,3)) - return image - - -def normalize_image_channel(image, swap_rb=False): - """Normalize image channel number and order to RGB or BGR. - - Args: - image : ndarray - Input image. - swap_rb : bool, optional - whether swap red and blue channel or not - - Returns: - out: ndarray - Image whose shape is (..., 3). 
- """ - if image.ndim == 2: - image = cv2.cvtColor(image, cv2.COLOR_GRAY2BGR) - elif image.ndim == 3: - num_channels = image.shape[-1] - if num_channels == 1: - image = cv2.cvtColor(image, cv2.COLOR_GRAY2BGR) - elif num_channels == 3: - if swap_rb: - image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB) - elif num_channels == 4: - if swap_rb: - image = cv2.cvtColor(image, cv2.COLOR_BGRA2RGB) - else: - image = cv2.cvtColor(image, cv2.COLOR_BGRA2BGR) - else: - raise ValueError(f'Unsupported image channel number, only support 1, 3 and 4, got {num_channels}!') - else: - raise ValueError(f'Unsupported image ndarray ndim, only support 2 and 3, got {image.ndim}!') - return image - - -def normalize_image_shape(image, swap_rb=False): - warnings.warn('khandy.normalize_image_shape will be deprecated, use khandy.normalize_image_channel instead!') - return normalize_image_channel(image, swap_rb) - - -def stack_image_list(image_list, dtype=np.float32): - """Join a sequence of image along a new axis before first axis. - - References: - `im_list_to_blob` in `py-faster-rcnn-master/lib/utils/blob.py` - """ - assert isinstance(image_list, (tuple, list)) - - max_dimension = np.array([image.ndim for image in image_list]).max() - assert max_dimension in [2, 3] - max_shape = np.array([image.shape[:2] for image in image_list]).max(axis=0) - - num_channels = [] - for image in image_list: - if image.ndim == 2: - num_channels.append(1) - else: - num_channels.append(image.shape[-1]) - assert len(set(num_channels) - set([1])) in [0, 1] - max_num_channels = np.max(num_channels) - - blob = np.empty((len(image_list), max_shape[0], max_shape[1], max_num_channels), dtype=dtype) - for k, image in enumerate(image_list): - blob[k, :image.shape[0], :image.shape[1], :] = np.atleast_3d(image).astype(dtype, copy=False) - if max_dimension == 2: - blob = np.squeeze(blob, axis=-1) - return blob - - -def is_numpy_image(image): - return isinstance(image, np.ndarray) and image.ndim in {2, 3} - - -def is_gray_image(image, tol=3): - assert is_numpy_image(image) - if image.ndim == 2: - return True - elif image.ndim == 3: - num_channels = image.shape[-1] - if num_channels == 1: - return True - elif num_channels == 3: - gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY) - gray3 = cv2.cvtColor(gray, cv2.COLOR_GRAY2BGR) - mae = np.mean(cv2.absdiff(image, gray3)) - return mae <= tol - elif num_channels == 4: - rgb = cv2.cvtColor(image, cv2.COLOR_BGRA2BGR) - gray = cv2.cvtColor(rgb, cv2.COLOR_BGR2GRAY) - gray3 = cv2.cvtColor(gray, cv2.COLOR_GRAY2BGR) - mae = np.mean(cv2.absdiff(rgb, gray3)) - return mae <= tol - else: - return False - else: - return False - - -def is_solid_color_image(image, tol=4): - assert is_numpy_image(image) - mean = np.array(cv2.mean(image)[:-1], dtype=np.float32) - - if image.ndim == 2: - mae = np.mean(np.abs(image - mean[0])) - return mae <= tol - elif image.ndim == 3: - num_channels = image.shape[-1] - if num_channels == 1: - mae = np.mean(np.abs(image - mean[0])) - return mae <= tol - elif num_channels == 3: - mae = np.mean(np.abs(image - mean)) - return mae <= tol - elif num_channels == 4: - mae = np.mean(np.abs(image[:,:,:-1] - mean)) - return mae <= tol - else: - return False - else: - return False - - -def create_solid_color_image(image_width, image_height, color, dtype=None): - if isinstance(color, numbers.Real): - image = np.full((image_height, image_width), color, dtype=dtype) - elif isinstance(color, (tuple, list)): - if len(color) == 1: - image = np.full((image_height, image_width), color[0], dtype=dtype) - elif 
len(color) in (3, 4): - image = np.full((1, 1, len(color)), color, dtype=dtype) - image = cv2.copyMakeBorder(image, 0, image_height-1, 0, image_width-1, - cv2.BORDER_CONSTANT, value=color) - else: - color = np.asarray(color, dtype=dtype) - image = np.empty((image_height, image_width, len(color)), dtype=dtype) - image[:] = color - else: - raise TypeError(f'Invalid type {type(color)} for `color`.') - return image diff --git a/spaces/NAACL2022/CLIP-Caption-Reward/captioning/models/CaptionModel.py b/spaces/NAACL2022/CLIP-Caption-Reward/captioning/models/CaptionModel.py deleted file mode 100644 index 221ecd1e173d2e20e0103d4cde328d82bfd6b66c..0000000000000000000000000000000000000000 --- a/spaces/NAACL2022/CLIP-Caption-Reward/captioning/models/CaptionModel.py +++ /dev/null @@ -1,407 +0,0 @@ -# This file contains ShowAttendTell and AllImg model - -# ShowAttendTell is from Show, Attend and Tell: Neural Image Caption Generation with Visual Attention -# https://arxiv.org/abs/1502.03044 - -# AllImg is a model where -# img feature is concatenated with word embedding at every time step as the input of lstm -from __future__ import absolute_import -from __future__ import division -from __future__ import print_function - -import numpy as np -import torch -import torch.nn as nn -import torch.nn.functional as F -from torch.autograd import * -from ..utils import misc as utils -from . import utils as model_utils - - -class CaptionModel(nn.Module): - def __init__(self): - super(CaptionModel, self).__init__() - - # implements beam search - # calls beam_step and returns the final set of beams - # augments log-probabilities with diversity terms when number of groups > 1 - - def forward(self, *args, **kwargs): - mode = kwargs.get('mode', 'forward') - if 'mode' in kwargs: - del kwargs['mode'] - return getattr(self, '_'+mode)(*args, **kwargs) - - def beam_search(self, init_state, init_logprobs, *args, **kwargs): - - # function computes the similarity score to be augmented - def add_diversity(beam_seq_table, logprobs, t, divm, diversity_lambda, bdash): - local_time = t - divm - unaug_logprobs = logprobs.clone() - batch_size = beam_seq_table[0].shape[0] - - if divm > 0: - change = logprobs.new_zeros(batch_size, logprobs.shape[-1]) - for prev_choice in range(divm): - prev_decisions = beam_seq_table[prev_choice][:, :, local_time] # Nxb - for prev_labels in range(bdash): - change.scatter_add_(1, prev_decisions[:, prev_labels].unsqueeze(-1), change.new_ones(batch_size, 1)) - - if local_time == 0: - logprobs = logprobs - change * diversity_lambda - else: - logprobs = logprobs - self.repeat_tensor(bdash, change) * diversity_lambda - - return logprobs, unaug_logprobs - - - # does one step of classical beam search - - def beam_step(logprobs, unaug_logprobs, beam_size, t, beam_seq, beam_seq_logprobs, beam_logprobs_sum, state): - #INPUTS: - #logprobs: probabilities augmented after diversity N*bxV - #beam_size: obvious - #t : time instant - #beam_seq : tensor contanining the beams - #beam_seq_logprobs: tensor contanining the beam logprobs - #beam_logprobs_sum: tensor contanining joint logprobs - #OUPUTS: - #beam_seq : tensor containing the word indices of the decoded captions Nxbxl - #beam_seq_logprobs : log-probability of each decision made, NxbxlxV - #beam_logprobs_sum : joint log-probability of each beam Nxb - - batch_size = beam_logprobs_sum.shape[0] - vocab_size = logprobs.shape[-1] - logprobs = logprobs.reshape(batch_size, -1, vocab_size) # NxbxV - if t == 0: - assert logprobs.shape[1] == 1 - beam_logprobs_sum = 
beam_logprobs_sum[:, :1] - candidate_logprobs = beam_logprobs_sum.unsqueeze(-1) + logprobs # beam_logprobs_sum Nxb logprobs is NxbxV - ys, ix = torch.sort(candidate_logprobs.reshape(candidate_logprobs.shape[0], -1), -1, True) - ys, ix = ys[:,:beam_size], ix[:,:beam_size] - beam_ix = ix // vocab_size # Nxb which beam - selected_ix = ix % vocab_size # Nxb # which world - state_ix = (beam_ix + torch.arange(batch_size).type_as(beam_ix).unsqueeze(-1) * logprobs.shape[1]).reshape(-1) # N*b which in Nxb beams - - - if t > 0: - # gather according to beam_ix - assert (beam_seq.gather(1, beam_ix.unsqueeze(-1).expand_as(beam_seq)) == beam_seq.reshape(-1, beam_seq.shape[-1])[state_ix].view_as(beam_seq)).all() - beam_seq = beam_seq.gather(1, beam_ix.unsqueeze(-1).expand_as(beam_seq)) - - beam_seq_logprobs = beam_seq_logprobs.gather(1, beam_ix.unsqueeze(-1).unsqueeze(-1).expand_as(beam_seq_logprobs)) - - beam_seq = torch.cat([beam_seq, selected_ix.unsqueeze(-1)], -1) # beam_seq Nxbxl - beam_logprobs_sum = beam_logprobs_sum.gather(1, beam_ix) + \ - logprobs.reshape(batch_size, -1).gather(1, ix) - assert (beam_logprobs_sum == ys).all() - _tmp_beam_logprobs = unaug_logprobs[state_ix].reshape(batch_size, -1, vocab_size) - beam_logprobs = unaug_logprobs.reshape(batch_size, -1, vocab_size).gather(1, beam_ix.unsqueeze(-1).expand(-1, -1, vocab_size)) # NxbxV - assert (_tmp_beam_logprobs == beam_logprobs).all() - beam_seq_logprobs = torch.cat([ - beam_seq_logprobs, - beam_logprobs.reshape(batch_size, -1, 1, vocab_size)], 2) - - new_state = [None for _ in state] - for _ix in range(len(new_state)): - # copy over state in previous beam q to new beam at vix - new_state[_ix] = state[_ix][:, state_ix] - state = new_state - return beam_seq,beam_seq_logprobs,beam_logprobs_sum,state - - # Start diverse_beam_search - opt = kwargs['opt'] - temperature = opt.get('temperature', 1) # This should not affect beam search, but will affect dbs - beam_size = opt.get('beam_size', 10) - group_size = opt.get('group_size', 1) - diversity_lambda = opt.get('diversity_lambda', 0.5) - decoding_constraint = opt.get('decoding_constraint', 0) - remove_bad_endings = opt.get('remove_bad_endings', 0) - suppress_UNK = opt.get('suppress_UNK', 0) - length_penalty = utils.penalty_builder(opt.get('length_penalty', '')) - bdash = beam_size // group_size # beam per group - - batch_size = init_logprobs.shape[0] - device = init_logprobs.device - # INITIALIZATIONS - beam_seq_table = [torch.LongTensor(batch_size, bdash, 0).to(device) for _ in range(group_size)] - beam_seq_logprobs_table = [torch.FloatTensor(batch_size, bdash, 0, self.vocab_size + 1).to(device) for _ in range(group_size)] - beam_logprobs_sum_table = [torch.zeros(batch_size, bdash).to(device) for _ in range(group_size)] - - # logprobs # logprobs predicted in last time step, shape (beam_size, vocab_size+1) - done_beams_table = [[[] for __ in range(group_size)] for _ in range(batch_size)] - # state_table = [list(torch.unbind(_)) for _ in torch.stack(init_state).chunk(group_size, 2)] - # state_table = list(zip(*[_.reshape(-1, batch_size * bdash, group_size, *_.shape[2:]).chunk(group_size, 2) for _ in init_state])) - state_table = [[_.clone() for _ in init_state] for _ in range(group_size)] - # logprobs_table = list(init_logprobs.reshape(batch_size * bdash, group_size, -1).chunk(group_size, 0)) - logprobs_table = [init_logprobs.clone() for _ in range(group_size)] - # END INIT - - # Chunk elements in the args - args = list(args) - args = model_utils.split_tensors(group_size, args) # For each arg, 
turn (Bbg)x... to (Bb)x(g)x... - if self.__class__.__name__ == 'AttEnsemble': - args = [[[args[j][i][k] for i in range(len(self.models))] for j in range(len(args))] for k in range(group_size)] # group_name, arg_name, model_name - else: - args = [[args[i][j] for i in range(len(args))] for j in range(group_size)] - - for t in range(self.seq_length + group_size - 1): - for divm in range(group_size): - if t >= divm and t <= self.seq_length + divm - 1: - # add diversity - logprobs = logprobs_table[divm] - # suppress previous word - if decoding_constraint and t-divm > 0: - logprobs.scatter_(1, beam_seq_table[divm][:, :, t-divm-1].reshape(-1, 1).to(device), float('-inf')) - if remove_bad_endings and t-divm > 0: - logprobs[torch.from_numpy(np.isin(beam_seq_table[divm][:, :, t-divm-1].cpu().numpy(), self.bad_endings_ix)).reshape(-1), 0] = float('-inf') - # suppress UNK tokens in the decoding - if suppress_UNK and hasattr(self, 'vocab') and self.vocab[str(logprobs.size(1)-1)] == 'UNK': - logprobs[:,logprobs.size(1)-1] = logprobs[:, logprobs.size(1)-1] - 1000 - # diversity is added here - # the function directly modifies the logprobs values and hence, we need to return - # the unaugmented ones for sorting the candidates in the end. # for historical - # reasons :-) - logprobs, unaug_logprobs = add_diversity(beam_seq_table,logprobs,t,divm,diversity_lambda,bdash) - - # infer new beams - beam_seq_table[divm],\ - beam_seq_logprobs_table[divm],\ - beam_logprobs_sum_table[divm],\ - state_table[divm] = beam_step(logprobs, - unaug_logprobs, - bdash, - t-divm, - beam_seq_table[divm], - beam_seq_logprobs_table[divm], - beam_logprobs_sum_table[divm], - state_table[divm]) - - # if time's up... or if end token is reached then copy beams - for b in range(batch_size): - is_end = beam_seq_table[divm][b, :, t-divm] == self.eos_idx - assert beam_seq_table[divm].shape[-1] == t-divm+1 - if t == self.seq_length + divm - 1: - is_end.fill_(1) - for vix in range(bdash): - if is_end[vix]: - final_beam = { - 'seq': beam_seq_table[divm][b, vix].clone(), - 'logps': beam_seq_logprobs_table[divm][b, vix].clone(), - 'unaug_p': beam_seq_logprobs_table[divm][b, vix].sum().item(), - 'p': beam_logprobs_sum_table[divm][b, vix].item() - } - final_beam['p'] = length_penalty(t-divm+1, final_beam['p']) - done_beams_table[b][divm].append(final_beam) - beam_logprobs_sum_table[divm][b, is_end] -= 1000 - - # move the current group one step forward in time - - it = beam_seq_table[divm][:, :, t-divm].reshape(-1).to(logprobs.device) - logprobs_table[divm], state_table[divm] = self.get_logprobs_state(it, *(args[divm] + [state_table[divm]])) - logprobs_table[divm] = F.log_softmax(logprobs_table[divm] / temperature, dim=-1) - - # all beams are sorted by their log-probabilities - done_beams_table = [[sorted(done_beams_table[b][i], key=lambda x: -x['p'])[:bdash] for i in range(group_size)] for b in range(batch_size)] - done_beams = [sum(_, []) for _ in done_beams_table] - return done_beams - - def old_beam_search(self, init_state, init_logprobs, *args, **kwargs): - - # function computes the similarity score to be augmented - def add_diversity(beam_seq_table, logprobsf, t, divm, diversity_lambda, bdash): - local_time = t - divm - unaug_logprobsf = logprobsf.clone() - for prev_choice in range(divm): - prev_decisions = beam_seq_table[prev_choice][local_time] - for sub_beam in range(bdash): - for prev_labels in range(bdash): - logprobsf[sub_beam][prev_decisions[prev_labels]] = logprobsf[sub_beam][prev_decisions[prev_labels]] - diversity_lambda - return 
unaug_logprobsf - - # does one step of classical beam search - - def beam_step(logprobsf, unaug_logprobsf, beam_size, t, beam_seq, beam_seq_logprobs, beam_logprobs_sum, state): - #INPUTS: - #logprobsf: probabilities augmented after diversity - #beam_size: obvious - #t : time instant - #beam_seq : tensor contanining the beams - #beam_seq_logprobs: tensor contanining the beam logprobs - #beam_logprobs_sum: tensor contanining joint logprobs - #OUPUTS: - #beam_seq : tensor containing the word indices of the decoded captions - #beam_seq_logprobs : log-probability of each decision made, same size as beam_seq - #beam_logprobs_sum : joint log-probability of each beam - - ys,ix = torch.sort(logprobsf,1,True) - candidates = [] - cols = min(beam_size, ys.size(1)) - rows = beam_size - if t == 0: - rows = 1 - for c in range(cols): # for each column (word, essentially) - for q in range(rows): # for each beam expansion - #compute logprob of expanding beam q with word in (sorted) position c - local_logprob = ys[q,c].item() - candidate_logprob = beam_logprobs_sum[q] + local_logprob - # local_unaug_logprob = unaug_logprobsf[q,ix[q,c]] - candidates.append({'c':ix[q,c], 'q':q, 'p':candidate_logprob, 'r':unaug_logprobsf[q]}) - candidates = sorted(candidates, key=lambda x: -x['p']) - - new_state = [_.clone() for _ in state] - #beam_seq_prev, beam_seq_logprobs_prev - if t >= 1: - #we''ll need these as reference when we fork beams around - beam_seq_prev = beam_seq[:t].clone() - beam_seq_logprobs_prev = beam_seq_logprobs[:t].clone() - for vix in range(beam_size): - v = candidates[vix] - #fork beam index q into index vix - if t >= 1: - beam_seq[:t, vix] = beam_seq_prev[:, v['q']] - beam_seq_logprobs[:t, vix] = beam_seq_logprobs_prev[:, v['q']] - #rearrange recurrent states - for state_ix in range(len(new_state)): - # copy over state in previous beam q to new beam at vix - new_state[state_ix][:, vix] = state[state_ix][:, v['q']] # dimension one is time step - #append new end terminal at the end of this beam - beam_seq[t, vix] = v['c'] # c'th word is the continuation - beam_seq_logprobs[t, vix] = v['r'] # the raw logprob here - beam_logprobs_sum[vix] = v['p'] # the new (sum) logprob along this beam - state = new_state - return beam_seq,beam_seq_logprobs,beam_logprobs_sum,state,candidates - - # Start diverse_beam_search - opt = kwargs['opt'] - temperature = opt.get('temperature', 1) # This should not affect beam search, but will affect dbs - beam_size = opt.get('beam_size', 10) - group_size = opt.get('group_size', 1) - diversity_lambda = opt.get('diversity_lambda', 0.5) - decoding_constraint = opt.get('decoding_constraint', 0) - remove_bad_endings = opt.get('remove_bad_endings', 0) - suppress_UNK = opt.get('suppress_UNK', 0) - length_penalty = utils.penalty_builder(opt.get('length_penalty', '')) - bdash = beam_size // group_size # beam per group - - # INITIALIZATIONS - beam_seq_table = [torch.LongTensor(self.seq_length, bdash).zero_() for _ in range(group_size)] - beam_seq_logprobs_table = [torch.FloatTensor(self.seq_length, bdash, self.vocab_size + 1).zero_() for _ in range(group_size)] - beam_logprobs_sum_table = [torch.zeros(bdash) for _ in range(group_size)] - - # logprobs # logprobs predicted in last time step, shape (beam_size, vocab_size+1) - done_beams_table = [[] for _ in range(group_size)] - # state_table = [list(torch.unbind(_)) for _ in torch.stack(init_state).chunk(group_size, 2)] - state_table = list(zip(*[_.chunk(group_size, 1) for _ in init_state])) - logprobs_table = 
list(init_logprobs.chunk(group_size, 0)) - # END INIT - - # Chunk elements in the args - args = list(args) - if self.__class__.__name__ == 'AttEnsemble': - args = [[_.chunk(group_size) if _ is not None else [None]*group_size for _ in args_] for args_ in args] # arg_name, model_name, group_name - args = [[[args[j][i][k] for i in range(len(self.models))] for j in range(len(args))] for k in range(group_size)] # group_name, arg_name, model_name - else: - args = [_.chunk(group_size) if _ is not None else [None]*group_size for _ in args] - args = [[args[i][j] for i in range(len(args))] for j in range(group_size)] - - for t in range(self.seq_length + group_size - 1): - for divm in range(group_size): - if t >= divm and t <= self.seq_length + divm - 1: - # add diversity - logprobsf = logprobs_table[divm] - # suppress previous word - if decoding_constraint and t-divm > 0: - logprobsf.scatter_(1, beam_seq_table[divm][t-divm-1].unsqueeze(1).to(logprobsf.device), float('-inf')) - if remove_bad_endings and t-divm > 0: - logprobsf[torch.from_numpy(np.isin(beam_seq_table[divm][t-divm-1].cpu().numpy(), self.bad_endings_ix)), 0] = float('-inf') - # suppress UNK tokens in the decoding - if suppress_UNK and hasattr(self, 'vocab') and self.vocab[str(logprobsf.size(1)-1)] == 'UNK': - logprobsf[:,logprobsf.size(1)-1] = logprobsf[:, logprobsf.size(1)-1] - 1000 - # diversity is added here - # the function directly modifies the logprobsf values and hence, we need to return - # the unaugmented ones for sorting the candidates in the end. # for historical - # reasons :-) - unaug_logprobsf = add_diversity(beam_seq_table,logprobsf,t,divm,diversity_lambda,bdash) - - # infer new beams - beam_seq_table[divm],\ - beam_seq_logprobs_table[divm],\ - beam_logprobs_sum_table[divm],\ - state_table[divm],\ - candidates_divm = beam_step(logprobsf, - unaug_logprobsf, - bdash, - t-divm, - beam_seq_table[divm], - beam_seq_logprobs_table[divm], - beam_logprobs_sum_table[divm], - state_table[divm]) - - # if time's up... 
or if end token is reached then copy beams - for vix in range(bdash): - if beam_seq_table[divm][t-divm,vix] == self.eos_idx or t == self.seq_length + divm - 1: - final_beam = { - 'seq': beam_seq_table[divm][:, vix].clone(), - 'logps': beam_seq_logprobs_table[divm][:, vix].clone(), - 'unaug_p': beam_seq_logprobs_table[divm][:, vix].sum().item(), - 'p': beam_logprobs_sum_table[divm][vix].item() - } - final_beam['p'] = length_penalty(t-divm+1, final_beam['p']) - done_beams_table[divm].append(final_beam) - # don't continue beams from finished sequences - beam_logprobs_sum_table[divm][vix] = -1000 - - # move the current group one step forward in time - - it = beam_seq_table[divm][t-divm].to(logprobsf.device) - logprobs_table[divm], state_table[divm] = self.get_logprobs_state(it, *(args[divm] + [state_table[divm]])) - logprobs_table[divm] = F.log_softmax(logprobs_table[divm] / temperature, dim=-1) - - # all beams are sorted by their log-probabilities - done_beams_table = [sorted(done_beams_table[i], key=lambda x: -x['p'])[:bdash] for i in range(group_size)] - done_beams = sum(done_beams_table, []) - return done_beams - - def sample_next_word(self, logprobs, sample_method, temperature): - if sample_method == 'greedy': - sampleLogprobs, it = torch.max(logprobs.data, 1) - it = it.view(-1).long() - elif sample_method == 'gumbel': # gumbel softmax - # ref: https://gist.github.com/yzh119/fd2146d2aeb329d067568a493b20172f - def sample_gumbel(shape, eps=1e-20): - U = torch.rand(shape).to(logprobs.device) - return -torch.log(-torch.log(U + eps) + eps) - def gumbel_softmax_sample(logits, temperature): - y = logits + sample_gumbel(logits.size()) - return F.log_softmax(y / temperature, dim=-1) - _logprobs = gumbel_softmax_sample(logprobs, temperature) - _, it = torch.max(_logprobs.data, 1) - sampleLogprobs = logprobs.gather(1, it.unsqueeze(1)) # gather the logprobs at sampled positions - else: - logprobs = logprobs / temperature - if sample_method.startswith('top'): # topk sampling - top_num = float(sample_method[3:]) - if 0 < top_num < 1: - # nucleus sampling from # The Curious Case of Neural Text Degeneration - probs = F.softmax(logprobs, dim=1) - sorted_probs, sorted_indices = torch.sort(probs, descending=True, dim=1) - _cumsum = sorted_probs.cumsum(1) - mask = _cumsum < top_num - mask = torch.cat([torch.ones_like(mask[:,:1]), mask[:,:-1]], 1) - sorted_probs = sorted_probs * mask.to(sorted_probs) - sorted_probs = sorted_probs / sorted_probs.sum(1, keepdim=True) - logprobs.scatter_(1, sorted_indices, sorted_probs.log()) - else: - the_k = int(top_num) - tmp = torch.empty_like(logprobs).fill_(float('-inf')) - topk, indices = torch.topk(logprobs, the_k, dim=1) - tmp = tmp.scatter(1, indices, topk) - logprobs = tmp - it = torch.distributions.Categorical(logits=logprobs.detach()).sample() - sampleLogprobs = logprobs.gather(1, it.unsqueeze(1)) # gather the logprobs at sampled positions - return it, sampleLogprobs - - - def decode_sequence(self, seq): - return utils.decode_sequence(self.vocab, seq) diff --git a/spaces/NAACL2022/CLIP-Caption-Reward/save/README.md b/spaces/NAACL2022/CLIP-Caption-Reward/save/README.md deleted file mode 100644 index 91547b46ffedc91d209fec4c7ac0b8cfb9e447de..0000000000000000000000000000000000000000 --- a/spaces/NAACL2022/CLIP-Caption-Reward/save/README.md +++ /dev/null @@ -1 +0,0 @@ -Directory for checkpoints \ No newline at end of file diff --git a/spaces/NCTCMumbai/NCTC/models/official/nlp/tasks/sentence_prediction.py 
b/spaces/NCTCMumbai/NCTC/models/official/nlp/tasks/sentence_prediction.py deleted file mode 100644 index b2eb0bf47de273408459e35cf45ff01ac69a9d2c..0000000000000000000000000000000000000000 --- a/spaces/NCTCMumbai/NCTC/models/official/nlp/tasks/sentence_prediction.py +++ /dev/null @@ -1,190 +0,0 @@ -# Lint as: python3 -# Copyright 2020 The TensorFlow Authors. All Rights Reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -# ============================================================================== -"""Sentence prediction (classification) task.""" -from absl import logging -import dataclasses -import numpy as np -from scipy import stats -from sklearn import metrics as sklearn_metrics -import tensorflow as tf -import tensorflow_hub as hub - -from official.core import base_task -from official.modeling.hyperparams import config_definitions as cfg -from official.nlp.configs import bert -from official.nlp.data import sentence_prediction_dataloader -from official.nlp.modeling import losses as loss_lib -from official.nlp.tasks import utils - - -@dataclasses.dataclass -class SentencePredictionConfig(cfg.TaskConfig): - """The model config.""" - # At most one of `init_checkpoint` and `hub_module_url` can - # be specified. - init_checkpoint: str = '' - hub_module_url: str = '' - metric_type: str = 'accuracy' - network: bert.BertPretrainerConfig = bert.BertPretrainerConfig( - num_masked_tokens=0, # No masked language modeling head. 
- cls_heads=[ - bert.ClsHeadConfig( - inner_dim=768, - num_classes=3, - dropout_rate=0.1, - name='sentence_prediction') - ]) - train_data: cfg.DataConfig = cfg.DataConfig() - validation_data: cfg.DataConfig = cfg.DataConfig() - - -@base_task.register_task_cls(SentencePredictionConfig) -class SentencePredictionTask(base_task.Task): - """Task object for sentence_prediction.""" - - def __init__(self, params=cfg.TaskConfig): - super(SentencePredictionTask, self).__init__(params) - if params.hub_module_url and params.init_checkpoint: - raise ValueError('At most one of `hub_module_url` and ' - '`pretrain_checkpoint_dir` can be specified.') - if params.hub_module_url: - self._hub_module = hub.load(params.hub_module_url) - else: - self._hub_module = None - self.metric_type = params.metric_type - - def build_model(self): - if self._hub_module: - encoder_from_hub = utils.get_encoder_from_hub(self._hub_module) - return bert.instantiate_bertpretrainer_from_cfg( - self.task_config.network, encoder_network=encoder_from_hub) - else: - return bert.instantiate_bertpretrainer_from_cfg(self.task_config.network) - - def build_losses(self, labels, model_outputs, aux_losses=None) -> tf.Tensor: - loss = loss_lib.weighted_sparse_categorical_crossentropy_loss( - labels=labels, - predictions=tf.nn.log_softmax( - tf.cast(model_outputs['sentence_prediction'], tf.float32), axis=-1)) - - if aux_losses: - loss += tf.add_n(aux_losses) - return loss - - def build_inputs(self, params, input_context=None): - """Returns tf.data.Dataset for sentence_prediction task.""" - if params.input_path == 'dummy': - - def dummy_data(_): - dummy_ids = tf.zeros((1, params.seq_length), dtype=tf.int32) - x = dict( - input_word_ids=dummy_ids, - input_mask=dummy_ids, - input_type_ids=dummy_ids) - y = tf.ones((1, 1), dtype=tf.int32) - return (x, y) - - dataset = tf.data.Dataset.range(1) - dataset = dataset.repeat() - dataset = dataset.map( - dummy_data, num_parallel_calls=tf.data.experimental.AUTOTUNE) - return dataset - - return sentence_prediction_dataloader.SentencePredictionDataLoader( - params).load(input_context) - - def build_metrics(self, training=None): - del training - metrics = [tf.keras.metrics.SparseCategoricalAccuracy(name='cls_accuracy')] - return metrics - - def process_metrics(self, metrics, labels, model_outputs): - for metric in metrics: - metric.update_state(labels, model_outputs['sentence_prediction']) - - def process_compiled_metrics(self, compiled_metrics, labels, model_outputs): - compiled_metrics.update_state(labels, model_outputs['sentence_prediction']) - - def validation_step(self, inputs, model: tf.keras.Model, metrics=None): - if self.metric_type == 'accuracy': - return super(SentencePredictionTask, - self).validation_step(inputs, model, metrics) - features, labels = inputs - outputs = self.inference_step(features, model) - loss = self.build_losses( - labels=labels, model_outputs=outputs, aux_losses=model.losses) - if self.metric_type == 'matthews_corrcoef': - return { - self.loss: - loss, - 'sentence_prediction': - tf.expand_dims( - tf.math.argmax(outputs['sentence_prediction'], axis=1), - axis=0), - 'labels': - labels, - } - if self.metric_type == 'pearson_spearman_corr': - return { - self.loss: loss, - 'sentence_prediction': outputs['sentence_prediction'], - 'labels': labels, - } - - def aggregate_logs(self, state=None, step_outputs=None): - if state is None: - state = {'sentence_prediction': [], 'labels': []} - state['sentence_prediction'].append( - np.concatenate([v.numpy() for v in 
step_outputs['sentence_prediction']], - axis=0)) - state['labels'].append( - np.concatenate([v.numpy() for v in step_outputs['labels']], axis=0)) - return state - - def reduce_aggregated_logs(self, aggregated_logs): - if self.metric_type == 'matthews_corrcoef': - preds = np.concatenate(aggregated_logs['sentence_prediction'], axis=0) - labels = np.concatenate(aggregated_logs['labels'], axis=0) - return { - self.metric_type: sklearn_metrics.matthews_corrcoef(preds, labels) - } - if self.metric_type == 'pearson_spearman_corr': - preds = np.concatenate(aggregated_logs['sentence_prediction'], axis=0) - labels = np.concatenate(aggregated_logs['labels'], axis=0) - pearson_corr = stats.pearsonr(preds, labels)[0] - spearman_corr = stats.spearmanr(preds, labels)[0] - corr_metric = (pearson_corr + spearman_corr) / 2 - return {self.metric_type: corr_metric} - - def initialize(self, model): - """Load a pretrained checkpoint (if exists) and then train from iter 0.""" - ckpt_dir_or_file = self.task_config.init_checkpoint - if tf.io.gfile.isdir(ckpt_dir_or_file): - ckpt_dir_or_file = tf.train.latest_checkpoint(ckpt_dir_or_file) - if not ckpt_dir_or_file: - return - - pretrain2finetune_mapping = { - 'encoder': - model.checkpoint_items['encoder'], - 'next_sentence.pooler_dense': - model.checkpoint_items['sentence_prediction.pooler_dense'], - } - ckpt = tf.train.Checkpoint(**pretrain2finetune_mapping) - status = ckpt.restore(ckpt_dir_or_file) - status.expect_partial().assert_existing_objects_matched() - logging.info('finished loading pretrained checkpoint from %s', - ckpt_dir_or_file) diff --git a/spaces/NSect/VALL-E-X/utils/prompt_making.py b/spaces/NSect/VALL-E-X/utils/prompt_making.py deleted file mode 100644 index 93e4a3d647052df4899253fea41be22f09e006b8..0000000000000000000000000000000000000000 --- a/spaces/NSect/VALL-E-X/utils/prompt_making.py +++ /dev/null @@ -1,115 +0,0 @@ -import os -import torch -import torchaudio -import logging -import langid -import whisper -langid.set_languages(['en', 'zh', 'ja']) - -import numpy as np -from data.tokenizer import ( - AudioTokenizer, - tokenize_audio, -) -from data.collation import get_text_token_collater -from utils.g2p import PhonemeBpeTokenizer - -from macros import * - -text_tokenizer = PhonemeBpeTokenizer(tokenizer_path="./utils/g2p/bpe_69.json") -text_collater = get_text_token_collater() - -device = torch.device("cpu") -if torch.cuda.is_available(): - device = torch.device("cuda", 0) - -codec = AudioTokenizer(device) - -whisper_model = None - -@torch.no_grad() -def transcribe_one(model, audio_path): - # load audio and pad/trim it to fit 30 seconds - audio = whisper.load_audio(audio_path) - audio = whisper.pad_or_trim(audio) - - # make log-Mel spectrogram and move to the same device as the model - mel = whisper.log_mel_spectrogram(audio).to(model.device) - - # detect the spoken language - _, probs = model.detect_language(mel) - print(f"Detected language: {max(probs, key=probs.get)}") - lang = max(probs, key=probs.get) - # decode the audio - options = whisper.DecodingOptions(temperature=1.0, best_of=5, fp16=False if device == torch.device("cpu") else True, sample_len=150) - result = whisper.decode(model, mel, options) - - # print the recognized text - print(result.text) - - text_pr = result.text - if text_pr.strip(" ")[-1] not in "?!.,。,?!。、": - text_pr += "." 
- return lang, text_pr - -def make_prompt(name, audio_prompt_path, transcript=None): - global model, text_collater, text_tokenizer, codec - wav_pr, sr = torchaudio.load(audio_prompt_path) - # check length - if wav_pr.size(-1) / sr > 15: - raise ValueError(f"Prompt too long, expect length below 15 seconds, got {wav_pr / sr} seconds.") - if wav_pr.size(0) == 2: - wav_pr = wav_pr.mean(0, keepdim=True) - text_pr, lang_pr = make_transcript(name, wav_pr, sr, transcript) - - # tokenize audio - encoded_frames = tokenize_audio(codec, (wav_pr, sr)) - audio_tokens = encoded_frames[0][0].transpose(2, 1).cpu().numpy() - - # tokenize text - phonemes, langs = text_tokenizer.tokenize(text=f"{text_pr}".strip()) - text_tokens, enroll_x_lens = text_collater( - [ - phonemes - ] - ) - - message = f"Detected language: {lang_pr}\n Detected text {text_pr}\n" - - # save as npz file - save_path = os.path.join("./customs/", f"{name}.npz") - np.savez(save_path, audio_tokens=audio_tokens, text_tokens=text_tokens, lang_code=lang2code[lang_pr]) - logging.info(f"Successful. Prompt saved to {save_path}") - - -def make_transcript(name, wav, sr, transcript=None): - - if not isinstance(wav, torch.FloatTensor): - wav = torch.tensor(wav) - if wav.abs().max() > 1: - wav /= wav.abs().max() - if wav.size(-1) == 2: - wav = wav.mean(-1, keepdim=False) - if wav.ndim == 1: - wav = wav.unsqueeze(0) - assert wav.ndim and wav.size(0) == 1 - if transcript is None or transcript == "": - logging.info("Transcript not given, using Whisper...") - global whisper_model - if whisper_model is None: - whisper_model = whisper.load_model("medium") - whisper_model.to(device) - torchaudio.save(f"./prompts/{name}.wav", wav, sr) - lang, text = transcribe_one(whisper_model, f"./prompts/{name}.wav") - lang_token = lang2token[lang] - text = lang_token + text + lang_token - os.remove(f"./prompts/{name}.wav") - whisper_model.cpu() - else: - text = transcript - lang, _ = langid.classify(text) - lang_token = lang2token[lang] - text = lang_token + text + lang_token - - torch.cuda.empty_cache() - return text, lang \ No newline at end of file diff --git a/spaces/NiuTaipu/moe-tts-test01/text/shanghainese.py b/spaces/NiuTaipu/moe-tts-test01/text/shanghainese.py deleted file mode 100644 index 1c28c17d0dc0d920fd222c909a53d703c95e043b..0000000000000000000000000000000000000000 --- a/spaces/NiuTaipu/moe-tts-test01/text/shanghainese.py +++ /dev/null @@ -1,64 +0,0 @@ -import re -import cn2an -import opencc - - -converter = opencc.OpenCC('chinese_dialect_lexicons/zaonhe') - -# List of (Latin alphabet, ipa) pairs: -_latin_to_ipa = [(re.compile('%s' % x[0]), x[1]) for x in [ - ('A', 'ᴇ'), - ('B', 'bi'), - ('C', 'si'), - ('D', 'di'), - ('E', 'i'), - ('F', 'ᴇf'), - ('G', 'dʑi'), - ('H', 'ᴇtɕʰ'), - ('I', 'ᴀi'), - ('J', 'dʑᴇ'), - ('K', 'kʰᴇ'), - ('L', 'ᴇl'), - ('M', 'ᴇm'), - ('N', 'ᴇn'), - ('O', 'o'), - ('P', 'pʰi'), - ('Q', 'kʰiu'), - ('R', 'ᴀl'), - ('S', 'ᴇs'), - ('T', 'tʰi'), - ('U', 'ɦiu'), - ('V', 'vi'), - ('W', 'dᴀbɤliu'), - ('X', 'ᴇks'), - ('Y', 'uᴀi'), - ('Z', 'zᴇ') -]] - - -def _number_to_shanghainese(num): - num = cn2an.an2cn(num).replace('一十','十').replace('二十', '廿').replace('二', '两') - return re.sub(r'((?:^|[^三四五六七八九])十|廿)两', r'\1二', num) - - -def number_to_shanghainese(text): - return re.sub(r'\d+(?:\.?\d+)?', lambda x: _number_to_shanghainese(x.group()), text) - - -def latin_to_ipa(text): - for regex, replacement in _latin_to_ipa: - text = re.sub(regex, replacement, text) - return text - - -def shanghainese_to_ipa(text): - text = number_to_shanghainese(text.upper()) 
- text = converter.convert(text).replace('-','').replace('$',' ') - text = re.sub(r'[A-Z]', lambda x: latin_to_ipa(x.group())+' ', text) - text = re.sub(r'[、;:]', ',', text) - text = re.sub(r'\s*,\s*', ', ', text) - text = re.sub(r'\s*。\s*', '. ', text) - text = re.sub(r'\s*?\s*', '? ', text) - text = re.sub(r'\s*!\s*', '! ', text) - text = re.sub(r'\s*$', '', text) - return text diff --git a/spaces/OAOA/DifFace/facelib/utils/misc.py b/spaces/OAOA/DifFace/facelib/utils/misc.py deleted file mode 100644 index 4e8c7c0a2bd261135ae8c52c20c1ab2072d1049f..0000000000000000000000000000000000000000 --- a/spaces/OAOA/DifFace/facelib/utils/misc.py +++ /dev/null @@ -1,138 +0,0 @@ -import cv2 -import os -import os.path as osp -import torch -from torch.hub import download_url_to_file, get_dir -from urllib.parse import urlparse -import gdown - - -ROOT_DIR = os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath(__file__)))) - - -def download_pretrained_models(file_ids, save_path_root): - os.makedirs(save_path_root, exist_ok=True) - - for file_name, file_id in file_ids.items(): - file_url = 'https://drive.google.com/uc?id='+file_id - save_path = osp.abspath(osp.join(save_path_root, file_name)) - if osp.exists(save_path): - user_response = input(f'{file_name} already exist. Do you want to cover it? Y/N\n') - if user_response.lower() == 'y': - print(f'Covering {file_name} to {save_path}') - gdown.download(file_url, save_path, quiet=False) - elif user_response.lower() == 'n': - print(f'Skipping {file_name}') - else: - raise ValueError('Wrong input. Only accepts Y/N.') - else: - print(f'Downloading {file_name} to {save_path}') - gdown.download(file_url, save_path, quiet=False) - - -def imwrite(img, file_path, params=None, auto_mkdir=True): - """Write image to file. - - Args: - img (ndarray): Image array to be written. - file_path (str): Image file path. - params (None or list): Same as opencv's :func:`imwrite` interface. - auto_mkdir (bool): If the parent folder of `file_path` does not exist, - whether to create it automatically. - - Returns: - bool: Successful or not. - """ - if auto_mkdir: - dir_name = os.path.abspath(os.path.dirname(file_path)) - os.makedirs(dir_name, exist_ok=True) - return cv2.imwrite(file_path, img, params) - - -def img2tensor(imgs, bgr2rgb=True, float32=True): - """Numpy array to tensor. - - Args: - imgs (list[ndarray] | ndarray): Input images. - bgr2rgb (bool): Whether to change bgr to rgb. - float32 (bool): Whether to change to float32. - - Returns: - list[tensor] | tensor: Tensor images. If returned results only have - one element, just return tensor. 
- """ - - def _totensor(img, bgr2rgb, float32): - if img.shape[2] == 3 and bgr2rgb: - if img.dtype == 'float64': - img = img.astype('float32') - img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB) - img = torch.from_numpy(img.transpose(2, 0, 1)) - if float32: - img = img.float() - return img - - if isinstance(imgs, list): - return [_totensor(img, bgr2rgb, float32) for img in imgs] - else: - return _totensor(imgs, bgr2rgb, float32) - - -def load_file_from_url(url, model_dir=None, progress=True, file_name=None): - """Ref:https://github.com/1adrianb/face-alignment/blob/master/face_alignment/utils.py - """ - if model_dir is None: - hub_dir = get_dir() - model_dir = os.path.join(hub_dir, 'checkpoints') - - os.makedirs(os.path.join(ROOT_DIR, model_dir), exist_ok=True) - - parts = urlparse(url) - filename = os.path.basename(parts.path) - if file_name is not None: - filename = file_name - cached_file = os.path.abspath(os.path.join(ROOT_DIR, model_dir, filename)) - if not os.path.exists(cached_file): - print(f'Downloading: "{url}" to {cached_file}\n') - download_url_to_file(url, cached_file, hash_prefix=None, progress=progress) - return cached_file - - -def scandir(dir_path, suffix=None, recursive=False, full_path=False): - """Scan a directory to find the interested files. - Args: - dir_path (str): Path of the directory. - suffix (str | tuple(str), optional): File suffix that we are - interested in. Default: None. - recursive (bool, optional): If set to True, recursively scan the - directory. Default: False. - full_path (bool, optional): If set to True, include the dir_path. - Default: False. - Returns: - A generator for all the interested files with relative paths. - """ - - if (suffix is not None) and not isinstance(suffix, (str, tuple)): - raise TypeError('"suffix" must be a string or tuple of strings') - - root = dir_path - - def _scandir(dir_path, suffix, recursive): - for entry in os.scandir(dir_path): - if not entry.name.startswith('.') and entry.is_file(): - if full_path: - return_path = entry.path - else: - return_path = osp.relpath(entry.path, root) - - if suffix is None: - yield return_path - elif return_path.endswith(suffix): - yield return_path - else: - if recursive: - yield from _scandir(entry.path, suffix=suffix, recursive=recursive) - else: - continue - - return _scandir(dir_path, suffix=suffix, recursive=recursive) diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/.github/ISSUE_TEMPLATE/feature_request.md b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/.github/ISSUE_TEMPLATE/feature_request.md deleted file mode 100644 index 93c8668041f8a7af29e4c11e905d8b56b946dd51..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/.github/ISSUE_TEMPLATE/feature_request.md +++ /dev/null @@ -1,24 +0,0 @@ ---- -name: 🚀 Feature Request -about: Submit a proposal/request for a new feature -labels: 'enhancement, help wanted, needs triage' ---- - -## 🚀 Feature Request - - -### Motivation - - - -### Pitch - - - -### Alternatives - - - -### Additional context - - diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/fully_sharded_data_parallel/README.md b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/fully_sharded_data_parallel/README.md deleted file mode 100644 index b9e44fef48bee5faeee27b3d1d1b1eb96b6a477f..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/fully_sharded_data_parallel/README.md +++ /dev/null @@ -1,177 +0,0 @@ -# Fully Sharded Data Parallel (FSDP) - -## Overview -Recent work by 
[Microsoft](https://arxiv.org/abs/1910.02054) and -[Google](https://arxiv.org/abs/2004.13336) has shown that data parallel -training can be made significantly more efficient by sharding the model -parameters and optimizer state across data parallel workers. These ideas are -encapsulated in the new **`FullyShardedDataParallel` (FSDP)** wrapper provided -by [fairscale](https://github.com/facebookresearch/fairscale/). - -Compared to PyTorch DDP: -* FSDP produces identical results as PyTorch DDP (it's still synchronous data parallel training) -* FSDP shards parameters (FP16 + FP32) and optimizer state across data parallel GPUs -* FSDP is faster than PyTorch DDP because the optimizer step is sharded, and the communication can be overlapped with the forward pass -* FSDP enables training 13B parameter models on 8 GPUs and 175B parameter models on 128 GPUs - -FSDP is fully supported in fairseq via the following new arguments: -* `--ddp-backend=fully_sharded`: enables full sharding via FSDP -* `--cpu-offload`: offloads the optimizer state and FP32 model copy to CPU (combine with `--optimizer=cpu_adam`) -* `--no-reshard-after-forward`: increases training speed for large models (1B+ params) and is similar to ZeRO stage 2 -* other popular options (`--fp16`, `--update-freq`, `--checkpoint-activations`, `--offload-activations`, etc.) continue to work as normal - -
-### Limitations
-
-FSDP currently has several limitations compared to fairseq's default DDP backend (PyTorch DDP):
-* while FSDP is fully compatible with pointwise Optimizers (e.g., Adam, AdamW, Adadelta, Adamax, SGD, etc.), it is not currently compatible with non-pointwise Optimizers (e.g., Adagrad, Adafactor, LAMB, etc.)
-* FSDP depends on flattening the parameters, so models that currently require `--fp16-no-flatten-grads` may not be supported
-
-See the [fairscale docs](https://fairscale.readthedocs.io/en/latest/api/nn/fsdp_tips.html) for a more detailed explanation of these and other limitations.
-

    - -
### How it works

    - -Fully Sharded Data Parallel - -See the [fairscale docs](https://fairscale.readthedocs.io/en/latest/api/nn/fsdp_tips.html) for a more detailed -explanation of how FSDP works. - -
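Conceptually, `--ddp-backend=fully_sharded` has fairseq wrap the model in fairscale's `FullyShardedDataParallel`. The sketch below is illustrative only (not fairseq's internal code): the toy module, sizes, and learning rate are placeholders, and it assumes fairscale is installed and a `torch.distributed` process group has already been initialized.

```python
# Minimal sketch, assuming fairscale is installed and torch.distributed is initialized
# (e.g. one process per GPU launched via torchrun). Module and sizes are toy placeholders.
import torch
import torch.nn as nn
from fairscale.nn.data_parallel import FullyShardedDataParallel as FSDP

def build_sharded_model(device="cuda"):
    # Stand-in for a large model: FSDP shards its (flattened) parameters, gradients
    # and optimizer state across the data-parallel workers instead of replicating them.
    module = nn.Sequential(nn.Linear(512, 2048), nn.ReLU(), nn.Linear(2048, 512))
    sharded = FSDP(module.to(device))
    # Each rank's optimizer only ever sees its own parameter shard.
    optimizer = torch.optim.Adam(sharded.parameters(), lr=1e-4)
    return sharded, optimizer
```

In fairseq you never call the wrapper directly; the `--ddp-backend=fully_sharded` flag (together with the options above) takes care of this wrapping internally.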

    - -## Example usage - -The following examples illustrate how to train a very large language model with -13 billion parameters on 1 GPU by offloading parameters and optimizer states to -CPU, or on 8 GPUs by fully sharding the params and optimizer states across GPUs. - -These examples use the WikiText-103 dataset for demonstration purposes, but -in practice a much larger dataset will be needed to achieve good results. -Follow the [instructions here](https://github.com/pytorch/fairseq/blob/main/examples/roberta/README.pretraining.md#1-preprocess-the-data) -to preprocess the WikiText-103 dataset using the GPT-2/RoBERTa vocabulary. - -### 13B params on 1 V100 GPU (with CPU offloading) - -The following command trains a 13B parameter GPT-3 model on a single V100 GPU -using the `--cpu-offload` feature to offload parameters and optimizer states to -CPU. In this setting, the optimizer step (Adam) happens on CPU. We also use the -`--checkpoint-activations` feature (sometimes called [gradient checkpointing](https://pytorch.org/docs/stable/checkpoint.html)), -which further saves memory in exchange for a small increase in computation. - -**Requirements:** -- Install the latest master version of fairscale: `pip install git+https://github.com/facebookresearch/fairscale.git@master` -- You'll need 32GB of GPU memory and ~256GB of system memory to train the 13B param model. -- If you have less system memory, the 6.7B param model can be trained with ~128GB of system memory, just set `--arch transformer_lm_gpt3_6_7` -- We use the CPU Adam optimizer from [DeepSpeed](https://github.com/microsoft/DeepSpeed), so you'll need to `pip install deepspeed` before running the command. - -**Notes:** -- The command will take ~5 minutes to start training, during which time it will appear to be hung, since randomly initializing 13B weights can be slow. -- The `--cpu-offload` feature requires training in mixed precision (`--fp16`). -- Tune the `OMP_NUM_THREADS` env variable for best performance with CPU offloading. -- The example command below stops training after 10 steps (`--max-update 10`) and does not save checkpoints (`--no-save`). - -```bash -OMP_NUM_THREADS=20 CUDA_VISIBLE_DEVICES=0 \ - fairseq-train data-bin/wikitext-103-roberta-bpe-bin \ - --ddp-backend fully_sharded --fp16 --fp16-init-scale 4 \ - --cpu-offload --checkpoint-activations \ - --task language_modeling --tokens-per-sample 2048 --batch-size 8 \ - --arch transformer_lm_gpt3_13 \ - --optimizer cpu_adam --adam-betas "(0.9,0.98)" \ - --lr 0.0001 --lr-scheduler polynomial_decay --warmup-updates 5 --total-num-update 10 \ - --max-update 10 --no-save --log-format json --log-interval 1 -``` - -
#### Example output

    - -``` -(...) -2021-03-08 12:29:51 | INFO | fairseq_cli.train | num. model params: 13,110,865,920 (num. trained: 13,110,865,920) -(...) -2021-03-08 12:29:51 | INFO | fairseq_cli.train | training on 1 devices (GPUs/TPUs) -2021-03-08 12:29:51 | INFO | fairseq_cli.train | max tokens per GPU = None and batch size per GPU = 8 -(...) -Adam Optimizer #0 is created with AVX2 arithmetic capability. -Config: alpha=0.000100, betas=(0.900000, 0.980000), weight_decay=0.000000, adam_w=1 -(...) -2021-03-08 12:31:36 | INFO | train_inner | {"epoch": 1, "update": 0.0, "loss": "16.475", "ppl": "91120.8", "wps": "0", "ups": "0", "wpb": "16384", "bsz": "8", "num_updates": "1", "lr": "2e-05", "gnorm": "20.751", "loss_scale": "4", "train_wall": "99", "gb_free": "9.3", "wall": "105"} -2021-03-08 12:32:33 | INFO | train_inner | {"epoch": 1, "update": 0.0, "loss": "16.446", "ppl": "89281.6", "wps": "288.7", "ups": "0.02", "wpb": "16384", "bsz": "8", "num_updates": "2", "lr": "4e-05", "gnorm": "19.777", "loss_scale": "4", "train_wall": "57", "gb_free": "9.3", "wall": "161"} -2021-03-08 12:33:12 | INFO | fairseq.trainer | NOTE: gradient overflow detected, ignoring gradient, setting loss scale to: 2.0 -2021-03-08 12:33:51 | INFO | fairseq.trainer | NOTE: gradient overflow detected, ignoring gradient, setting loss scale to: 1.0 -2021-03-08 12:34:45 | INFO | train_inner | {"epoch": 1, "update": 0.001, "loss": "25.22", "ppl": "3.90691e+07", "wps": "123.4", "ups": "0.01", "wpb": "16384", "bsz": "8", "num_updates": "3", "lr": "6e-05", "gnorm": "131.281", "loss_scale": "1", "train_wall": "133", "gb_free": "9.3", "wall": "294"} -2021-03-08 12:35:43 | INFO | train_inner | {"epoch": 1, "update": 0.001, "loss": "18.079", "ppl": "276809", "wps": "285.5", "ups": "0.02", "wpb": "16384", "bsz": "8", "num_updates": "4", "lr": "8e-05", "gnorm": "13.776", "loss_scale": "1", "train_wall": "57", "gb_free": "9.3", "wall": "351"} -2021-03-08 12:36:35 | INFO | train_inner | {"epoch": 1, "update": 0.001, "loss": "23.729", "ppl": "1.39088e+07", "wps": "316.7", "ups": "0.02", "wpb": "16384", "bsz": "8", "num_updates": "5", "lr": "0.0001", "gnorm": "72.774", "loss_scale": "1", "train_wall": "52", "gb_free": "9.3", "wall": "403"} -2021-03-08 12:37:28 | INFO | train_inner | {"epoch": 1, "update": 0.001, "loss": "20.429", "ppl": "1.41203e+06", "wps": "307.6", "ups": "0.02", "wpb": "16384", "bsz": "8", "num_updates": "6", "lr": "8e-05", "gnorm": "60.846", "loss_scale": "1", "train_wall": "53", "gb_free": "9.3", "wall": "456"} -2021-03-08 12:38:27 | INFO | train_inner | {"epoch": 1, "update": 0.001, "loss": "18.965", "ppl": "511684", "wps": "279.4", "ups": "0.02", "wpb": "16384", "bsz": "8", "num_updates": "7", "lr": "6e-05", "gnorm": "22.687", "loss_scale": "1", "train_wall": "59", "gb_free": "9.3", "wall": "515"} -2021-03-08 12:39:18 | INFO | train_inner | {"epoch": 1, "update": 0.001, "loss": "18.345", "ppl": "332887", "wps": "319.1", "ups": "0.02", "wpb": "16384", "bsz": "8", "num_updates": "8", "lr": "4e-05", "gnorm": "8.451", "loss_scale": "1", "train_wall": "51", "gb_free": "9.3", "wall": "566"} -2021-03-08 12:40:11 | INFO | train_inner | {"epoch": 1, "update": 0.002, "loss": "18.262", "ppl": "314336", "wps": "305.9", "ups": "0.02", "wpb": "16384", "bsz": "8", "num_updates": "9", "lr": "2e-05", "gnorm": "6.457", "loss_scale": "1", "train_wall": "54", "gb_free": "9.3", "wall": "620"} -2021-03-08 12:41:04 | INFO | train_inner | {"epoch": 1, "update": 0.002, "loss": "17.556", "ppl": "192686", "wps": "311.8", "ups": "0.02", "wpb": "16384", 
"bsz": "8", "num_updates": "10", "lr": "0", "gnorm": "5.796", "loss_scale": "1", "train_wall": "53", "gb_free": "9.3", "wall": "673"} -2021-03-08 12:41:04 | INFO | fairseq_cli.train | Stopping training due to num_updates: 10 >= max_update: 10 -2021-03-08 12:41:04 | INFO | fairseq_cli.train | begin validation on "valid" subset -2021-03-08 12:43:15 | INFO | valid | {"epoch": 1, "valid_loss": "17.953", "valid_ppl": "253807", "valid_wps": "1868.4", "valid_wpb": "15400.2", "valid_bsz": "7.6", "valid_num_updates": "10"} -2021-03-08 12:43:15 | INFO | fairseq_cli.train | end of epoch 1 (average epoch stats below) -2021-03-08 12:43:15 | INFO | train | {"epoch": 1, "train_loss": "19.351", "train_ppl": "668509", "train_wps": "210.9", "train_ups": "0.01", "train_wpb": "16384", "train_bsz": "8", "train_num_updates": "10", "train_lr": "0", "train_gnorm": "36.26", "train_loss_scale": "1", "train_train_wall": "667", "train_gb_free": "9.3", "train_wall": "804"} -2021-03-08 12:43:15 | INFO | fairseq_cli.train | done training in 798.6 seconds -``` - -

    - -### 13B params on 8 V100 GPUs (with full parameter + optimizer state sharding) - -FSDP can also shard the parameters and optimizer states across multiple GPUs, -reducing memory requirements significantly. On 8 x 32GB GPUs, sharding enables -training the same 13B parameter model *without offloading the parameters to -CPU*. However, without CPU offloading we'd only be able to fit a batch size of -1 per GPU, which would cause training speed to suffer. - -We obtain the best performance on 8 GPUs by combining full sharding and CPU -offloading. The following command trains the same 13B parameter GPT-3 model as -before on 8 x 32GB V100 GPUs; training speed increases superlinearly from ~310 -words per second to ~3200 words per second. - -```bash -OMP_NUM_THREADS=20 CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 \ - fairseq-train data-bin/wikitext-103-roberta-bpe-bin \ - --ddp-backend fully_sharded --fp16 --fp16-init-scale 4 \ - --cpu-offload --checkpoint-activations \ - --task language_modeling --tokens-per-sample 2048 --batch-size 8 \ - --arch transformer_lm_gpt3_13 \ - --optimizer cpu_adam --adam-betas "(0.9,0.98)" \ - --lr 0.0001 --lr-scheduler polynomial_decay --warmup-updates 5 --total-num-update 10 \ - --max-update 10 --no-save --log-format json --log-interval 1 -``` - -
#### Example output

    - -``` -(...) -2021-03-08 18:04:09 | INFO | fairseq_cli.train | num. model params: 13,110,865,920 (num. trained: 13,110,865,920) -(...) -2021-03-08 18:04:09 | INFO | fairseq_cli.train | training on 8 devices (GPUs/TPUs) -2021-03-08 18:04:09 | INFO | fairseq_cli.train | max tokens per GPU = None and batch size per GPU = 8 -(...) -Adam Optimizer #0 is created with AVX2 arithmetic capability. -Config: alpha=0.000100, betas=(0.900000, 0.980000), weight_decay=0.000000, adam_w=1 -(...) -2021-03-08 18:05:06 | INFO | train_inner | {"epoch": 1, "update": 0.001, "loss": "16.408", "ppl": "86945.6", "wps": "0", "ups": "0", "wpb": "131072", "bsz": "64", "num_updates": "1", "lr": "2e-05", "gnorm": "18.27", "loss_scale": "4", "train_wall": "47", "gb_free": "9.3", "wall": "56"} -2021-03-08 18:05:45 | INFO | train_inner | {"epoch": 1, "update": 0.002, "loss": "16.352", "ppl": "83644.3", "wps": "3283.4", "ups": "0.03", "wpb": "131072", "bsz": "64", "num_updates": "2", "lr": "4e-05", "gnorm": "18.411", "loss_scale": "4", "train_wall": "40", "gb_free": "9.3", "wall": "96"} -2021-03-08 18:06:21 | INFO | fairseq.trainer | NOTE: gradient overflow detected, ignoring gradient, setting loss scale to: 2.0 -2021-03-08 18:06:56 | INFO | fairseq.trainer | NOTE: gradient overflow detected, ignoring gradient, setting loss scale to: 1.0 -2021-03-08 18:07:37 | INFO | train_inner | {"epoch": 1, "update": 0.006, "loss": "23.682", "ppl": "1.34537e+07", "wps": "1176.6", "ups": "0.01", "wpb": "131072", "bsz": "64", "num_updates": "3", "lr": "6e-05", "gnorm": "119.682", "loss_scale": "1", "train_wall": "111", "gb_free": "9.3", "wall": "208"} -2021-03-08 18:08:18 | INFO | train_inner | {"epoch": 1, "update": 0.007, "loss": "18.988", "ppl": "519921", "wps": "3189.1", "ups": "0.02", "wpb": "131072", "bsz": "64", "num_updates": "4", "lr": "8e-05", "gnorm": "14.934", "loss_scale": "1", "train_wall": "41", "gb_free": "9.3", "wall": "249"} -2021-03-08 18:08:59 | INFO | train_inner | {"epoch": 1, "update": 0.008, "loss": "20.08", "ppl": "1.10798e+06", "wps": "3223.1", "ups": "0.02", "wpb": "131072", "bsz": "64", "num_updates": "5", "lr": "0.0001", "gnorm": "59.92", "loss_scale": "1", "train_wall": "41", "gb_free": "9.3", "wall": "289"} -2021-03-08 18:09:39 | INFO | train_inner | {"epoch": 1, "update": 0.009, "loss": "18.323", "ppl": "327980", "wps": "3256.6", "ups": "0.02", "wpb": "131072", "bsz": "64", "num_updates": "6", "lr": "8e-05", "gnorm": "37.425", "loss_scale": "1", "train_wall": "40", "gb_free": "9.3", "wall": "330"} -2021-03-08 18:10:20 | INFO | train_inner | {"epoch": 1, "update": 0.01, "loss": "17.264", "ppl": "157354", "wps": "3188.7", "ups": "0.02", "wpb": "131072", "bsz": "64", "num_updates": "7", "lr": "6e-05", "gnorm": "10.824", "loss_scale": "1", "train_wall": "41", "gb_free": "9.3", "wall": "371"} -2021-03-08 18:11:01 | INFO | train_inner | {"epoch": 1, "update": 0.011, "loss": "16.794", "ppl": "113647", "wps": "3230", "ups": "0.02", "wpb": "131072", "bsz": "64", "num_updates": "8", "lr": "4e-05", "gnorm": "5.616", "loss_scale": "1", "train_wall": "41", "gb_free": "9.3", "wall": "411"} -2021-03-08 18:11:39 | INFO | train_inner | {"epoch": 1, "update": 0.012, "loss": "16.706", "ppl": "106938", "wps": "3384", "ups": "0.03", "wpb": "131072", "bsz": "64", "num_updates": "9", "lr": "2e-05", "gnorm": "5.318", "loss_scale": "1", "train_wall": "39", "gb_free": "9.3", "wall": "450"} -2021-03-08 18:12:19 | INFO | train_inner | {"epoch": 1, "update": 0.013, "loss": "16.548", "ppl": "95796.2", "wps": "3274.4", "ups": "0.02", 
"wpb": "131072", "bsz": "64", "num_updates": "10", "lr": "0", "gnorm": "5.22", "loss_scale": "1", "train_wall": "40", "gb_free": "9.3", "wall": "490"} -2021-03-08 18:12:19 | INFO | fairseq_cli.train | Stopping training due to num_updates: 10 >= max_update: 10 -2021-03-08 18:12:19 | INFO | fairseq_cli.train | begin validation on "valid" subset -2021-03-08 18:12:45 | INFO | valid | {"epoch": 1, "valid_loss": "16.624", "valid_ppl": "101000", "valid_wps": "10855.9", "valid_wpb": "123202", "valid_bsz": "60.5", "valid_num_updates": "10"} -2021-03-08 18:12:45 | INFO | fairseq_cli.train | end of epoch 1 (average epoch stats below) -2021-03-08 18:12:45 | INFO | train | {"epoch": 1, "train_loss": "18.114", "train_ppl": "283776", "train_wps": "2567.8", "train_ups": "0.02", "train_wpb": "131072", "train_bsz": "64", "train_num_updates": "10", "train_lr": "0", "train_gnorm": "29.562", "train_loss_scale": "1", "train_train_wall": "480", "train_gb_free": "9.3", "train_wall": "516"} -2021-03-08 18:12:45 | INFO | fairseq_cli.train | done training in 509.9 seconds -``` - -

    diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/modules/transformer_layer.py b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/modules/transformer_layer.py deleted file mode 100644 index 347b8118daa2818af5e0230a793f2fa8fcd63b3a..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/modules/transformer_layer.py +++ /dev/null @@ -1,459 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -from typing import Dict, List, Optional - -import torch -import torch.nn as nn -from fairseq import utils -from fairseq.modules import LayerNorm, MultiheadAttention -from fairseq.modules.fairseq_dropout import FairseqDropout -from fairseq.modules.quant_noise import quant_noise -from torch import Tensor -from fairseq.models.transformer import ( - TransformerConfig, -) - - -class TransformerEncoderLayerBase(nn.Module): - """Encoder layer block. - - In the original paper each operation (multi-head attention or FFN) is - postprocessed with: `dropout -> add residual -> layernorm`. In the - tensor2tensor code they suggest that learning is more robust when - preprocessing each layer with layernorm and postprocessing with: - `dropout -> add residual`. We default to the approach in the paper, but the - tensor2tensor approach can be enabled by setting - *cfg.encoder.normalize_before* to ``True``. - - Args: - args (argparse.Namespace): parsed command-line arguments - """ - - def __init__(self, cfg): - super().__init__() - self.cfg = cfg - self.embed_dim = cfg.encoder.embed_dim - self.quant_noise = cfg.quant_noise.pq - self.quant_noise_block_size = cfg.quant_noise.pq_block_size - self.self_attn = self.build_self_attention(self.embed_dim, cfg) - self.self_attn_layer_norm = LayerNorm(self.embed_dim, export=cfg.export) - self.dropout_module = FairseqDropout( - cfg.dropout, module_name=self.__class__.__name__ - ) - self.activation_fn = utils.get_activation_fn(activation=cfg.activation_fn) - activation_dropout_p = cfg.activation_dropout - if activation_dropout_p == 0: - # for backwards compatibility with models that use cfg.relu_dropout - activation_dropout_p = cfg.relu_dropout or 0 - self.activation_dropout_module = FairseqDropout( - float(activation_dropout_p), module_name=self.__class__.__name__ - ) - self.normalize_before = cfg.encoder.normalize_before - self.fc1 = self.build_fc1( - self.embed_dim, - cfg.encoder.ffn_embed_dim, - self.quant_noise, - self.quant_noise_block_size, - ) - self.fc2 = self.build_fc2( - cfg.encoder.ffn_embed_dim, - self.embed_dim, - self.quant_noise, - self.quant_noise_block_size, - ) - - self.final_layer_norm = LayerNorm(self.embed_dim, export=cfg.export) - - def build_fc1(self, input_dim, output_dim, q_noise, qn_block_size): - return quant_noise( - nn.Linear(input_dim, output_dim), p=q_noise, block_size=qn_block_size - ) - - def build_fc2(self, input_dim, output_dim, q_noise, qn_block_size): - return quant_noise( - nn.Linear(input_dim, output_dim), p=q_noise, block_size=qn_block_size - ) - - def build_self_attention(self, embed_dim, cfg): - return MultiheadAttention( - embed_dim, - cfg.encoder.attention_heads, - dropout=cfg.attention_dropout, - self_attention=True, - q_noise=self.quant_noise, - qn_block_size=self.quant_noise_block_size, - ) - - def residual_connection(self, x, residual): - return residual + x - - def upgrade_state_dict_named(self, state_dict, name): - """ - Rename layer 
norm states from `...layer_norms.0.weight` to - `...self_attn_layer_norm.weight` and `...layer_norms.1.weight` to - `...final_layer_norm.weight` - """ - layer_norm_map = {"0": "self_attn_layer_norm", "1": "final_layer_norm"} - for old, new in layer_norm_map.items(): - for m in ("weight", "bias"): - k = "{}.layer_norms.{}.{}".format(name, old, m) - if k in state_dict: - state_dict["{}.{}.{}".format(name, new, m)] = state_dict[k] - del state_dict[k] - - def forward( - self, - x, - encoder_padding_mask: Optional[Tensor], - attn_mask: Optional[Tensor] = None, - ): - """ - Args: - x (Tensor): input to the layer of shape `(seq_len, batch, embed_dim)` - encoder_padding_mask (ByteTensor): binary ByteTensor of shape - `(batch, seq_len)` where padding elements are indicated by ``1``. - attn_mask (ByteTensor): binary tensor of shape `(tgt_len, src_len)`, - where `tgt_len` is the length of output and `src_len` is the - length of input, though here both are equal to `seq_len`. - `attn_mask[tgt_i, src_j] = 1` means that when calculating the - embedding for `tgt_i`, we exclude (mask out) `src_j`. This is - useful for strided self-attention. - - Returns: - encoded output of shape `(seq_len, batch, embed_dim)` - """ - # anything in original attn_mask = 1, becomes -1e8 - # anything in original attn_mask = 0, becomes 0 - # Note that we cannot use -inf here, because at some edge cases, - # the attention weight (before softmax) for some padded element in query - # will become -inf, which results in NaN in model parameters - if attn_mask is not None: - attn_mask = attn_mask.masked_fill( - attn_mask.to(torch.bool), - -1e8 if x.dtype == torch.float32 else -1e4 - ) - - residual = x - if self.normalize_before: - x = self.self_attn_layer_norm(x) - x, _ = self.self_attn( - query=x, - key=x, - value=x, - key_padding_mask=encoder_padding_mask, - need_weights=False, - attn_mask=attn_mask, - ) - x = self.dropout_module(x) - x = self.residual_connection(x, residual) - if not self.normalize_before: - x = self.self_attn_layer_norm(x) - - residual = x - if self.normalize_before: - x = self.final_layer_norm(x) - x = self.activation_fn(self.fc1(x)) - x = self.activation_dropout_module(x) - x = self.fc2(x) - x = self.dropout_module(x) - x = self.residual_connection(x, residual) - if not self.normalize_before: - x = self.final_layer_norm(x) - return x - - -# backward compatible with the legacy argparse format -class TransformerEncoderLayer(TransformerEncoderLayerBase): - def __init__(self, args): - super().__init__(TransformerConfig.from_namespace(args)) - self.args = args - - def build_self_attention(self, embed_dim, args): - return super().build_self_attention( - embed_dim, TransformerConfig.from_namespace(args) - ) - - -class TransformerDecoderLayerBase(nn.Module): - """Decoder layer block. - - In the original paper each operation (multi-head attention, encoder - attention or FFN) is postprocessed with: `dropout -> add residual -> - layernorm`. In the tensor2tensor code they suggest that learning is more - robust when preprocessing each layer with layernorm and postprocessing with: - `dropout -> add residual`. We default to the approach in the paper, but the - tensor2tensor approach can be enabled by setting - *cfg.decoder.normalize_before* to ``True``. - - Args: - args (argparse.Namespace): parsed command-line arguments - no_encoder_attn (bool, optional): whether to attend to encoder outputs - (default: False). 
- """ - - def __init__( - self, cfg, no_encoder_attn=False, add_bias_kv=False, add_zero_attn=False - ): - super().__init__() - self.embed_dim = cfg.decoder.embed_dim - self.dropout_module = FairseqDropout( - cfg.dropout, module_name=self.__class__.__name__ - ) - self.quant_noise = cfg.quant_noise.pq - self.quant_noise_block_size = cfg.quant_noise.pq_block_size - - self.cross_self_attention = cfg.cross_self_attention - - self.self_attn = self.build_self_attention( - self.embed_dim, - cfg, - add_bias_kv=add_bias_kv, - add_zero_attn=add_zero_attn, - ) - - self.activation_fn = utils.get_activation_fn(activation=cfg.activation_fn) - activation_dropout_p = cfg.activation_dropout - if activation_dropout_p == 0: - # for backwards compatibility with models that use cfg.relu_dropout - activation_dropout_p = cfg.relu_dropout or 0 - self.activation_dropout_module = FairseqDropout( - float(activation_dropout_p), module_name=self.__class__.__name__ - ) - self.normalize_before = cfg.decoder.normalize_before - - self.self_attn_layer_norm = LayerNorm(self.embed_dim, export=cfg.export) - - if no_encoder_attn: - self.encoder_attn = None - self.encoder_attn_layer_norm = None - else: - self.encoder_attn = self.build_encoder_attention(self.embed_dim, cfg) - self.encoder_attn_layer_norm = LayerNorm(self.embed_dim, export=cfg.export) - - self.fc1 = self.build_fc1( - self.embed_dim, - cfg.decoder.ffn_embed_dim, - self.quant_noise, - self.quant_noise_block_size, - ) - self.fc2 = self.build_fc2( - cfg.decoder.ffn_embed_dim, - self.embed_dim, - self.quant_noise, - self.quant_noise_block_size, - ) - - self.final_layer_norm = LayerNorm(self.embed_dim, export=cfg.export) - self.need_attn = True - - self.onnx_trace = False - - def build_fc1(self, input_dim, output_dim, q_noise, qn_block_size): - return quant_noise(nn.Linear(input_dim, output_dim), q_noise, qn_block_size) - - def build_fc2(self, input_dim, output_dim, q_noise, qn_block_size): - return quant_noise(nn.Linear(input_dim, output_dim), q_noise, qn_block_size) - - def build_self_attention( - self, embed_dim, cfg, add_bias_kv=False, add_zero_attn=False - ): - return MultiheadAttention( - embed_dim, - cfg.decoder.attention_heads, - dropout=cfg.attention_dropout, - add_bias_kv=add_bias_kv, - add_zero_attn=add_zero_attn, - self_attention=not cfg.cross_self_attention, - q_noise=self.quant_noise, - qn_block_size=self.quant_noise_block_size, - ) - - def build_encoder_attention(self, embed_dim, cfg): - return MultiheadAttention( - embed_dim, - cfg.decoder.attention_heads, - kdim=cfg.encoder.embed_dim, - vdim=cfg.encoder.embed_dim, - dropout=cfg.attention_dropout, - encoder_decoder_attention=True, - q_noise=self.quant_noise, - qn_block_size=self.quant_noise_block_size, - ) - - def prepare_for_onnx_export_(self): - self.onnx_trace = True - - def residual_connection(self, x, residual): - return residual + x - - def forward( - self, - x, - encoder_out: Optional[torch.Tensor] = None, - encoder_padding_mask: Optional[torch.Tensor] = None, - incremental_state: Optional[Dict[str, Dict[str, Optional[Tensor]]]] = None, - prev_self_attn_state: Optional[List[torch.Tensor]] = None, - prev_attn_state: Optional[List[torch.Tensor]] = None, - self_attn_mask: Optional[torch.Tensor] = None, - self_attn_padding_mask: Optional[torch.Tensor] = None, - need_attn: bool = False, - need_head_weights: bool = False, - ): - """ - Args: - x (Tensor): input to the layer of shape `(seq_len, batch, embed_dim)` - encoder_padding_mask (ByteTensor, optional): binary - ByteTensor of shape `(batch, 
src_len)` where padding - elements are indicated by ``1``. - need_attn (bool, optional): return attention weights - need_head_weights (bool, optional): return attention weights - for each head (default: return average over heads). - - Returns: - encoded output of shape `(seq_len, batch, embed_dim)` - """ - if need_head_weights: - need_attn = True - - residual = x - if self.normalize_before: - x = self.self_attn_layer_norm(x) - if prev_self_attn_state is not None: - prev_key, prev_value = prev_self_attn_state[:2] - saved_state: Dict[str, Optional[Tensor]] = { - "prev_key": prev_key, - "prev_value": prev_value, - } - if len(prev_self_attn_state) >= 3: - saved_state["prev_key_padding_mask"] = prev_self_attn_state[2] - assert incremental_state is not None - self.self_attn._set_input_buffer(incremental_state, saved_state) - _self_attn_input_buffer = self.self_attn._get_input_buffer(incremental_state) - if self.cross_self_attention and not ( - incremental_state is not None - and _self_attn_input_buffer is not None - and "prev_key" in _self_attn_input_buffer - ): - if self_attn_mask is not None: - assert encoder_out is not None - self_attn_mask = torch.cat( - (x.new_zeros(x.size(0), encoder_out.size(0)), self_attn_mask), dim=1 - ) - if self_attn_padding_mask is not None: - if encoder_padding_mask is None: - assert encoder_out is not None - encoder_padding_mask = self_attn_padding_mask.new_zeros( - encoder_out.size(1), encoder_out.size(0) - ) - self_attn_padding_mask = torch.cat( - (encoder_padding_mask, self_attn_padding_mask), dim=1 - ) - assert encoder_out is not None - y = torch.cat((encoder_out, x), dim=0) - else: - y = x - - x, attn = self.self_attn( - query=x, - key=y, - value=y, - key_padding_mask=self_attn_padding_mask, - incremental_state=incremental_state, - need_weights=False, - attn_mask=self_attn_mask, - ) - x = self.dropout_module(x) - x = self.residual_connection(x, residual) - if not self.normalize_before: - x = self.self_attn_layer_norm(x) - - if self.encoder_attn is not None and encoder_out is not None: - residual = x - if self.normalize_before: - x = self.encoder_attn_layer_norm(x) - if prev_attn_state is not None: - prev_key, prev_value = prev_attn_state[:2] - saved_state: Dict[str, Optional[Tensor]] = { - "prev_key": prev_key, - "prev_value": prev_value, - } - if len(prev_attn_state) >= 3: - saved_state["prev_key_padding_mask"] = prev_attn_state[2] - assert incremental_state is not None - self.encoder_attn._set_input_buffer(incremental_state, saved_state) - - x, attn = self.encoder_attn( - query=x, - key=encoder_out, - value=encoder_out, - key_padding_mask=encoder_padding_mask, - incremental_state=incremental_state, - static_kv=True, - need_weights=need_attn or (not self.training and self.need_attn), - need_head_weights=need_head_weights, - ) - x = self.dropout_module(x) - x = self.residual_connection(x, residual) - if not self.normalize_before: - x = self.encoder_attn_layer_norm(x) - - residual = x - if self.normalize_before: - x = self.final_layer_norm(x) - - x = self.activation_fn(self.fc1(x)) - x = self.activation_dropout_module(x) - x = self.fc2(x) - x = self.dropout_module(x) - x = self.residual_connection(x, residual) - if not self.normalize_before: - x = self.final_layer_norm(x) - if self.onnx_trace and incremental_state is not None: - saved_state = self.self_attn._get_input_buffer(incremental_state) - assert saved_state is not None - if self_attn_padding_mask is not None: - self_attn_state = [ - saved_state["prev_key"], - saved_state["prev_value"], - 
saved_state["prev_key_padding_mask"], - ] - else: - self_attn_state = [saved_state["prev_key"], saved_state["prev_value"]] - return x, attn, self_attn_state - return x, attn, None - - def make_generation_fast_(self, need_attn: bool = False, **kwargs): - self.need_attn = need_attn - - -# backward compatible with the legacy argparse format -class TransformerDecoderLayer(TransformerDecoderLayerBase): - def __init__( - self, args, no_encoder_attn=False, add_bias_kv=False, add_zero_attn=False - ): - super().__init__( - TransformerConfig.from_namespace(args), - no_encoder_attn=no_encoder_attn, - add_bias_kv=add_bias_kv, - add_zero_attn=add_zero_attn, - ) - self.args = args - - def build_self_attention( - self, embed_dim, args, add_bias_kv=False, add_zero_attn=False - ): - return super().build_self_attention( - embed_dim, - TransformerConfig.from_namespace(args), - add_bias_kv=add_bias_kv, - add_zero_attn=add_zero_attn, - ) - - def build_encoder_attention(self, embed_dim, args): - return super().build_encoder_attention( - embed_dim, - TransformerConfig.from_namespace(args), - ) diff --git a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/wav2vec/unsupervised/README.md b/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/wav2vec/unsupervised/README.md deleted file mode 100644 index 0b213fd202d04bce2149936ec149c23c6d483745..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/wav2vec/unsupervised/README.md +++ /dev/null @@ -1,103 +0,0 @@ -# wav2vec Unsupervised (wav2vec-U) - -Wav2vec Unsupervised (wav2vec-U) is a framework for building speech recognition systems without any labeled training data as described in [Unsupervised Speech Recognition (Baevski et al., 2021)](https://ai.facebook.com/research/publications/unsupervised-speech-recognition). The model takes as input wav2vec 2.0 or XLSR representations (see [pretrained models](https://github.com/pytorch/fairseq/blob/main/examples/wav2vec)) as well as unlabeled speech and text data. - - The wav2vec-U training procedure consists of three consecutive main steps: -* Preparation of speech representations and text data -* Generative adversarial training (GAN) -* Iterative self-training + Kaldi LM-decoding - -## Preparation of speech and text data -Similar to [wav2vec 2.0](https://github.com/pytorch/fairseq/blob/main/examples/wav2vec/README.md), data folders contain {train,valid,test}.{tsv,wrd,phn} files, where audio paths are stored in tsv files, and word, letter or phoneme transcriptions are stored in .{wrd,ltr,phn}. - -In **/path/to/data/with_silence** you need a *train.tsv* file as well as (optionally) *{valid,test}.{tsv,wrd,phn}*. It is nice to have *10h.{tsv,phn}* files there too for reproducing the ablation study on layer selection. In **/path/to/data/without_silence** you have the same files, except *.tsv* files contain audios with silences removed using rVAD. - -Pre-requisites: -* set FAIRSEQ_ROOT environmental variable to your fairseq installation -* set RVAD_ROOT environmental variable to a checkout of [rVADfast](https://github.com/zhenghuatan/rVADfast) -* set KENLM_ROOT environmental variable to the location of [KenLM](https://github.com/kpu/kenlm) binaries -* install [PyKaldi](https://github.com/pykaldi/pykaldi) and set KALDI_ROOT environmental variable to the location of your kaldi installation. 
To use the version bundled with PyKaldi, you can use /path/to/pykaldi/tools/kaldi - -Create new audio files without silences: -```shell -# create a manifest file for the original set of audio files -python $FAIRSEQ_ROOT/examples/wav2vec/wav2vec_manifest.py /dir/to/save/audio/files --ext wav --dest /path/to/new/train.tsv --valid-percent 0 - -python scripts/vads.py -r $RVAD_ROOT < /path/to/train.tsv > train.vads - -python scripts/remove_silence.py --tsv /path/to/train.tsv --vads train.vads --out /dir/to/save/audio/files - -python $FAIRSEQ_ROOT/examples/wav2vec/wav2vec_manifest.py /dir/to/save/audio/files --ext wav --dest /path/to/new/train.tsv --valid-percent 0.01 -``` - -Next, we need to preprocess the audio data to better match phonemized text data: - -```shell -zsh scripts/prepare_audio.sh /dir/with/{train,test,valid}.tsv /output/dir /path/to/wav2vec2/model.pt 512 14 -``` -Note that if you have splits different than train/valid/test, you will need to modify this script. The last two arguments are the PCA dimensionality and the 0-based index of the layer from which to extract representations. - -Now we need to prepare text data: -```shell -zsh scripts/prepare_text.sh language /path/to/text/file /output/dir 1000 espeak /path/to/fasttext/lid/model -``` - -The fourth argument is the minimum number of observations of phones to keep. If your text corpus is small, you might want to reduce this number. - -The fifth argument is which phonemizer to use. Supported values are [espeak](http://espeak.sourceforge.net/), [espeak-ng](https://github.com/espeak-ng/espeak-ng), and [G2P](https://github.com/Kyubyong/g2p) (English only). - -Pre-trained fasttext LID models can be downloaded [here](https://fasttext.cc/docs/en/language-identification.html). - -### Prepare TIMIT data -TIMIT transcripts include silence. Therefore VAD is not used for audio preprocessing, and we do not wrap transcripts with silences or insert random silence in between words. - -To prepare TIMIT data for both the matched and unmatched setups: -```shell -bash scripts/prepare_timit.sh /dir/to/timit/raw/data /output/dir /path/to/wav2vec2/model.pt -``` - -Note that we assume the TIMIT distribution with capitalized directories and filenames is used (e.g., `TRAIN/DR1/FCJF0/SA1.PHN`). - -## Generative adversarial training (GAN) - -We then use a GAN model to build a first unsupervised ASR model. The data preparation above of both speech features and text data is a necessary procedure that enables the generator to match speech to text in an unsupervised way.
- -Launching GAN training on top of preprocessed features, with default hyperparameters can be done with: - -``` -PREFIX=w2v_unsup_gan_xp -TASK_DATA=/path/to/features/precompute_unfiltered_pca512_cls128_mean_pooled -TEXT_DATA=/path/to/data/phones # path to fairseq-preprocessed GAN data (phones dir) -KENLM_PATH=/path/to/data/phones/kenlm.phn.o4.bin # KenLM 4-gram phoneme language model (LM data = GAN data here) - -PYTHONPATH=$FAIRSEQ_ROOT PREFIX=$PREFIX fairseq-hydra-train \ - -m --config-dir config/gan \ - --config-name w2vu \ - task.data=${TASK_DATA} \ - task.text_data=${TEXT_DATA} \ - task.kenlm_path=${KENLM_PATH} \ - common.user_dir=${FAIRSEQ_ROOT}/examples/wav2vec/unsupervised \ - model.code_penalty=2,4 model.gradient_penalty=1.5,2.0 \ - model.smoothness_weight=0.5,0.75,1.0 'common.seed=range(0,5)' -``` - - -Once we find the best checkpoint (chosen using unsupervised metric that combined language model perplexity and vocabulary usage), we can use it to generate phone labels (or word labels with an appropriate kaldi WFST): - -```shell -python w2vu_generate.py --config-dir config/generate --config-name viterbi \ -fairseq.common.user_dir=${FAIRSEQ_ROOT}/examples/wav2vec/unsupervised \ -fairseq.task.data=/path/to/dir/with/features \ -fairseq.common_eval.path=/path/to/gan/checkpoint \ -fairseq.dataset.gen_subset=valid results_path=/where/to/save/transcriptions -``` - -The decoding without LM works best on the same adjacent-mean-pooled features that the gan was trained on, while decoding with LM works better on features before the adjacent timestep mean-pooling step (without the "_pooled" suffix). - -## Iterative self-training + Kaldi LM-decoding -After the GAN training provides a first unsupervised model, we can then progressively refine the quality of transcriptions using several iterations of semi-supervised learning. We perform two iterations: first, pseudo-label the training data with the unsupervised GAN model and train an HMM on the pseudo-labels. Second, we relabel the training data with the HMM and then fine-tune the original wav2vec 2.0 model using the HMM pseudo-labels with a CTC loss. Note that HMM models use phonemes as output, while wav2vec 2.0 use letter. Both are decoded using WFST decoders into words. - - -Please see [this README](kaldi_self_train/README.md) for more instructions on how to do iterative self-training + Kaldi LM-decoding. - -*** Note: these instructions are a work in progress and will be updated over the next few days diff --git a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/data/multilingual/multilingual_data_manager.py b/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/data/multilingual/multilingual_data_manager.py deleted file mode 100644 index 137481b449b9cb5b2b486950c6cea669ac507c48..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/data/multilingual/multilingual_data_manager.py +++ /dev/null @@ -1,1136 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
- -import itertools -import json -import logging -import math -import os -from collections import OrderedDict, defaultdict -from argparse import ArgumentError - -from fairseq import utils -from fairseq.data import ( - AppendTokenDataset, - ConcatDataset, - Dictionary, - LanguagePairDataset, - PrependTokenDataset, - SampledMultiDataset, - SampledMultiEpochDataset, - StripTokenDataset, - TransformEosLangPairDataset, - TruncateDataset, - data_utils, - indexed_dataset, -) -from fairseq.data.multilingual.multilingual_utils import ( - EncoderLangtok, - LangTokSpec, - LangTokStyle, - augment_dictionary, - get_lang_tok, -) -from fairseq.data.multilingual.sampled_multi_dataset import CollateFormat -from fairseq.file_io import PathManager -from fairseq.utils import FileContentsAction, csv_str_list, eval_str_dict - - -logger = logging.getLogger(__name__) - -SRC_DICT_NAME = 'src' -TGT_DICT_NAME = 'tgt' - - -def _lang_id(dic: Dictionary, lang: str): - """Return language ID index.""" - idx = dic.index(lang) - assert idx != dic.unk_index, "cannot find language ID for lang {}".format(lang) - return idx - - -def load_sampling_weights(from_file): - with open(from_file) as f: - weights = json.load(f) - return weights - - -class MultilingualDatasetManager(object): - def __init__(self, args, lang_pairs, langs, dicts, sampling_method): - super().__init__() - self.args = args - self.seed = args.seed - self.lang_pairs = lang_pairs - self.extra_lang_pairs = ( - list( - {p for _, v in args.extra_lang_pairs.items() for p in v.split(",")} - ) - if args.extra_lang_pairs - else [] - ) - self.src_langs = {p.split("-")[0] for p in args.lang_pairs + self.extra_lang_pairs} - self.tgt_langs = {p.split("-")[1] for p in args.lang_pairs + self.extra_lang_pairs} - self.langs = langs - self.dicts = dicts - self.lang_dict = self.create_lang_dictionary(self.langs) - self.sampling_method = sampling_method - self.sampling_scheduler = None - self._has_sharded_data = False - self._num_shards_dict = {} - self._training_data_sizes = defaultdict(lambda: {}) - - @classmethod - def setup_data_manager(cls, args, lang_pairs, langs, dicts, sampling_method): - return MultilingualDatasetManager( - args, lang_pairs, langs, dicts, sampling_method - ) - - @staticmethod - def add_args(parser): - parser.add_argument( - "data", - help="colon separated path to data directories list, \ - will be iterated upon during epochs in round-robin manner", - action=FileContentsAction, - ) - parser.add_argument( - "--langs", - default=None, - type=csv_str_list, - help="a list of languages comma sperated languages which can appear in lang-pairs; " - "note that the ordering determines language token IDs", - ) - parser.add_argument( - "--lang-dict", - default=None, - type=str, - help="an external file which contains a list of " - "languages which can appear in lang-pairs; " - "note that the ordering determines language token IDs; " - "--langs and --lang-dict are two exclusive options", - ) - parser.add_argument('--source-dict', default=None, type=str, - help='path to source dictionary; if specified it will override per language dictionary loading') - parser.add_argument('--target-dict', default=None, type=str, - help='path to target dictionary; if specified it will override per language dictionary loading') - parser.add_argument( - "--lang-tok-style", - default=LangTokStyle.multilingual.value, - type=str, - choices=[LangTokStyle.multilingual.value, LangTokStyle.mbart.value], - help="language token styles", - ) - - parser.add_argument( - "--load-alignments", - 
action="store_true", - help="load the binarized alignments", - ) - parser.add_argument( - "--left-pad-source", - default="True", - type=str, - metavar="BOOL", - help="pad the source on the left", - ) - parser.add_argument( - "--left-pad-target", - default="False", - type=str, - metavar="BOOL", - help="pad the target on the left", - ) - try: - parser.add_argument( - "--max-source-positions", - default=1024, - type=int, - metavar="N", - help="max number of tokens in the source sequence", - ) - parser.add_argument( - "--max-target-positions", - default=1024, - type=int, - metavar="N", - help="max number of tokens in the target sequence", - ) - except ArgumentError: - # this might have already been defined. Once we transition this to hydra it should be fine to add it here. - pass - parser.add_argument( - "--upsample-primary", - default=1, - type=int, - help="amount to upsample primary dataset", - ) - parser.add_argument( - "--truncate-source", - action="store_true", - default=False, - help="truncate source to max-source-positions", - ) - parser.add_argument( - "--encoder-langtok", - default=None, - type=str, - choices=[EncoderLangtok.src.value, EncoderLangtok.tgt.value], - metavar="SRCTGT", - help="prepend to the beginning of source sentence the source or target " - "language token. (src/tgt)", - ) - parser.add_argument( - "--decoder-langtok", - action="store_true", - help="prepend to the beginning of target sentence the target language token", - ) - parser.add_argument( - "--lang-tok-replacing-bos-eos", action="store_true", default=False - ) - parser.add_argument( - "--enable-lang-ids", - default=False, - action="store_true", - help="whether to include language IDs in samples", - ) - parser.add_argument( - "--enable-reservsed-directions-shared-datasets", - default=False, - action="store_true", - help="whether to allow datasets be used in reversed directions", - ) - - parser.add_argument( - "--extra-data", - help='a dictionary of data name to this path, \ - e.g. {"mined", path_to_mined_data, "denoised": path_to_denoised_data}', - type=lambda uf: eval_str_dict(uf, type=str), - default=None, - ) - parser.add_argument( - "--extra-lang-pairs", - help='a dictionary of data name to the language pairs they serve, \ - e.g. {"mined": comma-separated-lang-pairs, "denoised": comma-separated-lang-pairs}', - type=lambda uf: eval_str_dict(uf, type=str), - default=None, - ) - parser.add_argument( - "--fixed-dictionary", - help="Fixed dictionary to use with model path", - default=None, - type=str, - ) - parser.add_argument( - "--langtoks-specs", - help='a list of comma separated data types that a set of language tokens to be specialized for, \ - e.g. "main,dae,mined". There will be a set of language tokens added to the vocab to \ - distinguish languages in different training data types. If not specified, default language \ - tokens per languages will be added', - default=LangTokSpec.main.value, - type=csv_str_list, - ) - parser.add_argument( - "--langtoks", - help='a dictionary of how to add language tokens, \ - e.g. {"mined": (None, "tgt"), "mono_dae": ("src.dae", "tgt"), "main": \ - ("src", "tgt")}, or {"mined": ("src.mined", "tgt")}', - default=None, - type=lambda uf: eval_str_dict(uf, type=str), - ) - parser.add_argument( - "--sampling-weights-from-file", - help='a file contain a python dictionary of how to sample data sets, \ - e.g. 
{ "main:en_XX-es_XX": 0.2, "mined:en_XX-pt_XX": 0.5, \ - "mono_dae:es_XX-es_XX: 0.3, "main:en_xx-fr_XX": 0.8 }', - default=None, - type=str, - ) - parser.add_argument( - "--sampling-weights", - help='a dictionary of how to sample data sets, \ - e.g. { "main:en_XX-es_XX": 0.2, "mined:en_XX-pt_XX": 0.5, \ - "mono_dae:es_XX-es_XX: 0.3, "main:en_xx-fr_XX": 0.8 }', - default=None, - type=lambda uf: eval_str_dict(uf, type=str), - ) - parser.add_argument( - "--virtual-epoch-size", - default=None, - type=int, - help="virtual epoch size to speed up data loading", - ) - parser.add_argument( - "--virtual-data-size", - default=None, - type=int, - help="virtual data size of the whole joint dataset to speed" - "up data loading and have specific dynamic sampling strategy interval", - ) - - @classmethod - def load_langs(cls, args, **kwargs): - if args.lang_dict and args.langs: - raise ValueError("--langs and --lang-dict can not both be specified") - if args.lang_dict is None and args.langs is None: - logger.warning( - "External language dictionary is not provided; " - "use lang-pairs to infer the set of supported languages. " - "The language ordering is not stable which might cause " - "misalignment in pretraining and finetuning." - ) - # infer from lang_pairs as it is - langs = list( - {x for lang_pair in args.lang_pairs for x in lang_pair.split("-")} - ) - langs = sorted(langs) - logger.info(f"inferred language list: {langs}") - elif args.lang_dict: - with open( - PathManager.get_local_path(args.lang_dict), "r", encoding="utf-8" - ) as f: - langs = [lang.strip() for lang in f.readlines() if lang.strip()] - logger.info( - f"loaded language list from {args.lang_dict} as they are ordered in file" - ) - elif args.langs: - langs = args.langs - logger.info( - f"parsed the language list as they are ordered in the option: {langs}" - ) - return langs - - def has_sharded_data(self, split): - return self._has_sharded_data and split == getattr( - self.args, "train_subset", None - ) - - def _shared_collater(self): - return not (self.args.extra_data and "mono_dae" in self.args.extra_data) and ( - not self.args.lang_tok_replacing_bos_eos - ) - - def estimate_global_pass_epoch(self, epoch): - if self.args.virtual_epoch_size is None or self.args.virtual_data_size is None: - return None - # one epoch more for remaining data in each shard - virtual_epochs_per_shard = math.ceil( - self.args.virtual_data_size / self.args.virtual_epoch_size - ) - # note that fairseq epoch / shard_epoch starts from 1 - shard_epoch = (epoch - 1) // virtual_epochs_per_shard + 1 - return shard_epoch - - @classmethod - def prepare(cls, load_dictionary, args, **kargs): - args.left_pad_source = utils.eval_bool(args.left_pad_source) - args.left_pad_target = utils.eval_bool(args.left_pad_target) - - if not hasattr(args, "shuffle_instance"): - args.shuffle_instance = False - if args.langtoks is None: - args.langtoks = {} - if "main" not in args.langtoks: - src_langtok_spec = args.encoder_langtok if args.encoder_langtok else None - tgt_langtok_spec = "tgt" if args.decoder_langtok else None - args.langtoks["main"] = (src_langtok_spec, tgt_langtok_spec) - - def check_langs(langs, pairs): - messages = [] - for src, tgt in pairs: - if src not in langs or tgt not in langs: - messages.append( - f"language pair {src}-{tgt} contains languages " - "that are not in the language dictionary" - ) - if len(messages) > 0: - raise ValueError(" ".join(messages) + f"; langs: {langs}") - - if args.lang_pairs is None: - raise ValueError( - "--lang-pairs is required. 
List all the language pairs in the training objective." - ) - if isinstance(args.lang_pairs, str): - args.lang_pairs = args.lang_pairs.split(",") - if args.source_lang is not None or args.target_lang is not None: - training = False - else: - training = True - language_list = cls.load_langs(args, **kargs) - check_langs( - language_list, - ( - [p.split("-") for p in args.lang_pairs] - if training - else [(args.source_lang, args.target_lang)] - ), - ) - - def load_dictionary_and_postproc(path): - d = load_dictionary(path) - augment_dictionary( - dictionary=d, - language_list=language_list, - lang_tok_style=args.lang_tok_style, - langtoks_specs=args.langtoks_specs, - extra_data=args.extra_data, - ) - return d - - dicts = cls.load_all_dictionaries(args, language_list, load_dictionary_and_postproc, training) - return language_list, dicts, training - - @classmethod - def load_all_dictionaries(cls, args, language_list, load_dictionary, training): - dicts = OrderedDict() - if args.source_dict is not None: - dicts[SRC_DICT_NAME] = load_dictionary(args.source_dict) - if args.target_dict is not None: - dicts[TGT_DICT_NAME] = load_dictionary(args.target_dict) - - if training: - extra_lang_pairs = ( - list( - {p for _, v in args.extra_lang_pairs.items() for p in v.split(",")} - ) - if args.extra_lang_pairs - else [] - ) - src_langs_to_load_dicts = sorted( - {p.split("-")[0] for p in (args.lang_pairs + extra_lang_pairs)} - ) - tgt_langs_to_load_dicts = sorted( - {p.split("-")[1] for p in (args.lang_pairs + extra_lang_pairs)} - ) - else: - src_langs_to_load_dicts = [args.source_lang] - tgt_langs_to_load_dicts = [args.target_lang] - - paths = utils.split_paths(args.data) - assert len(paths) > 0 - - def load_dicts(langs_to_load_dicts): - for lang in langs_to_load_dicts: - dicts[lang] = load_dictionary( - os.path.join(paths[0], "dict.{}.txt".format(lang)) - ) - if len(dicts) > 0: - dict0 = next(iter(dicts.values())) - assert dicts[lang].pad() == dict0.pad() - assert dicts[lang].eos() == dict0.eos() - assert dicts[lang].unk() == dict0.unk() - logger.info("[{}] dictionary: {} types".format(lang, len(dicts[lang]))) - - if args.fixed_dictionary is not None: - fixed_dict = load_dictionary(args.fixed_dictionary) - dicts = {lang: fixed_dict for lang in src_langs_to_load_dicts + tgt_langs_to_load_dicts} - else: - if args.source_dict is None: - load_dicts(src_langs_to_load_dicts) - if args.target_dict is None: - load_dicts(tgt_langs_to_load_dicts) - return dicts - - def get_source_dictionary(self, lang): - if self.args.source_dict is not None: - return self.dicts[SRC_DICT_NAME] - else: - return self.dicts[lang] - - def get_target_dictionary(self, lang): - if self.args.target_dict is not None: - return self.dicts[TGT_DICT_NAME] - else: - return self.dicts[lang] - - @classmethod - def create_lang_dictionary(cls, langs): - unk = "" - # hack to remove symbols other than unk as they are not needed by lang dict - lang_dict = Dictionary(pad=unk, eos=unk, unk=unk, bos=unk) - for lang in langs: - lang_dict.add_symbol(lang) - return lang_dict - - @classmethod - def get_langtok_index(cls, lang_tok, dic): - idx = dic.index(lang_tok) - assert ( - idx != dic.unk_index - ), "cannot find language token {} in the dictionary".format(lang_tok) - return idx - - def get_encoder_langtok(self, src_lang, tgt_lang, spec=None): - if spec is None: - return None - if spec and spec.startswith("src"): - if src_lang is None: - return None - langtok = get_lang_tok( - lang=src_lang, lang_tok_style=self.args.lang_tok_style, spec=spec - ) - else: - if 
tgt_lang is None: - return None - langtok = get_lang_tok( - lang=tgt_lang, lang_tok_style=self.args.lang_tok_style, spec=spec - ) - return self.get_langtok_index( - langtok, self.get_source_dictionary(src_lang) if src_lang else self.get_target_dictionary(tgt_lang) - ) - - def get_decoder_langtok(self, tgt_lang, spec=None): - if spec is None: - return None - langtok = get_lang_tok( - lang=tgt_lang, lang_tok_style=self.args.lang_tok_style, spec=spec - ) - return self.get_langtok_index(langtok, self.get_target_dictionary(tgt_lang)) - - @classmethod - def load_data(cls, path, vdict, impl): - dataset = data_utils.load_indexed_dataset(path, vdict, impl) - return dataset - - @classmethod - def split_exists(cls, split, src, tgt, lang, data_path, dataset_impl): - filename = os.path.join(data_path, "{}.{}-{}.{}".format(split, src, tgt, lang)) - return indexed_dataset.dataset_exists(filename, impl=dataset_impl) - - def load_lang_dataset( - self, - data_path, - split, - src, - src_dict, - tgt, - tgt_dict, - combine, - dataset_impl, - upsample_primary, - max_source_positions, - prepend_bos=False, - load_alignments=False, - truncate_source=False, - ): - - src_datasets = [] - tgt_datasets = [] - - for k in itertools.count(): - split_k = split + (str(k) if k > 0 else "") - - # infer langcode - if self.split_exists(split_k, src, tgt, src, data_path, dataset_impl): - prefix = os.path.join(data_path, "{}.{}-{}.".format(split_k, src, tgt)) - elif self.split_exists(split_k, tgt, src, src, data_path, dataset_impl): - prefix = os.path.join(data_path, "{}.{}-{}.".format(split_k, tgt, src)) - else: - if k > 0: - break - else: - logger.error( - f"Dataset not found: {data_path}, {split_k}, {src}, {tgt}" - ) - raise FileNotFoundError( - "Dataset not found: {} ({})".format(split, data_path) - ) - - src_dataset = self.load_data(prefix + src, src_dict, dataset_impl) - if truncate_source: - src_dataset = AppendTokenDataset( - TruncateDataset( - StripTokenDataset(src_dataset, src_dict.eos()), - max_source_positions - 1, - ), - src_dict.eos(), - ) - src_datasets.append(src_dataset) - tgt_datasets.append(self.load_data(prefix + tgt, tgt_dict, dataset_impl)) - - logger.info( - "{} {} {}-{} {} examples".format( - data_path, split_k, src, tgt, len(src_datasets[-1]) - ) - ) - - if not combine: - break - - assert len(src_datasets) == len(tgt_datasets) - - if len(src_datasets) == 1: - src_dataset, tgt_dataset = src_datasets[0], tgt_datasets[0] - else: - sample_ratios = [1] * len(src_datasets) - sample_ratios[0] = upsample_primary - src_dataset = ConcatDataset(src_datasets, sample_ratios) - tgt_dataset = ConcatDataset(tgt_datasets, sample_ratios) - - if prepend_bos: - assert hasattr(src_dict, "bos_index") and hasattr(tgt_dict, "bos_index") - src_dataset = PrependTokenDataset(src_dataset, src_dict.bos()) - tgt_dataset = PrependTokenDataset(tgt_dataset, tgt_dict.bos()) - - align_dataset = None - if load_alignments: - align_path = os.path.join( - data_path, "{}.align.{}-{}".format(split, src, tgt) - ) - if indexed_dataset.dataset_exists(align_path, impl=dataset_impl): - align_dataset = data_utils.load_indexed_dataset( - align_path, None, dataset_impl - ) - - return src_dataset, tgt_dataset, align_dataset - - def load_langpair_dataset( - self, - data_path, - split, - src, - src_dict, - tgt, - tgt_dict, - combine, - dataset_impl, - upsample_primary, - left_pad_source, - left_pad_target, - max_source_positions, - max_target_positions, - prepend_bos=False, - load_alignments=False, - truncate_source=False, - 
src_dataset_transform_func=lambda dataset: dataset, - tgt_dataset_transform_func=lambda dataset: dataset, - src_lang_id=None, - tgt_lang_id=None, - langpairs_sharing_datasets=None, - ): - norm_direction = "-".join(sorted([src, tgt])) - if langpairs_sharing_datasets is not None: - src_dataset = langpairs_sharing_datasets.get( - (data_path, split, norm_direction, src), "NotInCache" - ) - tgt_dataset = langpairs_sharing_datasets.get( - (data_path, split, norm_direction, tgt), "NotInCache" - ) - align_dataset = langpairs_sharing_datasets.get( - (data_path, split, norm_direction, src, tgt), "NotInCache" - ) - - # a hack: any one is not in cache, we need to reload them - if ( - langpairs_sharing_datasets is None - or src_dataset == "NotInCache" - or tgt_dataset == "NotInCache" - or align_dataset == "NotInCache" - or split != getattr(self.args, "train_subset", None) - ): - # source and target datasets can be reused in reversed directions to save memory - # reversed directions of valid and test data will not share source and target datasets - src_dataset, tgt_dataset, align_dataset = self.load_lang_dataset( - data_path, - split, - src, - src_dict, - tgt, - tgt_dict, - combine, - dataset_impl, - upsample_primary, - max_source_positions=max_source_positions, - prepend_bos=prepend_bos, - load_alignments=load_alignments, - truncate_source=truncate_source, - ) - src_dataset = src_dataset_transform_func(src_dataset) - tgt_dataset = tgt_dataset_transform_func(tgt_dataset) - if langpairs_sharing_datasets is not None: - langpairs_sharing_datasets[ - (data_path, split, norm_direction, src) - ] = src_dataset - langpairs_sharing_datasets[ - (data_path, split, norm_direction, tgt) - ] = tgt_dataset - langpairs_sharing_datasets[ - (data_path, split, norm_direction, src, tgt) - ] = align_dataset - if align_dataset is None: - # no align data so flag the reverse direction as well in sharing - langpairs_sharing_datasets[ - (data_path, split, norm_direction, tgt, src) - ] = align_dataset - else: - logger.info( - f"Reusing source and target datasets of [{split}] {tgt}-{src} for reversed direction: " - f"[{split}] {src}-{tgt}: src length={len(src_dataset)}; tgt length={len(tgt_dataset)}" - ) - - return LanguagePairDataset( - src_dataset, - src_dataset.sizes, - src_dict, - tgt_dataset, - tgt_dataset.sizes if tgt_dataset is not None else None, - tgt_dict, - left_pad_source=left_pad_source, - left_pad_target=left_pad_target, - align_dataset=align_dataset, - src_lang_id=src_lang_id, - tgt_lang_id=tgt_lang_id, - ) - - def src_dataset_tranform_func(self, src_lang, tgt_lang, dataset, spec=None): - if self.args.lang_tok_replacing_bos_eos: - # it is handled by self.alter_dataset_langtok - # TODO: Unifiy with alter_dataset_langtok - return dataset - if spec is None: - return dataset - tok = self.get_encoder_langtok(src_lang, tgt_lang, spec) - if tok: - return PrependTokenDataset(dataset, tok) - return dataset - - def tgt_dataset_tranform_func(self, source_lang, target_lang, dataset, spec=None): - if dataset is None: - # note that target dataset can be None during inference time - return None - if self.args.lang_tok_replacing_bos_eos: - # TODO: Unifiy with alter_dataset_langtok - # It is handled by self.alter_dataset_langtok. - # The complication in self.alter_dataset_langtok - # makes a unified framework difficult. 
- return dataset - # if not self.args.decoder_langtok: - if not spec: - return dataset - tok = self.get_decoder_langtok(target_lang, spec) - if tok: - return PrependTokenDataset(dataset, tok) - return dataset - - def alter_dataset_langtok( - self, - lang_pair_dataset, - src_eos=None, - src_lang=None, - tgt_eos=None, - tgt_lang=None, - src_langtok_spec=None, - tgt_langtok_spec=None, - ): - if src_langtok_spec is None and tgt_langtok_spec is None: - return lang_pair_dataset - - new_src_eos = None - if ( - src_langtok_spec is not None - and src_eos is not None - and (src_lang is not None or tgt_lang is not None) - ): - new_src_eos = self.get_encoder_langtok(src_lang, tgt_lang, src_langtok_spec) - else: - src_eos = None - - new_tgt_bos = None - if tgt_langtok_spec and tgt_eos is not None and tgt_lang is not None: - new_tgt_bos = self.get_decoder_langtok(tgt_lang, tgt_langtok_spec) - else: - tgt_eos = None - - return TransformEosLangPairDataset( - lang_pair_dataset, - src_eos=src_eos, - new_src_eos=new_src_eos, - tgt_bos=tgt_eos, - new_tgt_bos=new_tgt_bos, - ) - - def load_a_dataset( - self, - split, - data_path, - src, - src_dict, - tgt, - tgt_dict, - combine, - prepend_bos=False, - langpairs_sharing_datasets=None, - data_category=None, - **extra_kwargs, - ): - dataset_impl = self.args.dataset_impl - upsample_primary = self.args.upsample_primary - left_pad_source = self.args.left_pad_source - left_pad_target = self.args.left_pad_target - max_source_positions = self.args.max_source_positions - max_target_positions = self.args.max_target_positions - load_alignments = self.args.load_alignments - truncate_source = self.args.truncate_source - src_dataset_transform_func = self.src_dataset_tranform_func - tgt_dataset_transform_func = self.tgt_dataset_tranform_func - enable_lang_ids = self.args.enable_lang_ids - lang_dictionary = self.lang_dict - src_langtok_spec, tgt_langtok_spec = extra_kwargs["langtok_spec"] - - src_langtok = self.get_encoder_langtok(src, tgt, src_langtok_spec) - tgt_langtok = self.get_decoder_langtok(tgt, tgt_langtok_spec) - logger.info( - f"{data_category}:{src}-{tgt} src_langtok: {src_langtok}; tgt_langtok: {tgt_langtok}" - ) - - langpair_ds = self.load_langpair_dataset( - data_path, - split, - src, - src_dict, - tgt, - tgt_dict, - combine, - dataset_impl, - upsample_primary, - left_pad_source, - left_pad_target, - max_source_positions, - max_target_positions, - prepend_bos, - load_alignments, - truncate_source, - src_dataset_transform_func=lambda dataset: src_dataset_transform_func( - src, tgt, dataset, src_langtok_spec - ), - tgt_dataset_transform_func=lambda dataset: tgt_dataset_transform_func( - src, tgt, dataset, tgt_langtok_spec - ), - src_lang_id=_lang_id(lang_dictionary, src) - if enable_lang_ids and lang_dictionary is not None - else None, - tgt_lang_id=_lang_id(lang_dictionary, tgt) - if enable_lang_ids and lang_dictionary is not None - else None, - langpairs_sharing_datasets=langpairs_sharing_datasets, - ) - # TODO: handle modified lang toks for mined data and dae data - if self.args.lang_tok_replacing_bos_eos: - ds = self.alter_dataset_langtok( - langpair_ds, - src_eos=self.get_source_dictionary(src).eos() if src else self.get_target_dictionary(tgt).eos(), - src_lang=src, - tgt_eos=self.get_target_dictionary(tgt).eos(), - tgt_lang=tgt, - src_langtok_spec=src_langtok_spec, - tgt_langtok_spec=tgt_langtok_spec, - ) - else: - ds = langpair_ds - return ds - - def load_split_langpair_datasets(self, split, data_param_list): - datasets = [] - langpairs_sharing_datasets = ( - 
{} if self.args.enable_reservsed_directions_shared_datasets else None - ) - for param in data_param_list: - ds = self.load_a_dataset( - split=split, - langpairs_sharing_datasets=langpairs_sharing_datasets, - **param, - ) - datasets.append(ds) - return datasets - - def get_data_paths_and_lang_pairs(self, split): - datapaths = {"main": self.args.data} - lang_pairs = {"main": self.lang_pairs} - if split == getattr(self.args, "train_subset", None): - # only training data can have extra data and extra language pairs - if self.args.extra_data: - extra_datapaths = self.args.extra_data - datapaths.update(extra_datapaths) - if self.args.extra_lang_pairs: - extra_lang_pairs = { - k: v.split(",") for k, v in self.args.extra_lang_pairs.items() - } - lang_pairs.update(extra_lang_pairs) - return datapaths, lang_pairs - - @classmethod - def get_dataset_key(cls, data_category, src, tgt): - return f"{data_category}:{src}-{tgt}" - - @classmethod - def _get_shard_num_dict(cls, split, paths): - shards = defaultdict(int) - for path in paths: - files = PathManager.ls(path) - directions = set() - for f in files: - if f.startswith(split) and f.endswith(".idx"): - # idx files of the form "{split}.{src}-{tgt}.{lang}.idx" - direction = f.split(".")[-3] - directions.add(direction) - for direction in directions: - shards[direction] += 1 - return shards - - def get_split_num_data_shards(self, split): - if split in self._num_shards_dict: - return self._num_shards_dict[split] - num_shards_dict = {} - data_paths, lang_pairs = self.get_data_paths_and_lang_pairs(split) - - for data_category, paths in data_paths.items(): - if data_category not in lang_pairs: - continue - paths = utils.split_paths(paths) - shards_dict = self._get_shard_num_dict(split, paths) - lang_dirs = [ - lang_pair.split("-") for lang_pair in lang_pairs[data_category] - ] - lang_dirs = [x if len(x) > 1 else (x[0], x[0]) for x in lang_dirs] - for src, tgt in lang_dirs: - key = self.get_dataset_key(data_category, src, tgt) - if "mono_" in data_category: - # monolingual data requires tgt only - assert src is None or src == tgt, ( - f"error: src={src}, " - "tgt={tgt} for data_category={data_category}" - ) - num_shards_dict[key] = shards_dict[tgt] - else: - if f"{src}-{tgt}" in shards_dict: - num_shards_dict[key] = shards_dict[f"{src}-{tgt}"] - elif f"{tgt}-{src}" in shards_dict: - # follow the fairseq tradition to use reversed direction data if it is not available - num_shards_dict[key] = shards_dict[f"{tgt}-{src}"] - self._num_shards_dict[split] = num_shards_dict - logger.info(f"[{split}] num of shards: {num_shards_dict}") - return num_shards_dict - - @classmethod - def get_shard_id(cls, num_shards, epoch, shard_epoch=None): - shard = epoch if shard_epoch is None else shard_epoch - shard = (shard - 1) % num_shards - return shard - - def get_split_data_path(self, paths, epoch, shard_epoch, num_shards): - path = paths[self.get_shard_id(num_shards, epoch, shard_epoch)] - return path - - def get_split_data_param_list(self, split, epoch, shard_epoch=None): - # TODO: to extend with extra datasets and keys and loop over different shard data paths - param_list = [] - data_paths, lang_pairs = self.get_data_paths_and_lang_pairs(split) - logger.info(f"langtoks settings: {self.args.langtoks}") - split_num_shards_dict = self.get_split_num_data_shards(split) - for data_category, paths in data_paths.items(): - if data_category not in lang_pairs: - continue - paths = utils.split_paths(paths) - assert len(paths) > 0 - if len(paths) > 1: - self._has_sharded_data = True - if 
split != getattr(self.args, "train_subset", None): - # if not training data set, use the first shard for valid and test - paths = paths[:1] - - if data_category in self.args.langtoks: - lang_tok_spec = self.args.langtoks[data_category] - else: - # default to None - lang_tok_spec = (None, None) - - # infer langcode - lang_dirs = [ - lang_pair.split("-") for lang_pair in lang_pairs[data_category] - ] - lang_dirs = [x if len(x) > 1 else (x[0], x[0]) for x in lang_dirs] - for src, tgt in lang_dirs: - assert src is not None or data_category == "mono_dae", ( - f"error: src={src}, " "tgt={tgt} for data_category={data_category}" - ) - # logger.info(f"preparing param for {data_category}: {src} - {tgt}") - key = self.get_dataset_key(data_category, src, tgt) - data_path = self.get_split_data_path( - paths, epoch, shard_epoch, split_num_shards_dict[key] - ) - param_list.append( - { - "key": key, - "data_path": data_path, - "split": split, - "src": src, - "src_dict": self.get_source_dictionary(src) - if src and data_category != "mono_dae" - else None, - "tgt": tgt, - "tgt_dict": self.get_target_dictionary(tgt), - "data_category": data_category, - "langtok_spec": lang_tok_spec, - } - ) - return param_list - - def get_train_dataset_sizes( - self, data_param_list, datasets, epoch, shard_epoch=None - ): - num_shards = [ - self.get_split_num_data_shards(param["split"])[param["key"]] - for param in data_param_list - ] - data_sizes = [] - for (key, d), num_shard in zip(datasets, num_shards): - my_data_sizes = self._training_data_sizes[key] - shard_ind = self.get_shard_id(num_shard, epoch, shard_epoch) - if shard_ind not in my_data_sizes: - my_data_sizes[shard_ind] = len(d) - known_size = max(my_data_sizes.values()) - data_sizes.append( - # If we don't know the data size of the shard yet, - # use the the max known data size to approximate. - # Note that we preprocess shards by a designated shard size - # and put any remaining data at the end into the last shard so - # the max shard size approximation is almost correct before loading - # the last shard; after loading the last shard, it will have the - # exact data sizes of the whole data size. - (key, sum(my_data_sizes.get(i, known_size) for i in range(num_shard))) - ) - logger.info( - f"estimated total data sizes of all shards used in sampling ratios: {data_sizes}. 
" - "Note that if the data a shard has not been loaded yet, use the max known data size to approximate" - ) - return [s for _, s in data_sizes] - - def get_train_sampling_ratios( - self, data_param_list, datasets, epoch=1, shard_epoch=None - ): - data_sizes = self.get_train_dataset_sizes( - data_param_list, datasets, epoch, shard_epoch - ) - sampling_func = self.sampling_method.sampling_method_selector() - sample_ratios = sampling_func(data_sizes) if sampling_func is not None else None - return sample_ratios - - def get_sampling_ratios(self, data_param_list, datasets, epoch, shard_epoch=None): - if self.args.sampling_weights_from_file: - weights = load_sampling_weights(self.args.sampling_weights_from_file) - sample_ratios = [weights[k] for k, _ in datasets] - logger.info( - "| ignoring --sampling-weights when loadding sampling weights " - f"from file {self.args.sampling_weights_from_file}" - ) - elif self.args.sampling_weights: - sample_ratios = [self.args.sampling_weights[k] for k, _ in datasets] - else: - sample_ratios = self.get_train_sampling_ratios( - data_param_list, datasets, epoch, shard_epoch - ) - - if sample_ratios is not None: - logger.info( - "| Upsample ratios: {}".format( - list(zip(map(lambda x: x["key"], data_param_list), sample_ratios)) - ) - ) - assert len(sample_ratios) == len(datasets) - return sample_ratios - - def load_split_datasets( - self, split, training, epoch=1, combine=False, shard_epoch=None, **kwargs - ): - data_param_list = self.get_split_data_param_list( - split, epoch, shard_epoch=shard_epoch - ) - langpairs_sharing_datasets = ( - {} if self.args.enable_reservsed_directions_shared_datasets else None - ) - datasets = [ - ( - param["key"], - self.load_a_dataset( - combine=combine, - langpairs_sharing_datasets=langpairs_sharing_datasets, - **param, - ), - ) - for param in data_param_list - ] - return datasets, data_param_list - - def load_into_concat_dataset(self, split, datasets, data_param_list): - if self.args.lang_tok_replacing_bos_eos: - # TODO: to investigate why TransformEosLangPairDataset doesn't work with ConcatDataset - return SampledMultiDataset( - OrderedDict(datasets), - sampling_ratios=None, - eval_key=None, - collate_format=CollateFormat.single, - virtual_size=None, - split=split, - ) - return ConcatDataset([d for _, d in datasets]) - - def load_sampled_multi_epoch_dataset( - self, split, training, epoch=0, combine=False, shard_epoch=None, **kwargs - ): - datasets, data_param_list = self.load_split_datasets( - split, training, epoch, combine, shard_epoch=shard_epoch, **kwargs - ) - if training and split == getattr(self.args, "train_subset", None): - sample_ratios = self.get_sampling_ratios(data_param_list, datasets, epoch) - return SampledMultiEpochDataset( - OrderedDict(datasets), - epoch=epoch, - shard_epoch=shard_epoch, - # valid and test datasets will be degenerate to concating datasets: - sampling_ratios=sample_ratios, - eval_key=None, - collate_format=CollateFormat.single, - virtual_size=self.args.virtual_data_size, - split=split, - virtual_epoch_size=self.args.virtual_epoch_size, - # if not using lang_tok altering, simplified to use the same collater - shared_collater=self._shared_collater(), - ) - else: - return self.load_into_concat_dataset(split, datasets, data_param_list) - - def load_sampled_multi_dataset( - self, split, training, epoch=0, combine=False, shard_epoch=None, **kwargs - ): - datasets, data_param_list = self.load_split_datasets( - split, training, epoch, combine, shard_epoch=shard_epoch, **kwargs - ) - if training and 
split == getattr(self.args, "train_subset", None): - sample_ratios = self.get_sampling_ratios(data_param_list, datasets, epoch) - return SampledMultiDataset( - OrderedDict(datasets), - epoch=epoch, - # valid and test datasets will be degerate to concating datasets: - sampling_ratios=sample_ratios, - eval_key=None, - collate_format=CollateFormat.single, - virtual_size=self.args.virtual_data_size, - split=split, - # if not using lang_tok altering, simplified to use the same collater - shared_collater=self._shared_collater(), - ) - else: - return self.load_into_concat_dataset(split, datasets, data_param_list) - - def load_dataset( - self, split, training, epoch=0, combine=False, shard_epoch=None, **kwargs - ): - if self.args.virtual_epoch_size is None: - return self.load_sampled_multi_dataset( - split, training, epoch, combine, shard_epoch, **kwargs - ) - else: - return self.load_sampled_multi_epoch_dataset( - split, training, epoch, combine, shard_epoch, **kwargs - ) diff --git a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/tasks/sentence_prediction.py b/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/tasks/sentence_prediction.py deleted file mode 100644 index d5f9302c10b3410e7650433d54f70aad4fd1cfc4..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/tasks/sentence_prediction.py +++ /dev/null @@ -1,286 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import logging -import os - -import contextlib -from dataclasses import dataclass, field -from typing import Optional -from omegaconf import MISSING, II, open_dict, OmegaConf - -import numpy as np -from fairseq.data import ( - ConcatSentencesDataset, - Dictionary, - IdDataset, - NestedDictionaryDataset, - NumelDataset, - NumSamplesDataset, - OffsetTokensDataset, - PrependTokenDataset, - RawLabelDataset, - RightPadDataset, - RollDataset, - SortDataset, - StripTokenDataset, - data_utils, -) -from fairseq.data.shorten_dataset import maybe_shorten_dataset -from fairseq.tasks import FairseqDataclass, FairseqTask, register_task -from fairseq.dataclass import ChoiceEnum - - -logger = logging.getLogger(__name__) -SHORTEN_METHOD_CHOICES = ChoiceEnum(["none", "truncate", "random_crop"]) - - -@dataclass -class SentencePredictionConfig(FairseqDataclass): - data: str = field(default=MISSING, metadata={"help": "path to data directory"}) - num_classes: int = field( - default=-1, - metadata={"help": "number of classes or regression targets"}, - ) - init_token: Optional[int] = field( - default=None, - metadata={"help": "add token at the beginning of each batch item"}, - ) - separator_token: Optional[int] = field( - default=None, - metadata={"help": "add separator token between inputs"}, - ) - no_shuffle: bool = field( - default=False, - ) - shorten_method: SHORTEN_METHOD_CHOICES = field( - default="none", - metadata={ - "help": "if not none, shorten sequences that exceed tokens_per_sample" - }, - ) - shorten_data_split_list: str = field( - default="", - metadata={ - "help": "comma-separated list of dataset splits to apply shortening to, " - 'e.g., "train,valid" (default: all dataset splits)' - }, - ) - add_prev_output_tokens: bool = field( - default=False, - metadata={ - "help": "add prev_output_tokens to sample, used for encoder-decoder arch" - }, - ) - max_positions: int = field( - default=512, - metadata={"help": "max tokens per example"}, - ) - - regression_target: 
bool = II("criterion.regression_target") - classification_head_name: str = II("criterion.classification_head_name") - seed: int = II("common.seed") - - -@register_task("sentence_prediction", dataclass=SentencePredictionConfig) -class SentencePredictionTask(FairseqTask): - """ - Sentence (or sentence pair) prediction (classification or regression) task. - - Args: - dictionary (Dictionary): the dictionary for the input of the task - """ - - def __init__(self, cfg, data_dictionary, label_dictionary): - super().__init__(cfg) - self.dictionary = data_dictionary - self._label_dictionary = label_dictionary - - @classmethod - def load_dictionary(cls, filename): - """Load the dictionary from the filename - - Args: - filename (str): the filename - """ - dictionary = Dictionary.load(filename) - dictionary.add_symbol("") - return dictionary - - @classmethod - def setup_task(cls, cfg, **kwargs): - assert cfg.num_classes > 0, "Must set task.num_classes" - - # load data dictionary - data_dict = cls.load_dictionary( - os.path.join(cfg.data, "input0", "dict.txt"), - ) - logger.info("[input] dictionary: {} types".format(len(data_dict))) - - # load label dictionary - if not cfg.regression_target: - label_dict = cls.load_dictionary( - os.path.join(cfg.data, "label", "dict.txt"), - ) - logger.info("[label] dictionary: {} types".format(len(label_dict))) - else: - label_dict = data_dict - return cls(cfg, data_dict, label_dict) - - def load_dataset(self, split, combine=False, **kwargs): - """Load a given dataset split (e.g., train, valid, test).""" - - def get_path(key, split): - return os.path.join(self.cfg.data, key, split) - - def make_dataset(key, dictionary): - split_path = get_path(key, split) - - try: - dataset = data_utils.load_indexed_dataset( - split_path, - dictionary, - combine=combine, - ) - except Exception as e: - if "StorageException: [404] Path not found" in str(e): - logger.warning(f"dataset {e} not found") - dataset = None - else: - raise e - return dataset - - input0 = make_dataset("input0", self.source_dictionary) - assert input0 is not None, "could not find dataset: {}".format( - get_path("input0", split) - ) - input1 = make_dataset("input1", self.source_dictionary) - - if self.cfg.init_token is not None: - input0 = PrependTokenDataset(input0, self.cfg.init_token) - - if input1 is None: - src_tokens = input0 - else: - if self.cfg.separator_token is not None: - input1 = PrependTokenDataset(input1, self.cfg.separator_token) - - src_tokens = ConcatSentencesDataset(input0, input1) - - with data_utils.numpy_seed(self.cfg.seed): - shuffle = np.random.permutation(len(src_tokens)) - - src_tokens = maybe_shorten_dataset( - src_tokens, - split, - self.cfg.shorten_data_split_list, - self.cfg.shorten_method, - self.max_positions(), - self.cfg.seed, - ) - - dataset = { - "id": IdDataset(), - "net_input": { - "src_tokens": RightPadDataset( - src_tokens, - pad_idx=self.source_dictionary.pad(), - ), - "src_lengths": NumelDataset(src_tokens, reduce=False), - }, - "nsentences": NumSamplesDataset(), - "ntokens": NumelDataset(src_tokens, reduce=True), - } - - if self.cfg.add_prev_output_tokens: - prev_tokens_dataset = RightPadDataset( - RollDataset(src_tokens, 1), - pad_idx=self.dictionary.pad(), - ) - dataset["net_input"].update( - prev_output_tokens=prev_tokens_dataset, - ) - - if not self.cfg.regression_target: - label_dataset = make_dataset("label", self.label_dictionary) - if label_dataset is not None: - dataset.update( - target=OffsetTokensDataset( - StripTokenDataset( - label_dataset, - 
id_to_strip=self.label_dictionary.eos(), - ), - offset=-self.label_dictionary.nspecial, - ) - ) - else: - label_path = "{0}.label".format(get_path("label", split)) - if os.path.exists(label_path): - - def parse_regression_target(i, line): - values = line.split() - assert ( - len(values) == self.cfg.num_classes - ), f'expected num_classes={self.cfg.num_classes} regression target values on line {i}, found: "{line}"' - return [float(x) for x in values] - - with open(label_path) as h: - dataset.update( - target=RawLabelDataset( - [ - parse_regression_target(i, line.strip()) - for i, line in enumerate(h.readlines()) - ] - ) - ) - - nested_dataset = NestedDictionaryDataset( - dataset, - sizes=[src_tokens.sizes], - ) - - if self.cfg.no_shuffle: - dataset = nested_dataset - else: - dataset = SortDataset( - nested_dataset, - # shuffle - sort_order=[shuffle], - ) - - logger.info("Loaded {0} with #samples: {1}".format(split, len(dataset))) - - self.datasets[split] = dataset - return self.datasets[split] - - def build_model(self, cfg): - from fairseq import models - - with open_dict(cfg) if OmegaConf.is_config(cfg) else contextlib.ExitStack(): - cfg.max_positions = self.cfg.max_positions - - model = models.build_model(cfg, self) - - model.register_classification_head( - self.cfg.classification_head_name, - num_classes=self.cfg.num_classes, - ) - - return model - - def max_positions(self): - return self.cfg.max_positions - - @property - def source_dictionary(self): - return self.dictionary - - @property - def target_dictionary(self): - return self.dictionary - - @property - def label_dictionary(self): - return self._label_dictionary diff --git a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/tests/test_amp_optimizer.py b/spaces/OFA-Sys/OFA-Image_Caption/fairseq/tests/test_amp_optimizer.py deleted file mode 100644 index 3a785e1830e91b7e090e841d428fe4ea61f3a65c..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/tests/test_amp_optimizer.py +++ /dev/null @@ -1,78 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
- -import argparse -import copy -import unittest - -import torch -from torch.cuda.amp import autocast, GradScaler -from fairseq.optim import build_optimizer - - -@unittest.skipIf(not torch.cuda.is_available(), "test requires a GPU") -class TestGradientScalingAMP(unittest.TestCase): - def setUp(self): - self.x = torch.tensor([2.0]).cuda().half() - weight = 3.0 - bias = 5.0 - self.error = 1.0 - self.target = torch.tensor([self.x * weight + bias + self.error]).cuda() - self.loss_fn = torch.nn.L1Loss() - - self.model = torch.nn.Linear(1, 1) - self.model.weight.data = torch.tensor([[weight]]) - self.model.bias.data = torch.tensor([bias]) - self.model.cuda() - self.params = list(self.model.parameters()) - - self.namespace_dls = argparse.Namespace( - optimizer="adam", - lr=[0.1], - adam_betas="(0.9, 0.999)", - adam_eps=1e-8, - weight_decay=0.0, - threshold_loss_scale=1, - min_loss_scale=1e-4, - ) - self.scaler = GradScaler( - init_scale=1, - growth_interval=1, - ) - - def run_iter(self, model, params, optimizer): - optimizer.zero_grad() - with autocast(): - y = model(self.x) - loss = self.loss_fn(y, self.target) - self.scaler.scale(loss).backward() - self.assertEqual(loss, torch.tensor(1.0, device="cuda:0", dtype=torch.float16)) - - self.scaler.unscale_(optimizer) - grad_norm = optimizer.clip_grad_norm(0) - self.assertAlmostEqual(grad_norm.item(), 2.2361, 4) - - self.scaler.step(optimizer) - self.scaler.update() - self.assertEqual( - model.weight, - torch.tensor( - [[3.1]], device="cuda:0", requires_grad=True - ), - ) - self.assertEqual( - model.bias, - torch.tensor( - [5.1], device="cuda:0", requires_grad=True - ), - ) - self.assertEqual(self.scaler.get_scale(), 2.0) - - def test_automatic_mixed_precision(self): - model = copy.deepcopy(self.model) - params = list(model.parameters()) - optimizer = build_optimizer(self.namespace_dls, params) - - self.run_iter(model, params, optimizer) diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/docs/Makefile b/spaces/OFA-Sys/OFA-vqa/fairseq/docs/Makefile deleted file mode 100644 index c2f5b1a89cfc9e02d1bb09027d9e1e520ba53d53..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-vqa/fairseq/docs/Makefile +++ /dev/null @@ -1,20 +0,0 @@ -# Minimal makefile for Sphinx documentation -# - -# You can set these variables from the command line. -SPHINXOPTS = -SPHINXBUILD = python -msphinx -SPHINXPROJ = fairseq -SOURCEDIR = . -BUILDDIR = _build - -# Put it first so that "make" without argument is like "make help". -help: - @$(SPHINXBUILD) -M help "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O) - -.PHONY: help Makefile - -# Catch-all target: route all unknown targets to Sphinx using the new -# "make mode" option. $(O) is meant as a shortcut for $(SPHINXOPTS). -%: Makefile - @$(SPHINXBUILD) -M $@ "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O) \ No newline at end of file diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/tests/test_metrics.py b/spaces/OFA-Sys/OFA-vqa/fairseq/tests/test_metrics.py deleted file mode 100644 index 2de6969cf4445bc6cda44dacf6de765ea30d5f5b..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-vqa/fairseq/tests/test_metrics.py +++ /dev/null @@ -1,77 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
- -import unittest -import uuid - -from fairseq import metrics - - -class TestMetrics(unittest.TestCase): - def test_nesting(self): - with metrics.aggregate() as a: - metrics.log_scalar("loss", 1) - with metrics.aggregate() as b: - metrics.log_scalar("loss", 2) - - self.assertEqual(a.get_smoothed_values()["loss"], 1.5) - self.assertEqual(b.get_smoothed_values()["loss"], 2) - - def test_new_root(self): - with metrics.aggregate() as a: - metrics.log_scalar("loss", 1) - with metrics.aggregate(new_root=True) as b: - metrics.log_scalar("loss", 2) - - self.assertEqual(a.get_smoothed_values()["loss"], 1) - self.assertEqual(b.get_smoothed_values()["loss"], 2) - - def test_nested_new_root(self): - with metrics.aggregate() as layer1: - metrics.log_scalar("loss", 1) - with metrics.aggregate(new_root=True) as layer2: - metrics.log_scalar("loss", 2) - with metrics.aggregate() as layer3: - metrics.log_scalar("loss", 3) - with metrics.aggregate(new_root=True) as layer4: - metrics.log_scalar("loss", 4) - metrics.log_scalar("loss", 1.5) - - self.assertEqual(layer4.get_smoothed_values()["loss"], 4) - self.assertEqual(layer3.get_smoothed_values()["loss"], 3) - self.assertEqual(layer2.get_smoothed_values()["loss"], 2.5) - self.assertEqual(layer1.get_smoothed_values()["loss"], 1.25) - - def test_named(self): - name = str(uuid.uuid4()) - metrics.reset_meters(name) - - with metrics.aggregate(name): - metrics.log_scalar("loss", 1) - - metrics.log_scalar("loss", 3) - - with metrics.aggregate(name): - metrics.log_scalar("loss", 2) - - self.assertEqual(metrics.get_smoothed_values(name)["loss"], 1.5) - - def test_nested_duplicate_names(self): - name = str(uuid.uuid4()) - metrics.reset_meters(name) - - with metrics.aggregate(name): - metrics.log_scalar("loss", 1) - with metrics.aggregate() as other: - with metrics.aggregate(name): - metrics.log_scalar("loss", 2) - metrics.log_scalar("loss", 6) - - self.assertEqual(metrics.get_smoothed_values(name)["loss"], 3) - self.assertEqual(other.get_smoothed_values()["loss"], 2) - - -if __name__ == "__main__": - unittest.main() diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/tests/test_resampling_dataset.py b/spaces/OFA-Sys/OFA-vqa/fairseq/tests/test_resampling_dataset.py deleted file mode 100644 index ccb53a253ce6ca0d8e972adfa708144b4299b3cb..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-vqa/fairseq/tests/test_resampling_dataset.py +++ /dev/null @@ -1,103 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
- -import collections -import unittest - -import numpy as np -from fairseq.data import ListDataset, ResamplingDataset - - -class TestResamplingDataset(unittest.TestCase): - def setUp(self): - self.strings = ["ab", "c", "def", "ghij"] - self.weights = [4.0, 2.0, 7.0, 1.5] - self.size_ratio = 2 - self.dataset = ListDataset( - self.strings, np.array([len(s) for s in self.strings]) - ) - - def _test_common(self, resampling_dataset, iters): - assert len(self.dataset) == len(self.strings) == len(self.weights) - assert len(resampling_dataset) == self.size_ratio * len(self.strings) - - results = {"ordered_by_size": True, "max_distribution_diff": 0.0} - - totalfreqs = 0 - freqs = collections.defaultdict(int) - - for epoch_num in range(iters): - resampling_dataset.set_epoch(epoch_num) - - indices = resampling_dataset.ordered_indices() - assert len(indices) == len(resampling_dataset) - - prev_size = -1 - - for i in indices: - cur_size = resampling_dataset.size(i) - # Make sure indices map to same sequences within an epoch - assert resampling_dataset[i] == resampling_dataset[i] - - # Make sure length of sequence is correct - assert cur_size == len(resampling_dataset[i]) - - freqs[resampling_dataset[i]] += 1 - totalfreqs += 1 - - if prev_size > cur_size: - results["ordered_by_size"] = False - - prev_size = cur_size - - assert set(freqs.keys()) == set(self.strings) - for s, weight in zip(self.strings, self.weights): - freq = freqs[s] / totalfreqs - expected_freq = weight / sum(self.weights) - results["max_distribution_diff"] = max( - results["max_distribution_diff"], abs(expected_freq - freq) - ) - - return results - - def test_resampling_dataset_batch_by_size_false(self): - resampling_dataset = ResamplingDataset( - self.dataset, - self.weights, - size_ratio=self.size_ratio, - batch_by_size=False, - seed=0, - ) - - results = self._test_common(resampling_dataset, iters=1000) - - # For batch_by_size = False, the batches should be returned in - # arbitrary order of size. - assert not results["ordered_by_size"] - - # Allow tolerance in distribution error of 2%. - assert results["max_distribution_diff"] < 0.02 - - def test_resampling_dataset_batch_by_size_true(self): - resampling_dataset = ResamplingDataset( - self.dataset, - self.weights, - size_ratio=self.size_ratio, - batch_by_size=True, - seed=0, - ) - - results = self._test_common(resampling_dataset, iters=1000) - - # For batch_by_size = True, the batches should be returned in - # increasing order of size. - assert results["ordered_by_size"] - - # Allow tolerance in distribution error of 2%. 
- assert results["max_distribution_diff"] < 0.02 - - -if __name__ == "__main__": - unittest.main() diff --git a/spaces/ORI-Muchim/NahidaTTS/monotonic_align/core.py b/spaces/ORI-Muchim/NahidaTTS/monotonic_align/core.py deleted file mode 100644 index 1f940605fe4fd0738fa0006149fcba14ef88223a..0000000000000000000000000000000000000000 --- a/spaces/ORI-Muchim/NahidaTTS/monotonic_align/core.py +++ /dev/null @@ -1,36 +0,0 @@ -import numba - - -@numba.jit(numba.void(numba.int32[:, :, ::1], numba.float32[:, :, ::1], numba.int32[::1], numba.int32[::1]), - nopython=True, nogil=True) -def maximum_path_jit(paths, values, t_ys, t_xs): - b = paths.shape[0] - max_neg_val = -1e9 - for i in range(int(b)): - path = paths[i] - value = values[i] - t_y = t_ys[i] - t_x = t_xs[i] - - v_prev = v_cur = 0.0 - index = t_x - 1 - - for y in range(t_y): - for x in range(max(0, t_x + y - t_y), min(t_x, y + 1)): - if x == y: - v_cur = max_neg_val - else: - v_cur = value[y - 1, x] - if x == 0: - if y == 0: - v_prev = 0. - else: - v_prev = max_neg_val - else: - v_prev = value[y - 1, x - 1] - value[y, x] += max(v_prev, v_cur) - - for y in range(t_y - 1, -1, -1): - path[y, index] = 1 - if index != 0 and (index == y or value[y - 1, index] < value[y - 1, index - 1]): - index = index - 1 diff --git a/spaces/OpenGVLab/InternGPT/iGPT/models/processing.py b/spaces/OpenGVLab/InternGPT/iGPT/models/processing.py deleted file mode 100644 index b926e2141b8aa6d9075e309c59f569880cf0eb5a..0000000000000000000000000000000000000000 --- a/spaces/OpenGVLab/InternGPT/iGPT/models/processing.py +++ /dev/null @@ -1,443 +0,0 @@ -import torch -import torchvision -import random -from PIL import Image, ImageOps -import numpy as np -import numbers -import math - - -class GroupRandomCrop(object): - def __init__(self, size): - if isinstance(size, numbers.Number): - self.size = (int(size), int(size)) - else: - self.size = size - - def __call__(self, img_group): - - w, h = img_group[0].size - th, tw = self.size - - out_images = list() - - x1 = random.randint(0, w - tw) - y1 = random.randint(0, h - th) - - for img in img_group: - assert(img.size[0] == w and img.size[1] == h) - if w == tw and h == th: - out_images.append(img) - else: - out_images.append(img.crop((x1, y1, x1 + tw, y1 + th))) - - return out_images - - -class MultiGroupRandomCrop(object): - def __init__(self, size, groups=1): - if isinstance(size, numbers.Number): - self.size = (int(size), int(size)) - else: - self.size = size - self.groups = groups - - def __call__(self, img_group): - - w, h = img_group[0].size - th, tw = self.size - - out_images = list() - - for i in range(self.groups): - x1 = random.randint(0, w - tw) - y1 = random.randint(0, h - th) - - for img in img_group: - assert(img.size[0] == w and img.size[1] == h) - if w == tw and h == th: - out_images.append(img) - else: - out_images.append(img.crop((x1, y1, x1 + tw, y1 + th))) - - return out_images - - -class GroupCenterCrop(object): - def __init__(self, size): - self.worker = torchvision.transforms.CenterCrop(size) - - def __call__(self, img_group): - return [self.worker(img) for img in img_group] - - -class GroupRandomHorizontalFlip(object): - """Randomly horizontally flips the given PIL.Image with a probability of 0.5 - """ - - def __init__(self, is_flow=False): - self.is_flow = is_flow - - def __call__(self, img_group, is_flow=False): - v = random.random() - if v < 0.5: - ret = [img.transpose(Image.FLIP_LEFT_RIGHT) for img in img_group] - if self.is_flow: - for i in range(0, len(ret), 2): - # invert flow pixel values when 
flipping - ret[i] = ImageOps.invert(ret[i]) - return ret - else: - return img_group - - -class GroupNormalize(object): - def __init__(self, mean, std): - self.mean = mean - self.std = std - - def __call__(self, tensor): - rep_mean = self.mean * (tensor.size()[0] // len(self.mean)) - rep_std = self.std * (tensor.size()[0] // len(self.std)) - - # TODO: make efficient - for t, m, s in zip(tensor, rep_mean, rep_std): - t.sub_(m).div_(s) - - return tensor - - -class GroupScale(object): - """ Rescales the input PIL.Image to the given 'size'. - 'size' will be the size of the smaller edge. - For example, if height > width, then image will be - rescaled to (size * height / width, size) - size: size of the smaller edge - interpolation: Default: PIL.Image.BILINEAR - """ - - def __init__(self, size, interpolation=Image.BILINEAR): - self.worker = torchvision.transforms.Resize(size, interpolation) - - def __call__(self, img_group): - return [self.worker(img) for img in img_group] - - -class GroupOverSample(object): - def __init__(self, crop_size, scale_size=None, flip=True): - self.crop_size = crop_size if not isinstance( - crop_size, int) else (crop_size, crop_size) - - if scale_size is not None: - self.scale_worker = GroupScale(scale_size) - else: - self.scale_worker = None - self.flip = flip - - def __call__(self, img_group): - - if self.scale_worker is not None: - img_group = self.scale_worker(img_group) - - image_w, image_h = img_group[0].size - crop_w, crop_h = self.crop_size - - offsets = GroupMultiScaleCrop.fill_fix_offset( - False, image_w, image_h, crop_w, crop_h) - oversample_group = list() - for o_w, o_h in offsets: - normal_group = list() - flip_group = list() - for i, img in enumerate(img_group): - crop = img.crop((o_w, o_h, o_w + crop_w, o_h + crop_h)) - normal_group.append(crop) - flip_crop = crop.copy().transpose(Image.FLIP_LEFT_RIGHT) - - if img.mode == 'L' and i % 2 == 0: - flip_group.append(ImageOps.invert(flip_crop)) - else: - flip_group.append(flip_crop) - - oversample_group.extend(normal_group) - if self.flip: - oversample_group.extend(flip_group) - return oversample_group - - -class GroupFullResSample(object): - def __init__(self, crop_size, scale_size=None, flip=True): - self.crop_size = crop_size if not isinstance( - crop_size, int) else (crop_size, crop_size) - - if scale_size is not None: - self.scale_worker = GroupScale(scale_size) - else: - self.scale_worker = None - self.flip = flip - - def __call__(self, img_group): - - if self.scale_worker is not None: - img_group = self.scale_worker(img_group) - - image_w, image_h = img_group[0].size - crop_w, crop_h = self.crop_size - - w_step = (image_w - crop_w) // 4 - h_step = (image_h - crop_h) // 4 - - offsets = list() - offsets.append((0 * w_step, 2 * h_step)) # left - offsets.append((4 * w_step, 2 * h_step)) # right - offsets.append((2 * w_step, 2 * h_step)) # center - - oversample_group = list() - for o_w, o_h in offsets: - normal_group = list() - flip_group = list() - for i, img in enumerate(img_group): - crop = img.crop((o_w, o_h, o_w + crop_w, o_h + crop_h)) - normal_group.append(crop) - if self.flip: - flip_crop = crop.copy().transpose(Image.FLIP_LEFT_RIGHT) - - if img.mode == 'L' and i % 2 == 0: - flip_group.append(ImageOps.invert(flip_crop)) - else: - flip_group.append(flip_crop) - - oversample_group.extend(normal_group) - oversample_group.extend(flip_group) - return oversample_group - - -class GroupMultiScaleCrop(object): - - def __init__(self, input_size, scales=None, max_distort=1, - fix_crop=True, more_fix_crop=True): 
- self.scales = scales if scales is not None else [1, .875, .75, .66] - self.max_distort = max_distort - self.fix_crop = fix_crop - self.more_fix_crop = more_fix_crop - self.input_size = input_size if not isinstance(input_size, int) else [ - input_size, input_size] - self.interpolation = Image.BILINEAR - - def __call__(self, img_group): - - im_size = img_group[0].size - - crop_w, crop_h, offset_w, offset_h = self._sample_crop_size(im_size) - crop_img_group = [ - img.crop( - (offset_w, - offset_h, - offset_w + - crop_w, - offset_h + - crop_h)) for img in img_group] - ret_img_group = [img.resize((self.input_size[0], self.input_size[1]), self.interpolation) - for img in crop_img_group] - return ret_img_group - - def _sample_crop_size(self, im_size): - image_w, image_h = im_size[0], im_size[1] - - # find a crop size - base_size = min(image_w, image_h) - crop_sizes = [int(base_size * x) for x in self.scales] - crop_h = [ - self.input_size[1] if abs( - x - self.input_size[1]) < 3 else x for x in crop_sizes] - crop_w = [ - self.input_size[0] if abs( - x - self.input_size[0]) < 3 else x for x in crop_sizes] - - pairs = [] - for i, h in enumerate(crop_h): - for j, w in enumerate(crop_w): - if abs(i - j) <= self.max_distort: - pairs.append((w, h)) - - crop_pair = random.choice(pairs) - if not self.fix_crop: - w_offset = random.randint(0, image_w - crop_pair[0]) - h_offset = random.randint(0, image_h - crop_pair[1]) - else: - w_offset, h_offset = self._sample_fix_offset( - image_w, image_h, crop_pair[0], crop_pair[1]) - - return crop_pair[0], crop_pair[1], w_offset, h_offset - - def _sample_fix_offset(self, image_w, image_h, crop_w, crop_h): - offsets = self.fill_fix_offset( - self.more_fix_crop, image_w, image_h, crop_w, crop_h) - return random.choice(offsets) - - @staticmethod - def fill_fix_offset(more_fix_crop, image_w, image_h, crop_w, crop_h): - w_step = (image_w - crop_w) // 4 - h_step = (image_h - crop_h) // 4 - - ret = list() - ret.append((0, 0)) # upper left - ret.append((4 * w_step, 0)) # upper right - ret.append((0, 4 * h_step)) # lower left - ret.append((4 * w_step, 4 * h_step)) # lower right - ret.append((2 * w_step, 2 * h_step)) # center - - if more_fix_crop: - ret.append((0, 2 * h_step)) # center left - ret.append((4 * w_step, 2 * h_step)) # center right - ret.append((2 * w_step, 4 * h_step)) # lower center - ret.append((2 * w_step, 0 * h_step)) # upper center - - ret.append((1 * w_step, 1 * h_step)) # upper left quarter - ret.append((3 * w_step, 1 * h_step)) # upper right quarter - ret.append((1 * w_step, 3 * h_step)) # lower left quarter - ret.append((3 * w_step, 3 * h_step)) # lower righ quarter - - return ret - - -class GroupRandomSizedCrop(object): - """Random crop the given PIL.Image to a random size of (0.08 to 1.0) of the original size - and and a random aspect ratio of 3/4 to 4/3 of the original aspect ratio - This is popularly used to train the Inception networks - size: size of the smaller edge - interpolation: Default: PIL.Image.BILINEAR - """ - - def __init__(self, size, interpolation=Image.BILINEAR): - self.size = size - self.interpolation = interpolation - - def __call__(self, img_group): - for attempt in range(10): - area = img_group[0].size[0] * img_group[0].size[1] - target_area = random.uniform(0.08, 1.0) * area - aspect_ratio = random.uniform(3. / 4, 4. 
/ 3) - - w = int(round(math.sqrt(target_area * aspect_ratio))) - h = int(round(math.sqrt(target_area / aspect_ratio))) - - if random.random() < 0.5: - w, h = h, w - - if w <= img_group[0].size[0] and h <= img_group[0].size[1]: - x1 = random.randint(0, img_group[0].size[0] - w) - y1 = random.randint(0, img_group[0].size[1] - h) - found = True - break - else: - found = False - x1 = 0 - y1 = 0 - - if found: - out_group = list() - for img in img_group: - img = img.crop((x1, y1, x1 + w, y1 + h)) - assert(img.size == (w, h)) - out_group.append( - img.resize( - (self.size, self.size), self.interpolation)) - return out_group - else: - # Fallback - scale = GroupScale(self.size, interpolation=self.interpolation) - crop = GroupRandomCrop(self.size) - return crop(scale(img_group)) - - -class ConvertDataFormat(object): - def __init__(self, model_type): - self.model_type = model_type - - def __call__(self, images): - if self.model_type == '2D': - return images - tc, h, w = images.size() - t = tc // 3 - images = images.view(t, 3, h, w) - images = images.permute(1, 0, 2, 3) - return images - - -class Stack(object): - - def __init__(self, roll=False): - self.roll = roll - - def __call__(self, img_group): - if img_group[0].mode == 'L': - return np.concatenate([np.expand_dims(x, 2) - for x in img_group], axis=2) - elif img_group[0].mode == 'RGB': - if self.roll: - return np.concatenate([np.array(x)[:, :, ::-1] - for x in img_group], axis=2) - else: - #print(np.concatenate(img_group, axis=2).shape) - # print(img_group[0].shape) - return np.concatenate(img_group, axis=2) - - -class ToTorchFormatTensor(object): - """ Converts a PIL.Image (RGB) or numpy.ndarray (H x W x C) in the range [0, 255] - to a torch.FloatTensor of shape (C x H x W) in the range [0.0, 1.0] """ - - def __init__(self, div=True): - self.div = div - - def __call__(self, pic): - if isinstance(pic, np.ndarray): - # handle numpy array - img = torch.from_numpy(pic).permute(2, 0, 1).contiguous() - else: - # handle PIL Image - img = torch.ByteTensor( - torch.ByteStorage.from_buffer( - pic.tobytes())) - img = img.view(pic.size[1], pic.size[0], len(pic.mode)) - # put it from HWC to CHW format - # yikes, this transpose takes 80% of the loading time/CPU - img = img.transpose(0, 1).transpose(0, 2).contiguous() - return img.float().div(255) if self.div else img.float() - - -class IdentityTransform(object): - - def __call__(self, data): - return data - - -if __name__ == "__main__": - trans = torchvision.transforms.Compose([ - GroupScale(256), - GroupRandomCrop(224), - Stack(), - ToTorchFormatTensor(), - GroupNormalize( - mean=[.485, .456, .406], - std=[.229, .224, .225] - )] - ) - - im = Image.open('../tensorflow-model-zoo.torch/lena_299.png') - - color_group = [im] * 3 - rst = trans(color_group) - - gray_group = [im.convert('L')] * 9 - gray_rst = trans(gray_group) - - trans2 = torchvision.transforms.Compose([ - GroupRandomSizedCrop(256), - Stack(), - ToTorchFormatTensor(), - GroupNormalize( - mean=[.485, .456, .406], - std=[.229, .224, .225]) - ]) - print(trans2(color_group)) \ No newline at end of file diff --git a/spaces/PeepDaSlan9/AutoGPT/autogpt/speech/say.py b/spaces/PeepDaSlan9/AutoGPT/autogpt/speech/say.py deleted file mode 100644 index 727983d12bf334205550a54bcd69a7a36824eda4..0000000000000000000000000000000000000000 --- a/spaces/PeepDaSlan9/AutoGPT/autogpt/speech/say.py +++ /dev/null @@ -1,41 +0,0 @@ -""" Text to speech module """ -import threading -from threading import Semaphore - -from autogpt.config import Config -from autogpt.speech.brian 
import BrianSpeech -from autogpt.speech.eleven_labs import ElevenLabsSpeech -from autogpt.speech.gtts import GTTSVoice -from autogpt.speech.macos_tts import MacOSTTS - -CFG = Config() -DEFAULT_VOICE_ENGINE = GTTSVoice() -VOICE_ENGINE = None -if CFG.elevenlabs_api_key: - VOICE_ENGINE = ElevenLabsSpeech() -elif CFG.use_mac_os_tts == "True": - VOICE_ENGINE = MacOSTTS() -elif CFG.use_brian_tts == "True": - VOICE_ENGINE = BrianSpeech() -else: - VOICE_ENGINE = GTTSVoice() - - -QUEUE_SEMAPHORE = Semaphore( - 1 -) # The amount of sounds to queue before blocking the main thread - - -def say_text(text: str, voice_index: int = 0) -> None: - """Speak the given text using the given voice index""" - - def speak() -> None: - success = VOICE_ENGINE.say(text, voice_index) - if not success: - DEFAULT_VOICE_ENGINE.say(text) - - QUEUE_SEMAPHORE.release() - - QUEUE_SEMAPHORE.acquire(True) - thread = threading.Thread(target=speak) - thread.start() diff --git a/spaces/PeepDaSlan9/Dup_Digital_India/app.py b/spaces/PeepDaSlan9/Dup_Digital_India/app.py deleted file mode 100644 index 179ba5c2ac1a19d389242a56a755766288caa741..0000000000000000000000000000000000000000 --- a/spaces/PeepDaSlan9/Dup_Digital_India/app.py +++ /dev/null @@ -1,73 +0,0 @@ -import json -from difflib import get_close_matches -import gradio as gr -import openai -import webbrowser -import requests - -ans = "" -def load_knowledge_base(file_path:str) -> dict: - with open(file_path,'r') as file: - data: dict = json.load(file) - return data - -def save_knowldege_base(file_path:str,data:dict): - with open(file_path,'w') as file: - json.dump(data,file,indent=2) - -def find_best_match(user_question:str,questions:list[str]) -> str|None: - matches:list = get_close_matches(user_question,questions,n=1,cutoff=0.7) - return matches[0] if matches else None - -def get_answer_for_question(question:str,knowledge_base:dict) -> str|None: - for q in knowledge_base["questions"]: - if q["questions"] == question: - return q["answer"] - -def chat_bot(input, history): - - knowledge_base: dict = load_knowledge_base('knowledge_base.json') - - while True: - user_input:str = input - if user_input.lower() == "quite": - break - - best_match: str | None = find_best_match(user_input, [q["questions"] for q in knowledge_base["questions"]]) - if best_match: - answer:str = get_answer_for_question(best_match,knowledge_base) - x = answer - return x - else: - input += "word limit 25 words" - messages = [] - openai.api_key = "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" - messages.append({"role": "user", "content": input}) - response = openai.ChatCompletion.create( - model="gpt-3.5-turbo", - messages=messages) - reply = response["choices"][0]["message"]["content"] - messages.append({"role": "assistant", "content": reply}) - knowledge_base["questions"].append({"questions":user_input,"answer":reply}) - save_knowldege_base('knowledge_base.json',knowledge_base) - return reply - # for training purpose only -""" else: - print("Can you please teach me the answer to the question: ") - new_answer = input("Please input the answer: ") - knowledge_base["questions"].append({"questions":user_input,"answer":new_answer}) - save_knowldege_base('knowledge_base.json',knowledge_base)""" - -gr.ChatInterface( - chat_bot, - chatbot=gr.Chatbot(height=300), - textbox=gr.Textbox(placeholder="Ask me a yes or no question", container=False, scale=7), - title="Digiसारथी", - description="Ask Digiसारथी any question", - theme="soft", - examples=["Hello", "What is digital India?", "What is digilocker?","Take a 
quiz"], - cache_examples=True, - undo_btn="Delete Previous", - clear_btn="Clear", - submit_btn="Submit", -).launch() diff --git a/spaces/Pengyey/bingo-chuchu/src/components/chat-history.tsx b/spaces/Pengyey/bingo-chuchu/src/components/chat-history.tsx deleted file mode 100644 index feb81de66562edda8f40d3c0cc717202c92b6509..0000000000000000000000000000000000000000 --- a/spaces/Pengyey/bingo-chuchu/src/components/chat-history.tsx +++ /dev/null @@ -1,48 +0,0 @@ -import { IconEdit, IconTrash, IconMore, IconDownload } from "./ui/icons" - -export function ChatHistory() { - return ( -
    -
    - 历史记录 -
    -
    -
    -
    -
    -
    -
    - -
    -

    无标题的聊天

    -
    -

    上午1:42

    -
    - - - - - - - - -
    -
    -
    -
    -
    -
    -
    -
    - ) -} diff --git a/spaces/Pengyey/bingo-chuchu/src/lib/isomorphic/index.ts b/spaces/Pengyey/bingo-chuchu/src/lib/isomorphic/index.ts deleted file mode 100644 index 738dc92f74079ab762d584fb7422a8c8c3b61547..0000000000000000000000000000000000000000 --- a/spaces/Pengyey/bingo-chuchu/src/lib/isomorphic/index.ts +++ /dev/null @@ -1,17 +0,0 @@ -'use client' - -import Default from './browser' - -let exportsModel: any = {} - -if (process.browser) { - Object.assign(exportsModel, require('./browser').default) -} else { - Object.assign(exportsModel, require('./node').default) -} - -export default exportsModel! as typeof Default - -export const fetch: typeof Default.fetch = exportsModel!.fetch -export const WebSocket: typeof Default.WebSocket = exportsModel!.WebSocket -export const debug: typeof Default.debug = exportsModel!.debug diff --git a/spaces/Pie31415/control-animation/annotator/uniformer/configs/_base_/default_runtime.py b/spaces/Pie31415/control-animation/annotator/uniformer/configs/_base_/default_runtime.py deleted file mode 100644 index b564cc4e7e7d9a67dacaaddecb100e4d8f5c005b..0000000000000000000000000000000000000000 --- a/spaces/Pie31415/control-animation/annotator/uniformer/configs/_base_/default_runtime.py +++ /dev/null @@ -1,14 +0,0 @@ -# yapf:disable -log_config = dict( - interval=50, - hooks=[ - dict(type='TextLoggerHook', by_epoch=False), - # dict(type='TensorboardLoggerHook') - ]) -# yapf:enable -dist_params = dict(backend='nccl') -log_level = 'INFO' -load_from = None -resume_from = None -workflow = [('train', 1)] -cudnn_benchmark = True diff --git a/spaces/Pie31415/control-animation/annotator/uniformer/mmcv/image/colorspace.py b/spaces/Pie31415/control-animation/annotator/uniformer/mmcv/image/colorspace.py deleted file mode 100644 index 814533952fdfda23d67cb6a3073692d8c1156add..0000000000000000000000000000000000000000 --- a/spaces/Pie31415/control-animation/annotator/uniformer/mmcv/image/colorspace.py +++ /dev/null @@ -1,306 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import cv2 -import numpy as np - - -def imconvert(img, src, dst): - """Convert an image from the src colorspace to dst colorspace. - - Args: - img (ndarray): The input image. - src (str): The source colorspace, e.g., 'rgb', 'hsv'. - dst (str): The destination colorspace, e.g., 'rgb', 'hsv'. - - Returns: - ndarray: The converted image. - """ - code = getattr(cv2, f'COLOR_{src.upper()}2{dst.upper()}') - out_img = cv2.cvtColor(img, code) - return out_img - - -def bgr2gray(img, keepdim=False): - """Convert a BGR image to grayscale image. - - Args: - img (ndarray): The input image. - keepdim (bool): If False (by default), then return the grayscale image - with 2 dims, otherwise 3 dims. - - Returns: - ndarray: The converted grayscale image. - """ - out_img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) - if keepdim: - out_img = out_img[..., None] - return out_img - - -def rgb2gray(img, keepdim=False): - """Convert a RGB image to grayscale image. - - Args: - img (ndarray): The input image. - keepdim (bool): If False (by default), then return the grayscale image - with 2 dims, otherwise 3 dims. - - Returns: - ndarray: The converted grayscale image. - """ - out_img = cv2.cvtColor(img, cv2.COLOR_RGB2GRAY) - if keepdim: - out_img = out_img[..., None] - return out_img - - -def gray2bgr(img): - """Convert a grayscale image to BGR image. - - Args: - img (ndarray): The input image. - - Returns: - ndarray: The converted BGR image. 
- """ - img = img[..., None] if img.ndim == 2 else img - out_img = cv2.cvtColor(img, cv2.COLOR_GRAY2BGR) - return out_img - - -def gray2rgb(img): - """Convert a grayscale image to RGB image. - - Args: - img (ndarray): The input image. - - Returns: - ndarray: The converted RGB image. - """ - img = img[..., None] if img.ndim == 2 else img - out_img = cv2.cvtColor(img, cv2.COLOR_GRAY2RGB) - return out_img - - -def _convert_input_type_range(img): - """Convert the type and range of the input image. - - It converts the input image to np.float32 type and range of [0, 1]. - It is mainly used for pre-processing the input image in colorspace - conversion functions such as rgb2ycbcr and ycbcr2rgb. - - Args: - img (ndarray): The input image. It accepts: - 1. np.uint8 type with range [0, 255]; - 2. np.float32 type with range [0, 1]. - - Returns: - (ndarray): The converted image with type of np.float32 and range of - [0, 1]. - """ - img_type = img.dtype - img = img.astype(np.float32) - if img_type == np.float32: - pass - elif img_type == np.uint8: - img /= 255. - else: - raise TypeError('The img type should be np.float32 or np.uint8, ' - f'but got {img_type}') - return img - - -def _convert_output_type_range(img, dst_type): - """Convert the type and range of the image according to dst_type. - - It converts the image to desired type and range. If `dst_type` is np.uint8, - images will be converted to np.uint8 type with range [0, 255]. If - `dst_type` is np.float32, it converts the image to np.float32 type with - range [0, 1]. - It is mainly used for post-processing images in colorspace conversion - functions such as rgb2ycbcr and ycbcr2rgb. - - Args: - img (ndarray): The image to be converted with np.float32 type and - range [0, 255]. - dst_type (np.uint8 | np.float32): If dst_type is np.uint8, it - converts the image to np.uint8 type with range [0, 255]. If - dst_type is np.float32, it converts the image to np.float32 type - with range [0, 1]. - - Returns: - (ndarray): The converted image with desired type and range. - """ - if dst_type not in (np.uint8, np.float32): - raise TypeError('The dst_type should be np.float32 or np.uint8, ' - f'but got {dst_type}') - if dst_type == np.uint8: - img = img.round() - else: - img /= 255. - return img.astype(dst_type) - - -def rgb2ycbcr(img, y_only=False): - """Convert a RGB image to YCbCr image. - - This function produces the same results as Matlab's `rgb2ycbcr` function. - It implements the ITU-R BT.601 conversion for standard-definition - television. See more details in - https://en.wikipedia.org/wiki/YCbCr#ITU-R_BT.601_conversion. - - It differs from a similar function in cv2.cvtColor: `RGB <-> YCrCb`. - In OpenCV, it implements a JPEG conversion. See more details in - https://en.wikipedia.org/wiki/YCbCr#JPEG_conversion. - - Args: - img (ndarray): The input image. It accepts: - 1. np.uint8 type with range [0, 255]; - 2. np.float32 type with range [0, 1]. - y_only (bool): Whether to only return Y channel. Default: False. - - Returns: - ndarray: The converted YCbCr image. The output image has the same type - and range as input image. - """ - img_type = img.dtype - img = _convert_input_type_range(img) - if y_only: - out_img = np.dot(img, [65.481, 128.553, 24.966]) + 16.0 - else: - out_img = np.matmul( - img, [[65.481, -37.797, 112.0], [128.553, -74.203, -93.786], - [24.966, 112.0, -18.214]]) + [16, 128, 128] - out_img = _convert_output_type_range(out_img, img_type) - return out_img - - -def bgr2ycbcr(img, y_only=False): - """Convert a BGR image to YCbCr image. 
- - The bgr version of rgb2ycbcr. - It implements the ITU-R BT.601 conversion for standard-definition - television. See more details in - https://en.wikipedia.org/wiki/YCbCr#ITU-R_BT.601_conversion. - - It differs from a similar function in cv2.cvtColor: `BGR <-> YCrCb`. - In OpenCV, it implements a JPEG conversion. See more details in - https://en.wikipedia.org/wiki/YCbCr#JPEG_conversion. - - Args: - img (ndarray): The input image. It accepts: - 1. np.uint8 type with range [0, 255]; - 2. np.float32 type with range [0, 1]. - y_only (bool): Whether to only return Y channel. Default: False. - - Returns: - ndarray: The converted YCbCr image. The output image has the same type - and range as input image. - """ - img_type = img.dtype - img = _convert_input_type_range(img) - if y_only: - out_img = np.dot(img, [24.966, 128.553, 65.481]) + 16.0 - else: - out_img = np.matmul( - img, [[24.966, 112.0, -18.214], [128.553, -74.203, -93.786], - [65.481, -37.797, 112.0]]) + [16, 128, 128] - out_img = _convert_output_type_range(out_img, img_type) - return out_img - - -def ycbcr2rgb(img): - """Convert a YCbCr image to RGB image. - - This function produces the same results as Matlab's ycbcr2rgb function. - It implements the ITU-R BT.601 conversion for standard-definition - television. See more details in - https://en.wikipedia.org/wiki/YCbCr#ITU-R_BT.601_conversion. - - It differs from a similar function in cv2.cvtColor: `YCrCb <-> RGB`. - In OpenCV, it implements a JPEG conversion. See more details in - https://en.wikipedia.org/wiki/YCbCr#JPEG_conversion. - - Args: - img (ndarray): The input image. It accepts: - 1. np.uint8 type with range [0, 255]; - 2. np.float32 type with range [0, 1]. - - Returns: - ndarray: The converted RGB image. The output image has the same type - and range as input image. - """ - img_type = img.dtype - img = _convert_input_type_range(img) * 255 - out_img = np.matmul(img, [[0.00456621, 0.00456621, 0.00456621], - [0, -0.00153632, 0.00791071], - [0.00625893, -0.00318811, 0]]) * 255.0 + [ - -222.921, 135.576, -276.836 - ] - out_img = _convert_output_type_range(out_img, img_type) - return out_img - - -def ycbcr2bgr(img): - """Convert a YCbCr image to BGR image. - - The bgr version of ycbcr2rgb. - It implements the ITU-R BT.601 conversion for standard-definition - television. See more details in - https://en.wikipedia.org/wiki/YCbCr#ITU-R_BT.601_conversion. - - It differs from a similar function in cv2.cvtColor: `YCrCb <-> BGR`. - In OpenCV, it implements a JPEG conversion. See more details in - https://en.wikipedia.org/wiki/YCbCr#JPEG_conversion. - - Args: - img (ndarray): The input image. It accepts: - 1. np.uint8 type with range [0, 255]; - 2. np.float32 type with range [0, 1]. - - Returns: - ndarray: The converted BGR image. The output image has the same type - and range as input image. - """ - img_type = img.dtype - img = _convert_input_type_range(img) * 255 - out_img = np.matmul(img, [[0.00456621, 0.00456621, 0.00456621], - [0.00791071, -0.00153632, 0], - [0, -0.00318811, 0.00625893]]) * 255.0 + [ - -276.836, 135.576, -222.921 - ] - out_img = _convert_output_type_range(out_img, img_type) - return out_img - - -def convert_color_factory(src, dst): - - code = getattr(cv2, f'COLOR_{src.upper()}2{dst.upper()}') - - def convert_color(img): - out_img = cv2.cvtColor(img, code) - return out_img - - convert_color.__doc__ = f"""Convert a {src.upper()} image to {dst.upper()} - image. - - Args: - img (ndarray or str): The input image. - - Returns: - ndarray: The converted {dst.upper()} image. 
- """ - - return convert_color - - -bgr2rgb = convert_color_factory('bgr', 'rgb') - -rgb2bgr = convert_color_factory('rgb', 'bgr') - -bgr2hsv = convert_color_factory('bgr', 'hsv') - -hsv2bgr = convert_color_factory('hsv', 'bgr') - -bgr2hls = convert_color_factory('bgr', 'hls') - -hls2bgr = convert_color_factory('hls', 'bgr') diff --git a/spaces/Pinwheel/GLIP-BLIP-Object-Detection-VQA/maskrcnn_benchmark/csrc/ROIPool.h b/spaces/Pinwheel/GLIP-BLIP-Object-Detection-VQA/maskrcnn_benchmark/csrc/ROIPool.h deleted file mode 100644 index 4cf627fde6b9dc0ecd6524a7048e7f8c7d7746b5..0000000000000000000000000000000000000000 --- a/spaces/Pinwheel/GLIP-BLIP-Object-Detection-VQA/maskrcnn_benchmark/csrc/ROIPool.h +++ /dev/null @@ -1,48 +0,0 @@ -// Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved. -#pragma once - -#include "cpu/vision.h" - -#ifdef WITH_CUDA -#include "cuda/vision.h" -#endif - - -std::tuple ROIPool_forward(const at::Tensor& input, - const at::Tensor& rois, - const float spatial_scale, - const int pooled_height, - const int pooled_width) { - if (input.device().is_cuda()) { -#ifdef WITH_CUDA - return ROIPool_forward_cuda(input, rois, spatial_scale, pooled_height, pooled_width); -#else - AT_ERROR("Not compiled with GPU support"); -#endif - } - AT_ERROR("Not implemented on the CPU"); -} - -at::Tensor ROIPool_backward(const at::Tensor& grad, - const at::Tensor& input, - const at::Tensor& rois, - const at::Tensor& argmax, - const float spatial_scale, - const int pooled_height, - const int pooled_width, - const int batch_size, - const int channels, - const int height, - const int width) { - if (grad.device().is_cuda()) { -#ifdef WITH_CUDA - return ROIPool_backward_cuda(grad, input, rois, argmax, spatial_scale, pooled_height, pooled_width, batch_size, channels, height, width); -#else - AT_ERROR("Not compiled with GPU support"); -#endif - } - AT_ERROR("Not implemented on the CPU"); -} - - - diff --git a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/rich/_log_render.py b/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/rich/_log_render.py deleted file mode 100644 index fc16c84437a8a34231c44d3f0a331459ddcb0f34..0000000000000000000000000000000000000000 --- a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/rich/_log_render.py +++ /dev/null @@ -1,94 +0,0 @@ -from datetime import datetime -from typing import Iterable, List, Optional, TYPE_CHECKING, Union, Callable - - -from .text import Text, TextType - -if TYPE_CHECKING: - from .console import Console, ConsoleRenderable, RenderableType - from .table import Table - -FormatTimeCallable = Callable[[datetime], Text] - - -class LogRender: - def __init__( - self, - show_time: bool = True, - show_level: bool = False, - show_path: bool = True, - time_format: Union[str, FormatTimeCallable] = "[%x %X]", - omit_repeated_times: bool = True, - level_width: Optional[int] = 8, - ) -> None: - self.show_time = show_time - self.show_level = show_level - self.show_path = show_path - self.time_format = time_format - self.omit_repeated_times = omit_repeated_times - self.level_width = level_width - self._last_time: Optional[Text] = None - - def __call__( - self, - console: "Console", - renderables: Iterable["ConsoleRenderable"], - log_time: Optional[datetime] = None, - time_format: Optional[Union[str, FormatTimeCallable]] = None, - level: TextType = "", - path: Optional[str] = None, - line_no: Optional[int] = None, - link_path: Optional[str] = None, - ) -> "Table": - from .containers 
import Renderables - from .table import Table - - output = Table.grid(padding=(0, 1)) - output.expand = True - if self.show_time: - output.add_column(style="log.time") - if self.show_level: - output.add_column(style="log.level", width=self.level_width) - output.add_column(ratio=1, style="log.message", overflow="fold") - if self.show_path and path: - output.add_column(style="log.path") - row: List["RenderableType"] = [] - if self.show_time: - log_time = log_time or console.get_datetime() - time_format = time_format or self.time_format - if callable(time_format): - log_time_display = time_format(log_time) - else: - log_time_display = Text(log_time.strftime(time_format)) - if log_time_display == self._last_time and self.omit_repeated_times: - row.append(Text(" " * len(log_time_display))) - else: - row.append(log_time_display) - self._last_time = log_time_display - if self.show_level: - row.append(level) - - row.append(Renderables(renderables)) - if self.show_path and path: - path_text = Text() - path_text.append( - path, style=f"link file://{link_path}" if link_path else "" - ) - if line_no: - path_text.append(":") - path_text.append( - f"{line_no}", - style=f"link file://{link_path}#{line_no}" if link_path else "", - ) - row.append(path_text) - - output.add_row(*row) - return output - - -if __name__ == "__main__": # pragma: no cover - from pip._vendor.rich.console import Console - - c = Console() - c.print("[on blue]Hello", justify="right") - c.log("[on blue]hello", justify="right") diff --git a/spaces/RegalHyperus/rvc-anime-game/infer_pack/models.py b/spaces/RegalHyperus/rvc-anime-game/infer_pack/models.py deleted file mode 100644 index 96165f73644e6fb92d0ffedb4a3c9e1a457cb989..0000000000000000000000000000000000000000 --- a/spaces/RegalHyperus/rvc-anime-game/infer_pack/models.py +++ /dev/null @@ -1,982 +0,0 @@ -import math, pdb, os -from time import time as ttime -import torch -from torch import nn -from torch.nn import functional as F -from infer_pack import modules -from infer_pack import attentions -from infer_pack import commons -from infer_pack.commons import init_weights, get_padding -from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d -from torch.nn.utils import weight_norm, remove_weight_norm, spectral_norm -from infer_pack.commons import init_weights -import numpy as np -from infer_pack import commons - - -class TextEncoder256(nn.Module): - def __init__( - self, - out_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - f0=True, - ): - super().__init__() - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.emb_phone = nn.Linear(256, hidden_channels) - self.lrelu = nn.LeakyReLU(0.1, inplace=True) - if f0 == True: - self.emb_pitch = nn.Embedding(256, hidden_channels) # pitch 256 - self.encoder = attentions.Encoder( - hidden_channels, filter_channels, n_heads, n_layers, kernel_size, p_dropout - ) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, phone, pitch, lengths): - if pitch == None: - x = self.emb_phone(phone) - else: - x = self.emb_phone(phone) + self.emb_pitch(pitch) - x = x * math.sqrt(self.hidden_channels) # [b, t, h] - x = self.lrelu(x) - x = torch.transpose(x, 1, -1) # [b, h, t] - x_mask = torch.unsqueeze(commons.sequence_mask(lengths, x.size(2)), 1).to( - x.dtype - ) - x = self.encoder(x * x_mask, 
x_mask) - stats = self.proj(x) * x_mask - - m, logs = torch.split(stats, self.out_channels, dim=1) - return m, logs, x_mask - - -class TextEncoder256Sim(nn.Module): - def __init__( - self, - out_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - f0=True, - ): - super().__init__() - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.emb_phone = nn.Linear(256, hidden_channels) - self.lrelu = nn.LeakyReLU(0.1, inplace=True) - if f0 == True: - self.emb_pitch = nn.Embedding(256, hidden_channels) # pitch 256 - self.encoder = attentions.Encoder( - hidden_channels, filter_channels, n_heads, n_layers, kernel_size, p_dropout - ) - self.proj = nn.Conv1d(hidden_channels, out_channels, 1) - - def forward(self, phone, pitch, lengths): - if pitch == None: - x = self.emb_phone(phone) - else: - x = self.emb_phone(phone) + self.emb_pitch(pitch) - x = x * math.sqrt(self.hidden_channels) # [b, t, h] - x = self.lrelu(x) - x = torch.transpose(x, 1, -1) # [b, h, t] - x_mask = torch.unsqueeze(commons.sequence_mask(lengths, x.size(2)), 1).to( - x.dtype - ) - x = self.encoder(x * x_mask, x_mask) - x = self.proj(x) * x_mask - return x, x_mask - - -class ResidualCouplingBlock(nn.Module): - def __init__( - self, - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - n_flows=4, - gin_channels=0, - ): - super().__init__() - self.channels = channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.n_flows = n_flows - self.gin_channels = gin_channels - - self.flows = nn.ModuleList() - for i in range(n_flows): - self.flows.append( - modules.ResidualCouplingLayer( - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=gin_channels, - mean_only=True, - ) - ) - self.flows.append(modules.Flip()) - - def forward(self, x, x_mask, g=None, reverse=False): - if not reverse: - for flow in self.flows: - x, _ = flow(x, x_mask, g=g, reverse=reverse) - else: - for flow in reversed(self.flows): - x = flow(x, x_mask, g=g, reverse=reverse) - return x - - def remove_weight_norm(self): - for i in range(self.n_flows): - self.flows[i * 2].remove_weight_norm() - - -class PosteriorEncoder(nn.Module): - def __init__( - self, - in_channels, - out_channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=0, - ): - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.gin_channels = gin_channels - - self.pre = nn.Conv1d(in_channels, hidden_channels, 1) - self.enc = modules.WN( - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=gin_channels, - ) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, x, x_lengths, g=None): - x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to( - x.dtype - ) - x = self.pre(x) * x_mask - x = self.enc(x, x_mask, g=g) - stats = self.proj(x) * x_mask - m, logs = torch.split(stats, self.out_channels, dim=1) - z = (m + torch.randn_like(m) * torch.exp(logs)) * x_mask - return z, m, logs, x_mask - - def remove_weight_norm(self): - self.enc.remove_weight_norm() - - 
-class Generator(torch.nn.Module): - def __init__( - self, - initial_channel, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels=0, - ): - super(Generator, self).__init__() - self.num_kernels = len(resblock_kernel_sizes) - self.num_upsamples = len(upsample_rates) - self.conv_pre = Conv1d( - initial_channel, upsample_initial_channel, 7, 1, padding=3 - ) - resblock = modules.ResBlock1 if resblock == "1" else modules.ResBlock2 - - self.ups = nn.ModuleList() - for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)): - self.ups.append( - weight_norm( - ConvTranspose1d( - upsample_initial_channel // (2**i), - upsample_initial_channel // (2 ** (i + 1)), - k, - u, - padding=(k - u) // 2, - ) - ) - ) - - self.resblocks = nn.ModuleList() - for i in range(len(self.ups)): - ch = upsample_initial_channel // (2 ** (i + 1)) - for j, (k, d) in enumerate( - zip(resblock_kernel_sizes, resblock_dilation_sizes) - ): - self.resblocks.append(resblock(ch, k, d)) - - self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False) - self.ups.apply(init_weights) - - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1) - - def forward(self, x, g=None): - x = self.conv_pre(x) - if g is not None: - x = x + self.cond(g) - - for i in range(self.num_upsamples): - x = F.leaky_relu(x, modules.LRELU_SLOPE) - x = self.ups[i](x) - xs = None - for j in range(self.num_kernels): - if xs is None: - xs = self.resblocks[i * self.num_kernels + j](x) - else: - xs += self.resblocks[i * self.num_kernels + j](x) - x = xs / self.num_kernels - x = F.leaky_relu(x) - x = self.conv_post(x) - x = torch.tanh(x) - - return x - - def remove_weight_norm(self): - for l in self.ups: - remove_weight_norm(l) - for l in self.resblocks: - l.remove_weight_norm() - - -class SineGen(torch.nn.Module): - """Definition of sine generator - SineGen(samp_rate, harmonic_num = 0, - sine_amp = 0.1, noise_std = 0.003, - voiced_threshold = 0, - flag_for_pulse=False) - samp_rate: sampling rate in Hz - harmonic_num: number of harmonic overtones (default 0) - sine_amp: amplitude of sine-wavefrom (default 0.1) - noise_std: std of Gaussian noise (default 0.003) - voiced_thoreshold: F0 threshold for U/V classification (default 0) - flag_for_pulse: this SinGen is used inside PulseGen (default False) - Note: when flag_for_pulse is True, the first time step of a voiced - segment is always sin(np.pi) or cos(0) - """ - - def __init__( - self, - samp_rate, - harmonic_num=0, - sine_amp=0.1, - noise_std=0.003, - voiced_threshold=0, - flag_for_pulse=False, - ): - super(SineGen, self).__init__() - self.sine_amp = sine_amp - self.noise_std = noise_std - self.harmonic_num = harmonic_num - self.dim = self.harmonic_num + 1 - self.sampling_rate = samp_rate - self.voiced_threshold = voiced_threshold - - def _f02uv(self, f0): - # generate uv signal - uv = torch.ones_like(f0) - uv = uv * (f0 > self.voiced_threshold) - return uv - - def forward(self, f0, upp): - """sine_tensor, uv = forward(f0) - input F0: tensor(batchsize=1, length, dim=1) - f0 for unvoiced steps should be 0 - output sine_tensor: tensor(batchsize=1, length, dim) - output uv: tensor(batchsize=1, length, 1) - """ - with torch.no_grad(): - f0 = f0[:, None].transpose(1, 2) - f0_buf = torch.zeros(f0.shape[0], f0.shape[1], self.dim, device=f0.device) - # fundamental component - f0_buf[:, :, 0] = f0[:, :, 0] - for idx in np.arange(self.harmonic_num): - f0_buf[:, :, idx + 1] = f0_buf[:, 
:, 0] * ( - idx + 2 - ) # idx + 2: the (idx+1)-th overtone, (idx+2)-th harmonic - rad_values = (f0_buf / self.sampling_rate) % 1 ###%1意味着n_har的乘积无法后处理优化 - rand_ini = torch.rand( - f0_buf.shape[0], f0_buf.shape[2], device=f0_buf.device - ) - rand_ini[:, 0] = 0 - rad_values[:, 0, :] = rad_values[:, 0, :] + rand_ini - tmp_over_one = torch.cumsum(rad_values, 1) # % 1 #####%1意味着后面的cumsum无法再优化 - tmp_over_one *= upp - tmp_over_one = F.interpolate( - tmp_over_one.transpose(2, 1), - scale_factor=upp, - mode="linear", - align_corners=True, - ).transpose(2, 1) - rad_values = F.interpolate( - rad_values.transpose(2, 1), scale_factor=upp, mode="nearest" - ).transpose( - 2, 1 - ) ####### - tmp_over_one %= 1 - tmp_over_one_idx = (tmp_over_one[:, 1:, :] - tmp_over_one[:, :-1, :]) < 0 - cumsum_shift = torch.zeros_like(rad_values) - cumsum_shift[:, 1:, :] = tmp_over_one_idx * -1.0 - sine_waves = torch.sin( - torch.cumsum(rad_values + cumsum_shift, dim=1) * 2 * np.pi - ) - sine_waves = sine_waves * self.sine_amp - uv = self._f02uv(f0) - uv = F.interpolate( - uv.transpose(2, 1), scale_factor=upp, mode="nearest" - ).transpose(2, 1) - noise_amp = uv * self.noise_std + (1 - uv) * self.sine_amp / 3 - noise = noise_amp * torch.randn_like(sine_waves) - sine_waves = sine_waves * uv + noise - return sine_waves, uv, noise - - -class SourceModuleHnNSF(torch.nn.Module): - """SourceModule for hn-nsf - SourceModule(sampling_rate, harmonic_num=0, sine_amp=0.1, - add_noise_std=0.003, voiced_threshod=0) - sampling_rate: sampling_rate in Hz - harmonic_num: number of harmonic above F0 (default: 0) - sine_amp: amplitude of sine source signal (default: 0.1) - add_noise_std: std of additive Gaussian noise (default: 0.003) - note that amplitude of noise in unvoiced is decided - by sine_amp - voiced_threshold: threhold to set U/V given F0 (default: 0) - Sine_source, noise_source = SourceModuleHnNSF(F0_sampled) - F0_sampled (batchsize, length, 1) - Sine_source (batchsize, length, 1) - noise_source (batchsize, length 1) - uv (batchsize, length, 1) - """ - - def __init__( - self, - sampling_rate, - harmonic_num=0, - sine_amp=0.1, - add_noise_std=0.003, - voiced_threshod=0, - is_half=True, - ): - super(SourceModuleHnNSF, self).__init__() - - self.sine_amp = sine_amp - self.noise_std = add_noise_std - self.is_half = is_half - # to produce sine waveforms - self.l_sin_gen = SineGen( - sampling_rate, harmonic_num, sine_amp, add_noise_std, voiced_threshod - ) - - # to merge source harmonics into a single excitation - self.l_linear = torch.nn.Linear(harmonic_num + 1, 1) - self.l_tanh = torch.nn.Tanh() - - def forward(self, x, upp=None): - sine_wavs, uv, _ = self.l_sin_gen(x, upp) - if self.is_half: - sine_wavs = sine_wavs.half() - sine_merge = self.l_tanh(self.l_linear(sine_wavs)) - return sine_merge, None, None # noise, uv - - -class GeneratorNSF(torch.nn.Module): - def __init__( - self, - initial_channel, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels, - sr, - is_half=False, - ): - super(GeneratorNSF, self).__init__() - self.num_kernels = len(resblock_kernel_sizes) - self.num_upsamples = len(upsample_rates) - - self.f0_upsamp = torch.nn.Upsample(scale_factor=np.prod(upsample_rates)) - self.m_source = SourceModuleHnNSF( - sampling_rate=sr, harmonic_num=0, is_half=is_half - ) - self.noise_convs = nn.ModuleList() - self.conv_pre = Conv1d( - initial_channel, upsample_initial_channel, 7, 1, padding=3 - ) - resblock = modules.ResBlock1 if 
resblock == "1" else modules.ResBlock2 - - self.ups = nn.ModuleList() - for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)): - c_cur = upsample_initial_channel // (2 ** (i + 1)) - self.ups.append( - weight_norm( - ConvTranspose1d( - upsample_initial_channel // (2**i), - upsample_initial_channel // (2 ** (i + 1)), - k, - u, - padding=(k - u) // 2, - ) - ) - ) - if i + 1 < len(upsample_rates): - stride_f0 = np.prod(upsample_rates[i + 1 :]) - self.noise_convs.append( - Conv1d( - 1, - c_cur, - kernel_size=stride_f0 * 2, - stride=stride_f0, - padding=stride_f0 // 2, - ) - ) - else: - self.noise_convs.append(Conv1d(1, c_cur, kernel_size=1)) - - self.resblocks = nn.ModuleList() - for i in range(len(self.ups)): - ch = upsample_initial_channel // (2 ** (i + 1)) - for j, (k, d) in enumerate( - zip(resblock_kernel_sizes, resblock_dilation_sizes) - ): - self.resblocks.append(resblock(ch, k, d)) - - self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False) - self.ups.apply(init_weights) - - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1) - - self.upp = np.prod(upsample_rates) - - def forward(self, x, f0, g=None): - har_source, noi_source, uv = self.m_source(f0, self.upp) - har_source = har_source.transpose(1, 2) - x = self.conv_pre(x) - if g is not None: - x = x + self.cond(g) - - for i in range(self.num_upsamples): - x = F.leaky_relu(x, modules.LRELU_SLOPE) - x = self.ups[i](x) - x_source = self.noise_convs[i](har_source) - x = x + x_source - xs = None - for j in range(self.num_kernels): - if xs is None: - xs = self.resblocks[i * self.num_kernels + j](x) - else: - xs += self.resblocks[i * self.num_kernels + j](x) - x = xs / self.num_kernels - x = F.leaky_relu(x) - x = self.conv_post(x) - x = torch.tanh(x) - return x - - def remove_weight_norm(self): - for l in self.ups: - remove_weight_norm(l) - for l in self.resblocks: - l.remove_weight_norm() - - -sr2sr = { - "32k": 32000, - "40k": 40000, - "48k": 48000, -} - - -class SynthesizerTrnMs256NSFsid(nn.Module): - def __init__( - self, - spec_channels, - segment_size, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - spk_embed_dim, - gin_channels, - sr, - **kwargs - ): - super().__init__() - if type(sr) == type("strr"): - sr = sr2sr[sr] - self.spec_channels = spec_channels - self.inter_channels = inter_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.resblock = resblock - self.resblock_kernel_sizes = resblock_kernel_sizes - self.resblock_dilation_sizes = resblock_dilation_sizes - self.upsample_rates = upsample_rates - self.upsample_initial_channel = upsample_initial_channel - self.upsample_kernel_sizes = upsample_kernel_sizes - self.segment_size = segment_size - self.gin_channels = gin_channels - # self.hop_length = hop_length# - self.spk_embed_dim = spk_embed_dim - self.enc_p = TextEncoder256( - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - ) - self.dec = GeneratorNSF( - inter_channels, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels=gin_channels, - sr=sr, - is_half=kwargs["is_half"], 
- ) - self.enc_q = PosteriorEncoder( - spec_channels, - inter_channels, - hidden_channels, - 5, - 1, - 16, - gin_channels=gin_channels, - ) - self.flow = ResidualCouplingBlock( - inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels - ) - self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels) - print("gin_channels:", gin_channels, "self.spk_embed_dim:", self.spk_embed_dim) - - def remove_weight_norm(self): - self.dec.remove_weight_norm() - self.flow.remove_weight_norm() - self.enc_q.remove_weight_norm() - - def forward( - self, phone, phone_lengths, pitch, pitchf, y, y_lengths, ds - ): # 这里ds是id,[bs,1] - # print(1,pitch.shape)#[bs,t] - g = self.emb_g(ds).unsqueeze(-1) # [b, 256, 1]##1是t,广播的 - m_p, logs_p, x_mask = self.enc_p(phone, pitch, phone_lengths) - z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g) - z_p = self.flow(z, y_mask, g=g) - z_slice, ids_slice = commons.rand_slice_segments( - z, y_lengths, self.segment_size - ) - # print(-1,pitchf.shape,ids_slice,self.segment_size,self.hop_length,self.segment_size//self.hop_length) - pitchf = commons.slice_segments2(pitchf, ids_slice, self.segment_size) - # print(-2,pitchf.shape,z_slice.shape) - o = self.dec(z_slice, pitchf, g=g) - return o, ids_slice, x_mask, y_mask, (z, z_p, m_p, logs_p, m_q, logs_q) - - def infer(self, phone, phone_lengths, pitch, nsff0, sid, max_len=None): - g = self.emb_g(sid).unsqueeze(-1) - m_p, logs_p, x_mask = self.enc_p(phone, pitch, phone_lengths) - z_p = (m_p + torch.exp(logs_p) * torch.randn_like(m_p) * 0.66666) * x_mask - z = self.flow(z_p, x_mask, g=g, reverse=True) - o = self.dec((z * x_mask)[:, :, :max_len], nsff0, g=g) - return o, x_mask, (z, z_p, m_p, logs_p) - - -class SynthesizerTrnMs256NSFsid_nono(nn.Module): - def __init__( - self, - spec_channels, - segment_size, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - spk_embed_dim, - gin_channels, - sr=None, - **kwargs - ): - super().__init__() - self.spec_channels = spec_channels - self.inter_channels = inter_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.resblock = resblock - self.resblock_kernel_sizes = resblock_kernel_sizes - self.resblock_dilation_sizes = resblock_dilation_sizes - self.upsample_rates = upsample_rates - self.upsample_initial_channel = upsample_initial_channel - self.upsample_kernel_sizes = upsample_kernel_sizes - self.segment_size = segment_size - self.gin_channels = gin_channels - # self.hop_length = hop_length# - self.spk_embed_dim = spk_embed_dim - self.enc_p = TextEncoder256( - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - f0=False, - ) - self.dec = Generator( - inter_channels, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels=gin_channels, - ) - self.enc_q = PosteriorEncoder( - spec_channels, - inter_channels, - hidden_channels, - 5, - 1, - 16, - gin_channels=gin_channels, - ) - self.flow = ResidualCouplingBlock( - inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels - ) - self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels) - print("gin_channels:", gin_channels, 
"self.spk_embed_dim:", self.spk_embed_dim) - - def remove_weight_norm(self): - self.dec.remove_weight_norm() - self.flow.remove_weight_norm() - self.enc_q.remove_weight_norm() - - def forward(self, phone, phone_lengths, y, y_lengths, ds): # 这里ds是id,[bs,1] - g = self.emb_g(ds).unsqueeze(-1) # [b, 256, 1]##1是t,广播的 - m_p, logs_p, x_mask = self.enc_p(phone, None, phone_lengths) - z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g) - z_p = self.flow(z, y_mask, g=g) - z_slice, ids_slice = commons.rand_slice_segments( - z, y_lengths, self.segment_size - ) - o = self.dec(z_slice, g=g) - return o, ids_slice, x_mask, y_mask, (z, z_p, m_p, logs_p, m_q, logs_q) - - def infer(self, phone, phone_lengths, sid, max_len=None): - g = self.emb_g(sid).unsqueeze(-1) - m_p, logs_p, x_mask = self.enc_p(phone, None, phone_lengths) - z_p = (m_p + torch.exp(logs_p) * torch.randn_like(m_p) * 0.66666) * x_mask - z = self.flow(z_p, x_mask, g=g, reverse=True) - o = self.dec((z * x_mask)[:, :, :max_len], g=g) - return o, x_mask, (z, z_p, m_p, logs_p) - - -class SynthesizerTrnMs256NSFsid_sim(nn.Module): - """ - Synthesizer for Training - """ - - def __init__( - self, - spec_channels, - segment_size, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - spk_embed_dim, - # hop_length, - gin_channels=0, - use_sdp=True, - **kwargs - ): - super().__init__() - self.spec_channels = spec_channels - self.inter_channels = inter_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.resblock = resblock - self.resblock_kernel_sizes = resblock_kernel_sizes - self.resblock_dilation_sizes = resblock_dilation_sizes - self.upsample_rates = upsample_rates - self.upsample_initial_channel = upsample_initial_channel - self.upsample_kernel_sizes = upsample_kernel_sizes - self.segment_size = segment_size - self.gin_channels = gin_channels - # self.hop_length = hop_length# - self.spk_embed_dim = spk_embed_dim - self.enc_p = TextEncoder256Sim( - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - ) - self.dec = GeneratorNSF( - inter_channels, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels=gin_channels, - is_half=kwargs["is_half"], - ) - - self.flow = ResidualCouplingBlock( - inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels - ) - self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels) - print("gin_channels:", gin_channels, "self.spk_embed_dim:", self.spk_embed_dim) - - def remove_weight_norm(self): - self.dec.remove_weight_norm() - self.flow.remove_weight_norm() - self.enc_q.remove_weight_norm() - - def forward( - self, phone, phone_lengths, pitch, pitchf, y_lengths, ds - ): # y是spec不需要了现在 - g = self.emb_g(ds).unsqueeze(-1) # [b, 256, 1]##1是t,广播的 - x, x_mask = self.enc_p(phone, pitch, phone_lengths) - x = self.flow(x, x_mask, g=g, reverse=True) - z_slice, ids_slice = commons.rand_slice_segments( - x, y_lengths, self.segment_size - ) - - pitchf = commons.slice_segments2(pitchf, ids_slice, self.segment_size) - o = self.dec(z_slice, pitchf, g=g) - return o, ids_slice - - def infer( - self, phone, phone_lengths, pitch, pitchf, 
ds, max_len=None - ): # y是spec不需要了现在 - g = self.emb_g(ds).unsqueeze(-1) # [b, 256, 1]##1是t,广播的 - x, x_mask = self.enc_p(phone, pitch, phone_lengths) - x = self.flow(x, x_mask, g=g, reverse=True) - o = self.dec((x * x_mask)[:, :, :max_len], pitchf, g=g) - return o, o - - -class MultiPeriodDiscriminator(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(MultiPeriodDiscriminator, self).__init__() - periods = [2, 3, 5, 7, 11, 17] - # periods = [3, 5, 7, 11, 17, 23, 37] - - discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)] - discs = discs + [ - DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods - ] - self.discriminators = nn.ModuleList(discs) - - def forward(self, y, y_hat): - y_d_rs = [] # - y_d_gs = [] - fmap_rs = [] - fmap_gs = [] - for i, d in enumerate(self.discriminators): - y_d_r, fmap_r = d(y) - y_d_g, fmap_g = d(y_hat) - # for j in range(len(fmap_r)): - # print(i,j,y.shape,y_hat.shape,fmap_r[j].shape,fmap_g[j].shape) - y_d_rs.append(y_d_r) - y_d_gs.append(y_d_g) - fmap_rs.append(fmap_r) - fmap_gs.append(fmap_g) - - return y_d_rs, y_d_gs, fmap_rs, fmap_gs - - -class DiscriminatorS(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(DiscriminatorS, self).__init__() - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList( - [ - norm_f(Conv1d(1, 16, 15, 1, padding=7)), - norm_f(Conv1d(16, 64, 41, 4, groups=4, padding=20)), - norm_f(Conv1d(64, 256, 41, 4, groups=16, padding=20)), - norm_f(Conv1d(256, 1024, 41, 4, groups=64, padding=20)), - norm_f(Conv1d(1024, 1024, 41, 4, groups=256, padding=20)), - norm_f(Conv1d(1024, 1024, 5, 1, padding=2)), - ] - ) - self.conv_post = norm_f(Conv1d(1024, 1, 3, 1, padding=1)) - - def forward(self, x): - fmap = [] - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - -class DiscriminatorP(torch.nn.Module): - def __init__(self, period, kernel_size=5, stride=3, use_spectral_norm=False): - super(DiscriminatorP, self).__init__() - self.period = period - self.use_spectral_norm = use_spectral_norm - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList( - [ - norm_f( - Conv2d( - 1, - 32, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 32, - 128, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 128, - 512, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 512, - 1024, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 1024, - 1024, - (kernel_size, 1), - 1, - padding=(get_padding(kernel_size, 1), 0), - ) - ), - ] - ) - self.conv_post = norm_f(Conv2d(1024, 1, (3, 1), 1, padding=(1, 0))) - - def forward(self, x): - fmap = [] - - # 1d to 2d - b, c, t = x.shape - if t % self.period != 0: # pad first - n_pad = self.period - (t % self.period) - x = F.pad(x, (0, n_pad), "reflect") - t = t + n_pad - x = x.view(b, c, t // self.period, self.period) - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet/datasets/coco.py 
b/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet/datasets/coco.py deleted file mode 100644 index 3a8e1bcfdd7f2854ca381d4f87788e3a63eb568c..0000000000000000000000000000000000000000 --- a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet/datasets/coco.py +++ /dev/null @@ -1,546 +0,0 @@ -import itertools -import logging -import os.path as osp -import tempfile -from collections import OrderedDict - -import mmcv -import numpy as np -import pycocotools -from mmcv.utils import print_log -from pycocotools.coco import COCO -from pycocotools.cocoeval import COCOeval -from terminaltables import AsciiTable - -from mmdet.core import eval_recalls -from .builder import DATASETS -from .custom import CustomDataset - - -@DATASETS.register_module() -class CocoDataset(CustomDataset): - - CLASSES = ('person', 'bicycle', 'car', 'motorcycle', 'airplane', 'bus', - 'train', 'truck', 'boat', 'traffic light', 'fire hydrant', - 'stop sign', 'parking meter', 'bench', 'bird', 'cat', 'dog', - 'horse', 'sheep', 'cow', 'elephant', 'bear', 'zebra', 'giraffe', - 'backpack', 'umbrella', 'handbag', 'tie', 'suitcase', 'frisbee', - 'skis', 'snowboard', 'sports ball', 'kite', 'baseball bat', - 'baseball glove', 'skateboard', 'surfboard', 'tennis racket', - 'bottle', 'wine glass', 'cup', 'fork', 'knife', 'spoon', 'bowl', - 'banana', 'apple', 'sandwich', 'orange', 'broccoli', 'carrot', - 'hot dog', 'pizza', 'donut', 'cake', 'chair', 'couch', - 'potted plant', 'bed', 'dining table', 'toilet', 'tv', 'laptop', - 'mouse', 'remote', 'keyboard', 'cell phone', 'microwave', - 'oven', 'toaster', 'sink', 'refrigerator', 'book', 'clock', - 'vase', 'scissors', 'teddy bear', 'hair drier', 'toothbrush') - - def load_annotations(self, ann_file): - """Load annotation from COCO style annotation file. - - Args: - ann_file (str): Path of annotation file. - - Returns: - list[dict]: Annotation info from COCO api. - """ - if not getattr(pycocotools, '__version__', '0') >= '12.0.2': - raise AssertionError( - 'Incompatible version of pycocotools is installed. ' - 'Run pip uninstall pycocotools first. Then run pip ' - 'install mmpycocotools to install open-mmlab forked ' - 'pycocotools.') - - self.coco = COCO(ann_file) - self.cat_ids = self.coco.get_cat_ids(cat_names=self.CLASSES) - self.cat2label = {cat_id: i for i, cat_id in enumerate(self.cat_ids)} - self.img_ids = self.coco.get_img_ids() - data_infos = [] - total_ann_ids = [] - for i in self.img_ids: - info = self.coco.load_imgs([i])[0] - info['filename'] = info['file_name'] - data_infos.append(info) - ann_ids = self.coco.get_ann_ids(img_ids=[i]) - total_ann_ids.extend(ann_ids) - assert len(set(total_ann_ids)) == len( - total_ann_ids), f"Annotation ids in '{ann_file}' are not unique!" - return data_infos - - def get_ann_info(self, idx): - """Get COCO annotation by index. - - Args: - idx (int): Index of data. - - Returns: - dict: Annotation info of specified index. - """ - - img_id = self.data_infos[idx]['id'] - ann_ids = self.coco.get_ann_ids(img_ids=[img_id]) - ann_info = self.coco.load_anns(ann_ids) - return self._parse_ann_info(self.data_infos[idx], ann_info) - - def get_cat_ids(self, idx): - """Get COCO category ids by index. - - Args: - idx (int): Index of data. - - Returns: - list[int]: All categories in the image of specified index. 
- """ - - img_id = self.data_infos[idx]['id'] - ann_ids = self.coco.get_ann_ids(img_ids=[img_id]) - ann_info = self.coco.load_anns(ann_ids) - return [ann['category_id'] for ann in ann_info] - - def _filter_imgs(self, min_size=32): - """Filter images too small or without ground truths.""" - valid_inds = [] - # obtain images that contain annotation - ids_with_ann = set(_['image_id'] for _ in self.coco.anns.values()) - # obtain images that contain annotations of the required categories - ids_in_cat = set() - for i, class_id in enumerate(self.cat_ids): - ids_in_cat |= set(self.coco.cat_img_map[class_id]) - # merge the image id sets of the two conditions and use the merged set - # to filter out images if self.filter_empty_gt=True - ids_in_cat &= ids_with_ann - - valid_img_ids = [] - for i, img_info in enumerate(self.data_infos): - img_id = self.img_ids[i] - if self.filter_empty_gt and img_id not in ids_in_cat: - continue - if min(img_info['width'], img_info['height']) >= min_size: - valid_inds.append(i) - valid_img_ids.append(img_id) - self.img_ids = valid_img_ids - return valid_inds - - def _parse_ann_info(self, img_info, ann_info): - """Parse bbox and mask annotation. - - Args: - ann_info (list[dict]): Annotation info of an image. - with_mask (bool): Whether to parse mask annotations. - - Returns: - dict: A dict containing the following keys: bboxes, bboxes_ignore,\ - labels, masks, seg_map. "masks" are raw annotations and not \ - decoded into binary masks. - """ - gt_bboxes = [] - gt_labels = [] - gt_bboxes_ignore = [] - gt_masks_ann = [] - for i, ann in enumerate(ann_info): - if ann.get('ignore', False): - continue - x1, y1, w, h = ann['bbox'] - inter_w = max(0, min(x1 + w, img_info['width']) - max(x1, 0)) - inter_h = max(0, min(y1 + h, img_info['height']) - max(y1, 0)) - if inter_w * inter_h == 0: - continue - if ann['area'] <= 0 or w < 1 or h < 1: - continue - if ann['category_id'] not in self.cat_ids: - continue - bbox = [x1, y1, x1 + w, y1 + h] - if ann.get('iscrowd', False): - gt_bboxes_ignore.append(bbox) - else: - gt_bboxes.append(bbox) - gt_labels.append(self.cat2label[ann['category_id']]) - gt_masks_ann.append(ann.get('segmentation', None)) - - if gt_bboxes: - gt_bboxes = np.array(gt_bboxes, dtype=np.float32) - gt_labels = np.array(gt_labels, dtype=np.int64) - else: - gt_bboxes = np.zeros((0, 4), dtype=np.float32) - gt_labels = np.array([], dtype=np.int64) - - if gt_bboxes_ignore: - gt_bboxes_ignore = np.array(gt_bboxes_ignore, dtype=np.float32) - else: - gt_bboxes_ignore = np.zeros((0, 4), dtype=np.float32) - - seg_map = img_info['filename'].replace('jpg', 'png') - - ann = dict( - bboxes=gt_bboxes, - labels=gt_labels, - bboxes_ignore=gt_bboxes_ignore, - masks=gt_masks_ann, - seg_map=seg_map) - - return ann - - def xyxy2xywh(self, bbox): - """Convert ``xyxy`` style bounding boxes to ``xywh`` style for COCO - evaluation. - - Args: - bbox (numpy.ndarray): The bounding boxes, shape (4, ), in - ``xyxy`` order. - - Returns: - list[float]: The converted bounding boxes, in ``xywh`` order. 
- """ - - _bbox = bbox.tolist() - return [ - _bbox[0], - _bbox[1], - _bbox[2] - _bbox[0], - _bbox[3] - _bbox[1], - ] - - def _proposal2json(self, results): - """Convert proposal results to COCO json style.""" - json_results = [] - for idx in range(len(self)): - img_id = self.img_ids[idx] - bboxes = results[idx] - for i in range(bboxes.shape[0]): - data = dict() - data['image_id'] = img_id - data['bbox'] = self.xyxy2xywh(bboxes[i]) - data['score'] = float(bboxes[i][4]) - data['category_id'] = 1 - json_results.append(data) - return json_results - - def _det2json(self, results): - """Convert detection results to COCO json style.""" - json_results = [] - for idx in range(len(self)): - img_id = self.img_ids[idx] - result = results[idx] - for label in range(len(result)): - bboxes = result[label] - for i in range(bboxes.shape[0]): - data = dict() - data['image_id'] = img_id - data['bbox'] = self.xyxy2xywh(bboxes[i]) - data['score'] = float(bboxes[i][4]) - data['category_id'] = self.cat_ids[label] - json_results.append(data) - return json_results - - def _segm2json(self, results): - """Convert instance segmentation results to COCO json style.""" - bbox_json_results = [] - segm_json_results = [] - for idx in range(len(self)): - img_id = self.img_ids[idx] - det, seg = results[idx] - for label in range(len(det)): - # bbox results - bboxes = det[label] - for i in range(bboxes.shape[0]): - data = dict() - data['image_id'] = img_id - data['bbox'] = self.xyxy2xywh(bboxes[i]) - data['score'] = float(bboxes[i][4]) - data['category_id'] = self.cat_ids[label] - bbox_json_results.append(data) - - # segm results - # some detectors use different scores for bbox and mask - if isinstance(seg, tuple): - segms = seg[0][label] - mask_score = seg[1][label] - else: - segms = seg[label] - mask_score = [bbox[4] for bbox in bboxes] - for i in range(bboxes.shape[0]): - data = dict() - data['image_id'] = img_id - data['bbox'] = self.xyxy2xywh(bboxes[i]) - data['score'] = float(mask_score[i]) - data['category_id'] = self.cat_ids[label] - if isinstance(segms[i]['counts'], bytes): - segms[i]['counts'] = segms[i]['counts'].decode() - data['segmentation'] = segms[i] - segm_json_results.append(data) - return bbox_json_results, segm_json_results - - def results2json(self, results, outfile_prefix): - """Dump the detection results to a COCO style json file. - - There are 3 types of results: proposals, bbox predictions, mask - predictions, and they have different data types. This method will - automatically recognize the type, and dump them to json files. - - Args: - results (list[list | tuple | ndarray]): Testing results of the - dataset. - outfile_prefix (str): The filename prefix of the json files. If the - prefix is "somepath/xxx", the json files will be named - "somepath/xxx.bbox.json", "somepath/xxx.segm.json", - "somepath/xxx.proposal.json". - - Returns: - dict[str: str]: Possible keys are "bbox", "segm", "proposal", and \ - values are corresponding filenames. 
- """ - result_files = dict() - if isinstance(results[0], list): - json_results = self._det2json(results) - result_files['bbox'] = f'{outfile_prefix}.bbox.json' - result_files['proposal'] = f'{outfile_prefix}.bbox.json' - mmcv.dump(json_results, result_files['bbox']) - elif isinstance(results[0], tuple): - json_results = self._segm2json(results) - result_files['bbox'] = f'{outfile_prefix}.bbox.json' - result_files['proposal'] = f'{outfile_prefix}.bbox.json' - result_files['segm'] = f'{outfile_prefix}.segm.json' - mmcv.dump(json_results[0], result_files['bbox']) - mmcv.dump(json_results[1], result_files['segm']) - elif isinstance(results[0], np.ndarray): - json_results = self._proposal2json(results) - result_files['proposal'] = f'{outfile_prefix}.proposal.json' - mmcv.dump(json_results, result_files['proposal']) - else: - raise TypeError('invalid type of results') - return result_files - - def fast_eval_recall(self, results, proposal_nums, iou_thrs, logger=None): - gt_bboxes = [] - for i in range(len(self.img_ids)): - ann_ids = self.coco.get_ann_ids(img_ids=self.img_ids[i]) - ann_info = self.coco.load_anns(ann_ids) - if len(ann_info) == 0: - gt_bboxes.append(np.zeros((0, 4))) - continue - bboxes = [] - for ann in ann_info: - if ann.get('ignore', False) or ann['iscrowd']: - continue - x1, y1, w, h = ann['bbox'] - bboxes.append([x1, y1, x1 + w, y1 + h]) - bboxes = np.array(bboxes, dtype=np.float32) - if bboxes.shape[0] == 0: - bboxes = np.zeros((0, 4)) - gt_bboxes.append(bboxes) - - recalls = eval_recalls( - gt_bboxes, results, proposal_nums, iou_thrs, logger=logger) - ar = recalls.mean(axis=1) - return ar - - def format_results(self, results, jsonfile_prefix=None, **kwargs): - """Format the results to json (standard format for COCO evaluation). - - Args: - results (list[tuple | numpy.ndarray]): Testing results of the - dataset. - jsonfile_prefix (str | None): The prefix of json files. It includes - the file path and the prefix of filename, e.g., "a/b/prefix". - If not specified, a temp file will be created. Default: None. - - Returns: - tuple: (result_files, tmp_dir), result_files is a dict containing \ - the json filepaths, tmp_dir is the temporal directory created \ - for saving json files when jsonfile_prefix is not specified. - """ - assert isinstance(results, list), 'results must be a list' - assert len(results) == len(self), ( - 'The length of results is not equal to the dataset len: {} != {}'. - format(len(results), len(self))) - - if jsonfile_prefix is None: - tmp_dir = tempfile.TemporaryDirectory() - jsonfile_prefix = osp.join(tmp_dir.name, 'results') - else: - tmp_dir = None - result_files = self.results2json(results, jsonfile_prefix) - return result_files, tmp_dir - - def evaluate(self, - results, - metric='bbox', - logger=None, - jsonfile_prefix=None, - classwise=False, - proposal_nums=(100, 300, 1000), - iou_thrs=None, - metric_items=None): - """Evaluation in COCO protocol. - - Args: - results (list[list | tuple]): Testing results of the dataset. - metric (str | list[str]): Metrics to be evaluated. Options are - 'bbox', 'segm', 'proposal', 'proposal_fast'. - logger (logging.Logger | str | None): Logger used for printing - related information during evaluation. Default: None. - jsonfile_prefix (str | None): The prefix of json files. It includes - the file path and the prefix of filename, e.g., "a/b/prefix". - If not specified, a temp file will be created. Default: None. - classwise (bool): Whether to evaluating the AP for each class. 
- proposal_nums (Sequence[int]): Proposal number used for evaluating - recalls, such as recall@100, recall@1000. - Default: (100, 300, 1000). - iou_thrs (Sequence[float], optional): IoU threshold used for - evaluating recalls/mAPs. If set to a list, the average of all - IoUs will also be computed. If not specified, [0.50, 0.55, - 0.60, 0.65, 0.70, 0.75, 0.80, 0.85, 0.90, 0.95] will be used. - Default: None. - metric_items (list[str] | str, optional): Metric items that will - be returned. If not specified, ``['AR@100', 'AR@300', - 'AR@1000', 'AR_s@1000', 'AR_m@1000', 'AR_l@1000' ]`` will be - used when ``metric=='proposal'``, ``['mAP', 'mAP_50', 'mAP_75', - 'mAP_s', 'mAP_m', 'mAP_l']`` will be used when - ``metric=='bbox' or metric=='segm'``. - - Returns: - dict[str, float]: COCO style evaluation metric. - """ - - metrics = metric if isinstance(metric, list) else [metric] - allowed_metrics = ['bbox', 'segm', 'proposal', 'proposal_fast'] - for metric in metrics: - if metric not in allowed_metrics: - raise KeyError(f'metric {metric} is not supported') - if iou_thrs is None: - iou_thrs = np.linspace( - .5, 0.95, int(np.round((0.95 - .5) / .05)) + 1, endpoint=True) - if metric_items is not None: - if not isinstance(metric_items, list): - metric_items = [metric_items] - - result_files, tmp_dir = self.format_results(results, jsonfile_prefix) - - eval_results = OrderedDict() - cocoGt = self.coco - for metric in metrics: - msg = f'Evaluating {metric}...' - if logger is None: - msg = '\n' + msg - print_log(msg, logger=logger) - - if metric == 'proposal_fast': - ar = self.fast_eval_recall( - results, proposal_nums, iou_thrs, logger='silent') - log_msg = [] - for i, num in enumerate(proposal_nums): - eval_results[f'AR@{num}'] = ar[i] - log_msg.append(f'\nAR@{num}\t{ar[i]:.4f}') - log_msg = ''.join(log_msg) - print_log(log_msg, logger=logger) - continue - - if metric not in result_files: - raise KeyError(f'{metric} is not in results') - try: - cocoDt = cocoGt.loadRes(result_files[metric]) - except IndexError: - print_log( - 'The testing results of the whole dataset is empty.', - logger=logger, - level=logging.ERROR) - break - - iou_type = 'bbox' if metric == 'proposal' else metric - cocoEval = COCOeval(cocoGt, cocoDt, iou_type) - cocoEval.params.catIds = self.cat_ids - cocoEval.params.imgIds = self.img_ids - cocoEval.params.maxDets = list(proposal_nums) - cocoEval.params.iouThrs = iou_thrs - # mapping of cocoEval.stats - coco_metric_names = { - 'mAP': 0, - 'mAP_50': 1, - 'mAP_75': 2, - 'mAP_s': 3, - 'mAP_m': 4, - 'mAP_l': 5, - 'AR@100': 6, - 'AR@300': 7, - 'AR@1000': 8, - 'AR_s@1000': 9, - 'AR_m@1000': 10, - 'AR_l@1000': 11 - } - if metric_items is not None: - for metric_item in metric_items: - if metric_item not in coco_metric_names: - raise KeyError( - f'metric item {metric_item} is not supported') - - if metric == 'proposal': - cocoEval.params.useCats = 0 - cocoEval.evaluate() - cocoEval.accumulate() - cocoEval.summarize() - if metric_items is None: - metric_items = [ - 'AR@100', 'AR@300', 'AR@1000', 'AR_s@1000', - 'AR_m@1000', 'AR_l@1000' - ] - - for item in metric_items: - val = float( - f'{cocoEval.stats[coco_metric_names[item]]:.3f}') - eval_results[item] = val - else: - cocoEval.evaluate() - cocoEval.accumulate() - cocoEval.summarize() - if classwise: # Compute per-category AP - # Compute per-category AP - # from https://github.com/facebookresearch/detectron2/ - precisions = cocoEval.eval['precision'] - # precision: (iou, recall, cls, area range, max dets) - assert len(self.cat_ids) == 
precisions.shape[2] - - results_per_category = [] - for idx, catId in enumerate(self.cat_ids): - # area range index 0: all area ranges - # max dets index -1: typically 100 per image - nm = self.coco.loadCats(catId)[0] - precision = precisions[:, :, idx, 0, -1] - precision = precision[precision > -1] - if precision.size: - ap = np.mean(precision) - else: - ap = float('nan') - results_per_category.append( - (f'{nm["name"]}', f'{float(ap):0.3f}')) - - num_columns = min(6, len(results_per_category) * 2) - results_flatten = list( - itertools.chain(*results_per_category)) - headers = ['category', 'AP'] * (num_columns // 2) - results_2d = itertools.zip_longest(*[ - results_flatten[i::num_columns] - for i in range(num_columns) - ]) - table_data = [headers] - table_data += [result for result in results_2d] - table = AsciiTable(table_data) - print_log('\n' + table.table, logger=logger) - - if metric_items is None: - metric_items = [ - 'mAP', 'mAP_50', 'mAP_75', 'mAP_s', 'mAP_m', 'mAP_l' - ] - - for metric_item in metric_items: - key = f'{metric}_{metric_item}' - val = float( - f'{cocoEval.stats[coco_metric_names[metric_item]]:.3f}' - ) - eval_results[key] = val - ap = cocoEval.stats[:6] - eval_results[f'{metric}_mAP_copypaste'] = ( - f'{ap[0]:.3f} {ap[1]:.3f} {ap[2]:.3f} {ap[3]:.3f} ' - f'{ap[4]:.3f} {ap[5]:.3f}') - if tmp_dir is not None: - tmp_dir.cleanup() - return eval_results diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet_null/datasets/builder.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet_null/datasets/builder.py deleted file mode 100644 index d6bf37d8c8f2dc9bd0e1b7383f446112c4f95cbd..0000000000000000000000000000000000000000 --- a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet_null/datasets/builder.py +++ /dev/null @@ -1,143 +0,0 @@ -import copy -import platform -import random -from functools import partial - -import numpy as np -from annotator.uniformer.mmcv.parallel import collate -from annotator.uniformer.mmcv.runner import get_dist_info -from annotator.uniformer.mmcv.utils import Registry, build_from_cfg -from torch.utils.data import DataLoader - -from .samplers import DistributedGroupSampler, DistributedSampler, GroupSampler - -if platform.system() != 'Windows': - # https://github.com/pytorch/pytorch/issues/973 - import resource - rlimit = resource.getrlimit(resource.RLIMIT_NOFILE) - hard_limit = rlimit[1] - soft_limit = min(4096, hard_limit) - resource.setrlimit(resource.RLIMIT_NOFILE, (soft_limit, hard_limit)) - -DATASETS = Registry('dataset') -PIPELINES = Registry('pipeline') - - -def _concat_dataset(cfg, default_args=None): - from .dataset_wrappers import ConcatDataset - ann_files = cfg['ann_file'] - img_prefixes = cfg.get('img_prefix', None) - seg_prefixes = cfg.get('seg_prefix', None) - proposal_files = cfg.get('proposal_file', None) - separate_eval = cfg.get('separate_eval', True) - - datasets = [] - num_dset = len(ann_files) - for i in range(num_dset): - data_cfg = copy.deepcopy(cfg) - # pop 'separate_eval' since it is not a valid key for common datasets. 
- if 'separate_eval' in data_cfg: - data_cfg.pop('separate_eval') - data_cfg['ann_file'] = ann_files[i] - if isinstance(img_prefixes, (list, tuple)): - data_cfg['img_prefix'] = img_prefixes[i] - if isinstance(seg_prefixes, (list, tuple)): - data_cfg['seg_prefix'] = seg_prefixes[i] - if isinstance(proposal_files, (list, tuple)): - data_cfg['proposal_file'] = proposal_files[i] - datasets.append(build_dataset(data_cfg, default_args)) - - return ConcatDataset(datasets, separate_eval) - - -def build_dataset(cfg, default_args=None): - from .dataset_wrappers import (ConcatDataset, RepeatDataset, - ClassBalancedDataset) - if isinstance(cfg, (list, tuple)): - dataset = ConcatDataset([build_dataset(c, default_args) for c in cfg]) - elif cfg['type'] == 'ConcatDataset': - dataset = ConcatDataset( - [build_dataset(c, default_args) for c in cfg['datasets']], - cfg.get('separate_eval', True)) - elif cfg['type'] == 'RepeatDataset': - dataset = RepeatDataset( - build_dataset(cfg['dataset'], default_args), cfg['times']) - elif cfg['type'] == 'ClassBalancedDataset': - dataset = ClassBalancedDataset( - build_dataset(cfg['dataset'], default_args), cfg['oversample_thr']) - elif isinstance(cfg.get('ann_file'), (list, tuple)): - dataset = _concat_dataset(cfg, default_args) - else: - dataset = build_from_cfg(cfg, DATASETS, default_args) - - return dataset - - -def build_dataloader(dataset, - samples_per_gpu, - workers_per_gpu, - num_gpus=1, - dist=True, - shuffle=True, - seed=None, - **kwargs): - """Build PyTorch DataLoader. - - In distributed training, each GPU/process has a dataloader. - In non-distributed training, there is only one dataloader for all GPUs. - - Args: - dataset (Dataset): A PyTorch dataset. - samples_per_gpu (int): Number of training samples on each GPU, i.e., - batch size of each GPU. - workers_per_gpu (int): How many subprocesses to use for data loading - for each GPU. - num_gpus (int): Number of GPUs. Only used in non-distributed training. - dist (bool): Distributed training/test or not. Default: True. - shuffle (bool): Whether to shuffle the data at every epoch. - Default: True. - kwargs: any keyword argument to be used to initialize DataLoader - - Returns: - DataLoader: A PyTorch dataloader. 
- """ - rank, world_size = get_dist_info() - if dist: - # DistributedGroupSampler will definitely shuffle the data to satisfy - # that images on each GPU are in the same group - if shuffle: - sampler = DistributedGroupSampler( - dataset, samples_per_gpu, world_size, rank, seed=seed) - else: - sampler = DistributedSampler( - dataset, world_size, rank, shuffle=False, seed=seed) - batch_size = samples_per_gpu - num_workers = workers_per_gpu - else: - sampler = GroupSampler(dataset, samples_per_gpu) if shuffle else None - batch_size = num_gpus * samples_per_gpu - num_workers = num_gpus * workers_per_gpu - - init_fn = partial( - worker_init_fn, num_workers=num_workers, rank=rank, - seed=seed) if seed is not None else None - - data_loader = DataLoader( - dataset, - batch_size=batch_size, - sampler=sampler, - num_workers=num_workers, - collate_fn=partial(collate, samples_per_gpu=samples_per_gpu), - pin_memory=False, - worker_init_fn=init_fn, - **kwargs) - - return data_loader - - -def worker_init_fn(worker_id, num_workers, rank, seed): - # The seed of each worker equals to - # num_worker * rank + worker_id + user_seed - worker_seed = num_workers * rank + worker_id + seed - np.random.seed(worker_seed) - random.seed(worker_seed) diff --git a/spaces/RobinWZQ/CCLAP/net.py b/spaces/RobinWZQ/CCLAP/net.py deleted file mode 100644 index 4e6346ea8f8544e11722a4b32c289ffbabdbf2c7..0000000000000000000000000000000000000000 --- a/spaces/RobinWZQ/CCLAP/net.py +++ /dev/null @@ -1,281 +0,0 @@ -import torch -import torch.nn as nn -from utils import mean_variance_norm, DEVICE -from utils import calc_ss_loss, calc_remd_loss, calc_moment_loss, calc_mse_loss, calc_histogram_loss -from hist_loss import RGBuvHistBlock -import torch - -class Net(nn.Module): - def __init__(self, args): - super(Net, self).__init__() - self.args = args - self.vgg = vgg19[:44] - self.vgg.load_state_dict(torch.load('./checkpoints/encoder.pth', map_location='cpu'), strict=False) - for param in self.vgg.parameters(): - param.requires_grad = False - - self.align1 = PAMA(512) - self.align2 = PAMA(512) - self.align3 = PAMA(512) - - self.decoder = decoder - self.hist = RGBuvHistBlock(insz=64, h=256, - intensity_scale=True, - method='inverse-quadratic', - device=DEVICE) - - if args.pretrained == True: - self.align1.load_state_dict(torch.load('./checkpoints/PAMA1.pth', map_location='cpu'), strict=True) - self.align2.load_state_dict(torch.load('./checkpoints/PAMA2.pth', map_location='cpu'), strict=True) - self.align3.load_state_dict(torch.load('./checkpoints/PAMA3.pth', map_location='cpu'), strict=True) - self.decoder.load_state_dict(torch.load('./checkpoints/decoder.pth', map_location='cpu'), strict=False) - - if args.requires_grad == False: - for param in self.parameters(): - param.requires_grad = False - - - def forward(self, Ic, Is): - feat_c = self.forward_vgg(Ic) - feat_s = self.forward_vgg(Is) - Fc, Fs = feat_c[3], feat_s[3] - - Fcs1 = self.align1(Fc, Fs) - Fcs2 = self.align2(Fcs1, Fs) - Fcs3 = self.align3(Fcs2, Fs) - - Ics3 = self.decoder(Fcs3) - - if self.args.training == True: - Ics1 = self.decoder(Fcs1) - Ics2 = self.decoder(Fcs2) - Irc = self.decoder(Fc) - Irs = self.decoder(Fs) - feat_cs1 = self.forward_vgg(Ics1) - feat_cs2 = self.forward_vgg(Ics2) - feat_cs3 = self.forward_vgg(Ics3) - feat_rc = self.forward_vgg(Irc) - feat_rs = self.forward_vgg(Irs) - - content_loss1, remd_loss1, moment_loss1, color_loss1 = 0.0, 0.0, 0.0, 0.0 - content_loss2, remd_loss2, moment_loss2, color_loss2 = 0.0, 0.0, 0.0, 0.0 - content_loss3, remd_loss3, 
moment_loss3, color_loss3 = 0.0, 0.0, 0.0, 0.0 - loss_rec = 0.0 - - for l in range(2, 5): - content_loss1 += self.args.w_content1 * calc_ss_loss(feat_cs1[l], feat_c[l]) - remd_loss1 += self.args.w_remd1 * calc_remd_loss(feat_cs1[l], feat_s[l]) - moment_loss1 += self.args.w_moment1 * calc_moment_loss(feat_cs1[l], feat_s[l]) - - content_loss2 += self.args.w_content2 * calc_ss_loss(feat_cs2[l], feat_c[l]) - remd_loss2 += self.args.w_remd2 * calc_remd_loss(feat_cs2[l], feat_s[l]) - moment_loss2 += self.args.w_moment2 * calc_moment_loss(feat_cs2[l], feat_s[l]) - - content_loss3 += self.args.w_content3 * calc_ss_loss(feat_cs3[l], feat_c[l]) - remd_loss3 += self.args.w_remd3 * calc_remd_loss(feat_cs3[l], feat_s[l]) - moment_loss3 += self.args.w_moment3 * calc_moment_loss(feat_cs3[l], feat_s[l]) - - loss_rec += 0.5 * calc_mse_loss(feat_rc[l], feat_c[l]) + 0.5 * calc_mse_loss(feat_rs[l], feat_s[l]) - loss_rec += 25 * calc_mse_loss(Irc, Ic) - loss_rec += 25 * calc_mse_loss(Irs, Is) - - if self.args.color_on: - color_loss1 += self.args.w_color1 * calc_histogram_loss(Ics1, Is, self.hist) - color_loss2 += self.args.w_color2 * calc_histogram_loss(Ics2, Is, self.hist) - color_loss3 += self.args.w_color3 * calc_histogram_loss(Ics3, Is, self.hist) - - loss1 = (content_loss1+remd_loss1+moment_loss1+color_loss1)/(self.args.w_content1+self.args.w_remd1+self.args.w_moment1+self.args.w_color1) - loss2 = (content_loss2+remd_loss2+moment_loss2+color_loss2)/(self.args.w_content2+self.args.w_remd2+self.args.w_moment2+self.args.w_color2) - loss3 = (content_loss3+remd_loss3+moment_loss3+color_loss3)/(self.args.w_content3+self.args.w_remd3+self.args.w_moment3+self.args.w_color3) - loss = loss1 + loss2 + loss3 + loss_rec - return loss - else: - return Ics3 - - def forward_vgg(self, x): - relu1_1 = self.vgg[:4](x) - relu2_1 = self.vgg[4:11](relu1_1) - relu3_1 = self.vgg[11:18](relu2_1) - relu4_1 = self.vgg[18:31](relu3_1) - relu5_1 = self.vgg[31:44](relu4_1) - return [relu1_1, relu2_1, relu3_1, relu4_1, relu5_1] - - def save_ckpts(self): - torch.save(self.align1.state_dict(), "./checkpoints/PAMA1.pth") - torch.save(self.align2.state_dict(), "./checkpoints/PAMA2.pth") - torch.save(self.align3.state_dict(), "./checkpoints/PAMA3.pth") - torch.save(self.decoder.state_dict(), "./checkpoints/decoder.pth") - -#--------------------------------------------------------------------------------------------------------------- - -vgg19 = nn.Sequential( - nn.Conv2d(3, 3, (1, 1)), - nn.ReflectionPad2d((1, 1, 1, 1)), - nn.Conv2d(3, 64, (3, 3)), - nn.ReLU(), # relu1-1 - nn.ReflectionPad2d((1, 1, 1, 1)), - nn.Conv2d(64, 64, (3, 3)), - nn.ReLU(), # relu1-2 - nn.MaxPool2d((2, 2), (2, 2), (0, 0), ceil_mode=True), - nn.ReflectionPad2d((1, 1, 1, 1)), - nn.Conv2d(64, 128, (3, 3)), - nn.ReLU(), # relu2-1 - nn.ReflectionPad2d((1, 1, 1, 1)), - nn.Conv2d(128, 128, (3, 3)), - nn.ReLU(), # relu2-2 - nn.MaxPool2d((2, 2), (2, 2), (0, 0), ceil_mode=True), - nn.ReflectionPad2d((1, 1, 1, 1)), - nn.Conv2d(128, 256, (3, 3)), - nn.ReLU(), # relu3-1 - nn.ReflectionPad2d((1, 1, 1, 1)), - nn.Conv2d(256, 256, (3, 3)), - nn.ReLU(), # relu3-2 - nn.ReflectionPad2d((1, 1, 1, 1)), - nn.Conv2d(256, 256, (3, 3)), - nn.ReLU(), # relu3-3 - nn.ReflectionPad2d((1, 1, 1, 1)), - nn.Conv2d(256, 256, (3, 3)), - nn.ReLU(), # relu3-4 - nn.MaxPool2d((2, 2), (2, 2), (0, 0), ceil_mode=True), - nn.ReflectionPad2d((1, 1, 1, 1)), - nn.Conv2d(256, 512, (3, 3)), - nn.ReLU(), # relu4-1, - nn.ReflectionPad2d((1, 1, 1, 1)), - nn.Conv2d(512, 512, (3, 3)), - nn.ReLU(), # relu4-2 - 
nn.ReflectionPad2d((1, 1, 1, 1)), - nn.Conv2d(512, 512, (3, 3)), - nn.ReLU(), # relu4-3 - nn.ReflectionPad2d((1, 1, 1, 1)), - nn.Conv2d(512, 512, (3, 3)), - nn.ReLU(), # relu4-4 - nn.MaxPool2d((2, 2), (2, 2), (0, 0), ceil_mode=True), - nn.ReflectionPad2d((1, 1, 1, 1)), - nn.Conv2d(512, 512, (3, 3)), - nn.ReLU(), # relu5-1 - nn.ReflectionPad2d((1, 1, 1, 1)), - nn.Conv2d(512, 512, (3, 3)), - nn.ReLU(), # relu5-2 - nn.ReflectionPad2d((1, 1, 1, 1)), - nn.Conv2d(512, 512, (3, 3)), - nn.ReLU(), # relu5-3 - nn.ReflectionPad2d((1, 1, 1, 1)), - nn.Conv2d(512, 512, (3, 3)), - nn.ReLU() # relu5-4 -) - -#--------------------------------------------------------------------------------------------------------------- - -decoder = nn.Sequential( - nn.ReflectionPad2d((1, 1, 1, 1)), - nn.Conv2d(512, 256, (3, 3)), - nn.ReLU(), #relu4_1 - nn.Upsample(scale_factor=2, mode='nearest'), - nn.ReflectionPad2d((1, 1, 1, 1)), - nn.Conv2d(256, 256, (3, 3)), - nn.ReLU(), - nn.ReflectionPad2d((1, 1, 1, 1)), - nn.Conv2d(256, 256, (3, 3)), - nn.ReLU(), - nn.ReflectionPad2d((1, 1, 1, 1)), - nn.Conv2d(256, 256, (3, 3)), - nn.ReLU(), - nn.ReflectionPad2d((1, 1, 1, 1)), - nn.Conv2d(256, 128, (3, 3)), - nn.ReLU(), #relu3_1 - nn.Upsample(scale_factor=2, mode='nearest'), - nn.ReflectionPad2d((1, 1, 1, 1)), - nn.Conv2d(128, 128, (3, 3)), - nn.ReLU(), - nn.ReflectionPad2d((1, 1, 1, 1)), - nn.Conv2d(128, 64, (3, 3)), - nn.ReLU(), #relu2_1 - nn.Upsample(scale_factor=2, mode='nearest'), - nn.ReflectionPad2d((1, 1, 1, 1)), - nn.Conv2d(64, 64, (3, 3)), - nn.ReLU(), #relu1_1 - nn.ReflectionPad2d((1, 1, 1, 1)), - nn.Conv2d(64, 3, (3, 3)), -) - -#--------------------------------------------------------------------------------------------------------------- - -class AttentionUnit(nn.Module): - def __init__(self, channels): - super(AttentionUnit, self).__init__() - self.relu6 = nn.ReLU6() - self.f = nn.Conv2d(channels, channels//2, (1, 1)) - self.g = nn.Conv2d(channels, channels//2, (1, 1)) - self.h = nn.Conv2d(channels, channels//2, (1, 1)) - - self.out_conv = nn.Conv2d(channels//2, channels, (1, 1)) - self.softmax = nn.Softmax(dim = -1) - - def forward(self, Fc, Fs): - B, C, H, W = Fc.shape - f_Fc = self.relu6(self.f(mean_variance_norm(Fc))) - g_Fs = self.relu6(self.g(mean_variance_norm(Fs))) - h_Fs = self.relu6(self.h(Fs)) - f_Fc = f_Fc.view(f_Fc.shape[0], f_Fc.shape[1], -1).permute(0, 2, 1) - g_Fs = g_Fs.view(g_Fs.shape[0], g_Fs.shape[1], -1) - - Attention = self.softmax(torch.bmm(f_Fc, g_Fs)) - - h_Fs = h_Fs.view(h_Fs.shape[0], h_Fs.shape[1], -1) - - Fcs = torch.bmm(h_Fs, Attention.permute(0, 2, 1)) - Fcs = Fcs.view(B, C//2, H, W) - Fcs = self.relu6(self.out_conv(Fcs)) - - return Fcs - -class FuseUnit(nn.Module): - def __init__(self, channels): - super(FuseUnit, self).__init__() - self.proj1 = nn.Conv2d(2*channels, channels, (1, 1)) - self.proj2 = nn.Conv2d(channels, channels, (1, 1)) - self.proj3 = nn.Conv2d(channels, channels, (1, 1)) - - self.fuse1x = nn.Conv2d(channels, 1, (1, 1), stride = 1) - self.fuse3x = nn.Conv2d(channels, 1, (3, 3), stride = 1) - self.fuse5x = nn.Conv2d(channels, 1, (5, 5), stride = 1) - - self.pad3x = nn.ReflectionPad2d((1, 1, 1, 1)) - self.pad5x = nn.ReflectionPad2d((2, 2, 2, 2)) - self.sigmoid = nn.Sigmoid() - - def forward(self, F1, F2): - Fcat = self.proj1(torch.cat((F1, F2), dim=1)) - F1 = self.proj2(F1) - F2 = self.proj3(F2) - - fusion1 = self.sigmoid(self.fuse1x(Fcat)) - fusion3 = self.sigmoid(self.fuse3x(self.pad3x(Fcat))) - fusion5 = self.sigmoid(self.fuse5x(self.pad5x(Fcat))) - fusion = (fusion1 + 
fusion3 + fusion5) / 3 - - return torch.clamp(fusion, min=0, max=1.0)*F1 + torch.clamp(1 - fusion, min=0, max=1.0)*F2 - -class PAMA(nn.Module): - def __init__(self, channels): - super(PAMA, self).__init__() - self.conv_in = nn.Conv2d(channels, channels, (3, 3), stride=1) - self.attn = AttentionUnit(channels) - self.fuse = FuseUnit(channels) - self.conv_out = nn.Conv2d(channels, channels, (3, 3), stride=1) - - self.pad = nn.ReflectionPad2d((1, 1, 1, 1)) - self.relu6 = nn.ReLU6() - - def forward(self, Fc, Fs): - Fc = self.relu6(self.conv_in(self.pad(Fc))) - Fs = self.relu6(self.conv_in(self.pad(Fs))) - Fcs = self.attn(Fc, Fs) - Fcs = self.relu6(self.conv_out(self.pad(Fcs))) - Fcs = self.fuse(Fc, Fcs) - - return Fcs - -#--------------------------------------------------------------------------------------------------------------- - - diff --git a/spaces/Robo2000/ClinicalTerminologyUIUX-GR/README.md b/spaces/Robo2000/ClinicalTerminologyUIUX-GR/README.md deleted file mode 100644 index 13cf6614a61fd34db6293844a184ffa83458cb7b..0000000000000000000000000000000000000000 --- a/spaces/Robo2000/ClinicalTerminologyUIUX-GR/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: ClinicalTerminologyUIUX GR -emoji: 📈 -colorFrom: green -colorTo: indigo -sdk: gradio -sdk_version: 3.8.2 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/SIGGRAPH2022/sketch2pose/scripts/download.sh b/spaces/SIGGRAPH2022/sketch2pose/scripts/download.sh deleted file mode 100644 index a99c8f5e564423fae3ba8a55fc0998b90e916717..0000000000000000000000000000000000000000 --- a/spaces/SIGGRAPH2022/sketch2pose/scripts/download.sh +++ /dev/null @@ -1,78 +0,0 @@ -#!/usr/bin/env sh - - -# set -euo pipefail - - -asset_dir="./assets" - -[ ! 
-e "${asset_dir}"/models_smplx_v1_1.zip ] \ - && echo Error: Download SMPL-X body model from https://smpl-x.is.tue.mpg.de \ - and save zip archive to "${asset_dir}" \ - && exit 1 \ - && : - -asset_urls=( - # Download constants (SPIN) - http://visiondata.cis.upenn.edu/spin/data.tar.gz - - # Download essentials (SMPLify-XMC) - https://download.is.tue.mpg.de/tuch/smplify-xmc-essentials.zip - - # Download sketch2pose models - http://www-labs.iro.umontreal.ca/~bmpix/sketch2pose/models.zip - - # Download test images - http://www-labs.iro.umontreal.ca/~bmpix/sketch2pose/images.zip -) -for asset_url in "${asset_urls[@]}"; do - wget \ - -nc \ - -c \ - --directory-prefix "${asset_dir}" \ - "${asset_url}" -done - -models_dir="./models" -mkdir -p "${models_dir}" - -model_files=( - # Unzip smplx models - models_smplx_v1_1.zip - - # Unzip essentials (SMPLifu-XMC) - smplify-xmc-essentials.zip - - # Unzip sketch2pose models - models.zip -) - -for model_file in "${model_files[@]}"; do - unzip \ - -u \ - -d "${models_dir}" \ - "${asset_dir}"/"${model_file}" -done - -# Unzip constants (SPIN) -tar \ - --skip-old-files \ - -xvf "${asset_dir}"/data.tar.gz \ - -C "${models_dir}" \ - data/smpl_mean_params.npz - -data_dir="./data" -mkdir -p "${data_dir}" - -# Unzip test images -unzip \ - -u \ - -d "${data_dir}" \ - "${asset_dir}"/images.zip - -data_dir="./output" -mkdir -p "${data_dir}" -unzip \ - -u \ - -d "${data_dir}" \ - "${asset_dir}"/output.zip diff --git a/spaces/Salavat/Interslavic-Translator-NLLB200/README.md b/spaces/Salavat/Interslavic-Translator-NLLB200/README.md deleted file mode 100644 index 03b6c9993409b427c66e82a4bbb88a8b04baa6fd..0000000000000000000000000000000000000000 --- a/spaces/Salavat/Interslavic-Translator-NLLB200/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Interslavic Translator NLLB200 -emoji: 📚 -colorFrom: red -colorTo: yellow -sdk: gradio -sdk_version: 3.32.0 -app_file: app.py -pinned: true ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference \ No newline at end of file diff --git a/spaces/Salesforce/EDICT/my_diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_onnx.py b/spaces/Salesforce/EDICT/my_diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_onnx.py deleted file mode 100644 index 7ff3ff22fc21014fa7b6c12fba96a2ca36fc9cc4..0000000000000000000000000000000000000000 --- a/spaces/Salesforce/EDICT/my_diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_onnx.py +++ /dev/null @@ -1,165 +0,0 @@ -import inspect -from typing import List, Optional, Union - -import numpy as np - -from transformers import CLIPFeatureExtractor, CLIPTokenizer - -from ...onnx_utils import OnnxRuntimeModel -from ...pipeline_utils import DiffusionPipeline -from ...schedulers import DDIMScheduler, LMSDiscreteScheduler, PNDMScheduler -from . 
import StableDiffusionPipelineOutput - - -class StableDiffusionOnnxPipeline(DiffusionPipeline): - vae_decoder: OnnxRuntimeModel - text_encoder: OnnxRuntimeModel - tokenizer: CLIPTokenizer - unet: OnnxRuntimeModel - scheduler: Union[DDIMScheduler, PNDMScheduler, LMSDiscreteScheduler] - safety_checker: OnnxRuntimeModel - feature_extractor: CLIPFeatureExtractor - - def __init__( - self, - vae_decoder: OnnxRuntimeModel, - text_encoder: OnnxRuntimeModel, - tokenizer: CLIPTokenizer, - unet: OnnxRuntimeModel, - scheduler: Union[DDIMScheduler, PNDMScheduler, LMSDiscreteScheduler], - safety_checker: OnnxRuntimeModel, - feature_extractor: CLIPFeatureExtractor, - ): - super().__init__() - scheduler = scheduler.set_format("np") - self.register_modules( - vae_decoder=vae_decoder, - text_encoder=text_encoder, - tokenizer=tokenizer, - unet=unet, - scheduler=scheduler, - safety_checker=safety_checker, - feature_extractor=feature_extractor, - ) - - def __call__( - self, - prompt: Union[str, List[str]], - height: Optional[int] = 512, - width: Optional[int] = 512, - num_inference_steps: Optional[int] = 50, - guidance_scale: Optional[float] = 7.5, - eta: Optional[float] = 0.0, - latents: Optional[np.ndarray] = None, - output_type: Optional[str] = "pil", - return_dict: bool = True, - **kwargs, - ): - if isinstance(prompt, str): - batch_size = 1 - elif isinstance(prompt, list): - batch_size = len(prompt) - else: - raise ValueError(f"`prompt` has to be of type `str` or `list` but is {type(prompt)}") - - if height % 8 != 0 or width % 8 != 0: - raise ValueError(f"`height` and `width` have to be divisible by 8 but are {height} and {width}.") - - # get prompt text embeddings - text_input = self.tokenizer( - prompt, - padding="max_length", - max_length=self.tokenizer.model_max_length, - truncation=True, - return_tensors="np", - ) - text_embeddings = self.text_encoder(input_ids=text_input.input_ids.astype(np.int32))[0] - - # here `guidance_scale` is defined analog to the guidance weight `w` of equation (2) - # of the Imagen paper: https://arxiv.org/pdf/2205.11487.pdf . `guidance_scale = 1` - # corresponds to doing no classifier free guidance. - do_classifier_free_guidance = guidance_scale > 1.0 - # get unconditional embeddings for classifier free guidance - if do_classifier_free_guidance: - max_length = text_input.input_ids.shape[-1] - uncond_input = self.tokenizer( - [""] * batch_size, padding="max_length", max_length=max_length, return_tensors="np" - ) - uncond_embeddings = self.text_encoder(input_ids=uncond_input.input_ids.astype(np.int32))[0] - - # For classifier free guidance, we need to do two forward passes. 
- # Here we concatenate the unconditional and text embeddings into a single batch - # to avoid doing two forward passes - text_embeddings = np.concatenate([uncond_embeddings, text_embeddings]) - - # get the initial random noise unless the user supplied it - latents_shape = (batch_size, 4, height // 8, width // 8) - if latents is None: - latents = np.random.randn(*latents_shape).astype(np.float32) - elif latents.shape != latents_shape: - raise ValueError(f"Unexpected latents shape, got {latents.shape}, expected {latents_shape}") - - # set timesteps - accepts_offset = "offset" in set(inspect.signature(self.scheduler.set_timesteps).parameters.keys()) - extra_set_kwargs = {} - if accepts_offset: - extra_set_kwargs["offset"] = 1 - - self.scheduler.set_timesteps(num_inference_steps, **extra_set_kwargs) - - # if we use LMSDiscreteScheduler, let's make sure latents are mulitplied by sigmas - if isinstance(self.scheduler, LMSDiscreteScheduler): - latents = latents * self.scheduler.sigmas[0] - - # prepare extra kwargs for the scheduler step, since not all schedulers have the same signature - # eta (η) is only used with the DDIMScheduler, it will be ignored for other schedulers. - # eta corresponds to η in DDIM paper: https://arxiv.org/abs/2010.02502 - # and should be between [0, 1] - accepts_eta = "eta" in set(inspect.signature(self.scheduler.step).parameters.keys()) - extra_step_kwargs = {} - if accepts_eta: - extra_step_kwargs["eta"] = eta - - for i, t in enumerate(self.progress_bar(self.scheduler.timesteps)): - # expand the latents if we are doing classifier free guidance - latent_model_input = np.concatenate([latents] * 2) if do_classifier_free_guidance else latents - if isinstance(self.scheduler, LMSDiscreteScheduler): - sigma = self.scheduler.sigmas[i] - # the model input needs to be scaled to match the continuous ODE formulation in K-LMS - latent_model_input = latent_model_input / ((sigma**2 + 1) ** 0.5) - - # predict the noise residual - noise_pred = self.unet( - sample=latent_model_input, timestep=np.array([t]), encoder_hidden_states=text_embeddings - ) - noise_pred = noise_pred[0] - - # perform guidance - if do_classifier_free_guidance: - noise_pred_uncond, noise_pred_text = np.split(noise_pred, 2) - noise_pred = noise_pred_uncond + guidance_scale * (noise_pred_text - noise_pred_uncond) - - # compute the previous noisy sample x_t -> x_t-1 - if isinstance(self.scheduler, LMSDiscreteScheduler): - latents = self.scheduler.step(noise_pred, i, latents, **extra_step_kwargs).prev_sample - else: - latents = self.scheduler.step(noise_pred, t, latents, **extra_step_kwargs).prev_sample - - # scale and decode the image latents with vae - latents = 1 / 0.18215 * latents - image = self.vae_decoder(latent_sample=latents)[0] - - image = np.clip(image / 2 + 0.5, 0, 1) - image = image.transpose((0, 2, 3, 1)) - - # run safety checker - safety_checker_input = self.feature_extractor(self.numpy_to_pil(image), return_tensors="np") - image, has_nsfw_concept = self.safety_checker(clip_input=safety_checker_input.pixel_values, images=image) - - if output_type == "pil": - image = self.numpy_to_pil(image) - - if not return_dict: - return (image, has_nsfw_concept) - - return StableDiffusionPipelineOutput(images=image, nsfw_content_detected=has_nsfw_concept) diff --git a/spaces/ServerX/PorcoDiaz/infer/modules/vc/modules.py b/spaces/ServerX/PorcoDiaz/infer/modules/vc/modules.py deleted file mode 100644 index 458cfbe860b23bdd8f07abc2934443e6b8b01c3a..0000000000000000000000000000000000000000 --- 
a/spaces/ServerX/PorcoDiaz/infer/modules/vc/modules.py +++ /dev/null @@ -1,526 +0,0 @@ -import os, sys -import traceback -import logging -now_dir = os.getcwd() -sys.path.append(now_dir) -logger = logging.getLogger(__name__) -import lib.globals.globals as rvc_globals -import numpy as np -import soundfile as sf -import torch -from io import BytesIO -from infer.lib.audio import load_audio -from infer.lib.audio import wav2 -from infer.lib.infer_pack.models import ( - SynthesizerTrnMs256NSFsid, - SynthesizerTrnMs256NSFsid_nono, - SynthesizerTrnMs768NSFsid, - SynthesizerTrnMs768NSFsid_nono, -) -from infer.modules.vc.pipeline import Pipeline -from infer.modules.vc.utils import * -import time -import scipy.io.wavfile as wavfile - -def note_to_hz(note_name): - SEMITONES = {'C': -9, 'C#': -8, 'D': -7, 'D#': -6, 'E': -5, 'F': -4, 'F#': -3, 'G': -2, 'G#': -1, 'A': 0, 'A#': 1, 'B': 2} - pitch_class, octave = note_name[:-1], int(note_name[-1]) - semitone = SEMITONES[pitch_class] - note_number = 12 * (octave - 4) + semitone - frequency = 440.0 * (2.0 ** (1.0/12)) ** note_number - return frequency - -class VC: - def __init__(self, config): - self.n_spk = None - self.tgt_sr = None - self.net_g = None - self.pipeline = None - self.cpt = None - self.version = None - self.if_f0 = None - self.version = None - self.hubert_model = None - - self.config = config - - def get_vc(self, sid, *to_return_protect): - logger.info("Get sid: " + sid) - - to_return_protect0 = { - "visible": self.if_f0 != 0, - "value": to_return_protect[0] - if self.if_f0 != 0 and to_return_protect - else 0.5, - "__type__": "update", - } - to_return_protect1 = { - "visible": self.if_f0 != 0, - "value": to_return_protect[1] - if self.if_f0 != 0 and to_return_protect - else 0.33, - "__type__": "update", - } - - if not sid: - if self.hubert_model is not None: # 考虑到轮询, 需要加个判断看是否 sid 是由有模型切换到无模型的 - logger.info("Clean model cache") - del ( - self.net_g, - self.n_spk, - self.vc, - self.hubert_model, - self.tgt_sr, - ) # ,cpt - self.hubert_model = ( - self.net_g - ) = self.n_spk = self.vc = self.hubert_model = self.tgt_sr = None - if torch.cuda.is_available(): - torch.cuda.empty_cache() - ###楼下不这么折腾清理不干净 - self.if_f0 = self.cpt.get("f0", 1) - self.version = self.cpt.get("version", "v1") - if self.version == "v1": - if self.if_f0 == 1: - self.net_g = SynthesizerTrnMs256NSFsid( - *self.cpt["config"], is_half=self.config.is_half - ) - else: - self.net_g = SynthesizerTrnMs256NSFsid_nono(*self.cpt["config"]) - elif self.version == "v2": - if self.if_f0 == 1: - self.net_g = SynthesizerTrnMs768NSFsid( - *self.cpt["config"], is_half=self.config.is_half - ) - else: - self.net_g = SynthesizerTrnMs768NSFsid_nono(*self.cpt["config"]) - del self.net_g, self.cpt - if torch.cuda.is_available(): - torch.cuda.empty_cache() - return ( - {"visible": False, "__type__": "update"}, - { - "visible": True, - "value": to_return_protect0, - "__type__": "update", - }, - { - "visible": True, - "value": to_return_protect1, - "__type__": "update", - }, - "", - "", - ) - #person = f'{os.getenv("weight_root")}/{sid}' - person = f'{sid}' - #logger.info(f"Loading: {person}") - logger.info(f"Loading...") - self.cpt = torch.load(person, map_location="cpu") - self.tgt_sr = self.cpt["config"][-1] - self.cpt["config"][-3] = self.cpt["weight"]["emb_g.weight"].shape[0] # n_spk - self.if_f0 = self.cpt.get("f0", 1) - self.version = self.cpt.get("version", "v1") - - synthesizer_class = { - ("v1", 1): SynthesizerTrnMs256NSFsid, - ("v1", 0): SynthesizerTrnMs256NSFsid_nono, - ("v2", 1): 
SynthesizerTrnMs768NSFsid, - ("v2", 0): SynthesizerTrnMs768NSFsid_nono, - } - - self.net_g = synthesizer_class.get( - (self.version, self.if_f0), SynthesizerTrnMs256NSFsid - )(*self.cpt["config"], is_half=self.config.is_half) - - del self.net_g.enc_q - - self.net_g.load_state_dict(self.cpt["weight"], strict=False) - self.net_g.eval().to(self.config.device) - if self.config.is_half: - self.net_g = self.net_g.half() - else: - self.net_g = self.net_g.float() - - self.pipeline = Pipeline(self.tgt_sr, self.config) - n_spk = self.cpt["config"][-3] - index = {"value": get_index_path_from_model(sid), "__type__": "update"} - logger.info("Select index: " + index["value"]) - - return ( - ( - {"visible": False, "maximum": n_spk, "__type__": "update"}, - to_return_protect0, - to_return_protect1 - ) - if to_return_protect - else {"visible": False, "maximum": n_spk, "__type__": "update"} - ) - - - def vc_single( - self, - sid, - input_audio_path0, - input_audio_path1, - f0_up_key, - f0_file, - f0_method, - file_index, - file_index2, - index_rate, - filter_radius, - resample_sr, - rms_mix_rate, - protect, - crepe_hop_length, - f0_min, - note_min, - f0_max, - note_max, - f0_autotune, - ): - global total_time - total_time = 0 - start_time = time.time() - if not input_audio_path0 and not input_audio_path1: - return "You need to upload an audio", None - - if (not os.path.exists(input_audio_path0)) and (not os.path.exists(os.path.join(now_dir, input_audio_path0))): - return "Audio was not properly selected or doesn't exist", None - - input_audio_path1 = input_audio_path1 or input_audio_path0 - print(f"\nStarting inference for '{os.path.basename(input_audio_path1)}'") - print("-------------------") - f0_up_key = int(f0_up_key) - if rvc_globals.NotesOrHertz and f0_method != 'rmvpe': - f0_min = note_to_hz(note_min) if note_min else 50 - f0_max = note_to_hz(note_max) if note_max else 1100 - print(f"Converted Min pitch: freq - {f0_min}\n" - f"Converted Max pitch: freq - {f0_max}") - else: - f0_min = f0_min or 50 - f0_max = f0_max or 1100 - try: - input_audio_path1 = input_audio_path1 or input_audio_path0 - print(f"Attempting to load {input_audio_path1}....") - audio = load_audio(file=input_audio_path1, - sr=16000, - DoFormant=rvc_globals.DoFormant, - Quefrency=rvc_globals.Quefrency, - Timbre=rvc_globals.Timbre) - - audio_max = np.abs(audio).max() / 0.95 - if audio_max > 1: - audio /= audio_max - times = [0, 0, 0] - - if self.hubert_model is None: - self.hubert_model = load_hubert(self.config) - - try: - self.if_f0 = self.cpt.get("f0", 1) - except NameError: - message = "Model was not properly selected" - print(message) - return message, None - - file_index = ( - ( - file_index.strip(" ") - .strip('"') - .strip("\n") - .strip('"') - .strip(" ") - .replace("trained", "added") - ) - if file_index != "" - else file_index2 - ) # 防止小白写错,自动帮他替换掉 - - try: - audio_opt = self.pipeline.pipeline( - self.hubert_model, - self.net_g, - sid, - audio, - input_audio_path1, - times, - f0_up_key, - f0_method, - file_index, - index_rate, - self.if_f0, - filter_radius, - self.tgt_sr, - resample_sr, - rms_mix_rate, - self.version, - protect, - crepe_hop_length, - f0_autotune, - f0_file=f0_file, - f0_min=f0_min, - f0_max=f0_max - ) - except AssertionError: - message = "Mismatching index version detected (v1 with v2, or v2 with v1)." - print(message) - return message, None - except NameError: - message = "RVC libraries are still loading. Please try again in a few seconds." 
- print(message) - return message, None - - if self.tgt_sr != resample_sr >= 16000: - self.tgt_sr = resample_sr - index_info = ( - "Index:\n%s." % file_index - if os.path.exists(file_index) - else "Index not used." - ) - end_time = time.time() - total_time = end_time - start_time - - output_folder = "audio-outputs" - os.makedirs(output_folder, exist_ok=True) - output_filename = "generated_audio_{}.wav" - output_count = 1 - while True: - current_output_path = os.path.join(output_folder, output_filename.format(output_count)) - if not os.path.exists(current_output_path): - break - output_count += 1 - - wavfile.write(current_output_path, self.tgt_sr, audio_opt) - print(f"Generated audio saved to: {current_output_path}") - return f"Success.\n {index_info}\nTime:\n npy:{times[0]}, f0:{times[1]}, infer:{times[2]}\nTotal Time: {total_time} seconds", (self.tgt_sr, audio_opt) - except: - info = traceback.format_exc() - logger.warn(info) - return info, (None, None) - - def vc_single_dont_save( - self, - sid, - input_audio_path0, - input_audio_path1, - f0_up_key, - f0_file, - f0_method, - file_index, - file_index2, - index_rate, - filter_radius, - resample_sr, - rms_mix_rate, - protect, - crepe_hop_length, - f0_min, - note_min, - f0_max, - note_max, - f0_autotune, - ): - global total_time - total_time = 0 - start_time = time.time() - if not input_audio_path0 and not input_audio_path1: - return "You need to upload an audio", None - - if (not os.path.exists(input_audio_path0)) and (not os.path.exists(os.path.join(now_dir, input_audio_path0))): - return "Audio was not properly selected or doesn't exist", None - - input_audio_path1 = input_audio_path1 or input_audio_path0 - print(f"\nStarting inference for '{os.path.basename(input_audio_path1)}'") - print("-------------------") - f0_up_key = int(f0_up_key) - if rvc_globals.NotesOrHertz and f0_method != 'rmvpe': - f0_min = note_to_hz(note_min) if note_min else 50 - f0_max = note_to_hz(note_max) if note_max else 1100 - print(f"Converted Min pitch: freq - {f0_min}\n" - f"Converted Max pitch: freq - {f0_max}") - else: - f0_min = f0_min or 50 - f0_max = f0_max or 1100 - try: - input_audio_path1 = input_audio_path1 or input_audio_path0 - print(f"Attempting to load {input_audio_path1}....") - audio = load_audio(file=input_audio_path1, - sr=16000, - DoFormant=rvc_globals.DoFormant, - Quefrency=rvc_globals.Quefrency, - Timbre=rvc_globals.Timbre) - - audio_max = np.abs(audio).max() / 0.95 - if audio_max > 1: - audio /= audio_max - times = [0, 0, 0] - - if self.hubert_model is None: - self.hubert_model = load_hubert(self.config) - - try: - self.if_f0 = self.cpt.get("f0", 1) - except NameError: - message = "Model was not properly selected" - print(message) - return message, None - - file_index = ( - ( - file_index.strip(" ") - .strip('"') - .strip("\n") - .strip('"') - .strip(" ") - .replace("trained", "added") - ) - if file_index != "" - else file_index2 - ) # 防止小白写错,自动帮他替换掉 - - try: - audio_opt = self.pipeline.pipeline( - self.hubert_model, - self.net_g, - sid, - audio, - input_audio_path1, - times, - f0_up_key, - f0_method, - file_index, - index_rate, - self.if_f0, - filter_radius, - self.tgt_sr, - resample_sr, - rms_mix_rate, - self.version, - protect, - crepe_hop_length, - f0_autotune, - f0_file=f0_file, - f0_min=f0_min, - f0_max=f0_max - ) - except AssertionError: - message = "Mismatching index version detected (v1 with v2, or v2 with v1)." - print(message) - return message, None - except NameError: - message = "RVC libraries are still loading. 
Please try again in a few seconds." - print(message) - return message, None - - if self.tgt_sr != resample_sr >= 16000: - self.tgt_sr = resample_sr - index_info = ( - "Index:\n%s." % file_index - if os.path.exists(file_index) - else "Index not used." - ) - end_time = time.time() - total_time = end_time - start_time - - return f"Success.\n {index_info}\nTime:\n npy:{times[0]}, f0:{times[1]}, infer:{times[2]}\nTotal Time: {total_time} seconds", (self.tgt_sr, audio_opt) - except: - info = traceback.format_exc() - logger.warn(info) - return info, (None, None) - - - def vc_multi( - self, - sid, - dir_path, - opt_root, - paths, - f0_up_key, - f0_method, - file_index, - file_index2, - index_rate, - filter_radius, - resample_sr, - rms_mix_rate, - protect, - format1, - crepe_hop_length, - f0_min, - note_min, - f0_max, - note_max, - f0_autotune, - ): - if rvc_globals.NotesOrHertz and f0_method != 'rmvpe': - f0_min = note_to_hz(note_min) if note_min else 50 - f0_max = note_to_hz(note_max) if note_max else 1100 - print(f"Converted Min pitch: freq - {f0_min}\n" - f"Converted Max pitch: freq - {f0_max}") - else: - f0_min = f0_min or 50 - f0_max = f0_max or 1100 - try: - dir_path = ( - dir_path.strip(" ").strip('"').strip("\n").strip('"').strip(" ") - ) # 防止小白拷路径头尾带了空格和"和回车 - opt_root = opt_root.strip(" ").strip('"').strip("\n").strip('"').strip(" ") - os.makedirs(opt_root, exist_ok=True) - try: - if dir_path != "": - paths = [ - os.path.join(dir_path, name) for name in os.listdir(dir_path) - ] - else: - paths = [path.name for path in paths] - except: - traceback.print_exc() - paths = [path.name for path in paths] - infos = [] - for path in paths: - info, opt = self.vc_single( - sid, - path, - f0_up_key, - None, - f0_method, - file_index, - file_index2, - # file_big_npy, - index_rate, - filter_radius, - resample_sr, - rms_mix_rate, - protect, - ) - if "Success" in info: - try: - tgt_sr, audio_opt = opt - if format1 in ["wav", "flac"]: - sf.write( - "%s/%s.%s" - % (opt_root, os.path.basename(path), format1), - audio_opt, - tgt_sr, - ) - else: - path = "%s/%s.%s" % (opt_root, os.path.basename(path), format1) - with BytesIO() as wavf: - sf.write( - wavf, - audio_opt, - tgt_sr, - format="wav" - ) - wavf.seek(0, 0) - with open(path, "wb") as outf: - wav2(wavf, outf, format1) - except: - info += traceback.format_exc() - infos.append("%s->%s" % (os.path.basename(path), info)) - yield "\n".join(infos) - yield "\n".join(infos) - except: - yield traceback.format_exc() diff --git a/spaces/Sky5408er/vits-uma-genshin-honkai/models.py b/spaces/Sky5408er/vits-uma-genshin-honkai/models.py deleted file mode 100644 index 52e15d1b9775038fd6e82b2efe6f95f51c66802d..0000000000000000000000000000000000000000 --- a/spaces/Sky5408er/vits-uma-genshin-honkai/models.py +++ /dev/null @@ -1,534 +0,0 @@ -import math -import torch -from torch import nn -from torch.nn import functional as F - -import commons -import modules -import attentions -import monotonic_align - -from torch.nn import Conv1d, ConvTranspose1d, Conv2d -from torch.nn.utils import weight_norm, remove_weight_norm, spectral_norm -from commons import init_weights, get_padding - - -class StochasticDurationPredictor(nn.Module): - def __init__(self, in_channels, filter_channels, kernel_size, p_dropout, n_flows=4, gin_channels=0): - super().__init__() - filter_channels = in_channels # it needs to be removed from future version. 
- self.in_channels = in_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.n_flows = n_flows - self.gin_channels = gin_channels - - self.log_flow = modules.Log() - self.flows = nn.ModuleList() - self.flows.append(modules.ElementwiseAffine(2)) - for i in range(n_flows): - self.flows.append(modules.ConvFlow(2, filter_channels, kernel_size, n_layers=3)) - self.flows.append(modules.Flip()) - - self.post_pre = nn.Conv1d(1, filter_channels, 1) - self.post_proj = nn.Conv1d(filter_channels, filter_channels, 1) - self.post_convs = modules.DDSConv(filter_channels, kernel_size, n_layers=3, p_dropout=p_dropout) - self.post_flows = nn.ModuleList() - self.post_flows.append(modules.ElementwiseAffine(2)) - for i in range(4): - self.post_flows.append(modules.ConvFlow(2, filter_channels, kernel_size, n_layers=3)) - self.post_flows.append(modules.Flip()) - - self.pre = nn.Conv1d(in_channels, filter_channels, 1) - self.proj = nn.Conv1d(filter_channels, filter_channels, 1) - self.convs = modules.DDSConv(filter_channels, kernel_size, n_layers=3, p_dropout=p_dropout) - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, filter_channels, 1) - - def forward(self, x, x_mask, w=None, g=None, reverse=False, noise_scale=1.0): - x = torch.detach(x) - x = self.pre(x) - if g is not None: - g = torch.detach(g) - x = x + self.cond(g) - x = self.convs(x, x_mask) - x = self.proj(x) * x_mask - - if not reverse: - flows = self.flows - assert w is not None - - logdet_tot_q = 0 - h_w = self.post_pre(w) - h_w = self.post_convs(h_w, x_mask) - h_w = self.post_proj(h_w) * x_mask - e_q = torch.randn(w.size(0), 2, w.size(2)).to(device=x.device, dtype=x.dtype) * x_mask - z_q = e_q - for flow in self.post_flows: - z_q, logdet_q = flow(z_q, x_mask, g=(x + h_w)) - logdet_tot_q += logdet_q - z_u, z1 = torch.split(z_q, [1, 1], 1) - u = torch.sigmoid(z_u) * x_mask - z0 = (w - u) * x_mask - logdet_tot_q += torch.sum((F.logsigmoid(z_u) + F.logsigmoid(-z_u)) * x_mask, [1,2]) - logq = torch.sum(-0.5 * (math.log(2*math.pi) + (e_q**2)) * x_mask, [1,2]) - logdet_tot_q - - logdet_tot = 0 - z0, logdet = self.log_flow(z0, x_mask) - logdet_tot += logdet - z = torch.cat([z0, z1], 1) - for flow in flows: - z, logdet = flow(z, x_mask, g=x, reverse=reverse) - logdet_tot = logdet_tot + logdet - nll = torch.sum(0.5 * (math.log(2*math.pi) + (z**2)) * x_mask, [1,2]) - logdet_tot - return nll + logq # [b] - else: - flows = list(reversed(self.flows)) - flows = flows[:-2] + [flows[-1]] # remove a useless vflow - z = torch.randn(x.size(0), 2, x.size(2)).to(device=x.device, dtype=x.dtype) * noise_scale - for flow in flows: - z = flow(z, x_mask, g=x, reverse=reverse) - z0, z1 = torch.split(z, [1, 1], 1) - logw = z0 - return logw - - -class DurationPredictor(nn.Module): - def __init__(self, in_channels, filter_channels, kernel_size, p_dropout, gin_channels=0): - super().__init__() - - self.in_channels = in_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.gin_channels = gin_channels - - self.drop = nn.Dropout(p_dropout) - self.conv_1 = nn.Conv1d(in_channels, filter_channels, kernel_size, padding=kernel_size//2) - self.norm_1 = modules.LayerNorm(filter_channels) - self.conv_2 = nn.Conv1d(filter_channels, filter_channels, kernel_size, padding=kernel_size//2) - self.norm_2 = modules.LayerNorm(filter_channels) - self.proj = nn.Conv1d(filter_channels, 1, 1) - - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, 
in_channels, 1) - - def forward(self, x, x_mask, g=None): - x = torch.detach(x) - if g is not None: - g = torch.detach(g) - x = x + self.cond(g) - x = self.conv_1(x * x_mask) - x = torch.relu(x) - x = self.norm_1(x) - x = self.drop(x) - x = self.conv_2(x * x_mask) - x = torch.relu(x) - x = self.norm_2(x) - x = self.drop(x) - x = self.proj(x * x_mask) - return x * x_mask - - -class TextEncoder(nn.Module): - def __init__(self, - n_vocab, - out_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout): - super().__init__() - self.n_vocab = n_vocab - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - - self.emb = nn.Embedding(n_vocab, hidden_channels) - nn.init.normal_(self.emb.weight, 0.0, hidden_channels**-0.5) - - self.encoder = attentions.Encoder( - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout) - self.proj= nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, x, x_lengths): - x = self.emb(x) * math.sqrt(self.hidden_channels) # [b, t, h] - x = torch.transpose(x, 1, -1) # [b, h, t] - x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to(x.dtype) - - x = self.encoder(x * x_mask, x_mask) - stats = self.proj(x) * x_mask - - m, logs = torch.split(stats, self.out_channels, dim=1) - return x, m, logs, x_mask - - -class ResidualCouplingBlock(nn.Module): - def __init__(self, - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - n_flows=4, - gin_channels=0): - super().__init__() - self.channels = channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.n_flows = n_flows - self.gin_channels = gin_channels - - self.flows = nn.ModuleList() - for i in range(n_flows): - self.flows.append(modules.ResidualCouplingLayer(channels, hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=gin_channels, mean_only=True)) - self.flows.append(modules.Flip()) - - def forward(self, x, x_mask, g=None, reverse=False): - if not reverse: - for flow in self.flows: - x, _ = flow(x, x_mask, g=g, reverse=reverse) - else: - for flow in reversed(self.flows): - x = flow(x, x_mask, g=g, reverse=reverse) - return x - - -class PosteriorEncoder(nn.Module): - def __init__(self, - in_channels, - out_channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=0): - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.gin_channels = gin_channels - - self.pre = nn.Conv1d(in_channels, hidden_channels, 1) - self.enc = modules.WN(hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=gin_channels) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, x, x_lengths, g=None): - x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to(x.dtype) - x = self.pre(x) * x_mask - x = self.enc(x, x_mask, g=g) - stats = self.proj(x) * x_mask - m, logs = torch.split(stats, self.out_channels, dim=1) - z = (m + torch.randn_like(m) * torch.exp(logs)) * x_mask - return z, m, logs, x_mask - - -class Generator(torch.nn.Module): - def __init__(self, initial_channel, 
resblock, resblock_kernel_sizes, resblock_dilation_sizes, upsample_rates, upsample_initial_channel, upsample_kernel_sizes, gin_channels=0): - super(Generator, self).__init__() - self.num_kernels = len(resblock_kernel_sizes) - self.num_upsamples = len(upsample_rates) - self.conv_pre = Conv1d(initial_channel, upsample_initial_channel, 7, 1, padding=3) - resblock = modules.ResBlock1 if resblock == '1' else modules.ResBlock2 - - self.ups = nn.ModuleList() - for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)): - self.ups.append(weight_norm( - ConvTranspose1d(upsample_initial_channel//(2**i), upsample_initial_channel//(2**(i+1)), - k, u, padding=(k-u)//2))) - - self.resblocks = nn.ModuleList() - for i in range(len(self.ups)): - ch = upsample_initial_channel//(2**(i+1)) - for j, (k, d) in enumerate(zip(resblock_kernel_sizes, resblock_dilation_sizes)): - self.resblocks.append(resblock(ch, k, d)) - - self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False) - self.ups.apply(init_weights) - - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1) - - def forward(self, x, g=None): - x = self.conv_pre(x) - if g is not None: - x = x + self.cond(g) - - for i in range(self.num_upsamples): - x = F.leaky_relu(x, modules.LRELU_SLOPE) - x = self.ups[i](x) - xs = None - for j in range(self.num_kernels): - if xs is None: - xs = self.resblocks[i*self.num_kernels+j](x) - else: - xs += self.resblocks[i*self.num_kernels+j](x) - x = xs / self.num_kernels - x = F.leaky_relu(x) - x = self.conv_post(x) - x = torch.tanh(x) - - return x - - def remove_weight_norm(self): - print('Removing weight norm...') - for l in self.ups: - remove_weight_norm(l) - for l in self.resblocks: - l.remove_weight_norm() - - -class DiscriminatorP(torch.nn.Module): - def __init__(self, period, kernel_size=5, stride=3, use_spectral_norm=False): - super(DiscriminatorP, self).__init__() - self.period = period - self.use_spectral_norm = use_spectral_norm - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList([ - norm_f(Conv2d(1, 32, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(32, 128, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(128, 512, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(512, 1024, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(1024, 1024, (kernel_size, 1), 1, padding=(get_padding(kernel_size, 1), 0))), - ]) - self.conv_post = norm_f(Conv2d(1024, 1, (3, 1), 1, padding=(1, 0))) - - def forward(self, x): - fmap = [] - - # 1d to 2d - b, c, t = x.shape - if t % self.period != 0: # pad first - n_pad = self.period - (t % self.period) - x = F.pad(x, (0, n_pad), "reflect") - t = t + n_pad - x = x.view(b, c, t // self.period, self.period) - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - -class DiscriminatorS(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(DiscriminatorS, self).__init__() - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList([ - norm_f(Conv1d(1, 16, 15, 1, padding=7)), - norm_f(Conv1d(16, 64, 41, 4, groups=4, padding=20)), - norm_f(Conv1d(64, 256, 41, 4, groups=16, padding=20)), - norm_f(Conv1d(256, 1024, 41, 4, groups=64, padding=20)), - 
norm_f(Conv1d(1024, 1024, 41, 4, groups=256, padding=20)), - norm_f(Conv1d(1024, 1024, 5, 1, padding=2)), - ]) - self.conv_post = norm_f(Conv1d(1024, 1, 3, 1, padding=1)) - - def forward(self, x): - fmap = [] - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - -class MultiPeriodDiscriminator(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(MultiPeriodDiscriminator, self).__init__() - periods = [2,3,5,7,11] - - discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)] - discs = discs + [DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods] - self.discriminators = nn.ModuleList(discs) - - def forward(self, y, y_hat): - y_d_rs = [] - y_d_gs = [] - fmap_rs = [] - fmap_gs = [] - for i, d in enumerate(self.discriminators): - y_d_r, fmap_r = d(y) - y_d_g, fmap_g = d(y_hat) - y_d_rs.append(y_d_r) - y_d_gs.append(y_d_g) - fmap_rs.append(fmap_r) - fmap_gs.append(fmap_g) - - return y_d_rs, y_d_gs, fmap_rs, fmap_gs - - - -class SynthesizerTrn(nn.Module): - """ - Synthesizer for Training - """ - - def __init__(self, - n_vocab, - spec_channels, - segment_size, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - n_speakers=0, - gin_channels=0, - use_sdp=True, - **kwargs): - - super().__init__() - self.n_vocab = n_vocab - self.spec_channels = spec_channels - self.inter_channels = inter_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.resblock = resblock - self.resblock_kernel_sizes = resblock_kernel_sizes - self.resblock_dilation_sizes = resblock_dilation_sizes - self.upsample_rates = upsample_rates - self.upsample_initial_channel = upsample_initial_channel - self.upsample_kernel_sizes = upsample_kernel_sizes - self.segment_size = segment_size - self.n_speakers = n_speakers - self.gin_channels = gin_channels - - self.use_sdp = use_sdp - - self.enc_p = TextEncoder(n_vocab, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout) - self.dec = Generator(inter_channels, resblock, resblock_kernel_sizes, resblock_dilation_sizes, upsample_rates, upsample_initial_channel, upsample_kernel_sizes, gin_channels=gin_channels) - self.enc_q = PosteriorEncoder(spec_channels, inter_channels, hidden_channels, 5, 1, 16, gin_channels=gin_channels) - self.flow = ResidualCouplingBlock(inter_channels, hidden_channels, 5, 1, 4, gin_channels=gin_channels) - - if use_sdp: - self.dp = StochasticDurationPredictor(hidden_channels, 192, 3, 0.5, 4, gin_channels=gin_channels) - else: - self.dp = DurationPredictor(hidden_channels, 256, 3, 0.5, gin_channels=gin_channels) - - if n_speakers > 1: - self.emb_g = nn.Embedding(n_speakers, gin_channels) - - def forward(self, x, x_lengths, y, y_lengths, sid=None): - - x, m_p, logs_p, x_mask = self.enc_p(x, x_lengths) - if self.n_speakers > 0: - g = self.emb_g(sid).unsqueeze(-1) # [b, h, 1] - else: - g = None - - z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g) - z_p = self.flow(z, y_mask, g=g) - - with torch.no_grad(): - # negative cross-entropy - s_p_sq_r = torch.exp(-2 * logs_p) # [b, d, t] - neg_cent1 = 
torch.sum(-0.5 * math.log(2 * math.pi) - logs_p, [1], keepdim=True) # [b, 1, t_s] - neg_cent2 = torch.matmul(-0.5 * (z_p ** 2).transpose(1, 2), s_p_sq_r) # [b, t_t, d] x [b, d, t_s] = [b, t_t, t_s] - neg_cent3 = torch.matmul(z_p.transpose(1, 2), (m_p * s_p_sq_r)) # [b, t_t, d] x [b, d, t_s] = [b, t_t, t_s] - neg_cent4 = torch.sum(-0.5 * (m_p ** 2) * s_p_sq_r, [1], keepdim=True) # [b, 1, t_s] - neg_cent = neg_cent1 + neg_cent2 + neg_cent3 + neg_cent4 - - attn_mask = torch.unsqueeze(x_mask, 2) * torch.unsqueeze(y_mask, -1) - attn = monotonic_align.maximum_path(neg_cent, attn_mask.squeeze(1)).unsqueeze(1).detach() - - w = attn.sum(2) - if self.use_sdp: - l_length = self.dp(x, x_mask, w, g=g) - l_length = l_length / torch.sum(x_mask) - else: - logw_ = torch.log(w + 1e-6) * x_mask - logw = self.dp(x, x_mask, g=g) - l_length = torch.sum((logw - logw_)**2, [1,2]) / torch.sum(x_mask) # for averaging - - # expand prior - m_p = torch.matmul(attn.squeeze(1), m_p.transpose(1, 2)).transpose(1, 2) - logs_p = torch.matmul(attn.squeeze(1), logs_p.transpose(1, 2)).transpose(1, 2) - - z_slice, ids_slice = commons.rand_slice_segments(z, y_lengths, self.segment_size) - o = self.dec(z_slice, g=g) - return o, l_length, attn, ids_slice, x_mask, y_mask, (z, z_p, m_p, logs_p, m_q, logs_q) - - def infer(self, x, x_lengths, sid=None, noise_scale=1, length_scale=1, noise_scale_w=1., max_len=None): - device = next(self.parameters()).device # 获取模型所在的设备 - x, m_p, logs_p, x_mask = self.enc_p(x.to(device), x_lengths.to(device)) - if self.n_speakers > 0: - g = self.emb_g(sid.to(device)).unsqueeze(-1) # [b, h, 1] - else: - g = None - - if self.use_sdp: - logw = self.dp(x, x_mask, g=g, reverse=True, noise_scale=noise_scale_w) - else: - logw = self.dp(x, x_mask, g=g) - w = torch.exp(logw) * x_mask * length_scale - w_ceil = torch.ceil(w) - y_lengths = torch.clamp_min(torch.sum(w_ceil, [1, 2]), 1).long() - y_mask = torch.unsqueeze(commons.sequence_mask(y_lengths, None), 1).to(x_mask.dtype) - attn_mask = torch.unsqueeze(x_mask, 2) * torch.unsqueeze(y_mask, -1) - attn = commons.generate_path(w_ceil, attn_mask) - - m_p = torch.matmul(attn.squeeze(1), m_p.transpose(1, 2)).transpose(1, 2) # [b, t', t], [b, t, d] -> [b, d, t'] - logs_p = torch.matmul(attn.squeeze(1), logs_p.transpose(1, 2)).transpose(1, 2) # [b, t', t], [b, t, d] -> [b, d, t'] - - z_p = m_p + torch.randn_like(m_p) * torch.exp(logs_p) * noise_scale - z = self.flow(z_p, y_mask, g=g, reverse=True) - o = self.dec((z * y_mask)[:,:,:max_len], g=g) - return o, attn, y_mask, (z, z_p, m_p, logs_p) - - def voice_conversion(self, y, y_lengths, sid_src, sid_tgt): - assert self.n_speakers > 0, "n_speakers have to be larger than 0." 
- g_src = self.emb_g(sid_src).unsqueeze(-1) - g_tgt = self.emb_g(sid_tgt).unsqueeze(-1) - z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g_src) - z_p = self.flow(z, y_mask, g=g_src) - z_hat = self.flow(z_p, y_mask, g=g_tgt, reverse=True) - o_hat = self.dec(z_hat * y_mask, g=g_tgt) - return o_hat, y_mask, (z, z_p, z_hat) - diff --git a/spaces/SkyYeXianer/vits-uma-genshin-honkai/attentions.py b/spaces/SkyYeXianer/vits-uma-genshin-honkai/attentions.py deleted file mode 100644 index 86bc73b5fe98cc7b443e9078553920346c996707..0000000000000000000000000000000000000000 --- a/spaces/SkyYeXianer/vits-uma-genshin-honkai/attentions.py +++ /dev/null @@ -1,300 +0,0 @@ -import math -import torch -from torch import nn -from torch.nn import functional as F - -import commons -from modules import LayerNorm - - -class Encoder(nn.Module): - def __init__(self, hidden_channels, filter_channels, n_heads, n_layers, kernel_size=1, p_dropout=0., window_size=4, **kwargs): - super().__init__() - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.window_size = window_size - - self.drop = nn.Dropout(p_dropout) - self.attn_layers = nn.ModuleList() - self.norm_layers_1 = nn.ModuleList() - self.ffn_layers = nn.ModuleList() - self.norm_layers_2 = nn.ModuleList() - for i in range(self.n_layers): - self.attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout, window_size=window_size)) - self.norm_layers_1.append(LayerNorm(hidden_channels)) - self.ffn_layers.append(FFN(hidden_channels, hidden_channels, filter_channels, kernel_size, p_dropout=p_dropout)) - self.norm_layers_2.append(LayerNorm(hidden_channels)) - - def forward(self, x, x_mask): - attn_mask = x_mask.unsqueeze(2) * x_mask.unsqueeze(-1) - x = x * x_mask - for i in range(self.n_layers): - y = self.attn_layers[i](x, x, attn_mask) - y = self.drop(y) - x = self.norm_layers_1[i](x + y) - - y = self.ffn_layers[i](x, x_mask) - y = self.drop(y) - x = self.norm_layers_2[i](x + y) - x = x * x_mask - return x - - -class Decoder(nn.Module): - def __init__(self, hidden_channels, filter_channels, n_heads, n_layers, kernel_size=1, p_dropout=0., proximal_bias=False, proximal_init=True, **kwargs): - super().__init__() - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.proximal_bias = proximal_bias - self.proximal_init = proximal_init - - self.drop = nn.Dropout(p_dropout) - self.self_attn_layers = nn.ModuleList() - self.norm_layers_0 = nn.ModuleList() - self.encdec_attn_layers = nn.ModuleList() - self.norm_layers_1 = nn.ModuleList() - self.ffn_layers = nn.ModuleList() - self.norm_layers_2 = nn.ModuleList() - for i in range(self.n_layers): - self.self_attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout, proximal_bias=proximal_bias, proximal_init=proximal_init)) - self.norm_layers_0.append(LayerNorm(hidden_channels)) - self.encdec_attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout)) - self.norm_layers_1.append(LayerNorm(hidden_channels)) - self.ffn_layers.append(FFN(hidden_channels, hidden_channels, filter_channels, kernel_size, p_dropout=p_dropout, causal=True)) - self.norm_layers_2.append(LayerNorm(hidden_channels)) - - def forward(self, x, 
x_mask, h, h_mask): - """ - x: decoder input - h: encoder output - """ - self_attn_mask = commons.subsequent_mask(x_mask.size(2)).to(device=x.device, dtype=x.dtype) - encdec_attn_mask = h_mask.unsqueeze(2) * x_mask.unsqueeze(-1) - x = x * x_mask - for i in range(self.n_layers): - y = self.self_attn_layers[i](x, x, self_attn_mask) - y = self.drop(y) - x = self.norm_layers_0[i](x + y) - - y = self.encdec_attn_layers[i](x, h, encdec_attn_mask) - y = self.drop(y) - x = self.norm_layers_1[i](x + y) - - y = self.ffn_layers[i](x, x_mask) - y = self.drop(y) - x = self.norm_layers_2[i](x + y) - x = x * x_mask - return x - - -class MultiHeadAttention(nn.Module): - def __init__(self, channels, out_channels, n_heads, p_dropout=0., window_size=None, heads_share=True, block_length=None, proximal_bias=False, proximal_init=False): - super().__init__() - assert channels % n_heads == 0 - - self.channels = channels - self.out_channels = out_channels - self.n_heads = n_heads - self.p_dropout = p_dropout - self.window_size = window_size - self.heads_share = heads_share - self.block_length = block_length - self.proximal_bias = proximal_bias - self.proximal_init = proximal_init - self.attn = None - - self.k_channels = channels // n_heads - self.conv_q = nn.Conv1d(channels, channels, 1) - self.conv_k = nn.Conv1d(channels, channels, 1) - self.conv_v = nn.Conv1d(channels, channels, 1) - self.conv_o = nn.Conv1d(channels, out_channels, 1) - self.drop = nn.Dropout(p_dropout) - - if window_size is not None: - n_heads_rel = 1 if heads_share else n_heads - rel_stddev = self.k_channels**-0.5 - self.emb_rel_k = nn.Parameter(torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) * rel_stddev) - self.emb_rel_v = nn.Parameter(torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) * rel_stddev) - - nn.init.xavier_uniform_(self.conv_q.weight) - nn.init.xavier_uniform_(self.conv_k.weight) - nn.init.xavier_uniform_(self.conv_v.weight) - if proximal_init: - with torch.no_grad(): - self.conv_k.weight.copy_(self.conv_q.weight) - self.conv_k.bias.copy_(self.conv_q.bias) - - def forward(self, x, c, attn_mask=None): - q = self.conv_q(x) - k = self.conv_k(c) - v = self.conv_v(c) - - x, self.attn = self.attention(q, k, v, mask=attn_mask) - - x = self.conv_o(x) - return x - - def attention(self, query, key, value, mask=None): - # reshape [b, d, t] -> [b, n_h, t, d_k] - b, d, t_s, t_t = (*key.size(), query.size(2)) - query = query.view(b, self.n_heads, self.k_channels, t_t).transpose(2, 3) - key = key.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3) - value = value.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3) - - scores = torch.matmul(query / math.sqrt(self.k_channels), key.transpose(-2, -1)) - if self.window_size is not None: - assert t_s == t_t, "Relative attention is only available for self-attention." - key_relative_embeddings = self._get_relative_embeddings(self.emb_rel_k, t_s) - rel_logits = self._matmul_with_relative_keys(query /math.sqrt(self.k_channels), key_relative_embeddings) - scores_local = self._relative_position_to_absolute_position(rel_logits) - scores = scores + scores_local - if self.proximal_bias: - assert t_s == t_t, "Proximal bias is only available for self-attention." - scores = scores + self._attention_bias_proximal(t_s).to(device=scores.device, dtype=scores.dtype) - if mask is not None: - scores = scores.masked_fill(mask == 0, -1e4) - if self.block_length is not None: - assert t_s == t_t, "Local attention is only available for self-attention." 
- block_mask = torch.ones_like(scores).triu(-self.block_length).tril(self.block_length) - scores = scores.masked_fill(block_mask == 0, -1e4) - p_attn = F.softmax(scores, dim=-1) # [b, n_h, t_t, t_s] - p_attn = self.drop(p_attn) - output = torch.matmul(p_attn, value) - if self.window_size is not None: - relative_weights = self._absolute_position_to_relative_position(p_attn) - value_relative_embeddings = self._get_relative_embeddings(self.emb_rel_v, t_s) - output = output + self._matmul_with_relative_values(relative_weights, value_relative_embeddings) - output = output.transpose(2, 3).contiguous().view(b, d, t_t) # [b, n_h, t_t, d_k] -> [b, d, t_t] - return output, p_attn - - def _matmul_with_relative_values(self, x, y): - """ - x: [b, h, l, m] - y: [h or 1, m, d] - ret: [b, h, l, d] - """ - ret = torch.matmul(x, y.unsqueeze(0)) - return ret - - def _matmul_with_relative_keys(self, x, y): - """ - x: [b, h, l, d] - y: [h or 1, m, d] - ret: [b, h, l, m] - """ - ret = torch.matmul(x, y.unsqueeze(0).transpose(-2, -1)) - return ret - - def _get_relative_embeddings(self, relative_embeddings, length): - max_relative_position = 2 * self.window_size + 1 - # Pad first before slice to avoid using cond ops. - pad_length = max(length - (self.window_size + 1), 0) - slice_start_position = max((self.window_size + 1) - length, 0) - slice_end_position = slice_start_position + 2 * length - 1 - if pad_length > 0: - padded_relative_embeddings = F.pad( - relative_embeddings, - commons.convert_pad_shape([[0, 0], [pad_length, pad_length], [0, 0]])) - else: - padded_relative_embeddings = relative_embeddings - used_relative_embeddings = padded_relative_embeddings[:,slice_start_position:slice_end_position] - return used_relative_embeddings - - def _relative_position_to_absolute_position(self, x): - """ - x: [b, h, l, 2*l-1] - ret: [b, h, l, l] - """ - batch, heads, length, _ = x.size() - # Concat columns of pad to shift from relative to absolute indexing. - x = F.pad(x, commons.convert_pad_shape([[0,0],[0,0],[0,0],[0,1]])) - - # Concat extra elements so to add up to shape (len+1, 2*len-1). - x_flat = x.view([batch, heads, length * 2 * length]) - x_flat = F.pad(x_flat, commons.convert_pad_shape([[0,0],[0,0],[0,length-1]])) - - # Reshape and slice out the padded elements. - x_final = x_flat.view([batch, heads, length+1, 2*length-1])[:, :, :length, length-1:] - return x_final - - def _absolute_position_to_relative_position(self, x): - """ - x: [b, h, l, l] - ret: [b, h, l, 2*l-1] - """ - batch, heads, length, _ = x.size() - # padd along column - x = F.pad(x, commons.convert_pad_shape([[0, 0], [0, 0], [0, 0], [0, length-1]])) - x_flat = x.view([batch, heads, length**2 + length*(length -1)]) - # add 0's in the beginning that will skew the elements after reshape - x_flat = F.pad(x_flat, commons.convert_pad_shape([[0, 0], [0, 0], [length, 0]])) - x_final = x_flat.view([batch, heads, length, 2*length])[:,:,:,1:] - return x_final - - def _attention_bias_proximal(self, length): - """Bias for self-attention to encourage attention to close positions. - Args: - length: an integer scalar. 
- Returns: - a Tensor with shape [1, 1, length, length] - """ - r = torch.arange(length, dtype=torch.float32) - diff = torch.unsqueeze(r, 0) - torch.unsqueeze(r, 1) - return torch.unsqueeze(torch.unsqueeze(-torch.log1p(torch.abs(diff)), 0), 0) - - -class FFN(nn.Module): - def __init__(self, in_channels, out_channels, filter_channels, kernel_size, p_dropout=0., activation=None, causal=False): - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.activation = activation - self.causal = causal - - if causal: - self.padding = self._causal_padding - else: - self.padding = self._same_padding - - self.conv_1 = nn.Conv1d(in_channels, filter_channels, kernel_size) - self.conv_2 = nn.Conv1d(filter_channels, out_channels, kernel_size) - self.drop = nn.Dropout(p_dropout) - - def forward(self, x, x_mask): - x = self.conv_1(self.padding(x * x_mask)) - if self.activation == "gelu": - x = x * torch.sigmoid(1.702 * x) - else: - x = torch.relu(x) - x = self.drop(x) - x = self.conv_2(self.padding(x * x_mask)) - return x * x_mask - - def _causal_padding(self, x): - if self.kernel_size == 1: - return x - pad_l = self.kernel_size - 1 - pad_r = 0 - padding = [[0, 0], [0, 0], [pad_l, pad_r]] - x = F.pad(x, commons.convert_pad_shape(padding)) - return x - - def _same_padding(self, x): - if self.kernel_size == 1: - return x - pad_l = (self.kernel_size - 1) // 2 - pad_r = self.kernel_size // 2 - padding = [[0, 0], [0, 0], [pad_l, pad_r]] - x = F.pad(x, commons.convert_pad_shape(padding)) - return x diff --git a/spaces/SpacesExamples/docker-examples/style.css b/spaces/SpacesExamples/docker-examples/style.css deleted file mode 100644 index af4e23927a03e13fd16ebc7b4eb6eb434c42f65b..0000000000000000000000000000000000000000 --- a/spaces/SpacesExamples/docker-examples/style.css +++ /dev/null @@ -1,3 +0,0 @@ -h1 { - text-align: center; -} \ No newline at end of file diff --git a/spaces/Stearns/Soar/pysoarlib/SVSCommands.py b/spaces/Stearns/Soar/pysoarlib/SVSCommands.py deleted file mode 100644 index 5736530c78e7b24e5443ddbfa976a61496cf38d2..0000000000000000000000000000000000000000 --- a/spaces/Stearns/Soar/pysoarlib/SVSCommands.py +++ /dev/null @@ -1,84 +0,0 @@ -""" -This module defines a set of methods that generate SVS string commands -""" -class SVSCommands: - """ Contains static methods that generate SVS string commands - - These can then be passed to agent.SendSVSCommands - Note that all transforms (pos, rot, scale) should be lists of 3 floats - """ - @staticmethod - def pos_to_str(pos): - """ Returns a string of 3 space-separated position values """ - return "{:f} {:f} {:f}".format(pos[0], pos[1], pos[2]) - - @staticmethod - def rot_to_str(rot): - """ Returns a string of 3 space-separated rotation values """ - return "{:f} {:f} {:f}".format(rot[0], rot[1], rot[2]) - - @staticmethod - def scl_to_str(scl): - """ Returns a string of 3 space-separated scale values """ - return "{:f} {:f} {:f}".format(scl[0], scl[1], scl[2]) - - @staticmethod - def bbox_verts(): - """ Returns a string of 8 vertices (24 numbers) forming a bounding box - - It is of unit size centered at the origin - """ - return "0.5 0.5 0.5 0.5 0.5 -0.5 0.5 -0.5 0.5 0.5 -0.5 -0.5 -0.5 0.5 0.5 -0.5 0.5 -0.5 -0.5 -0.5 0.5 -0.5 -0.5 -0.5" - - @staticmethod - def add_node(node_id, pos=None, rot=None, scl=None, parent="world"): - """ Returns an SVS command for adding a graph node to the scene (no geometry) """ - cmd = 
"add {:s} {:s} ".format(node_id, parent) - if pos: cmd += " p {:s}".format(SVSCommands.pos_to_str(pos)) - if rot: cmd += " r {:s}".format(SVSCommands.rot_to_str(rot)) - if scl: cmd += " s {:s}".format(SVSCommands.scl_to_str(scl)) - return cmd - - @staticmethod - def add_box(obj_id, pos=None, rot=None, scl=None, parent="world"): - """ Returns an SVS command for adding a bounding box object to the scene """ - cmd = "add {:s} {:s} v {:s}".format(obj_id, parent, SVSCommands.bbox_verts()) - if pos: cmd += " p {:s}".format(SVSCommands.pos_to_str(pos)) - if rot: cmd += " r {:s}".format(SVSCommands.rot_to_str(rot)) - if scl: cmd += " s {:s}".format(SVSCommands.scl_to_str(scl)) - return cmd - - @staticmethod - def change_pos(obj_id, pos): - """ Returns an SVS command for changing the position of an svs object """ - return "change {:s} p {:s}".format(obj_id, SVSCommands.pos_to_str(pos)) - - @staticmethod - def change_rot(obj_id, rot): - """ Returns an SVS command for changing the rotation of an svs object """ - return "change {:s} r {:s}".format(obj_id, SVSCommands.rot_to_str(rot)) - - @staticmethod - def change_scl(obj_id, scl): - """ Returns an SVS command for changing the scale of an svs object """ - return "change {:s} s {:s}".format(obj_id, SVSCommands.scl_to_str(scl)) - - @staticmethod - def delete(obj_id): - """ Returns an SVS command for deleting an object """ - return "delete {:s}".format(obj_id) - - @staticmethod - def add_tag(obj_id, tag_name, tag_value): - """ Returns an SVS command for adding a tag to an object (^name value) """ - return "tag add {:s} {:s} {:s}".format(obj_id, tag_name, tag_value) - - @staticmethod - def change_tag(obj_id, tag_name, tag_value): - """ Returns an SVS command for changing a tag on an object (^name value) """ - return "tag change {:s} {:s} {:s}".format(obj_id, tag_name, tag_value) - - @staticmethod - def delete_tag(obj_id, tag_name): - """ Returns an SVS command for deleting a tag with the given name from an object """ - return "tag delete {:s} {:s}".format(obj_id, tag_name) diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/PIL/FtexImagePlugin.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/PIL/FtexImagePlugin.py deleted file mode 100644 index c7c32252b87f95abd3fe655983055563aa824457..0000000000000000000000000000000000000000 --- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/PIL/FtexImagePlugin.py +++ /dev/null @@ -1,125 +0,0 @@ -""" -A Pillow loader for .ftc and .ftu files (FTEX) -Jerome Leclanche - -The contents of this file are hereby released in the public domain (CC0) -Full text of the CC0 license: - https://creativecommons.org/publicdomain/zero/1.0/ - -Independence War 2: Edge Of Chaos - Texture File Format - 16 October 2001 - -The textures used for 3D objects in Independence War 2: Edge Of Chaos are in a -packed custom format called FTEX. This file format uses file extensions FTC -and FTU. -* FTC files are compressed textures (using standard texture compression). -* FTU files are not compressed. -Texture File Format -The FTC and FTU texture files both use the same format. This -has the following structure: -{header} -{format_directory} -{data} -Where: -{header} = { - u32:magic, - u32:version, - u32:width, - u32:height, - u32:mipmap_count, - u32:format_count -} - -* The "magic" number is "FTEX". -* "width" and "height" are the dimensions of the texture. -* "mipmap_count" is the number of mipmaps in the texture. 
-* "format_count" is the number of texture formats (different versions of the -same texture) in this file. - -{format_directory} = format_count * { u32:format, u32:where } - -The format value is 0 for DXT1 compressed textures and 1 for 24-bit RGB -uncompressed textures. -The texture data for a format starts at the position "where" in the file. - -Each set of texture data in the file has the following structure: -{data} = format_count * { u32:mipmap_size, mipmap_size * { u8 } } -* "mipmap_size" is the number of bytes in that mip level. For compressed -textures this is the size of the texture data compressed with DXT1. For 24 bit -uncompressed textures, this is 3 * width * height. Following this are the image -bytes for that mipmap level. - -Note: All data is stored in little-Endian (Intel) byte order. -""" - -import struct -from enum import IntEnum -from io import BytesIO - -from . import Image, ImageFile -from ._deprecate import deprecate - -MAGIC = b"FTEX" - - -class Format(IntEnum): - DXT1 = 0 - UNCOMPRESSED = 1 - - -def __getattr__(name): - for enum, prefix in {Format: "FORMAT_"}.items(): - if name.startswith(prefix): - name = name[len(prefix) :] - if name in enum.__members__: - deprecate(f"{prefix}{name}", 10, f"{enum.__name__}.{name}") - return enum[name] - msg = f"module '{__name__}' has no attribute '{name}'" - raise AttributeError(msg) - - -class FtexImageFile(ImageFile.ImageFile): - format = "FTEX" - format_description = "Texture File Format (IW2:EOC)" - - def _open(self): - if not _accept(self.fp.read(4)): - msg = "not an FTEX file" - raise SyntaxError(msg) - struct.unpack(" None: - super().start() - - @override - def stop(self) -> None: - super().stop() - - @override - def reset(self) -> None: - self._instances = {} - self._segment_cache = defaultdict(dict) - super().reset() - - @override - def create_segments(self, collection: Collection) -> Set[Segment]: - vector_segment = _segment( - SegmentType.HNSW_LOCAL_MEMORY, SegmentScope.VECTOR, collection - ) - metadata_segment = _segment( - SegmentType.SQLITE, SegmentScope.METADATA, collection - ) - self._sysdb.create_segment(vector_segment) - self._sysdb.create_segment(metadata_segment) - return {vector_segment, metadata_segment} - - @override - def delete_segments(self, collection_id: UUID) -> None: - segments = self._sysdb.get_segments(collection=collection_id) - for segment in segments: - self._sysdb.delete_segment(segment["id"]) - del self._instances[segment["id"]] - del self._segment_cache[collection_id][segment["scope"]] - del self._segment_cache[collection_id] - - T = TypeVar("T", bound="SegmentImplementation") - - @override - def get_segment(self, collection_id: UUID, type: Type[T]) -> SegmentImplementation: - if type == Type[MetadataReader]: - scope = SegmentScope.METADATA - elif type == Type[VectorReader]: - scope = SegmentScope.VECTOR - else: - raise ValueError(f"Invalid segment type: {type}") - - if scope not in self._segment_cache[collection_id]: - segments = self._sysdb.get_segments(collection=collection_id, scope=scope) - known_types = set([k.value for k in SEGMENT_TYPE_IMPLS.keys()]) - # Get the first segment of a known type - segment = next(filter(lambda s: s["type"] in known_types, segments)) - self._segment_cache[collection_id][scope] = segment - - return self._instance(self._segment_cache[collection_id][scope]) - - def _instance(self, segment: Segment) -> SegmentImplementation: - if segment["id"] not in self._instances: - classname = SEGMENT_TYPE_IMPLS[SegmentType(segment["type"])] - cls = get_class(classname, 
SegmentImplementation) - self._instances[segment["id"]] = cls(self._system, segment) - return self._instances[segment["id"]] - - -def _segment(type: SegmentType, scope: SegmentScope, collection: Collection) -> Segment: - """Create a metadata dict, propagating metadata correctly for the given segment type.""" - metadata = {} - regexes = PROPAGATE_METADATA.get(type, []) - if collection["metadata"]: - for key, value in collection["metadata"].items(): - for regex in regexes: - if re.match(regex, key): - metadata[key] = value - break - - return Segment( - id=uuid4(), - type=type.value, - scope=scope, - topic=collection["topic"], - collection=collection["id"], - metadata=metadata, - ) diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/dateutil/parser/_parser.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/dateutil/parser/_parser.py deleted file mode 100644 index 37d1663b2f72447800d9a553929e3de932244289..0000000000000000000000000000000000000000 --- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/dateutil/parser/_parser.py +++ /dev/null @@ -1,1613 +0,0 @@ -# -*- coding: utf-8 -*- -""" -This module offers a generic date/time string parser which is able to parse -most known formats to represent a date and/or time. - -This module attempts to be forgiving with regards to unlikely input formats, -returning a datetime object even for dates which are ambiguous. If an element -of a date/time stamp is omitted, the following rules are applied: - -- If AM or PM is left unspecified, a 24-hour clock is assumed, however, an hour - on a 12-hour clock (``0 <= hour <= 12``) *must* be specified if AM or PM is - specified. -- If a time zone is omitted, a timezone-naive datetime is returned. - -If any other elements are missing, they are taken from the -:class:`datetime.datetime` object passed to the parameter ``default``. If this -results in a day number exceeding the valid number of days per month, the -value falls back to the end of the month. - -Additional resources about date/time string formats can be found below: - -- `A summary of the international standard date and time notation - `_ -- `W3C Date and Time Formats `_ -- `Time Formats (Planetary Rings Node) `_ -- `CPAN ParseDate module - `_ -- `Java SimpleDateFormat Class - `_ -""" -from __future__ import unicode_literals - -import datetime -import re -import string -import time -import warnings - -from calendar import monthrange -from io import StringIO - -import six -from six import integer_types, text_type - -from decimal import Decimal - -from warnings import warn - -from .. import relativedelta -from .. import tz - -__all__ = ["parse", "parserinfo", "ParserError"] - - -# TODO: pandas.core.tools.datetimes imports this explicitly. Might be worth -# making public and/or figuring out if there is something we can -# take off their plate. 
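- # A minimal illustration of the defaulting rules described in the module
- # docstring above (the values are chosen for the example):
- #
- #     >>> from dateutil import parser
- #     >>> from datetime import datetime
- #     >>> parser.parse("10:36", default=datetime(2003, 9, 25))
- #     datetime.datetime(2003, 9, 25, 10, 36)
- #
- # Missing fields (here the year, month and day) are taken from ``default``;
- # with no timezone in the string, the result is a naive datetime.
-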
-class _timelex(object): - # Fractional seconds are sometimes split by a comma - _split_decimal = re.compile("([.,])") - - def __init__(self, instream): - if isinstance(instream, (bytes, bytearray)): - instream = instream.decode() - - if isinstance(instream, text_type): - instream = StringIO(instream) - elif getattr(instream, 'read', None) is None: - raise TypeError('Parser must be a string or character stream, not ' - '{itype}'.format(itype=instream.__class__.__name__)) - - self.instream = instream - self.charstack = [] - self.tokenstack = [] - self.eof = False - - def get_token(self): - """ - This function breaks the time string into lexical units (tokens), which - can be parsed by the parser. Lexical units are demarcated by changes in - the character set, so any continuous string of letters is considered - one unit, any continuous string of numbers is considered one unit. - - The main complication arises from the fact that dots ('.') can be used - both as separators (e.g. "Sep.20.2009") or decimal points (e.g. - "4:30:21.447"). As such, it is necessary to read the full context of - any dot-separated strings before breaking it into tokens; as such, this - function maintains a "token stack", for when the ambiguous context - demands that multiple tokens be parsed at once. - """ - if self.tokenstack: - return self.tokenstack.pop(0) - - seenletters = False - token = None - state = None - - while not self.eof: - # We only realize that we've reached the end of a token when we - # find a character that's not part of the current token - since - # that character may be part of the next token, it's stored in the - # charstack. - if self.charstack: - nextchar = self.charstack.pop(0) - else: - nextchar = self.instream.read(1) - while nextchar == '\x00': - nextchar = self.instream.read(1) - - if not nextchar: - self.eof = True - break - elif not state: - # First character of the token - determines if we're starting - # to parse a word, a number or something else. - token = nextchar - if self.isword(nextchar): - state = 'a' - elif self.isnum(nextchar): - state = '0' - elif self.isspace(nextchar): - token = ' ' - break # emit token - else: - break # emit token - elif state == 'a': - # If we've already started reading a word, we keep reading - # letters until we find something that's not part of a word. - seenletters = True - if self.isword(nextchar): - token += nextchar - elif nextchar == '.': - token += nextchar - state = 'a.' - else: - self.charstack.append(nextchar) - break # emit token - elif state == '0': - # If we've already started reading a number, we keep reading - # numbers until we find something that doesn't fit. - if self.isnum(nextchar): - token += nextchar - elif nextchar == '.' or (nextchar == ',' and len(token) >= 2): - token += nextchar - state = '0.' - else: - self.charstack.append(nextchar) - break # emit token - elif state == 'a.': - # If we've seen some letters and a dot separator, continue - # parsing, and the tokens will be broken up later. - seenletters = True - if nextchar == '.' or self.isword(nextchar): - token += nextchar - elif self.isnum(nextchar) and token[-1] == '.': - token += nextchar - state = '0.' - else: - self.charstack.append(nextchar) - break # emit token - elif state == '0.': - # If we've seen at least one dot separator, keep going, we'll - # break up the tokens later. - if nextchar == '.' or self.isnum(nextchar): - token += nextchar - elif self.isword(nextchar) and token[-1] == '.': - token += nextchar - state = 'a.' 
- else: - self.charstack.append(nextchar) - break # emit token - - if (state in ('a.', '0.') and (seenletters or token.count('.') > 1 or - token[-1] in '.,')): - l = self._split_decimal.split(token) - token = l[0] - for tok in l[1:]: - if tok: - self.tokenstack.append(tok) - - if state == '0.' and token.count('.') == 0: - token = token.replace(',', '.') - - return token - - def __iter__(self): - return self - - def __next__(self): - token = self.get_token() - if token is None: - raise StopIteration - - return token - - def next(self): - return self.__next__() # Python 2.x support - - @classmethod - def split(cls, s): - return list(cls(s)) - - @classmethod - def isword(cls, nextchar): - """ Whether or not the next character is part of a word """ - return nextchar.isalpha() - - @classmethod - def isnum(cls, nextchar): - """ Whether the next character is part of a number """ - return nextchar.isdigit() - - @classmethod - def isspace(cls, nextchar): - """ Whether the next character is whitespace """ - return nextchar.isspace() - - -class _resultbase(object): - - def __init__(self): - for attr in self.__slots__: - setattr(self, attr, None) - - def _repr(self, classname): - l = [] - for attr in self.__slots__: - value = getattr(self, attr) - if value is not None: - l.append("%s=%s" % (attr, repr(value))) - return "%s(%s)" % (classname, ", ".join(l)) - - def __len__(self): - return (sum(getattr(self, attr) is not None - for attr in self.__slots__)) - - def __repr__(self): - return self._repr(self.__class__.__name__) - - -class parserinfo(object): - """ - Class which handles what inputs are accepted. Subclass this to customize - the language and acceptable values for each parameter. - - :param dayfirst: - Whether to interpret the first value in an ambiguous 3-integer date - (e.g. 01/05/09) as the day (``True``) or month (``False``). If - ``yearfirst`` is set to ``True``, this distinguishes between YDM - and YMD. Default is ``False``. - - :param yearfirst: - Whether to interpret the first value in an ambiguous 3-integer date - (e.g. 01/05/09) as the year. If ``True``, the first number is taken - to be the year, otherwise the last number is taken to be the year. - Default is ``False``. 
- """ - - # m from a.m/p.m, t from ISO T separator - JUMP = [" ", ".", ",", ";", "-", "/", "'", - "at", "on", "and", "ad", "m", "t", "of", - "st", "nd", "rd", "th"] - - WEEKDAYS = [("Mon", "Monday"), - ("Tue", "Tuesday"), # TODO: "Tues" - ("Wed", "Wednesday"), - ("Thu", "Thursday"), # TODO: "Thurs" - ("Fri", "Friday"), - ("Sat", "Saturday"), - ("Sun", "Sunday")] - MONTHS = [("Jan", "January"), - ("Feb", "February"), # TODO: "Febr" - ("Mar", "March"), - ("Apr", "April"), - ("May", "May"), - ("Jun", "June"), - ("Jul", "July"), - ("Aug", "August"), - ("Sep", "Sept", "September"), - ("Oct", "October"), - ("Nov", "November"), - ("Dec", "December")] - HMS = [("h", "hour", "hours"), - ("m", "minute", "minutes"), - ("s", "second", "seconds")] - AMPM = [("am", "a"), - ("pm", "p")] - UTCZONE = ["UTC", "GMT", "Z", "z"] - PERTAIN = ["of"] - TZOFFSET = {} - # TODO: ERA = ["AD", "BC", "CE", "BCE", "Stardate", - # "Anno Domini", "Year of Our Lord"] - - def __init__(self, dayfirst=False, yearfirst=False): - self._jump = self._convert(self.JUMP) - self._weekdays = self._convert(self.WEEKDAYS) - self._months = self._convert(self.MONTHS) - self._hms = self._convert(self.HMS) - self._ampm = self._convert(self.AMPM) - self._utczone = self._convert(self.UTCZONE) - self._pertain = self._convert(self.PERTAIN) - - self.dayfirst = dayfirst - self.yearfirst = yearfirst - - self._year = time.localtime().tm_year - self._century = self._year // 100 * 100 - - def _convert(self, lst): - dct = {} - for i, v in enumerate(lst): - if isinstance(v, tuple): - for v in v: - dct[v.lower()] = i - else: - dct[v.lower()] = i - return dct - - def jump(self, name): - return name.lower() in self._jump - - def weekday(self, name): - try: - return self._weekdays[name.lower()] - except KeyError: - pass - return None - - def month(self, name): - try: - return self._months[name.lower()] + 1 - except KeyError: - pass - return None - - def hms(self, name): - try: - return self._hms[name.lower()] - except KeyError: - return None - - def ampm(self, name): - try: - return self._ampm[name.lower()] - except KeyError: - return None - - def pertain(self, name): - return name.lower() in self._pertain - - def utczone(self, name): - return name.lower() in self._utczone - - def tzoffset(self, name): - if name in self._utczone: - return 0 - - return self.TZOFFSET.get(name) - - def convertyear(self, year, century_specified=False): - """ - Converts two-digit years to year within [-50, 49] - range of self._year (current local time) - """ - - # Function contract is that the year is always positive - assert year >= 0 - - if year < 100 and not century_specified: - # assume current century to start - year += self._century - - if year >= self._year + 50: # if too far in future - year -= 100 - elif year < self._year - 50: # if too far in past - year += 100 - - return year - - def validate(self, res): - # move to info - if res.year is not None: - res.year = self.convertyear(res.year, res.century_specified) - - if ((res.tzoffset == 0 and not res.tzname) or - (res.tzname == 'Z' or res.tzname == 'z')): - res.tzname = "UTC" - res.tzoffset = 0 - elif res.tzoffset != 0 and res.tzname and self.utczone(res.tzname): - res.tzoffset = 0 - return True - - -class _ymd(list): - def __init__(self, *args, **kwargs): - super(self.__class__, self).__init__(*args, **kwargs) - self.century_specified = False - self.dstridx = None - self.mstridx = None - self.ystridx = None - - @property - def has_year(self): - return self.ystridx is not None - - @property - def has_month(self): - 
return self.mstridx is not None - - @property - def has_day(self): - return self.dstridx is not None - - def could_be_day(self, value): - if self.has_day: - return False - elif not self.has_month: - return 1 <= value <= 31 - elif not self.has_year: - # Be permissive, assume leap year - month = self[self.mstridx] - return 1 <= value <= monthrange(2000, month)[1] - else: - month = self[self.mstridx] - year = self[self.ystridx] - return 1 <= value <= monthrange(year, month)[1] - - def append(self, val, label=None): - if hasattr(val, '__len__'): - if val.isdigit() and len(val) > 2: - self.century_specified = True - if label not in [None, 'Y']: # pragma: no cover - raise ValueError(label) - label = 'Y' - elif val > 100: - self.century_specified = True - if label not in [None, 'Y']: # pragma: no cover - raise ValueError(label) - label = 'Y' - - super(self.__class__, self).append(int(val)) - - if label == 'M': - if self.has_month: - raise ValueError('Month is already set') - self.mstridx = len(self) - 1 - elif label == 'D': - if self.has_day: - raise ValueError('Day is already set') - self.dstridx = len(self) - 1 - elif label == 'Y': - if self.has_year: - raise ValueError('Year is already set') - self.ystridx = len(self) - 1 - - def _resolve_from_stridxs(self, strids): - """ - Try to resolve the identities of year/month/day elements using - ystridx, mstridx, and dstridx, if enough of these are specified. - """ - if len(self) == 3 and len(strids) == 2: - # we can back out the remaining stridx value - missing = [x for x in range(3) if x not in strids.values()] - key = [x for x in ['y', 'm', 'd'] if x not in strids] - assert len(missing) == len(key) == 1 - key = key[0] - val = missing[0] - strids[key] = val - - assert len(self) == len(strids) # otherwise this should not be called - out = {key: self[strids[key]] for key in strids} - return (out.get('y'), out.get('m'), out.get('d')) - - def resolve_ymd(self, yearfirst, dayfirst): - len_ymd = len(self) - year, month, day = (None, None, None) - - strids = (('y', self.ystridx), - ('m', self.mstridx), - ('d', self.dstridx)) - - strids = {key: val for key, val in strids if val is not None} - if (len(self) == len(strids) > 0 or - (len(self) == 3 and len(strids) == 2)): - return self._resolve_from_stridxs(strids) - - mstridx = self.mstridx - - if len_ymd > 3: - raise ValueError("More than three YMD values") - elif len_ymd == 1 or (mstridx is not None and len_ymd == 2): - # One member, or two members with a month string - if mstridx is not None: - month = self[mstridx] - # since mstridx is 0 or 1, self[mstridx-1] always - # looks up the other element - other = self[mstridx - 1] - else: - other = self[0] - - if len_ymd > 1 or mstridx is None: - if other > 31: - year = other - else: - day = other - - elif len_ymd == 2: - # Two members with numbers - if self[0] > 31: - # 99-01 - year, month = self - elif self[1] > 31: - # 01-99 - month, year = self - elif dayfirst and self[1] <= 12: - # 13-01 - day, month = self - else: - # 01-13 - month, day = self - - elif len_ymd == 3: - # Three members - if mstridx == 0: - if self[1] > 31: - # Apr-2003-25 - month, year, day = self - else: - month, day, year = self - elif mstridx == 1: - if self[0] > 31 or (yearfirst and self[2] <= 31): - # 99-Jan-01 - year, month, day = self - else: - # 01-Jan-01 - # Give precedence to day-first, since - # two-digit years is usually hand-written. - day, month, year = self - - elif mstridx == 2: - # WTF!? 
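- # A month name in the last slot: of the two leading numbers, the
- # second is the year if it exceeds 31, otherwise the first is.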
- if self[1] > 31: - # 01-99-Jan - day, year, month = self - else: - # 99-01-Jan - year, day, month = self - - else: - if (self[0] > 31 or - self.ystridx == 0 or - (yearfirst and self[1] <= 12 and self[2] <= 31)): - # 99-01-01 - if dayfirst and self[2] <= 12: - year, day, month = self - else: - year, month, day = self - elif self[0] > 12 or (dayfirst and self[1] <= 12): - # 13-01-01 - day, month, year = self - else: - # 01-13-01 - month, day, year = self - - return year, month, day - - -class parser(object): - def __init__(self, info=None): - self.info = info or parserinfo() - - def parse(self, timestr, default=None, - ignoretz=False, tzinfos=None, **kwargs): - """ - Parse the date/time string into a :class:`datetime.datetime` object. - - :param timestr: - Any date/time string using the supported formats. - - :param default: - The default datetime object, if this is a datetime object and not - ``None``, elements specified in ``timestr`` replace elements in the - default object. - - :param ignoretz: - If set ``True``, time zones in parsed strings are ignored and a - naive :class:`datetime.datetime` object is returned. - - :param tzinfos: - Additional time zone names / aliases which may be present in the - string. This argument maps time zone names (and optionally offsets - from those time zones) to time zones. This parameter can be a - dictionary with timezone aliases mapping time zone names to time - zones or a function taking two parameters (``tzname`` and - ``tzoffset``) and returning a time zone. - - The timezones to which the names are mapped can be an integer - offset from UTC in seconds or a :class:`tzinfo` object. - - .. doctest:: - :options: +NORMALIZE_WHITESPACE - - >>> from dateutil.parser import parse - >>> from dateutil.tz import gettz - >>> tzinfos = {"BRST": -7200, "CST": gettz("America/Chicago")} - >>> parse("2012-01-19 17:21:00 BRST", tzinfos=tzinfos) - datetime.datetime(2012, 1, 19, 17, 21, tzinfo=tzoffset(u'BRST', -7200)) - >>> parse("2012-01-19 17:21:00 CST", tzinfos=tzinfos) - datetime.datetime(2012, 1, 19, 17, 21, - tzinfo=tzfile('/usr/share/zoneinfo/America/Chicago')) - - This parameter is ignored if ``ignoretz`` is set. - - :param \\*\\*kwargs: - Keyword arguments as passed to ``_parse()``. - - :return: - Returns a :class:`datetime.datetime` object or, if the - ``fuzzy_with_tokens`` option is ``True``, returns a tuple, the - first element being a :class:`datetime.datetime` object, the second - a tuple containing the fuzzy tokens. - - :raises ParserError: - Raised for invalid or unknown string format, if the provided - :class:`tzinfo` is not in a valid format, or if an invalid date - would be created. - - :raises TypeError: - Raised for non-string or character stream input. - - :raises OverflowError: - Raised if the parsed date exceeds the largest valid C integer on - your system. 
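-
-        A minimal round-trip example (illustrative):
-
-        .. doctest::
-
-            >>> from dateutil.parser import parser
-            >>> parser().parse("2003-09-25T10:49:41")
-            datetime.datetime(2003, 9, 25, 10, 49, 41)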
- """ - - if default is None: - default = datetime.datetime.now().replace(hour=0, minute=0, - second=0, microsecond=0) - - res, skipped_tokens = self._parse(timestr, **kwargs) - - if res is None: - raise ParserError("Unknown string format: %s", timestr) - - if len(res) == 0: - raise ParserError("String does not contain a date: %s", timestr) - - try: - ret = self._build_naive(res, default) - except ValueError as e: - six.raise_from(ParserError(str(e) + ": %s", timestr), e) - - if not ignoretz: - ret = self._build_tzaware(ret, res, tzinfos) - - if kwargs.get('fuzzy_with_tokens', False): - return ret, skipped_tokens - else: - return ret - - class _result(_resultbase): - __slots__ = ["year", "month", "day", "weekday", - "hour", "minute", "second", "microsecond", - "tzname", "tzoffset", "ampm","any_unused_tokens"] - - def _parse(self, timestr, dayfirst=None, yearfirst=None, fuzzy=False, - fuzzy_with_tokens=False): - """ - Private method which performs the heavy lifting of parsing, called from - ``parse()``, which passes on its ``kwargs`` to this function. - - :param timestr: - The string to parse. - - :param dayfirst: - Whether to interpret the first value in an ambiguous 3-integer date - (e.g. 01/05/09) as the day (``True``) or month (``False``). If - ``yearfirst`` is set to ``True``, this distinguishes between YDM - and YMD. If set to ``None``, this value is retrieved from the - current :class:`parserinfo` object (which itself defaults to - ``False``). - - :param yearfirst: - Whether to interpret the first value in an ambiguous 3-integer date - (e.g. 01/05/09) as the year. If ``True``, the first number is taken - to be the year, otherwise the last number is taken to be the year. - If this is set to ``None``, the value is retrieved from the current - :class:`parserinfo` object (which itself defaults to ``False``). - - :param fuzzy: - Whether to allow fuzzy parsing, allowing for string like "Today is - January 1, 2047 at 8:21:00AM". - - :param fuzzy_with_tokens: - If ``True``, ``fuzzy`` is automatically set to True, and the parser - will return a tuple where the first element is the parsed - :class:`datetime.datetime` datetimestamp and the second element is - a tuple containing the portions of the string which were ignored: - - .. 
doctest:: - - >>> from dateutil.parser import parse - >>> parse("Today is January 1, 2047 at 8:21:00AM", fuzzy_with_tokens=True) - (datetime.datetime(2047, 1, 1, 8, 21), (u'Today is ', u' ', u'at ')) - - """ - if fuzzy_with_tokens: - fuzzy = True - - info = self.info - - if dayfirst is None: - dayfirst = info.dayfirst - - if yearfirst is None: - yearfirst = info.yearfirst - - res = self._result() - l = _timelex.split(timestr) # Splits the timestr into tokens - - skipped_idxs = [] - - # year/month/day list - ymd = _ymd() - - len_l = len(l) - i = 0 - try: - while i < len_l: - - # Check if it's a number - value_repr = l[i] - try: - value = float(value_repr) - except ValueError: - value = None - - if value is not None: - # Numeric token - i = self._parse_numeric_token(l, i, info, ymd, res, fuzzy) - - # Check weekday - elif info.weekday(l[i]) is not None: - value = info.weekday(l[i]) - res.weekday = value - - # Check month name - elif info.month(l[i]) is not None: - value = info.month(l[i]) - ymd.append(value, 'M') - - if i + 1 < len_l: - if l[i + 1] in ('-', '/'): - # Jan-01[-99] - sep = l[i + 1] - ymd.append(l[i + 2]) - - if i + 3 < len_l and l[i + 3] == sep: - # Jan-01-99 - ymd.append(l[i + 4]) - i += 2 - - i += 2 - - elif (i + 4 < len_l and l[i + 1] == l[i + 3] == ' ' and - info.pertain(l[i + 2])): - # Jan of 01 - # In this case, 01 is clearly year - if l[i + 4].isdigit(): - # Convert it here to become unambiguous - value = int(l[i + 4]) - year = str(info.convertyear(value)) - ymd.append(year, 'Y') - else: - # Wrong guess - pass - # TODO: not hit in tests - i += 4 - - # Check am/pm - elif info.ampm(l[i]) is not None: - value = info.ampm(l[i]) - val_is_ampm = self._ampm_valid(res.hour, res.ampm, fuzzy) - - if val_is_ampm: - res.hour = self._adjust_ampm(res.hour, value) - res.ampm = value - - elif fuzzy: - skipped_idxs.append(i) - - # Check for a timezone name - elif self._could_be_tzname(res.hour, res.tzname, res.tzoffset, l[i]): - res.tzname = l[i] - res.tzoffset = info.tzoffset(res.tzname) - - # Check for something like GMT+3, or BRST+3. Notice - # that it doesn't mean "I am 3 hours after GMT", but - # "my time +3 is GMT". If found, we reverse the - # logic so that timezone parsing code will get it - # right. - if i + 1 < len_l and l[i + 1] in ('+', '-'): - l[i + 1] = ('+', '-')[l[i + 1] == '+'] - res.tzoffset = None - if info.utczone(res.tzname): - # With something like GMT+3, the timezone - # is *not* GMT. - res.tzname = None - - # Check for a numbered timezone - elif res.hour is not None and l[i] in ('+', '-'): - signal = (-1, 1)[l[i] == '+'] - len_li = len(l[i + 1]) - - # TODO: check that l[i + 1] is integer? - if len_li == 4: - # -0300 - hour_offset = int(l[i + 1][:2]) - min_offset = int(l[i + 1][2:]) - elif i + 2 < len_l and l[i + 2] == ':': - # -03:00 - hour_offset = int(l[i + 1]) - min_offset = int(l[i + 3]) # TODO: Check that l[i+3] is minute-like? 
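- # the HH:MM offset form spans two extra tokens (':' and the minutes), hence the extra skip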
- i += 2 - elif len_li <= 2: - # -[0]3 - hour_offset = int(l[i + 1][:2]) - min_offset = 0 - else: - raise ValueError(timestr) - - res.tzoffset = signal * (hour_offset * 3600 + min_offset * 60) - - # Look for a timezone name between parenthesis - if (i + 5 < len_l and - info.jump(l[i + 2]) and l[i + 3] == '(' and - l[i + 5] == ')' and - 3 <= len(l[i + 4]) and - self._could_be_tzname(res.hour, res.tzname, - None, l[i + 4])): - # -0300 (BRST) - res.tzname = l[i + 4] - i += 4 - - i += 1 - - # Check jumps - elif not (info.jump(l[i]) or fuzzy): - raise ValueError(timestr) - - else: - skipped_idxs.append(i) - i += 1 - - # Process year/month/day - year, month, day = ymd.resolve_ymd(yearfirst, dayfirst) - - res.century_specified = ymd.century_specified - res.year = year - res.month = month - res.day = day - - except (IndexError, ValueError): - return None, None - - if not info.validate(res): - return None, None - - if fuzzy_with_tokens: - skipped_tokens = self._recombine_skipped(l, skipped_idxs) - return res, tuple(skipped_tokens) - else: - return res, None - - def _parse_numeric_token(self, tokens, idx, info, ymd, res, fuzzy): - # Token is a number - value_repr = tokens[idx] - try: - value = self._to_decimal(value_repr) - except Exception as e: - six.raise_from(ValueError('Unknown numeric token'), e) - - len_li = len(value_repr) - - len_l = len(tokens) - - if (len(ymd) == 3 and len_li in (2, 4) and - res.hour is None and - (idx + 1 >= len_l or - (tokens[idx + 1] != ':' and - info.hms(tokens[idx + 1]) is None))): - # 19990101T23[59] - s = tokens[idx] - res.hour = int(s[:2]) - - if len_li == 4: - res.minute = int(s[2:]) - - elif len_li == 6 or (len_li > 6 and tokens[idx].find('.') == 6): - # YYMMDD or HHMMSS[.ss] - s = tokens[idx] - - if not ymd and '.' not in tokens[idx]: - ymd.append(s[:2]) - ymd.append(s[2:4]) - ymd.append(s[4:]) - else: - # 19990101T235959[.59] - - # TODO: Check if res attributes already set. - res.hour = int(s[:2]) - res.minute = int(s[2:4]) - res.second, res.microsecond = self._parsems(s[4:]) - - elif len_li in (8, 12, 14): - # YYYYMMDD - s = tokens[idx] - ymd.append(s[:4], 'Y') - ymd.append(s[4:6]) - ymd.append(s[6:8]) - - if len_li > 8: - res.hour = int(s[8:10]) - res.minute = int(s[10:12]) - - if len_li > 12: - res.second = int(s[12:]) - - elif self._find_hms_idx(idx, tokens, info, allow_jump=True) is not None: - # HH[ ]h or MM[ ]m or SS[.ss][ ]s - hms_idx = self._find_hms_idx(idx, tokens, info, allow_jump=True) - (idx, hms) = self._parse_hms(idx, tokens, info, hms_idx) - if hms is not None: - # TODO: checking that hour/minute/second are not - # already set? - self._assign_hms(res, value_repr, hms) - - elif idx + 2 < len_l and tokens[idx + 1] == ':': - # HH:MM[:SS[.ss]] - res.hour = int(value) - value = self._to_decimal(tokens[idx + 2]) # TODO: try/except for this? 
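- # the token after ':' may carry a fractional part (e.g. "30.5"); split it into minutes and seconds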
- (res.minute, res.second) = self._parse_min_sec(value) - - if idx + 4 < len_l and tokens[idx + 3] == ':': - res.second, res.microsecond = self._parsems(tokens[idx + 4]) - - idx += 2 - - idx += 2 - - elif idx + 1 < len_l and tokens[idx + 1] in ('-', '/', '.'): - sep = tokens[idx + 1] - ymd.append(value_repr) - - if idx + 2 < len_l and not info.jump(tokens[idx + 2]): - if tokens[idx + 2].isdigit(): - # 01-01[-01] - ymd.append(tokens[idx + 2]) - else: - # 01-Jan[-01] - value = info.month(tokens[idx + 2]) - - if value is not None: - ymd.append(value, 'M') - else: - raise ValueError() - - if idx + 3 < len_l and tokens[idx + 3] == sep: - # We have three members - value = info.month(tokens[idx + 4]) - - if value is not None: - ymd.append(value, 'M') - else: - ymd.append(tokens[idx + 4]) - idx += 2 - - idx += 1 - idx += 1 - - elif idx + 1 >= len_l or info.jump(tokens[idx + 1]): - if idx + 2 < len_l and info.ampm(tokens[idx + 2]) is not None: - # 12 am - hour = int(value) - res.hour = self._adjust_ampm(hour, info.ampm(tokens[idx + 2])) - idx += 1 - else: - # Year, month or day - ymd.append(value) - idx += 1 - - elif info.ampm(tokens[idx + 1]) is not None and (0 <= value < 24): - # 12am - hour = int(value) - res.hour = self._adjust_ampm(hour, info.ampm(tokens[idx + 1])) - idx += 1 - - elif ymd.could_be_day(value): - ymd.append(value) - - elif not fuzzy: - raise ValueError() - - return idx - - def _find_hms_idx(self, idx, tokens, info, allow_jump): - len_l = len(tokens) - - if idx+1 < len_l and info.hms(tokens[idx+1]) is not None: - # There is an "h", "m", or "s" label following this token. We take - # assign the upcoming label to the current token. - # e.g. the "12" in 12h" - hms_idx = idx + 1 - - elif (allow_jump and idx+2 < len_l and tokens[idx+1] == ' ' and - info.hms(tokens[idx+2]) is not None): - # There is a space and then an "h", "m", or "s" label. - # e.g. the "12" in "12 h" - hms_idx = idx + 2 - - elif idx > 0 and info.hms(tokens[idx-1]) is not None: - # There is a "h", "m", or "s" preceding this token. Since neither - # of the previous cases was hit, there is no label following this - # token, so we use the previous label. - # e.g. the "04" in "12h04" - hms_idx = idx-1 - - elif (1 < idx == len_l-1 and tokens[idx-1] == ' ' and - info.hms(tokens[idx-2]) is not None): - # If we are looking at the final token, we allow for a - # backward-looking check to skip over a space. - # TODO: Are we sure this is the right condition here? - hms_idx = idx - 2 - - else: - hms_idx = None - - return hms_idx - - def _assign_hms(self, res, value_repr, hms): - # See GH issue #427, fixing float rounding - value = self._to_decimal(value_repr) - - if hms == 0: - # Hour - res.hour = int(value) - if value % 1: - res.minute = int(60*(value % 1)) - - elif hms == 1: - (res.minute, res.second) = self._parse_min_sec(value) - - elif hms == 2: - (res.second, res.microsecond) = self._parsems(value_repr) - - def _could_be_tzname(self, hour, tzname, tzoffset, token): - return (hour is not None and - tzname is None and - tzoffset is None and - len(token) <= 5 and - (all(x in string.ascii_uppercase for x in token) - or token in self.info.UTCZONE)) - - def _ampm_valid(self, hour, ampm, fuzzy): - """ - For fuzzy parsing, 'a' or 'am' (both valid English words) - may erroneously trigger the AM/PM flag. Deal with that - here. - """ - val_is_ampm = True - - # If there's already an AM/PM flag, this one isn't one. 
- if fuzzy and ampm is not None: - val_is_ampm = False - - # If AM/PM is found and hour is not, raise a ValueError - if hour is None: - if fuzzy: - val_is_ampm = False - else: - raise ValueError('No hour specified with AM or PM flag.') - elif not 0 <= hour <= 12: - # If AM/PM is found, it's a 12 hour clock, so raise - # an error for invalid range - if fuzzy: - val_is_ampm = False - else: - raise ValueError('Invalid hour specified for 12-hour clock.') - - return val_is_ampm - - def _adjust_ampm(self, hour, ampm): - if hour < 12 and ampm == 1: - hour += 12 - elif hour == 12 and ampm == 0: - hour = 0 - return hour - - def _parse_min_sec(self, value): - # TODO: Every usage of this function sets res.second to the return - # value. Are there any cases where second will be returned as None and - # we *don't* want to set res.second = None? - minute = int(value) - second = None - - sec_remainder = value % 1 - if sec_remainder: - second = int(60 * sec_remainder) - return (minute, second) - - def _parse_hms(self, idx, tokens, info, hms_idx): - # TODO: Is this going to admit a lot of false-positives for when we - # just happen to have digits and "h", "m" or "s" characters in non-date - # text? I guess hex hashes won't have that problem, but there's plenty - # of random junk out there. - if hms_idx is None: - hms = None - new_idx = idx - elif hms_idx > idx: - hms = info.hms(tokens[hms_idx]) - new_idx = hms_idx - else: - # Looking backwards, increment one. - hms = info.hms(tokens[hms_idx]) + 1 - new_idx = idx - - return (new_idx, hms) - - # ------------------------------------------------------------------ - # Handling for individual tokens. These are kept as methods instead - # of functions for the sake of customizability via subclassing. - - def _parsems(self, value): - """Parse a I[.F] seconds value into (seconds, microseconds).""" - if "." not in value: - return int(value), 0 - else: - i, f = value.split(".") - return int(i), int(f.ljust(6, "0")[:6]) - - def _to_decimal(self, val): - try: - decimal_value = Decimal(val) - # See GH 662, edge case, infinite value should not be converted - # via `_to_decimal` - if not decimal_value.is_finite(): - raise ValueError("Converted decimal value is infinite or NaN") - except Exception as e: - msg = "Could not convert %s to decimal" % val - six.raise_from(ValueError(msg), e) - else: - return decimal_value - - # ------------------------------------------------------------------ - # Post-Parsing construction of datetime output. These are kept as - # methods instead of functions for the sake of customizability via - # subclassing. 
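- #
- # Illustrative sketch (not part of the original module): a subclass could
- # override one of these hooks, e.g. to fall back to UTC whenever no
- # timezone information was parsed:
- #
- #     class UTCFallbackParser(parser):
- #         def _build_tzaware(self, naive, res, tzinfos):
- #             aware = super(UTCFallbackParser, self)._build_tzaware(naive, res, tzinfos)
- #             return aware if aware.tzinfo is not None else aware.replace(tzinfo=tz.UTC)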
- - def _build_tzinfo(self, tzinfos, tzname, tzoffset): - if callable(tzinfos): - tzdata = tzinfos(tzname, tzoffset) - else: - tzdata = tzinfos.get(tzname) - # handle case where tzinfo is paased an options that returns None - # eg tzinfos = {'BRST' : None} - if isinstance(tzdata, datetime.tzinfo) or tzdata is None: - tzinfo = tzdata - elif isinstance(tzdata, text_type): - tzinfo = tz.tzstr(tzdata) - elif isinstance(tzdata, integer_types): - tzinfo = tz.tzoffset(tzname, tzdata) - else: - raise TypeError("Offset must be tzinfo subclass, tz string, " - "or int offset.") - return tzinfo - - def _build_tzaware(self, naive, res, tzinfos): - if (callable(tzinfos) or (tzinfos and res.tzname in tzinfos)): - tzinfo = self._build_tzinfo(tzinfos, res.tzname, res.tzoffset) - aware = naive.replace(tzinfo=tzinfo) - aware = self._assign_tzname(aware, res.tzname) - - elif res.tzname and res.tzname in time.tzname: - aware = naive.replace(tzinfo=tz.tzlocal()) - - # Handle ambiguous local datetime - aware = self._assign_tzname(aware, res.tzname) - - # This is mostly relevant for winter GMT zones parsed in the UK - if (aware.tzname() != res.tzname and - res.tzname in self.info.UTCZONE): - aware = aware.replace(tzinfo=tz.UTC) - - elif res.tzoffset == 0: - aware = naive.replace(tzinfo=tz.UTC) - - elif res.tzoffset: - aware = naive.replace(tzinfo=tz.tzoffset(res.tzname, res.tzoffset)) - - elif not res.tzname and not res.tzoffset: - # i.e. no timezone information was found. - aware = naive - - elif res.tzname: - # tz-like string was parsed but we don't know what to do - # with it - warnings.warn("tzname {tzname} identified but not understood. " - "Pass `tzinfos` argument in order to correctly " - "return a timezone-aware datetime. In a future " - "version, this will raise an " - "exception.".format(tzname=res.tzname), - category=UnknownTimezoneWarning) - aware = naive - - return aware - - def _build_naive(self, res, default): - repl = {} - for attr in ("year", "month", "day", "hour", - "minute", "second", "microsecond"): - value = getattr(res, attr) - if value is not None: - repl[attr] = value - - if 'day' not in repl: - # If the default day exceeds the last day of the month, fall back - # to the end of the month. - cyear = default.year if res.year is None else res.year - cmonth = default.month if res.month is None else res.month - cday = default.day if res.day is None else res.day - - if cday > monthrange(cyear, cmonth)[1]: - repl['day'] = monthrange(cyear, cmonth)[1] - - naive = default.replace(**repl) - - if res.weekday is not None and not res.day: - naive = naive + relativedelta.relativedelta(weekday=res.weekday) - - return naive - - def _assign_tzname(self, dt, tzname): - if dt.tzname() != tzname: - new_dt = tz.enfold(dt, fold=1) - if new_dt.tzname() == tzname: - return new_dt - - return dt - - def _recombine_skipped(self, tokens, skipped_idxs): - """ - >>> tokens = ["foo", " ", "bar", " ", "19June2000", "baz"] - >>> skipped_idxs = [0, 1, 2, 5] - >>> _recombine_skipped(tokens, skipped_idxs) - ["foo bar", "baz"] - """ - skipped_tokens = [] - for i, idx in enumerate(sorted(skipped_idxs)): - if i > 0 and idx - 1 == skipped_idxs[i - 1]: - skipped_tokens[-1] = skipped_tokens[-1] + tokens[idx] - else: - skipped_tokens.append(tokens[idx]) - - return skipped_tokens - - -DEFAULTPARSER = parser() - - -def parse(timestr, parserinfo=None, **kwargs): - """ - - Parse a string in one of the supported formats, using the - ``parserinfo`` parameters. - - :param timestr: - A string containing a date/time stamp. 
- - :param parserinfo: - A :class:`parserinfo` object containing parameters for the parser. - If ``None``, the default arguments to the :class:`parserinfo` - constructor are used. - - The ``**kwargs`` parameter takes the following keyword arguments: - - :param default: - The default datetime object, if this is a datetime object and not - ``None``, elements specified in ``timestr`` replace elements in the - default object. - - :param ignoretz: - If set ``True``, time zones in parsed strings are ignored and a naive - :class:`datetime` object is returned. - - :param tzinfos: - Additional time zone names / aliases which may be present in the - string. This argument maps time zone names (and optionally offsets - from those time zones) to time zones. This parameter can be a - dictionary with timezone aliases mapping time zone names to time - zones or a function taking two parameters (``tzname`` and - ``tzoffset``) and returning a time zone. - - The timezones to which the names are mapped can be an integer - offset from UTC in seconds or a :class:`tzinfo` object. - - .. doctest:: - :options: +NORMALIZE_WHITESPACE - - >>> from dateutil.parser import parse - >>> from dateutil.tz import gettz - >>> tzinfos = {"BRST": -7200, "CST": gettz("America/Chicago")} - >>> parse("2012-01-19 17:21:00 BRST", tzinfos=tzinfos) - datetime.datetime(2012, 1, 19, 17, 21, tzinfo=tzoffset(u'BRST', -7200)) - >>> parse("2012-01-19 17:21:00 CST", tzinfos=tzinfos) - datetime.datetime(2012, 1, 19, 17, 21, - tzinfo=tzfile('/usr/share/zoneinfo/America/Chicago')) - - This parameter is ignored if ``ignoretz`` is set. - - :param dayfirst: - Whether to interpret the first value in an ambiguous 3-integer date - (e.g. 01/05/09) as the day (``True``) or month (``False``). If - ``yearfirst`` is set to ``True``, this distinguishes between YDM and - YMD. If set to ``None``, this value is retrieved from the current - :class:`parserinfo` object (which itself defaults to ``False``). - - :param yearfirst: - Whether to interpret the first value in an ambiguous 3-integer date - (e.g. 01/05/09) as the year. If ``True``, the first number is taken to - be the year, otherwise the last number is taken to be the year. If - this is set to ``None``, the value is retrieved from the current - :class:`parserinfo` object (which itself defaults to ``False``). - - :param fuzzy: - Whether to allow fuzzy parsing, allowing for string like "Today is - January 1, 2047 at 8:21:00AM". - - :param fuzzy_with_tokens: - If ``True``, ``fuzzy`` is automatically set to True, and the parser - will return a tuple where the first element is the parsed - :class:`datetime.datetime` datetimestamp and the second element is - a tuple containing the portions of the string which were ignored: - - .. doctest:: - - >>> from dateutil.parser import parse - >>> parse("Today is January 1, 2047 at 8:21:00AM", fuzzy_with_tokens=True) - (datetime.datetime(2047, 1, 1, 8, 21), (u'Today is ', u' ', u'at ')) - - :return: - Returns a :class:`datetime.datetime` object or, if the - ``fuzzy_with_tokens`` option is ``True``, returns a tuple, the - first element being a :class:`datetime.datetime` object, the second - a tuple containing the fuzzy tokens. - - :raises ParserError: - Raised for invalid or unknown string formats, if the provided - :class:`tzinfo` is not in a valid format, or if an invalid date would - be created. - - :raises OverflowError: - Raised if the parsed date exceeds the largest valid C integer on - your system. 
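-
-    A minimal example (illustrative):
-
-    .. doctest::
-
-        >>> from dateutil.parser import parse
-        >>> parse("Thu Sep 25 10:36:28 2003")
-        datetime.datetime(2003, 9, 25, 10, 36, 28)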
- """ - if parserinfo: - return parser(parserinfo).parse(timestr, **kwargs) - else: - return DEFAULTPARSER.parse(timestr, **kwargs) - - -class _tzparser(object): - - class _result(_resultbase): - - __slots__ = ["stdabbr", "stdoffset", "dstabbr", "dstoffset", - "start", "end"] - - class _attr(_resultbase): - __slots__ = ["month", "week", "weekday", - "yday", "jyday", "day", "time"] - - def __repr__(self): - return self._repr("") - - def __init__(self): - _resultbase.__init__(self) - self.start = self._attr() - self.end = self._attr() - - def parse(self, tzstr): - res = self._result() - l = [x for x in re.split(r'([,:.]|[a-zA-Z]+|[0-9]+)',tzstr) if x] - used_idxs = list() - try: - - len_l = len(l) - - i = 0 - while i < len_l: - # BRST+3[BRDT[+2]] - j = i - while j < len_l and not [x for x in l[j] - if x in "0123456789:,-+"]: - j += 1 - if j != i: - if not res.stdabbr: - offattr = "stdoffset" - res.stdabbr = "".join(l[i:j]) - else: - offattr = "dstoffset" - res.dstabbr = "".join(l[i:j]) - - for ii in range(j): - used_idxs.append(ii) - i = j - if (i < len_l and (l[i] in ('+', '-') or l[i][0] in - "0123456789")): - if l[i] in ('+', '-'): - # Yes, that's right. See the TZ variable - # documentation. - signal = (1, -1)[l[i] == '+'] - used_idxs.append(i) - i += 1 - else: - signal = -1 - len_li = len(l[i]) - if len_li == 4: - # -0300 - setattr(res, offattr, (int(l[i][:2]) * 3600 + - int(l[i][2:]) * 60) * signal) - elif i + 1 < len_l and l[i + 1] == ':': - # -03:00 - setattr(res, offattr, - (int(l[i]) * 3600 + - int(l[i + 2]) * 60) * signal) - used_idxs.append(i) - i += 2 - elif len_li <= 2: - # -[0]3 - setattr(res, offattr, - int(l[i][:2]) * 3600 * signal) - else: - return None - used_idxs.append(i) - i += 1 - if res.dstabbr: - break - else: - break - - - if i < len_l: - for j in range(i, len_l): - if l[j] == ';': - l[j] = ',' - - assert l[i] == ',' - - i += 1 - - if i >= len_l: - pass - elif (8 <= l.count(',') <= 9 and - not [y for x in l[i:] if x != ',' - for y in x if y not in "0123456789+-"]): - # GMT0BST,3,0,30,3600,10,0,26,7200[,3600] - for x in (res.start, res.end): - x.month = int(l[i]) - used_idxs.append(i) - i += 2 - if l[i] == '-': - value = int(l[i + 1]) * -1 - used_idxs.append(i) - i += 1 - else: - value = int(l[i]) - used_idxs.append(i) - i += 2 - if value: - x.week = value - x.weekday = (int(l[i]) - 1) % 7 - else: - x.day = int(l[i]) - used_idxs.append(i) - i += 2 - x.time = int(l[i]) - used_idxs.append(i) - i += 2 - if i < len_l: - if l[i] in ('-', '+'): - signal = (-1, 1)[l[i] == "+"] - used_idxs.append(i) - i += 1 - else: - signal = 1 - used_idxs.append(i) - res.dstoffset = (res.stdoffset + int(l[i]) * signal) - - # This was a made-up format that is not in normal use - warn(('Parsed time zone "%s"' % tzstr) + - 'is in a non-standard dateutil-specific format, which ' + - 'is now deprecated; support for parsing this format ' + - 'will be removed in future versions. 
It is recommended ' + - 'that you switch to a standard format like the GNU ' + - 'TZ variable format.', tz.DeprecatedTzFormatWarning) - elif (l.count(',') == 2 and l[i:].count('/') <= 2 and - not [y for x in l[i:] if x not in (',', '/', 'J', 'M', - '.', '-', ':') - for y in x if y not in "0123456789"]): - for x in (res.start, res.end): - if l[i] == 'J': - # non-leap year day (1 based) - used_idxs.append(i) - i += 1 - x.jyday = int(l[i]) - elif l[i] == 'M': - # month[-.]week[-.]weekday - used_idxs.append(i) - i += 1 - x.month = int(l[i]) - used_idxs.append(i) - i += 1 - assert l[i] in ('-', '.') - used_idxs.append(i) - i += 1 - x.week = int(l[i]) - if x.week == 5: - x.week = -1 - used_idxs.append(i) - i += 1 - assert l[i] in ('-', '.') - used_idxs.append(i) - i += 1 - x.weekday = (int(l[i]) - 1) % 7 - else: - # year day (zero based) - x.yday = int(l[i]) + 1 - - used_idxs.append(i) - i += 1 - - if i < len_l and l[i] == '/': - used_idxs.append(i) - i += 1 - # start time - len_li = len(l[i]) - if len_li == 4: - # -0300 - x.time = (int(l[i][:2]) * 3600 + - int(l[i][2:]) * 60) - elif i + 1 < len_l and l[i + 1] == ':': - # -03:00 - x.time = int(l[i]) * 3600 + int(l[i + 2]) * 60 - used_idxs.append(i) - i += 2 - if i + 1 < len_l and l[i + 1] == ':': - used_idxs.append(i) - i += 2 - x.time += int(l[i]) - elif len_li <= 2: - # -[0]3 - x.time = (int(l[i][:2]) * 3600) - else: - return None - used_idxs.append(i) - i += 1 - - assert i == len_l or l[i] == ',' - - i += 1 - - assert i >= len_l - - except (IndexError, ValueError, AssertionError): - return None - - unused_idxs = set(range(len_l)).difference(used_idxs) - res.any_unused_tokens = not {l[n] for n in unused_idxs}.issubset({",",":"}) - return res - - -DEFAULTTZPARSER = _tzparser() - - -def _parsetz(tzstr): - return DEFAULTTZPARSER.parse(tzstr) - - -class ParserError(ValueError): - """Exception subclass used for any failure to parse a datetime string. - - This is a subclass of :py:exc:`ValueError`, and should be raised any time - earlier versions of ``dateutil`` would have raised ``ValueError``. - - .. versionadded:: 2.8.1 - """ - def __str__(self): - try: - return self.args[0] % self.args[1:] - except (TypeError, IndexError): - return super(ParserError, self).__str__() - - def __repr__(self): - args = ", ".join("'%s'" % arg for arg in self.args) - return "%s(%s)" % (self.__class__.__name__, args) - - -class UnknownTimezoneWarning(RuntimeWarning): - """Raised when the parser finds a timezone it cannot parse into a tzinfo. - - .. versionadded:: 2.7.0 - """ -# vim:ts=4:sw=4:et diff --git a/spaces/Superlang/ImageProcessor/annotator/oneformer/detectron2/export/shared.py b/spaces/Superlang/ImageProcessor/annotator/oneformer/detectron2/export/shared.py deleted file mode 100644 index 53ba9335e26819f9381115eba17bbbe3816b469c..0000000000000000000000000000000000000000 --- a/spaces/Superlang/ImageProcessor/annotator/oneformer/detectron2/export/shared.py +++ /dev/null @@ -1,1039 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. 
- -import collections -import copy -import functools -import logging -import numpy as np -import os -from typing import Any, Callable, Dict, List, Optional, Tuple, Union -from unittest import mock -import caffe2.python.utils as putils -import torch -import torch.nn.functional as F -from caffe2.proto import caffe2_pb2 -from caffe2.python import core, net_drawer, workspace -from torch.nn.functional import interpolate as interp - -logger = logging.getLogger(__name__) - - -# ==== torch/utils_toffee/cast.py ======================================= - - -def to_device(t, device_str): - """ - This function is a replacement of .to(another_device) such that it allows the - casting to be traced properly by explicitly calling the underlying copy ops. - It also avoids introducing unncessary op when casting to the same device. - """ - src = t.device - dst = torch.device(device_str) - - if src == dst: - return t - elif src.type == "cuda" and dst.type == "cpu": - return torch.ops._caffe2.CopyGPUToCPU(t) - elif src.type == "cpu" and dst.type == "cuda": - return torch.ops._caffe2.CopyCPUToGPU(t) - else: - raise RuntimeError("Can't cast tensor from device {} to device {}".format(src, dst)) - - -# ==== torch/utils_toffee/interpolate.py ======================================= - - -# Note: borrowed from vision/detection/fair/detectron/detectron/modeling/detector.py -def BilinearInterpolation(tensor_in, up_scale): - assert up_scale % 2 == 0, "Scale should be even" - - def upsample_filt(size): - factor = (size + 1) // 2 - if size % 2 == 1: - center = factor - 1 - else: - center = factor - 0.5 - - og = np.ogrid[:size, :size] - return (1 - abs(og[0] - center) / factor) * (1 - abs(og[1] - center) / factor) - - kernel_size = int(up_scale) * 2 - bil_filt = upsample_filt(kernel_size) - - dim = int(tensor_in.shape[1]) - kernel = np.zeros((dim, dim, kernel_size, kernel_size), dtype=np.float32) - kernel[range(dim), range(dim), :, :] = bil_filt - - tensor_out = F.conv_transpose2d( - tensor_in, - weight=to_device(torch.Tensor(kernel), tensor_in.device), - bias=None, - stride=int(up_scale), - padding=int(up_scale / 2), - ) - - return tensor_out - - -# NOTE: ONNX is incompatible with traced torch.nn.functional.interpolate if -# using dynamic `scale_factor` rather than static `size`. (T43166860) -# NOTE: Caffe2 Int8 conversion might not be able to quantize `size` properly. -def onnx_compatibale_interpolate( - input, size=None, scale_factor=None, mode="nearest", align_corners=None -): - # NOTE: The input dimensions are interpreted in the form: - # `mini-batch x channels x [optional depth] x [optional height] x width`. - if size is None and scale_factor is not None: - if input.dim() == 4: - if isinstance(scale_factor, (int, float)): - height_scale, width_scale = (scale_factor, scale_factor) - else: - assert isinstance(scale_factor, (tuple, list)) - assert len(scale_factor) == 2 - height_scale, width_scale = scale_factor - - assert not align_corners, "No matching C2 op for align_corners == True" - if mode == "nearest": - return torch.ops._caffe2.ResizeNearest( - input, order="NCHW", width_scale=width_scale, height_scale=height_scale - ) - elif mode == "bilinear": - logger.warning( - "Use F.conv_transpose2d for bilinear interpolate" - " because there's no such C2 op, this may cause significant" - " slowdown and the boundary pixels won't be as same as" - " using F.interpolate due to padding." 
- ) - assert height_scale == width_scale - return BilinearInterpolation(input, up_scale=height_scale) - logger.warning("Output size is not static, it might cause ONNX conversion issue") - - return interp(input, size, scale_factor, mode, align_corners) - - -def mock_torch_nn_functional_interpolate(): - def decorator(func): - @functools.wraps(func) - def _mock_torch_nn_functional_interpolate(*args, **kwargs): - if torch.onnx.is_in_onnx_export(): - with mock.patch( - "torch.nn.functional.interpolate", side_effect=onnx_compatibale_interpolate - ): - return func(*args, **kwargs) - else: - return func(*args, **kwargs) - - return _mock_torch_nn_functional_interpolate - - return decorator - - -# ==== torch/utils_caffe2/ws_utils.py ========================================== - - -class ScopedWS(object): - def __init__(self, ws_name, is_reset, is_cleanup=False): - self.ws_name = ws_name - self.is_reset = is_reset - self.is_cleanup = is_cleanup - self.org_ws = "" - - def __enter__(self): - self.org_ws = workspace.CurrentWorkspace() - if self.ws_name is not None: - workspace.SwitchWorkspace(self.ws_name, True) - if self.is_reset: - workspace.ResetWorkspace() - - return workspace - - def __exit__(self, *args): - if self.is_cleanup: - workspace.ResetWorkspace() - if self.ws_name is not None: - workspace.SwitchWorkspace(self.org_ws) - - -def fetch_any_blob(name): - bb = None - try: - bb = workspace.FetchBlob(name) - except TypeError: - bb = workspace.FetchInt8Blob(name) - except Exception as e: - logger.error("Get blob {} error: {}".format(name, e)) - - return bb - - -# ==== torch/utils_caffe2/protobuf.py ========================================== - - -def get_pb_arg(pb, arg_name): - for x in pb.arg: - if x.name == arg_name: - return x - return None - - -def get_pb_arg_valf(pb, arg_name, default_val): - arg = get_pb_arg(pb, arg_name) - return arg.f if arg is not None else default_val - - -def get_pb_arg_floats(pb, arg_name, default_val): - arg = get_pb_arg(pb, arg_name) - return list(map(float, arg.floats)) if arg is not None else default_val - - -def get_pb_arg_ints(pb, arg_name, default_val): - arg = get_pb_arg(pb, arg_name) - return list(map(int, arg.ints)) if arg is not None else default_val - - -def get_pb_arg_vali(pb, arg_name, default_val): - arg = get_pb_arg(pb, arg_name) - return arg.i if arg is not None else default_val - - -def get_pb_arg_vals(pb, arg_name, default_val): - arg = get_pb_arg(pb, arg_name) - return arg.s if arg is not None else default_val - - -def get_pb_arg_valstrings(pb, arg_name, default_val): - arg = get_pb_arg(pb, arg_name) - return list(arg.strings) if arg is not None else default_val - - -def check_set_pb_arg(pb, arg_name, arg_attr, arg_value, allow_override=False): - arg = get_pb_arg(pb, arg_name) - if arg is None: - arg = putils.MakeArgument(arg_name, arg_value) - assert hasattr(arg, arg_attr) - pb.arg.extend([arg]) - if allow_override and getattr(arg, arg_attr) != arg_value: - logger.warning( - "Override argument {}: {} -> {}".format(arg_name, getattr(arg, arg_attr), arg_value) - ) - setattr(arg, arg_attr, arg_value) - else: - assert arg is not None - assert getattr(arg, arg_attr) == arg_value, "Existing value {}, new value {}".format( - getattr(arg, arg_attr), arg_value - ) - - -def _create_const_fill_op_from_numpy(name, tensor, device_option=None): - assert type(tensor) == np.ndarray - kTypeNameMapper = { - np.dtype("float32"): "GivenTensorFill", - np.dtype("int32"): "GivenTensorIntFill", - np.dtype("int64"): "GivenTensorInt64Fill", - np.dtype("uint8"): 
"GivenTensorStringFill", - } - - args_dict = {} - if tensor.dtype == np.dtype("uint8"): - args_dict.update({"values": [str(tensor.data)], "shape": [1]}) - else: - args_dict.update({"values": tensor, "shape": tensor.shape}) - - if device_option is not None: - args_dict["device_option"] = device_option - - return core.CreateOperator(kTypeNameMapper[tensor.dtype], [], [name], **args_dict) - - -def _create_const_fill_op_from_c2_int8_tensor(name, int8_tensor): - assert type(int8_tensor) == workspace.Int8Tensor - kTypeNameMapper = { - np.dtype("int32"): "Int8GivenIntTensorFill", - np.dtype("uint8"): "Int8GivenTensorFill", - } - - tensor = int8_tensor.data - assert tensor.dtype in [np.dtype("uint8"), np.dtype("int32")] - values = tensor.tobytes() if tensor.dtype == np.dtype("uint8") else tensor - - return core.CreateOperator( - kTypeNameMapper[tensor.dtype], - [], - [name], - values=values, - shape=tensor.shape, - Y_scale=int8_tensor.scale, - Y_zero_point=int8_tensor.zero_point, - ) - - -def create_const_fill_op( - name: str, - blob: Union[np.ndarray, workspace.Int8Tensor], - device_option: Optional[caffe2_pb2.DeviceOption] = None, -) -> caffe2_pb2.OperatorDef: - """ - Given a blob object, return the Caffe2 operator that creates this blob - as constant. Currently support NumPy tensor and Caffe2 Int8Tensor. - """ - - tensor_type = type(blob) - assert tensor_type in [ - np.ndarray, - workspace.Int8Tensor, - ], 'Error when creating const fill op for "{}", unsupported blob type: {}'.format( - name, type(blob) - ) - - if tensor_type == np.ndarray: - return _create_const_fill_op_from_numpy(name, blob, device_option) - elif tensor_type == workspace.Int8Tensor: - assert device_option is None - return _create_const_fill_op_from_c2_int8_tensor(name, blob) - - -def construct_init_net_from_params( - params: Dict[str, Any], device_options: Optional[Dict[str, caffe2_pb2.DeviceOption]] = None -) -> caffe2_pb2.NetDef: - """ - Construct the init_net from params dictionary - """ - init_net = caffe2_pb2.NetDef() - device_options = device_options or {} - for name, blob in params.items(): - if isinstance(blob, str): - logger.warning( - ( - "Blob {} with type {} is not supported in generating init net," - " skipped.".format(name, type(blob)) - ) - ) - continue - init_net.op.extend( - [create_const_fill_op(name, blob, device_option=device_options.get(name, None))] - ) - init_net.external_output.append(name) - return init_net - - -def get_producer_map(ssa): - """ - Return dict from versioned blob to (i, j), - where i is index of producer op, j is the index of output of that op. - """ - producer_map = {} - for i in range(len(ssa)): - outputs = ssa[i][1] - for j, outp in enumerate(outputs): - producer_map[outp] = (i, j) - return producer_map - - -def get_consumer_map(ssa): - """ - Return dict from versioned blob to list of (i, j), - where i is index of consumer op, j is the index of input of that op. - """ - consumer_map = collections.defaultdict(list) - for i in range(len(ssa)): - inputs = ssa[i][0] - for j, inp in enumerate(inputs): - consumer_map[inp].append((i, j)) - return consumer_map - - -def get_params_from_init_net( - init_net: caffe2_pb2.NetDef, -) -> [Dict[str, Any], Dict[str, caffe2_pb2.DeviceOption]]: - """ - Take the output blobs from init_net by running it. 
- Outputs: - params: dict from blob name to numpy array - device_options: dict from blob name to the device option of its creating op - """ - # NOTE: this assumes that the params is determined by producer op with the - # only exception be CopyGPUToCPU which is CUDA op but returns CPU tensor. - def _get_device_option(producer_op): - if producer_op.type == "CopyGPUToCPU": - return caffe2_pb2.DeviceOption() - else: - return producer_op.device_option - - with ScopedWS("__get_params_from_init_net__", is_reset=True, is_cleanup=True) as ws: - ws.RunNetOnce(init_net) - params = {b: fetch_any_blob(b) for b in init_net.external_output} - ssa, versions = core.get_ssa(init_net) - producer_map = get_producer_map(ssa) - device_options = { - b: _get_device_option(init_net.op[producer_map[(b, versions[b])][0]]) - for b in init_net.external_output - } - return params, device_options - - -def _updater_raise(op, input_types, output_types): - raise RuntimeError( - "Failed to apply updater for op {} given input_types {} and" - " output_types {}".format(op, input_types, output_types) - ) - - -def _generic_status_identifier( - predict_net: caffe2_pb2.NetDef, - status_updater: Callable, - known_status: Dict[Tuple[str, int], Any], -) -> Dict[Tuple[str, int], Any]: - """ - Statically infer the status of each blob, the status can be such as device type - (CPU/GPU), layout (NCHW/NHWC), data type (float32/int8), etc. "Blob" here - is versioned blob (Tuple[str, int]) in the format compatible with ssa. - Inputs: - predict_net: the caffe2 network - status_updater: a callable, given an op and the status of its input/output, - it returns the updated status of input/output. `None` is used for - representing unknown status. - known_status: a dict containing known status, used as initialization. - Outputs: - A dict mapping from versioned blob to its status - """ - ssa, versions = core.get_ssa(predict_net) - versioned_ext_input = [(b, 0) for b in predict_net.external_input] - versioned_ext_output = [(b, versions[b]) for b in predict_net.external_output] - all_versioned_blobs = set().union(*[set(x[0] + x[1]) for x in ssa]) - - allowed_vbs = all_versioned_blobs.union(versioned_ext_input).union(versioned_ext_output) - assert all(k in allowed_vbs for k in known_status) - assert all(v is not None for v in known_status.values()) - _known_status = copy.deepcopy(known_status) - - def _check_and_update(key, value): - assert value is not None - if key in _known_status: - if not _known_status[key] == value: - raise RuntimeError( - "Confilict status for {}, existing status {}, new status {}".format( - key, _known_status[key], value - ) - ) - _known_status[key] = value - - def _update_i(op, ssa_i): - versioned_inputs = ssa_i[0] - versioned_outputs = ssa_i[1] - - inputs_status = [_known_status.get(b, None) for b in versioned_inputs] - outputs_status = [_known_status.get(b, None) for b in versioned_outputs] - - new_inputs_status, new_outputs_status = status_updater(op, inputs_status, outputs_status) - - for versioned_blob, status in zip( - versioned_inputs + versioned_outputs, new_inputs_status + new_outputs_status - ): - if status is not None: - _check_and_update(versioned_blob, status) - - for op, ssa_i in zip(predict_net.op, ssa): - _update_i(op, ssa_i) - for op, ssa_i in zip(reversed(predict_net.op), reversed(ssa)): - _update_i(op, ssa_i) - - # NOTE: This strictly checks all the blob from predict_net must be assgined - # a known status. However sometimes it's impossible (eg. 
having deadend op), - # we may relax this constraint if - for k in all_versioned_blobs: - if k not in _known_status: - raise NotImplementedError( - "Can not infer the status for {}. Currently only support the case where" - " a single forward and backward pass can identify status for all blobs.".format(k) - ) - - return _known_status - - -def infer_device_type( - predict_net: caffe2_pb2.NetDef, - known_status: Dict[Tuple[str, int], Any], - device_name_style: str = "caffe2", -) -> Dict[Tuple[str, int], str]: - """Return the device type ("cpu" or "gpu"/"cuda") of each (versioned) blob""" - - assert device_name_style in ["caffe2", "pytorch"] - _CPU_STR = "cpu" - _GPU_STR = "gpu" if device_name_style == "caffe2" else "cuda" - - def _copy_cpu_to_gpu_updater(op, input_types, output_types): - if input_types[0] == _GPU_STR or output_types[0] == _CPU_STR: - _updater_raise(op, input_types, output_types) - return ([_CPU_STR], [_GPU_STR]) - - def _copy_gpu_to_cpu_updater(op, input_types, output_types): - if input_types[0] == _CPU_STR or output_types[0] == _GPU_STR: - _updater_raise(op, input_types, output_types) - return ([_GPU_STR], [_CPU_STR]) - - def _other_ops_updater(op, input_types, output_types): - non_none_types = [x for x in input_types + output_types if x is not None] - if len(non_none_types) > 0: - the_type = non_none_types[0] - if not all(x == the_type for x in non_none_types): - _updater_raise(op, input_types, output_types) - else: - the_type = None - return ([the_type for _ in op.input], [the_type for _ in op.output]) - - def _device_updater(op, *args, **kwargs): - return { - "CopyCPUToGPU": _copy_cpu_to_gpu_updater, - "CopyGPUToCPU": _copy_gpu_to_cpu_updater, - }.get(op.type, _other_ops_updater)(op, *args, **kwargs) - - return _generic_status_identifier(predict_net, _device_updater, known_status) - - -# ==== torch/utils_caffe2/vis.py =============================================== - - -def _modify_blob_names(ops, blob_rename_f): - ret = [] - - def _replace_list(blob_list, replaced_list): - del blob_list[:] - blob_list.extend(replaced_list) - - for x in ops: - cur = copy.deepcopy(x) - _replace_list(cur.input, list(map(blob_rename_f, cur.input))) - _replace_list(cur.output, list(map(blob_rename_f, cur.output))) - ret.append(cur) - - return ret - - -def _rename_blob(name, blob_sizes, blob_ranges): - def _list_to_str(bsize): - ret = ", ".join([str(x) for x in bsize]) - ret = "[" + ret + "]" - return ret - - ret = name - if blob_sizes is not None and name in blob_sizes: - ret += "\n" + _list_to_str(blob_sizes[name]) - if blob_ranges is not None and name in blob_ranges: - ret += "\n" + _list_to_str(blob_ranges[name]) - - return ret - - -# graph_name could not contain word 'graph' -def save_graph(net, file_name, graph_name="net", op_only=True, blob_sizes=None, blob_ranges=None): - blob_rename_f = functools.partial(_rename_blob, blob_sizes=blob_sizes, blob_ranges=blob_ranges) - return save_graph_base(net, file_name, graph_name, op_only, blob_rename_f) - - -def save_graph_base(net, file_name, graph_name="net", op_only=True, blob_rename_func=None): - graph = None - ops = net.op - if blob_rename_func is not None: - ops = _modify_blob_names(ops, blob_rename_func) - if not op_only: - graph = net_drawer.GetPydotGraph(ops, graph_name, rankdir="TB") - else: - graph = net_drawer.GetPydotGraphMinimal( - ops, graph_name, rankdir="TB", minimal_dependency=True - ) - - try: - par_dir = os.path.dirname(file_name) - if not os.path.exists(par_dir): - os.makedirs(par_dir) - - format = 
os.path.splitext(os.path.basename(file_name))[-1] - if format == ".png": - graph.write_png(file_name) - elif format == ".pdf": - graph.write_pdf(file_name) - elif format == ".svg": - graph.write_svg(file_name) - else: - print("Incorrect format {}".format(format)) - except Exception as e: - print("Error when writing graph to image {}".format(e)) - - return graph - - -# ==== torch/utils_toffee/aten_to_caffe2.py ==================================== - - -def group_norm_replace_aten_with_caffe2(predict_net: caffe2_pb2.NetDef): - """ - For ONNX exported model, GroupNorm will be represented as ATen op, - this can be a drop in replacement from ATen to GroupNorm - """ - count = 0 - for op in predict_net.op: - if op.type == "ATen": - op_name = get_pb_arg_vals(op, "operator", None) # return byte in py3 - if op_name and op_name.decode() == "group_norm": - op.arg.remove(get_pb_arg(op, "operator")) - - if get_pb_arg_vali(op, "cudnn_enabled", None): - op.arg.remove(get_pb_arg(op, "cudnn_enabled")) - - num_groups = get_pb_arg_vali(op, "num_groups", None) - if num_groups is not None: - op.arg.remove(get_pb_arg(op, "num_groups")) - check_set_pb_arg(op, "group", "i", num_groups) - - op.type = "GroupNorm" - count += 1 - if count > 1: - logger.info("Replaced {} ATen operator to GroupNormOp".format(count)) - - -# ==== torch/utils_toffee/alias.py ============================================= - - -def alias(x, name, is_backward=False): - if not torch.onnx.is_in_onnx_export(): - return x - assert isinstance(x, torch.Tensor) - return torch.ops._caffe2.AliasWithName(x, name, is_backward=is_backward) - - -def fuse_alias_placeholder(predict_net, init_net): - """Remove AliasWithName placeholder and rename the input/output of it""" - # First we finish all the re-naming - for i, op in enumerate(predict_net.op): - if op.type == "AliasWithName": - assert len(op.input) == 1 - assert len(op.output) == 1 - name = get_pb_arg_vals(op, "name", None).decode() - is_backward = bool(get_pb_arg_vali(op, "is_backward", 0)) - rename_op_input(predict_net, init_net, i, 0, name, from_producer=is_backward) - rename_op_output(predict_net, i, 0, name) - - # Remove AliasWithName, should be very safe since it's a non-op - new_ops = [] - for op in predict_net.op: - if op.type != "AliasWithName": - new_ops.append(op) - else: - # safety check - assert op.input == op.output - assert op.input[0] == op.arg[0].s.decode() - del predict_net.op[:] - predict_net.op.extend(new_ops) - - -# ==== torch/utils_caffe2/graph_transform.py =================================== - - -class IllegalGraphTransformError(ValueError): - """When a graph transform function call can't be executed.""" - - -def _rename_versioned_blob_in_proto( - proto: caffe2_pb2.NetDef, - old_name: str, - new_name: str, - version: int, - ssa: List[Tuple[List[Tuple[str, int]], List[Tuple[str, int]]]], - start_versions: Dict[str, int], - end_versions: Dict[str, int], -): - """In given proto, rename all blobs with matched version""" - # Operater list - for op, i_th_ssa in zip(proto.op, ssa): - versioned_inputs, versioned_outputs = i_th_ssa - for i in range(len(op.input)): - if versioned_inputs[i] == (old_name, version): - op.input[i] = new_name - for i in range(len(op.output)): - if versioned_outputs[i] == (old_name, version): - op.output[i] = new_name - # external_input - if start_versions.get(old_name, 0) == version: - for i in range(len(proto.external_input)): - if proto.external_input[i] == old_name: - proto.external_input[i] = new_name - # external_output - if end_versions.get(old_name, 0) == 
version: - for i in range(len(proto.external_output)): - if proto.external_output[i] == old_name: - proto.external_output[i] = new_name - - -def rename_op_input( - predict_net: caffe2_pb2.NetDef, - init_net: caffe2_pb2.NetDef, - op_id: int, - input_id: int, - new_name: str, - from_producer: bool = False, -): - """ - Rename the op_id-th operator in predict_net, change it's input_id-th input's - name to the new_name. It also does automatic re-route and change - external_input and init_net if necessary. - - It requires the input is only consumed by this op. - - This function modifies predict_net and init_net in-place. - - When from_producer is enable, this also updates other operators that consumes - the same input. Be cautious because may trigger unintended behavior. - """ - assert isinstance(predict_net, caffe2_pb2.NetDef) - assert isinstance(init_net, caffe2_pb2.NetDef) - - init_net_ssa, init_net_versions = core.get_ssa(init_net) - predict_net_ssa, predict_net_versions = core.get_ssa( - predict_net, copy.deepcopy(init_net_versions) - ) - - versioned_inputs, versioned_outputs = predict_net_ssa[op_id] - old_name, version = versioned_inputs[input_id] - - if from_producer: - producer_map = get_producer_map(predict_net_ssa) - if not (old_name, version) in producer_map: - raise NotImplementedError( - "Can't find producer, the input {} is probably from" - " init_net, this is not supported yet.".format(old_name) - ) - producer = producer_map[(old_name, version)] - rename_op_output(predict_net, producer[0], producer[1], new_name) - return - - def contain_targets(op_ssa): - return (old_name, version) in op_ssa[0] - - is_consumer = [contain_targets(op_ssa) for op_ssa in predict_net_ssa] - if sum(is_consumer) > 1: - raise IllegalGraphTransformError( - ( - "Input '{}' of operator(#{}) are consumed by other ops, please use" - + " rename_op_output on the producer instead. Offending op: \n{}" - ).format(old_name, op_id, predict_net.op[op_id]) - ) - - # update init_net - _rename_versioned_blob_in_proto( - init_net, old_name, new_name, version, init_net_ssa, {}, init_net_versions - ) - # update predict_net - _rename_versioned_blob_in_proto( - predict_net, - old_name, - new_name, - version, - predict_net_ssa, - init_net_versions, - predict_net_versions, - ) - - -def rename_op_output(predict_net: caffe2_pb2.NetDef, op_id: int, output_id: int, new_name: str): - """ - Rename the op_id-th operator in predict_net, change it's output_id-th input's - name to the new_name. It also does automatic re-route and change - external_output and if necessary. - - It allows multiple consumers of its output. - - This function modifies predict_net in-place, doesn't need init_net. - """ - assert isinstance(predict_net, caffe2_pb2.NetDef) - - ssa, blob_versions = core.get_ssa(predict_net) - - versioned_inputs, versioned_outputs = ssa[op_id] - old_name, version = versioned_outputs[output_id] - - # update predict_net - _rename_versioned_blob_in_proto( - predict_net, old_name, new_name, version, ssa, {}, blob_versions - ) - - -def get_sub_graph_external_input_output( - predict_net: caffe2_pb2.NetDef, sub_graph_op_indices: List[int] -) -> Tuple[List[Tuple[str, int]], List[Tuple[str, int]]]: - """ - Return the list of external input/output of sub-graph, - each element is tuple of the name and corresponding version in predict_net. - - external input/output is defined the same way as caffe2 NetDef. 
- """ - ssa, versions = core.get_ssa(predict_net) - - all_inputs = [] - all_outputs = [] - for op_id in sub_graph_op_indices: - all_inputs += [inp for inp in ssa[op_id][0] if inp not in all_inputs] - all_outputs += list(ssa[op_id][1]) # ssa output won't repeat - - # for versioned blobs, external inputs are just those blob in all_inputs - # but not in all_outputs - ext_inputs = [inp for inp in all_inputs if inp not in all_outputs] - - # external outputs are essentially outputs of this subgraph that are used - # outside of this sub-graph (including predict_net.external_output) - all_other_inputs = sum( - (ssa[i][0] for i in range(len(ssa)) if i not in sub_graph_op_indices), - [(outp, versions[outp]) for outp in predict_net.external_output], - ) - ext_outputs = [outp for outp in all_outputs if outp in set(all_other_inputs)] - - return ext_inputs, ext_outputs - - -class DiGraph: - """A DAG representation of caffe2 graph, each vertice is a versioned blob.""" - - def __init__(self): - self.vertices = set() - self.graph = collections.defaultdict(list) - - def add_edge(self, u, v): - self.graph[u].append(v) - self.vertices.add(u) - self.vertices.add(v) - - # grab from https://www.geeksforgeeks.org/find-paths-given-source-destination/ - def get_all_paths(self, s, d): - visited = {k: False for k in self.vertices} - path = [] - all_paths = [] - - def _get_all_paths_util(graph, u, d, visited, path): - visited[u] = True - path.append(u) - if u == d: - all_paths.append(copy.deepcopy(path)) - else: - for i in graph[u]: - if not visited[i]: - _get_all_paths_util(graph, i, d, visited, path) - path.pop() - visited[u] = False - - _get_all_paths_util(self.graph, s, d, visited, path) - return all_paths - - @staticmethod - def from_ssa(ssa): - graph = DiGraph() - for op_id in range(len(ssa)): - for inp in ssa[op_id][0]: - for outp in ssa[op_id][1]: - graph.add_edge(inp, outp) - return graph - - -def _get_dependency_chain(ssa, versioned_target, versioned_source): - """ - Return the index list of relevant operator to produce target blob from source blob, - if there's no dependency, return empty list. - """ - - # finding all paths between nodes can be O(N!), thus we can only search - # in the subgraph using the op starting from the first consumer of source blob - # to the producer of the target blob. - consumer_map = get_consumer_map(ssa) - producer_map = get_producer_map(ssa) - start_op = min(x[0] for x in consumer_map[versioned_source]) - 15 - end_op = ( - producer_map[versioned_target][0] + 15 if versioned_target in producer_map else start_op - ) - sub_graph_ssa = ssa[start_op : end_op + 1] - if len(sub_graph_ssa) > 30: - logger.warning( - "Subgraph bebetween {} and {} is large (from op#{} to op#{}), it" - " might take non-trival time to find all paths between them.".format( - versioned_source, versioned_target, start_op, end_op - ) - ) - - dag = DiGraph.from_ssa(sub_graph_ssa) - paths = dag.get_all_paths(versioned_source, versioned_target) # include two ends - ops_in_paths = [[producer_map[blob][0] for blob in path[1:]] for path in paths] - return sorted(set().union(*[set(ops) for ops in ops_in_paths])) - - -def identify_reshape_sub_graph(predict_net: caffe2_pb2.NetDef) -> List[List[int]]: - """ - Idenfity the reshape sub-graph in a protobuf. - The reshape sub-graph is defined as matching the following pattern: - - (input_blob) -> Op_1 -> ... 
-> Op_N -> (new_shape) -─┐ - └-------------------------------------------> Reshape -> (output_blob) - - Return: - List of sub-graphs, each sub-graph is represented as a list of indices - of the relavent ops, [Op_1, Op_2, ..., Op_N, Reshape] - """ - - ssa, _ = core.get_ssa(predict_net) - - ret = [] - for i, op in enumerate(predict_net.op): - if op.type == "Reshape": - assert len(op.input) == 2 - input_ssa = ssa[i][0] - data_source = input_ssa[0] - shape_source = input_ssa[1] - op_indices = _get_dependency_chain(ssa, shape_source, data_source) - ret.append(op_indices + [i]) - return ret - - -def remove_reshape_for_fc(predict_net, params): - """ - In PyTorch nn.Linear has to take 2D tensor, this often leads to reshape - a 4D tensor to 2D by calling .view(). However this (dynamic) reshaping - doesn't work well with ONNX and Int8 tools, and cause using extra - ops (eg. ExpandDims) that might not be available on mobile. - Luckily Caffe2 supports 4D tensor for FC, so we can remove those reshape - after exporting ONNX model. - """ - from caffe2.python import core - - # find all reshape sub-graph that can be removed, which is now all Reshape - # sub-graph whose output is only consumed by FC. - # TODO: to make it safer, we may need the actually value to better determine - # if a Reshape before FC is removable. - reshape_sub_graphs = identify_reshape_sub_graph(predict_net) - sub_graphs_to_remove = [] - for reshape_sub_graph in reshape_sub_graphs: - reshape_op_id = reshape_sub_graph[-1] - assert predict_net.op[reshape_op_id].type == "Reshape" - ssa, _ = core.get_ssa(predict_net) - reshape_output = ssa[reshape_op_id][1][0] - consumers = [i for i in range(len(ssa)) if reshape_output in ssa[i][0]] - if all(predict_net.op[consumer].type == "FC" for consumer in consumers): - # safety check if the sub-graph is isolated, for this reshape sub-graph, - # it means it has one non-param external input and one external output. - ext_inputs, ext_outputs = get_sub_graph_external_input_output( - predict_net, reshape_sub_graph - ) - non_params_ext_inputs = [inp for inp in ext_inputs if inp[1] != 0] - if len(non_params_ext_inputs) == 1 and len(ext_outputs) == 1: - sub_graphs_to_remove.append(reshape_sub_graph) - - # perform removing subgraph by: - # 1: rename the Reshape's output to its input, then the graph can be - # seen as in-place itentify, meaning whose external input/output are the same. - # 2: simply remove those ops. 
- remove_op_ids = [] - params_to_remove = [] - for sub_graph in sub_graphs_to_remove: - logger.info( - "Remove Reshape sub-graph:\n{}".format( - "".join(["(#{:>4})\n{}".format(i, predict_net.op[i]) for i in sub_graph]) - ) - ) - reshape_op_id = sub_graph[-1] - new_reshap_output = predict_net.op[reshape_op_id].input[0] - rename_op_output(predict_net, reshape_op_id, 0, new_reshap_output) - ext_inputs, ext_outputs = get_sub_graph_external_input_output(predict_net, sub_graph) - non_params_ext_inputs = [inp for inp in ext_inputs if inp[1] != 0] - params_ext_inputs = [inp for inp in ext_inputs if inp[1] == 0] - assert len(non_params_ext_inputs) == 1 and len(ext_outputs) == 1 - assert ext_outputs[0][0] == non_params_ext_inputs[0][0] - assert ext_outputs[0][1] == non_params_ext_inputs[0][1] + 1 - remove_op_ids.extend(sub_graph) - params_to_remove.extend(params_ext_inputs) - - predict_net = copy.deepcopy(predict_net) - new_ops = [op for i, op in enumerate(predict_net.op) if i not in remove_op_ids] - del predict_net.op[:] - predict_net.op.extend(new_ops) - for versioned_params in params_to_remove: - name = versioned_params[0] - logger.info("Remove params: {} from init_net and predict_net.external_input".format(name)) - del params[name] - predict_net.external_input.remove(name) - - return predict_net, params - - -def fuse_copy_between_cpu_and_gpu(predict_net: caffe2_pb2.NetDef): - """ - In-place fuse extra copy ops between cpu/gpu for the following case: - a -CopyAToB-> b -CopyBToA> c1 -NextOp1-> d1 - -CopyBToA> c2 -NextOp2-> d2 - The fused network will look like: - a -NextOp1-> d1 - -NextOp2-> d2 - """ - - _COPY_OPS = ["CopyCPUToGPU", "CopyGPUToCPU"] - - def _fuse_once(predict_net): - ssa, blob_versions = core.get_ssa(predict_net) - consumer_map = get_consumer_map(ssa) - versioned_external_output = [ - (name, blob_versions[name]) for name in predict_net.external_output - ] - - for op_id, op in enumerate(predict_net.op): - if op.type in _COPY_OPS: - fw_copy_versioned_output = ssa[op_id][1][0] - consumer_ids = [x[0] for x in consumer_map[fw_copy_versioned_output]] - reverse_op_type = _COPY_OPS[1 - _COPY_OPS.index(op.type)] - - is_fusable = ( - len(consumer_ids) > 0 - and fw_copy_versioned_output not in versioned_external_output - and all( - predict_net.op[_op_id].type == reverse_op_type - and ssa[_op_id][1][0] not in versioned_external_output - for _op_id in consumer_ids - ) - ) - - if is_fusable: - for rv_copy_op_id in consumer_ids: - # making each NextOp uses "a" directly and removing Copy ops - rs_copy_versioned_output = ssa[rv_copy_op_id][1][0] - next_op_id, inp_id = consumer_map[rs_copy_versioned_output][0] - predict_net.op[next_op_id].input[inp_id] = op.input[0] - # remove CopyOps - new_ops = [ - op - for i, op in enumerate(predict_net.op) - if i != op_id and i not in consumer_ids - ] - del predict_net.op[:] - predict_net.op.extend(new_ops) - return True - - return False - - # _fuse_once returns False is nothing can be fused - while _fuse_once(predict_net): - pass - - -def remove_dead_end_ops(net_def: caffe2_pb2.NetDef): - """remove ops if its output is not used or not in external_output""" - ssa, versions = core.get_ssa(net_def) - versioned_external_output = [(name, versions[name]) for name in net_def.external_output] - consumer_map = get_consumer_map(ssa) - removed_op_ids = set() - - def _is_dead_end(versioned_blob): - return not ( - versioned_blob in versioned_external_output - or ( - len(consumer_map[versioned_blob]) > 0 - and all(x[0] not in removed_op_ids for x in 
consumer_map[versioned_blob]) - ) - ) - - for i, ssa_i in reversed(list(enumerate(ssa))): - versioned_outputs = ssa_i[1] - if all(_is_dead_end(outp) for outp in versioned_outputs): - removed_op_ids.add(i) - - # simply removing those deadend ops should have no effect to external_output - new_ops = [op for i, op in enumerate(net_def.op) if i not in removed_op_ids] - del net_def.op[:] - net_def.op.extend(new_ops) diff --git a/spaces/Superlang/ImageProcessor/annotator/oneformer/detectron2/layers/csrc/README.md b/spaces/Superlang/ImageProcessor/annotator/oneformer/detectron2/layers/csrc/README.md deleted file mode 100644 index 778ed3da0bae89820831bcd8a72ff7b9cad8d4dd..0000000000000000000000000000000000000000 --- a/spaces/Superlang/ImageProcessor/annotator/oneformer/detectron2/layers/csrc/README.md +++ /dev/null @@ -1,7 +0,0 @@ - - -To add a new Op: - -1. Create a new directory -2. Implement new ops there -3. Delcare its Python interface in `vision.cpp`. diff --git a/spaces/Superlang/ImageProcessor/annotator/uniformer/mmcv/cnn/bricks/depthwise_separable_conv_module.py b/spaces/Superlang/ImageProcessor/annotator/uniformer/mmcv/cnn/bricks/depthwise_separable_conv_module.py deleted file mode 100644 index 722d5d8d71f75486e2db3008907c4eadfca41d63..0000000000000000000000000000000000000000 --- a/spaces/Superlang/ImageProcessor/annotator/uniformer/mmcv/cnn/bricks/depthwise_separable_conv_module.py +++ /dev/null @@ -1,96 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch.nn as nn - -from .conv_module import ConvModule - - -class DepthwiseSeparableConvModule(nn.Module): - """Depthwise separable convolution module. - - See https://arxiv.org/pdf/1704.04861.pdf for details. - - This module can replace a ConvModule with the conv block replaced by two - conv block: depthwise conv block and pointwise conv block. The depthwise - conv block contains depthwise-conv/norm/activation layers. The pointwise - conv block contains pointwise-conv/norm/activation layers. It should be - noted that there will be norm/activation layer in the depthwise conv block - if `norm_cfg` and `act_cfg` are specified. - - Args: - in_channels (int): Number of channels in the input feature map. - Same as that in ``nn._ConvNd``. - out_channels (int): Number of channels produced by the convolution. - Same as that in ``nn._ConvNd``. - kernel_size (int | tuple[int]): Size of the convolving kernel. - Same as that in ``nn._ConvNd``. - stride (int | tuple[int]): Stride of the convolution. - Same as that in ``nn._ConvNd``. Default: 1. - padding (int | tuple[int]): Zero-padding added to both sides of - the input. Same as that in ``nn._ConvNd``. Default: 0. - dilation (int | tuple[int]): Spacing between kernel elements. - Same as that in ``nn._ConvNd``. Default: 1. - norm_cfg (dict): Default norm config for both depthwise ConvModule and - pointwise ConvModule. Default: None. - act_cfg (dict): Default activation config for both depthwise ConvModule - and pointwise ConvModule. Default: dict(type='ReLU'). - dw_norm_cfg (dict): Norm config of depthwise ConvModule. If it is - 'default', it will be the same as `norm_cfg`. Default: 'default'. - dw_act_cfg (dict): Activation config of depthwise ConvModule. If it is - 'default', it will be the same as `act_cfg`. Default: 'default'. - pw_norm_cfg (dict): Norm config of pointwise ConvModule. If it is - 'default', it will be the same as `norm_cfg`. Default: 'default'. - pw_act_cfg (dict): Activation config of pointwise ConvModule. If it is - 'default', it will be the same as `act_cfg`. 
Default: 'default'. - kwargs (optional): Other shared arguments for depthwise and pointwise - ConvModule. See ConvModule for ref. - """ - - def __init__(self, - in_channels, - out_channels, - kernel_size, - stride=1, - padding=0, - dilation=1, - norm_cfg=None, - act_cfg=dict(type='ReLU'), - dw_norm_cfg='default', - dw_act_cfg='default', - pw_norm_cfg='default', - pw_act_cfg='default', - **kwargs): - super(DepthwiseSeparableConvModule, self).__init__() - assert 'groups' not in kwargs, 'groups should not be specified' - - # if norm/activation config of depthwise/pointwise ConvModule is not - # specified, use default config. - dw_norm_cfg = dw_norm_cfg if dw_norm_cfg != 'default' else norm_cfg - dw_act_cfg = dw_act_cfg if dw_act_cfg != 'default' else act_cfg - pw_norm_cfg = pw_norm_cfg if pw_norm_cfg != 'default' else norm_cfg - pw_act_cfg = pw_act_cfg if pw_act_cfg != 'default' else act_cfg - - # depthwise convolution - self.depthwise_conv = ConvModule( - in_channels, - in_channels, - kernel_size, - stride=stride, - padding=padding, - dilation=dilation, - groups=in_channels, - norm_cfg=dw_norm_cfg, - act_cfg=dw_act_cfg, - **kwargs) - - self.pointwise_conv = ConvModule( - in_channels, - out_channels, - 1, - norm_cfg=pw_norm_cfg, - act_cfg=pw_act_cfg, - **kwargs) - - def forward(self, x): - x = self.depthwise_conv(x) - x = self.pointwise_conv(x) - return x diff --git a/spaces/THEMUNCHERCRUNCHER/teachif/Dockerfile b/spaces/THEMUNCHERCRUNCHER/teachif/Dockerfile deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/TabPFN/TabPFNEvaluation/TabPFN/scripts/tabular_metrics.py b/spaces/TabPFN/TabPFNEvaluation/TabPFN/scripts/tabular_metrics.py deleted file mode 100644 index f5e9cd64356045b30da040a344408c55fac6e420..0000000000000000000000000000000000000000 --- a/spaces/TabPFN/TabPFNEvaluation/TabPFN/scripts/tabular_metrics.py +++ /dev/null @@ -1,181 +0,0 @@ -""" -=============================== -Metrics calculation -=============================== -Includes a few metric as well as functions composing metrics on results files. 
- -""" - - - -import numpy as np -import torch -from sklearn.metrics import roc_auc_score, accuracy_score, balanced_accuracy_score, average_precision_score -from scipy.stats import rankdata -import pandas as pd - -""" -=============================== -Metrics calculation -=============================== -""" -def auc_metric(target, pred, multi_class='ovo', numpy=False): - lib = np if numpy else torch - try: - if not numpy: - target = torch.tensor(target) if not torch.is_tensor(target) else target - pred = torch.tensor(pred) if not torch.is_tensor(pred) else pred - if len(lib.unique(target)) > 2: - if not numpy: - return torch.tensor(roc_auc_score(target, pred, multi_class=multi_class)) - return roc_auc_score(target, pred, multi_class=multi_class) - else: - if len(pred.shape) == 2: - pred = pred[:, 1] - if not numpy: - return torch.tensor(roc_auc_score(target, pred)) - return roc_auc_score(target, pred) - except ValueError as e: - print(e) - return np.nan - -def accuracy_metric(target, pred): - target = torch.tensor(target) if not torch.is_tensor(target) else target - pred = torch.tensor(pred) if not torch.is_tensor(pred) else pred - if len(torch.unique(target)) > 2: - return torch.tensor(accuracy_score(target, torch.argmax(pred, -1))) - else: - return torch.tensor(accuracy_score(target, pred[:, 1] > 0.5)) - -def average_precision_metric(target, pred): - target = torch.tensor(target) if not torch.is_tensor(target) else target - pred = torch.tensor(pred) if not torch.is_tensor(pred) else pred - if len(torch.unique(target)) > 2: - return torch.tensor(average_precision_score(target, torch.argmax(pred, -1))) - else: - return torch.tensor(average_precision_score(target, pred[:, 1] > 0.5)) - -def balanced_accuracy_metric(target, pred): - target = torch.tensor(target) if not torch.is_tensor(target) else target - pred = torch.tensor(pred) if not torch.is_tensor(pred) else pred - if len(torch.unique(target)) > 2: - return torch.tensor(balanced_accuracy_score(target, torch.argmax(pred, -1))) - else: - return torch.tensor(balanced_accuracy_score(target, pred[:, 1] > 0.5)) - -def cross_entropy(target, pred): - target = torch.tensor(target) if not torch.is_tensor(target) else target - pred = torch.tensor(pred) if not torch.is_tensor(pred) else pred - if len(torch.unique(target)) > 2: - ce = torch.nn.CrossEntropyLoss() - return ce(pred.float(), target.long()) - else: - bce = torch.nn.BCELoss() - return bce(pred[:, 1].float(), target.float()) - -def time_metric(): - """ - Dummy function, will just be used as a handler. - """ - pass - -def count_metric(x, y): - """ - Dummy function, returns one count per dataset. 
- """ - return 1 - -""" -=============================== -Metrics composition -=============================== -""" -def calculate_score_per_method(metric, name:str, global_results:dict, ds:list, eval_positions:list, aggregator:str='mean'): - """ - Calculates the metric given by 'metric' and saves it under 'name' in the 'global_results' - - :param metric: Metric function - :param name: Name of metric in 'global_results' - :param global_results: Dicrtonary containing the results for current method for a collection of datasets - :param ds: Dataset to calculate metrics on, a list of dataset properties - :param eval_positions: List of positions to calculate metrics on - :param aggregator: Specifies way to aggregate results across evaluation positions - :return: - """ - aggregator_f = np.nanmean if aggregator == 'mean' else np.nansum - for pos in eval_positions: - valid_positions = 0 - for d in ds: - if f'{d[0]}_outputs_at_{pos}' in global_results: - preds = global_results[f'{d[0]}_outputs_at_{pos}'] - y = global_results[f'{d[0]}_ys_at_{pos}'] - - preds, y = preds.detach().cpu().numpy() if torch.is_tensor( - preds) else preds, y.detach().cpu().numpy() if torch.is_tensor(y) else y - - try: - if metric == time_metric: - global_results[f'{d[0]}_{name}_at_{pos}'] = global_results[f'{d[0]}_time_at_{pos}'] - valid_positions = valid_positions + 1 - else: - global_results[f'{d[0]}_{name}_at_{pos}'] = aggregator_f( - [metric(y[split], preds[split]) for split in range(y.shape[0])]) - valid_positions = valid_positions + 1 - except Exception as err: - print(f'Error calculating metric with {err}, {type(err)} at {d[0]} {pos} {name}') - global_results[f'{d[0]}_{name}_at_{pos}'] = np.nan - else: - global_results[f'{d[0]}_{name}_at_{pos}'] = np.nan - - if valid_positions > 0: - global_results[f'{aggregator}_{name}_at_{pos}'] = aggregator_f([global_results[f'{d[0]}_{name}_at_{pos}'] for d in ds]) - else: - global_results[f'{aggregator}_{name}_at_{pos}'] = np.nan - - for d in ds: - metrics = [global_results[f'{d[0]}_{name}_at_{pos}'] for pos in eval_positions] - metrics = [m for m in metrics if not np.isnan(m)] - global_results[f'{d[0]}_{aggregator}_{name}'] = aggregator_f(metrics) if len(metrics) > 0 else np.nan - - metrics = [global_results[f'{aggregator}_{name}_at_{pos}'] for pos in eval_positions] - metrics = [m for m in metrics if not np.isnan(m)] - global_results[f'{aggregator}_{name}'] = aggregator_f(metrics) if len(metrics) > 0 else np.nan - - -def calculate_score(metric, name, global_results, ds, eval_positions, aggregator='mean', limit_to=''): - """ - Calls calculate_metrics_by_method with a range of methods. See arguments of that method. - :param limit_to: This method will not get metric calculations. 
- """ - for m in global_results: - if limit_to not in m: - continue - calculate_score_per_method(metric, name, global_results[m], ds, eval_positions, aggregator=aggregator) - - -def make_metric_matrix(global_results, methods, pos, name, ds): - result = [] - for m in global_results: - result += [[global_results[m][d[0] + '_' + name + '_at_' + str(pos)] for d in ds]] - result = np.array(result) - result = pd.DataFrame(result.T, index=[d[0] for d in ds], columns=[k[:-8] for k in list(global_results.keys())]) - - matrix_means, matrix_stds = [], [] - - for method in methods: - matrix_means += [result.iloc[:, [(method) in c for c in result.columns]].mean(axis=1)] - matrix_stds += [result.iloc[:, [(method) in c for c in result.columns]].std(axis=1)] - - matrix_means = pd.DataFrame(matrix_means, index=methods).T - matrix_stds = pd.DataFrame(matrix_stds, index=methods).T - - return matrix_means, matrix_stds - - -def make_ranks_and_wins_table(matrix): - for dss in matrix.T: - matrix.loc[dss] = rankdata(-matrix.round(3).loc[dss]) - ranks_acc = matrix.mean() - wins_acc = (matrix == 1).sum() - - return ranks_acc, wins_acc \ No newline at end of file diff --git a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_internal/network/utils.py b/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_internal/network/utils.py deleted file mode 100644 index 134848ae526e54e2b18738f83088c4a17efcce96..0000000000000000000000000000000000000000 --- a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_internal/network/utils.py +++ /dev/null @@ -1,96 +0,0 @@ -from typing import Dict, Generator - -from pip._vendor.requests.models import CONTENT_CHUNK_SIZE, Response - -from pip._internal.exceptions import NetworkConnectionError - -# The following comments and HTTP headers were originally added by -# Donald Stufft in git commit 22c562429a61bb77172039e480873fb239dd8c03. -# -# We use Accept-Encoding: identity here because requests defaults to -# accepting compressed responses. This breaks in a variety of ways -# depending on how the server is configured. -# - Some servers will notice that the file isn't a compressible file -# and will leave the file alone and with an empty Content-Encoding -# - Some servers will notice that the file is already compressed and -# will leave the file alone, adding a Content-Encoding: gzip header -# - Some servers won't notice anything at all and will take a file -# that's already been compressed and compress it again, and set -# the Content-Encoding: gzip header -# By setting this to request only the identity encoding we're hoping -# to eliminate the third case. Hopefully there does not exist a server -# which when given a file will notice it is already compressed and that -# you're not asking for a compressed file and will then decompress it -# before sending because if that's the case I don't think it'll ever be -# possible to make this work. -HEADERS: Dict[str, str] = {"Accept-Encoding": "identity"} - - -def raise_for_status(resp: Response) -> None: - http_error_msg = "" - if isinstance(resp.reason, bytes): - # We attempt to decode utf-8 first because some servers - # choose to localize their reason strings. If the string - # isn't utf-8, we fall back to iso-8859-1 for all other - # encodings. 
- try: - reason = resp.reason.decode("utf-8") - except UnicodeDecodeError: - reason = resp.reason.decode("iso-8859-1") - else: - reason = resp.reason - - if 400 <= resp.status_code < 500: - http_error_msg = ( - f"{resp.status_code} Client Error: {reason} for url: {resp.url}" - ) - - elif 500 <= resp.status_code < 600: - http_error_msg = ( - f"{resp.status_code} Server Error: {reason} for url: {resp.url}" - ) - - if http_error_msg: - raise NetworkConnectionError(http_error_msg, response=resp) - - -def response_chunks( - response: Response, chunk_size: int = CONTENT_CHUNK_SIZE -) -> Generator[bytes, None, None]: - """Given a requests Response, provide the data chunks.""" - try: - # Special case for urllib3. - for chunk in response.raw.stream( - chunk_size, - # We use decode_content=False here because we don't - # want urllib3 to mess with the raw bytes we get - # from the server. If we decompress inside of - # urllib3 then we cannot verify the checksum - # because the checksum will be of the compressed - # file. This breakage will only occur if the - # server adds a Content-Encoding header, which - # depends on how the server was configured: - # - Some servers will notice that the file isn't a - # compressible file and will leave the file alone - # and with an empty Content-Encoding - # - Some servers will notice that the file is - # already compressed and will leave the file - # alone and will add a Content-Encoding: gzip - # header - # - Some servers won't notice anything at all and - # will take a file that's already been compressed - # and compress it again and set the - # Content-Encoding: gzip header - # - # By setting this not to decode automatically we - # hope to eliminate problems with the second case. - decode_content=False, - ): - yield chunk - except AttributeError: - # Standard file-like object. - while True: - chunk = response.raw.read(chunk_size) - if not chunk: - break - yield chunk diff --git a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/platformdirs/version.py b/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/platformdirs/version.py deleted file mode 100644 index dc8c44cf7b267cc122b491566af0b54c85c19c92..0000000000000000000000000000000000000000 --- a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/platformdirs/version.py +++ /dev/null @@ -1,4 +0,0 @@ -# file generated by setuptools_scm -# don't change, don't track in version control -__version__ = version = '3.8.1' -__version_tuple__ = version_tuple = (3, 8, 1) diff --git a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/requests/adapters.py b/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/requests/adapters.py deleted file mode 100644 index 10c176790b622465538788d73a9e3afee99b3875..0000000000000000000000000000000000000000 --- a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/requests/adapters.py +++ /dev/null @@ -1,538 +0,0 @@ -""" -requests.adapters -~~~~~~~~~~~~~~~~~ - -This module contains the transport adapters that Requests uses to define -and maintain connections. 
-""" - -import os.path -import socket # noqa: F401 - -from pip._vendor.urllib3.exceptions import ClosedPoolError, ConnectTimeoutError -from pip._vendor.urllib3.exceptions import HTTPError as _HTTPError -from pip._vendor.urllib3.exceptions import InvalidHeader as _InvalidHeader -from pip._vendor.urllib3.exceptions import ( - LocationValueError, - MaxRetryError, - NewConnectionError, - ProtocolError, -) -from pip._vendor.urllib3.exceptions import ProxyError as _ProxyError -from pip._vendor.urllib3.exceptions import ReadTimeoutError, ResponseError -from pip._vendor.urllib3.exceptions import SSLError as _SSLError -from pip._vendor.urllib3.poolmanager import PoolManager, proxy_from_url -from pip._vendor.urllib3.util import Timeout as TimeoutSauce -from pip._vendor.urllib3.util import parse_url -from pip._vendor.urllib3.util.retry import Retry - -from .auth import _basic_auth_str -from .compat import basestring, urlparse -from .cookies import extract_cookies_to_jar -from .exceptions import ( - ConnectionError, - ConnectTimeout, - InvalidHeader, - InvalidProxyURL, - InvalidSchema, - InvalidURL, - ProxyError, - ReadTimeout, - RetryError, - SSLError, -) -from .models import Response -from .structures import CaseInsensitiveDict -from .utils import ( - DEFAULT_CA_BUNDLE_PATH, - extract_zipped_paths, - get_auth_from_url, - get_encoding_from_headers, - prepend_scheme_if_needed, - select_proxy, - urldefragauth, -) - -try: - from pip._vendor.urllib3.contrib.socks import SOCKSProxyManager -except ImportError: - - def SOCKSProxyManager(*args, **kwargs): - raise InvalidSchema("Missing dependencies for SOCKS support.") - - -DEFAULT_POOLBLOCK = False -DEFAULT_POOLSIZE = 10 -DEFAULT_RETRIES = 0 -DEFAULT_POOL_TIMEOUT = None - - -class BaseAdapter: - """The Base Transport Adapter""" - - def __init__(self): - super().__init__() - - def send( - self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None - ): - """Sends PreparedRequest object. Returns Response object. - - :param request: The :class:`PreparedRequest ` being sent. - :param stream: (optional) Whether to stream the request content. - :param timeout: (optional) How long to wait for the server to send - data before giving up, as a float, or a :ref:`(connect timeout, - read timeout) ` tuple. - :type timeout: float or tuple - :param verify: (optional) Either a boolean, in which case it controls whether we verify - the server's TLS certificate, or a string, in which case it must be a path - to a CA bundle to use - :param cert: (optional) Any user-provided SSL certificate to be trusted. - :param proxies: (optional) The proxies dictionary to apply to the request. - """ - raise NotImplementedError - - def close(self): - """Cleans up adapter specific items.""" - raise NotImplementedError - - -class HTTPAdapter(BaseAdapter): - """The built-in HTTP Adapter for urllib3. - - Provides a general-case interface for Requests sessions to contact HTTP and - HTTPS urls by implementing the Transport Adapter interface. This class will - usually be created by the :class:`Session ` class under the - covers. - - :param pool_connections: The number of urllib3 connection pools to cache. - :param pool_maxsize: The maximum number of connections to save in the pool. - :param max_retries: The maximum number of retries each connection - should attempt. Note, this applies only to failed DNS lookups, socket - connections and connection timeouts, never to requests where data has - made it to the server. By default, Requests does not retry failed - connections. 
If you need granular control over the conditions under - which we retry a request, import urllib3's ``Retry`` class and pass - that instead. - :param pool_block: Whether the connection pool should block for connections. - - Usage:: - - >>> import requests - >>> s = requests.Session() - >>> a = requests.adapters.HTTPAdapter(max_retries=3) - >>> s.mount('http://', a) - """ - - __attrs__ = [ - "max_retries", - "config", - "_pool_connections", - "_pool_maxsize", - "_pool_block", - ] - - def __init__( - self, - pool_connections=DEFAULT_POOLSIZE, - pool_maxsize=DEFAULT_POOLSIZE, - max_retries=DEFAULT_RETRIES, - pool_block=DEFAULT_POOLBLOCK, - ): - if max_retries == DEFAULT_RETRIES: - self.max_retries = Retry(0, read=False) - else: - self.max_retries = Retry.from_int(max_retries) - self.config = {} - self.proxy_manager = {} - - super().__init__() - - self._pool_connections = pool_connections - self._pool_maxsize = pool_maxsize - self._pool_block = pool_block - - self.init_poolmanager(pool_connections, pool_maxsize, block=pool_block) - - def __getstate__(self): - return {attr: getattr(self, attr, None) for attr in self.__attrs__} - - def __setstate__(self, state): - # Can't handle by adding 'proxy_manager' to self.__attrs__ because - # self.poolmanager uses a lambda function, which isn't pickleable. - self.proxy_manager = {} - self.config = {} - - for attr, value in state.items(): - setattr(self, attr, value) - - self.init_poolmanager( - self._pool_connections, self._pool_maxsize, block=self._pool_block - ) - - def init_poolmanager( - self, connections, maxsize, block=DEFAULT_POOLBLOCK, **pool_kwargs - ): - """Initializes a urllib3 PoolManager. - - This method should not be called from user code, and is only - exposed for use when subclassing the - :class:`HTTPAdapter `. - - :param connections: The number of urllib3 connection pools to cache. - :param maxsize: The maximum number of connections to save in the pool. - :param block: Block when no free connections are available. - :param pool_kwargs: Extra keyword arguments used to initialize the Pool Manager. - """ - # save these values for pickling - self._pool_connections = connections - self._pool_maxsize = maxsize - self._pool_block = block - - self.poolmanager = PoolManager( - num_pools=connections, - maxsize=maxsize, - block=block, - **pool_kwargs, - ) - - def proxy_manager_for(self, proxy, **proxy_kwargs): - """Return urllib3 ProxyManager for the given proxy. - - This method should not be called from user code, and is only - exposed for use when subclassing the - :class:`HTTPAdapter `. - - :param proxy: The proxy to return a urllib3 ProxyManager for. - :param proxy_kwargs: Extra keyword arguments used to configure the Proxy Manager. - :returns: ProxyManager - :rtype: urllib3.ProxyManager - """ - if proxy in self.proxy_manager: - manager = self.proxy_manager[proxy] - elif proxy.lower().startswith("socks"): - username, password = get_auth_from_url(proxy) - manager = self.proxy_manager[proxy] = SOCKSProxyManager( - proxy, - username=username, - password=password, - num_pools=self._pool_connections, - maxsize=self._pool_maxsize, - block=self._pool_block, - **proxy_kwargs, - ) - else: - proxy_headers = self.proxy_headers(proxy) - manager = self.proxy_manager[proxy] = proxy_from_url( - proxy, - proxy_headers=proxy_headers, - num_pools=self._pool_connections, - maxsize=self._pool_maxsize, - block=self._pool_block, - **proxy_kwargs, - ) - - return manager - - def cert_verify(self, conn, url, verify, cert): - """Verify a SSL certificate. 
This method should not be called from user - code, and is only exposed for use when subclassing the - :class:`HTTPAdapter `. - - :param conn: The urllib3 connection object associated with the cert. - :param url: The requested URL. - :param verify: Either a boolean, in which case it controls whether we verify - the server's TLS certificate, or a string, in which case it must be a path - to a CA bundle to use - :param cert: The SSL certificate to verify. - """ - if url.lower().startswith("https") and verify: - - cert_loc = None - - # Allow self-specified cert location. - if verify is not True: - cert_loc = verify - - if not cert_loc: - cert_loc = extract_zipped_paths(DEFAULT_CA_BUNDLE_PATH) - - if not cert_loc or not os.path.exists(cert_loc): - raise OSError( - f"Could not find a suitable TLS CA certificate bundle, " - f"invalid path: {cert_loc}" - ) - - conn.cert_reqs = "CERT_REQUIRED" - - if not os.path.isdir(cert_loc): - conn.ca_certs = cert_loc - else: - conn.ca_cert_dir = cert_loc - else: - conn.cert_reqs = "CERT_NONE" - conn.ca_certs = None - conn.ca_cert_dir = None - - if cert: - if not isinstance(cert, basestring): - conn.cert_file = cert[0] - conn.key_file = cert[1] - else: - conn.cert_file = cert - conn.key_file = None - if conn.cert_file and not os.path.exists(conn.cert_file): - raise OSError( - f"Could not find the TLS certificate file, " - f"invalid path: {conn.cert_file}" - ) - if conn.key_file and not os.path.exists(conn.key_file): - raise OSError( - f"Could not find the TLS key file, invalid path: {conn.key_file}" - ) - - def build_response(self, req, resp): - """Builds a :class:`Response ` object from a urllib3 - response. This should not be called from user code, and is only exposed - for use when subclassing the - :class:`HTTPAdapter ` - - :param req: The :class:`PreparedRequest ` used to generate the response. - :param resp: The urllib3 response object. - :rtype: requests.Response - """ - response = Response() - - # Fallback to None if there's no status_code, for whatever reason. - response.status_code = getattr(resp, "status", None) - - # Make headers case-insensitive. - response.headers = CaseInsensitiveDict(getattr(resp, "headers", {})) - - # Set encoding. - response.encoding = get_encoding_from_headers(response.headers) - response.raw = resp - response.reason = response.raw.reason - - if isinstance(req.url, bytes): - response.url = req.url.decode("utf-8") - else: - response.url = req.url - - # Add new cookies from the server. - extract_cookies_to_jar(response.cookies, req, resp) - - # Give the Response some context. - response.request = req - response.connection = self - - return response - - def get_connection(self, url, proxies=None): - """Returns a urllib3 connection for the given URL. This should not be - called from user code, and is only exposed for use when subclassing the - :class:`HTTPAdapter `. - - :param url: The URL to connect to. - :param proxies: (optional) A Requests-style dictionary of proxies used on this request. - :rtype: urllib3.ConnectionPool - """ - proxy = select_proxy(url, proxies) - - if proxy: - proxy = prepend_scheme_if_needed(proxy, "http") - proxy_url = parse_url(proxy) - if not proxy_url.host: - raise InvalidProxyURL( - "Please check proxy URL. It is malformed " - "and could be missing the host." 
- ) - proxy_manager = self.proxy_manager_for(proxy) - conn = proxy_manager.connection_from_url(url) - else: - # Only scheme should be lower case - parsed = urlparse(url) - url = parsed.geturl() - conn = self.poolmanager.connection_from_url(url) - - return conn - - def close(self): - """Disposes of any internal state. - - Currently, this closes the PoolManager and any active ProxyManager, - which closes any pooled connections. - """ - self.poolmanager.clear() - for proxy in self.proxy_manager.values(): - proxy.clear() - - def request_url(self, request, proxies): - """Obtain the url to use when making the final request. - - If the message is being sent through a HTTP proxy, the full URL has to - be used. Otherwise, we should only use the path portion of the URL. - - This should not be called from user code, and is only exposed for use - when subclassing the - :class:`HTTPAdapter `. - - :param request: The :class:`PreparedRequest ` being sent. - :param proxies: A dictionary of schemes or schemes and hosts to proxy URLs. - :rtype: str - """ - proxy = select_proxy(request.url, proxies) - scheme = urlparse(request.url).scheme - - is_proxied_http_request = proxy and scheme != "https" - using_socks_proxy = False - if proxy: - proxy_scheme = urlparse(proxy).scheme.lower() - using_socks_proxy = proxy_scheme.startswith("socks") - - url = request.path_url - if is_proxied_http_request and not using_socks_proxy: - url = urldefragauth(request.url) - - return url - - def add_headers(self, request, **kwargs): - """Add any headers needed by the connection. As of v2.0 this does - nothing by default, but is left for overriding by users that subclass - the :class:`HTTPAdapter `. - - This should not be called from user code, and is only exposed for use - when subclassing the - :class:`HTTPAdapter `. - - :param request: The :class:`PreparedRequest ` to add headers to. - :param kwargs: The keyword arguments from the call to send(). - """ - pass - - def proxy_headers(self, proxy): - """Returns a dictionary of the headers to add to any request sent - through a proxy. This works with urllib3 magic to ensure that they are - correctly sent to the proxy, rather than in a tunnelled request if - CONNECT is being used. - - This should not be called from user code, and is only exposed for use - when subclassing the - :class:`HTTPAdapter `. - - :param proxy: The url of the proxy being used for this request. - :rtype: dict - """ - headers = {} - username, password = get_auth_from_url(proxy) - - if username: - headers["Proxy-Authorization"] = _basic_auth_str(username, password) - - return headers - - def send( - self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None - ): - """Sends PreparedRequest object. Returns Response object. - - :param request: The :class:`PreparedRequest ` being sent. - :param stream: (optional) Whether to stream the request content. - :param timeout: (optional) How long to wait for the server to send - data before giving up, as a float, or a :ref:`(connect timeout, - read timeout) ` tuple. - :type timeout: float or tuple or urllib3 Timeout object - :param verify: (optional) Either a boolean, in which case it controls whether - we verify the server's TLS certificate, or a string, in which case it - must be a path to a CA bundle to use - :param cert: (optional) Any user-provided SSL certificate to be trusted. - :param proxies: (optional) The proxies dictionary to apply to the request. 
- :rtype: requests.Response - """ - - try: - conn = self.get_connection(request.url, proxies) - except LocationValueError as e: - raise InvalidURL(e, request=request) - - self.cert_verify(conn, request.url, verify, cert) - url = self.request_url(request, proxies) - self.add_headers( - request, - stream=stream, - timeout=timeout, - verify=verify, - cert=cert, - proxies=proxies, - ) - - chunked = not (request.body is None or "Content-Length" in request.headers) - - if isinstance(timeout, tuple): - try: - connect, read = timeout - timeout = TimeoutSauce(connect=connect, read=read) - except ValueError: - raise ValueError( - f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " - f"or a single float to set both timeouts to the same value." - ) - elif isinstance(timeout, TimeoutSauce): - pass - else: - timeout = TimeoutSauce(connect=timeout, read=timeout) - - try: - resp = conn.urlopen( - method=request.method, - url=url, - body=request.body, - headers=request.headers, - redirect=False, - assert_same_host=False, - preload_content=False, - decode_content=False, - retries=self.max_retries, - timeout=timeout, - chunked=chunked, - ) - - except (ProtocolError, OSError) as err: - raise ConnectionError(err, request=request) - - except MaxRetryError as e: - if isinstance(e.reason, ConnectTimeoutError): - # TODO: Remove this in 3.0.0: see #2811 - if not isinstance(e.reason, NewConnectionError): - raise ConnectTimeout(e, request=request) - - if isinstance(e.reason, ResponseError): - raise RetryError(e, request=request) - - if isinstance(e.reason, _ProxyError): - raise ProxyError(e, request=request) - - if isinstance(e.reason, _SSLError): - # This branch is for urllib3 v1.22 and later. - raise SSLError(e, request=request) - - raise ConnectionError(e, request=request) - - except ClosedPoolError as e: - raise ConnectionError(e, request=request) - - except _ProxyError as e: - raise ProxyError(e) - - except (_SSLError, _HTTPError) as e: - if isinstance(e, _SSLError): - # This branch is for urllib3 versions earlier than v1.22 - raise SSLError(e, request=request) - elif isinstance(e, ReadTimeoutError): - raise ReadTimeout(e, request=request) - elif isinstance(e, _InvalidHeader): - raise InvalidHeader(e, request=request) - else: - raise - - return self.build_response(request, resp) diff --git a/spaces/ThankGod/movie-poster-diffusion/app.py b/spaces/ThankGod/movie-poster-diffusion/app.py deleted file mode 100644 index b026cd148f27dee21aa40e12c9dd94fe95f0ddef..0000000000000000000000000000000000000000 --- a/spaces/ThankGod/movie-poster-diffusion/app.py +++ /dev/null @@ -1,86 +0,0 @@ -from ipywidgets.widgets.interaction import Dropdown -import gradio as gr -import torch -import os -from diffusers import StableDiffusionPipeline -import base64 -import io -import requests - -HF_TOKEN = os.getenv('HF_TOKEN') -hf_writer = gr.HuggingFaceDatasetSaver(HF_TOKEN, "crowdsourced-movie-poster-diffusion") - -auth_token = os.environ.get("auth_token") -device = torch.device('cuda' if torch.cuda.is_available() else 'cpu') -pipe = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4", use_auth_token=auth_token) -pipe = pipe.to(device) - -generator = torch.Generator(device=device) -seed = generator.seed() -print(f"The seed for this generator is: {seed}") - -latents1 = torch.randn(1,4,64,64) - -def convert_image_2string(image): - out_buffer = io.BytesIO() - image.save(out_buffer, format="PNG") - out_buffer .seek(0) - base64_bytes = base64.b64encode(out_buffer .read()) - base64_str = 
base64_bytes.decode("ascii")
-    return base64_str
-
-def improve_image(image):
-    url1 = 'https://hf.space/embed/NotFungibleIO/GFPGAN/+/api/predict'
-    url2 = 'https://hf.space/embed/abidlabs/GFPGAN/+/api/predict'
-    request_objt = {
-        "data": [f'image/jpeg;base64,{convert_image_2string(image)}', 2]}
-    return requests.post(url2, json=request_objt).json()
-
-def generate(celebrity, setting_list_option, setting_text):
-    # Fall back to the free-text setting when no dropdown choice is made
-    movie_setting = setting_list_option
-    if not setting_list_option or setting_list_option == 'None':
-        movie_setting = setting_text
-
-    prompt = f'Movie poster of {celebrity} in {movie_setting} with title caption, surreal, photorealistic, portrait, 4k High Definition'
-    # 'A movie portrait of' + celebrity + ' starring in ' + setting
-    image = pipe(prompt,
-                 guidance_scale=20,
-                 num_inference_steps=100,
-                 latents=latents1).images[0]
-    image = improve_image(image)
-    image = gr.processing_utils.decode_base64_to_image(image['data'][0])
-    return image
-
-title = "🖼️Movie Poster Generator (Diffusion Model) Demo"
-description = "Generate amazing photorealistic images of your favourite movie\
-    characters starring in movies that do not exist"
-article = """
-            - Enter the name of your preferred movie character
-            - Also select a movie from the possible list of options.
-            """
-
-gr.Interface(
-    fn=generate,
-    inputs=[gr.Textbox(label='Enter name of Movie Celebrity', value='Will Smith'),
-            gr.Dropdown(label='Select from possible Movie Choices',
-                        choices=['The Matrix',
-                                 'Gladiator',
-                                 'The Godfather',
-                                 'The Dark Knight',
-                                 'The Lord of the Rings',
-                                 'Star Wars',
-                                 'John Wick',
-                                 'Harry Potter',
-                                 'Game of Thrones',
-                                 'Avengers: Endgame']),
-            gr.Textbox(label="Don't like the movie recommendations? Write yours instead", value='Star Wars')
-            ],
-    allow_flagging="manual",
-    flagging_options=["Poor Image Quality", "Wrong Movie Artist"],
-    flagging_callback=hf_writer,
-    outputs='image',
-    title=title,
-    description=description,
-    article=article
-).launch()
diff --git a/spaces/ThirdEyeData/Customer-Complaints-Categorization/app.py b/spaces/ThirdEyeData/Customer-Complaints-Categorization/app.py
deleted file mode 100644
index d328d6b7f2d4ebd75df1e74581a1fd18c38e3965..0000000000000000000000000000000000000000
--- a/spaces/ThirdEyeData/Customer-Complaints-Categorization/app.py
+++ /dev/null
@@ -1,95 +0,0 @@
-from transformers import AutoModelForSequenceClassification
-from transformers import AutoTokenizer, AutoConfig
-from clean_data import cleaned_complaints
-import numpy as np
-from scipy.special import softmax
-import gradio as gr
-
-# Preprocess text (username and link placeholders)
-def preprocess(text):
-    new_text = []
-    for t in text.split(" "):
-        t = '@user' if t.startswith('@') and len(t) > 1 else t
-        t = 'http' if t.startswith('http') else t
-        new_text.append(t)
-    return " ".join(new_text)
-
-# load model
-MODEL = "ThirdEyeData/Consumer-Complaint-Categorization"
-model = AutoModelForSequenceClassification.from_pretrained(MODEL)
-#model.save_pretrained(MODEL)
-
-
-tokenizer = AutoTokenizer.from_pretrained(MODEL)
-config = AutoConfig.from_pretrained(MODEL)
-
-# create classifier function
-def classify_compliant(text):
-    text = cleaned_complaints(text)
-    if len(text) < 3:
-        return "Cannot Categorize the Complaint"
-    else:
-        text = preprocess(text)
-        encoded_input = tokenizer(text, return_tensors='pt')
-        output = model(**encoded_input)
-        scores = output[0][0].detach().numpy()
-        scores = softmax(scores)
-
-        # Pick the label with the highest score
-        probs = {}
-        ranking = np.argsort(scores)
-        ranking = ranking[::-1]
-
-
-        l
= config.id2label[ranking[0]]
-        #s = scores[ranking[i]]
-        #probs[l] = np.round(float(s), 4)
-        return l
-
-
-#build the Gradio app
-#Instruction = "Write an imaginary review about a product or service you might be interested in."
-title = "Customer Complaints Categorization"
-description = """This application uses a fine-tuned BERT model to perform customer complaint categorization. BERT models are usually pretrained on a large corpus of text and then fine-tuned for specific tasks.
-Effectively handling customer complaints gives the service provider an opportunity to resolve the customer’s problems on time and thus reduce dissatisfaction levels.
-
-Write a complaint about an insurance product or service and see how the machine learning model categorizes it.
-The complaint categories are:
-1. Debt Collection
-2. False Claim or Statement
-3. Legal Issue
-4. Improper contact or sharing of info
-5. Follow Up Issue
-"""
-article = """
-            - Click the submit button to test consumer complaint categorization
-            - Click the clear button to refresh the text
-            - This application has a linked model https://huggingface.co/ThirdEyeData/Consumer-Complaint-Categorization
-            """
-
-demo = gr.Interface(classify_compliant,
-                    inputs=gr.Textbox(lines=10, label="Type your complaint about our product here, or for a quick demo click on one of the examples provided below and the output will automatically be populated in the output box", max_lines=20),
-                    outputs=gr.Textbox(lines=5, label="Complaint Category"),
-                    title=title,
-                    description=description,
-                    #Instruction = Instruction,
-                    article=article,
-                    #allow_flagging = "never",
-                    live=False,
-                    cache_examples=False,
-                    examples=[["""The day before my Salliemae student loan payment was due I contacted a rep to discuss the impact on my account of making my payment at the end of the month rather than the middle for just that one month.
-                    The rep indicated it would be no problem, but that I still may get a call each day from Salliemae until I made my payment. I understood, requested my account be notated accordingly, and hung up. For two weeks I endured numerous calls per day ;
-                    I lost count at six calls one day, which was the norm for the number of calls Salliemae made in an effort to collect a debt that had a due date that had been arranged and had not come up yet. """],
-                              ["""The representative told me the total amount due was {$2100.00} and that I can settle for half of that amount. Unfortunately, I was unable to accept the settlement but began to question the amount because my last statement was {$1800.00} and
-                              there was nothing written in the contract for additional interest charges should my account go into collection.
-                              I told the representative that I will pay the amount actually owed and I want to make a payment arrangement. She told me I can't just do what I want,
-                              If I want to pay the original amount due, it has to be paid in full. I told her that that is not fair debt collection practice and that I am only contractually obligated to the {$1800.00} and we can set up an arrangement from that. """],
-                              ["""This debt is beyond the Maryland Statute of Limitations. It is illegal for a debt collector to collect on an expired debt. They have taken illegal action by seizing my Maryland State Refund when the debt had already expired and beyond the Statute of Limitation which is 3 years in the state of Maryland"""],
-                              ["""The company has been calling my employer in an attempt to collect a debt.
When I spoke with them and informed them that this was not an appropriate number to call. I asked what company they were calling from and a phone number so he told me the company name, but the man on the phone would not give me his name or a phone number. - I had mailed a letter requesting verification a few weeks ago and hadn't received anything back. In the letter I specifically requested that all communication be done through mail."""], - [""" I do n't think I chose the correct issue above, however I think it is closest to my issue. I have a record on my credit report that I have disputed through both the company and the credit bureaus. The dispute is marked as being disputed by me on my report, but it was not removed despite the creditor not sending me verification of this debt. - I do not even know what this debt is for.I have tried contacting the collection agency by mail to obtain verification with no response and they will not remove the item from my report."""]] - - ) -if __name__ == "__main__": - demo.launch() diff --git a/spaces/ThomasSimonini/SmartRobot/Build/Jammo Robot WEBGL.loader.js b/spaces/ThomasSimonini/SmartRobot/Build/Jammo Robot WEBGL.loader.js deleted file mode 100644 index e38bf6ab53c56c0cf8e216717a99da22dd58d29f..0000000000000000000000000000000000000000 --- a/spaces/ThomasSimonini/SmartRobot/Build/Jammo Robot WEBGL.loader.js +++ /dev/null @@ -1 +0,0 @@ -function createUnityInstance(t,r,d){function i(e,t){if(!i.aborted&&r.showBanner)return"error"==t&&(i.aborted=!0),r.showBanner(e,t);switch(t){case"error":console.error(e);break;case"warning":console.warn(e);break;default:console.log(e)}}function n(e){var t=e.reason||e.error,r=t?t.toString():e.message||e.reason||"",n=t&&t.stack?t.stack.toString():"";(r+="\n"+(n=n.startsWith(r)?n.substring(r.length):n).trim())&&c.stackTraceRegExp&&c.stackTraceRegExp.test(r)&&C(r,e.filename||t&&(t.fileName||t.sourceURL)||"",e.lineno||t&&(t.lineNumber||t.line)||0)}function e(e,t,r){var n=e[t];void 0!==n&&n||(console.warn('Config option "'+t+'" is missing or empty. Falling back to default value: "'+r+'". Consider updating your WebGL template to include the missing config option.'),e[t]=r)}d=d||function(){};var o,c={canvas:t,webglContextAttributes:{preserveDrawingBuffer:!1,powerPreference:2},cacheControl:function(e){return e==c.dataUrl?"must-revalidate":"no-store"},streamingAssetsUrl:"StreamingAssets",downloadProgress:{},deinitializers:[],intervals:{},setInterval:function(e,t){e=window.setInterval(e,t);return this.intervals[e]=!0,e},clearInterval:function(e){delete this.intervals[e],window.clearInterval(e)},preRun:[],postRun:[],print:function(e){console.log(e)},printErr:function(e){console.error(e),"string"==typeof e&&-1!=e.indexOf("wasm streaming compile failed")&&(-1!=e.toLowerCase().indexOf("mime")?i('HTTP Response Header "Content-Type" configured incorrectly on the server for file '+c.codeUrl+' , should be "application/wasm". Startup time performance will suffer.',"warning"):i('WebAssembly streaming compilation failed! This can happen for example if "Content-Encoding" HTTP header is incorrectly enabled on the server for file '+c.codeUrl+", but the file is not pre-compressed on disk (or vice versa). 
Check the Network tab in browser Devtools to debug server header configuration.","warning"))},locateFile:function(e){return"build.wasm"==e?this.codeUrl:e},disabledCanvasEvents:["contextmenu","dragstart"]};for(o in e(r,"companyName","Unity"),e(r,"productName","WebGL Player"),e(r,"productVersion","1.0"),r)c[o]=r[o];c.streamingAssetsUrl=new URL(c.streamingAssetsUrl,document.URL).href;var a=c.disabledCanvasEvents.slice();function s(e){e.preventDefault()}a.forEach(function(e){t.addEventListener(e,s)}),window.addEventListener("error",n),window.addEventListener("unhandledrejection",n),c.deinitializers.push(function(){for(var e in c.disableAccessToMediaDevices(),a.forEach(function(e){t.removeEventListener(e,s)}),window.removeEventListener("error",n),window.removeEventListener("unhandledrejection",n),c.intervals)window.clearInterval(e);c.intervals={}}),c.QuitCleanup=function(){for(var e=0;eIf using custom web server, verify that web server is sending .br files with HTTP Response Header "Content-Encoding: br". Brotli compression may not be supported in Firefox over HTTP connections. '+n+' See https://bugzilla.mozilla.org/show_bug.cgi?id=1670675 for more information.':"Unable to parse "+c.frameworkUrl+'!
    If using custom web server, verify that web server is sending .br files with HTTP Response Header "Content-Encoding: br". Brotli compression may not be supported over HTTP connections. Migrate your server to use HTTPS.'),void i(r,"error"))}i("Unable to parse "+c.frameworkUrl+"! The file is corrupt, or compression was misconfigured? (check Content-Encoding HTTP Response Header on web server)","error")}var o=unityFramework;unityFramework=null,s.onload=null,a(o)},s.onerror=function(e){i("Unable to load file "+c.frameworkUrl+"! Check that the file exists on the remote server. (also check browser Console and Devtools Network tab to debug)","error")},document.body.appendChild(s),c.deinitializers.push(function(){document.body.removeChild(s)})}).then(function(e){e(c)});x(r="dataUrl"),e=c.cacheControl(c[r]),t=c.companyName&&c.productName?c.cachedFetch:c.fetchWithProgress,n=c[r],n=/file:\/\//.exec(n)?"same-origin":void 0;var r,e,t,n,o=t(c[r],{method:"GET",companyName:c.companyName,productName:c.productName,control:e,mode:n,onProgress:function(e){x(r,e)}}).then(function(e){return e.parsedBody}).catch(function(e){var t="Failed to download file "+c[r];"file:"==location.protocol?i(t+". Loading web pages via a file:// URL without a web server is not supported by this browser. Please use a local development web server to host Unity content, or use the Unity Build and Run option.","error"):console.error(t)});c.preRun.push(function(){c.addRunDependency("dataUrl"),o.then(function(e){var t=new DataView(e.buffer,e.byteOffset,e.byteLength),r=0,n="UnityWebData1.0\0";if(!String.fromCharCode.apply(null,e.subarray(r,r+n.length))==n)throw"unknown data format";var o=t.getUint32(r+=n.length,!0);for(r+=4;r (b h) n d', h=h), (q, k, v)) - - sim = einsum('b i d, b j d -> b i j', q, k) * self.scale - - if exists(mask): - mask = rearrange(mask, 'b ... -> b (...)') - max_neg_value = -torch.finfo(sim.dtype).max - mask = repeat(mask, 'b j -> (b h) () j', h=h) - sim.masked_fill_(~mask, max_neg_value) - - # attention, what we cannot get enough of - attn = sim.softmax(dim=-1) - - out = einsum('b i j, b j d -> b i d', attn, v) - out = rearrange(out, '(b h) n d -> b n (h d)', h=h) - return self.to_out(out) - - -class BasicTransformerBlock(nn.Module): - def __init__(self, dim, n_heads, d_head, dropout=0., context_dim=None, gated_ff=True): - super().__init__() - self.attn1 = CrossAttention(query_dim=dim, heads=n_heads, dim_head=d_head, - dropout=dropout) # is a self-attention - self.ff = FeedForward(dim, dropout=dropout, glu=gated_ff) - self.attn2 = CrossAttention(query_dim=dim, context_dim=context_dim, - heads=n_heads, dim_head=d_head, dropout=dropout) # is self-attn if context is none - self.norm1 = nn.LayerNorm(dim) - self.norm2 = nn.LayerNorm(dim) - self.norm3 = nn.LayerNorm(dim) - - def forward(self, x, context=None): - x = self.attn1(self.norm1(x)) + x - x = self.attn2(self.norm2(x), context=context) + x - x = self.ff(self.norm3(x)) + x - return x - - -class SpatialTransformer(nn.Module): - """ - Transformer block for image-like data. - First, project the input (aka embedding) - and reshape to b, t, d. - Then apply standard transformer action. 
- Finally, reshape to image - """ - - def __init__(self, in_channels, n_heads, d_head, - depth=1, dropout=0., context_dim=None): - super().__init__() - self.in_channels = in_channels - inner_dim = n_heads * d_head - self.norm = Normalize(in_channels) - - self.proj_in = nn.Conv3d(in_channels, - inner_dim, - kernel_size=1, - stride=1, - padding=0) - - self.transformer_blocks = nn.ModuleList( - [BasicTransformerBlock(inner_dim, n_heads, d_head, dropout=dropout, context_dim=context_dim) - for d in range(depth)] - ) - - self.proj_out = zero_module(nn.Conv3d(inner_dim, - in_channels, - kernel_size=1, - stride=1, - padding=0)) - - def forward(self, x, context=None): - # note: if no context is given, cross-attention defaults to self-attention - b, c, h, w, d = x.shape - x_in = x - x = self.norm(x) - x = self.proj_in(x) - x = rearrange(x, 'b c h w d -> b (h w d) c') - for block in self.transformer_blocks: - x = block(x, context=context) - x = rearrange(x, 'b (h w d) c -> b c h w d', h=h, w=w, d=d) - x = self.proj_out(x) - return x + x_in diff --git a/spaces/Xenova/the-tokenizer-playground/assets/index-3afbca77.js b/spaces/Xenova/the-tokenizer-playground/assets/index-3afbca77.js deleted file mode 100644 index 5a96b41373f6f8260ad13a3cbbbb3725fbfb549f..0000000000000000000000000000000000000000 --- a/spaces/Xenova/the-tokenizer-playground/assets/index-3afbca77.js +++ /dev/null @@ -1,41 +0,0 @@ -(function(){const n=document.createElement("link").relList;if(n&&n.supports&&n.supports("modulepreload"))return;for(const l of document.querySelectorAll('link[rel="modulepreload"]'))r(l);new MutationObserver(l=>{for(const o of l)if(o.type==="childList")for(const u of o.addedNodes)u.tagName==="LINK"&&u.rel==="modulepreload"&&r(u)}).observe(document,{childList:!0,subtree:!0});function t(l){const o={};return l.integrity&&(o.integrity=l.integrity),l.referrerPolicy&&(o.referrerPolicy=l.referrerPolicy),l.crossOrigin==="use-credentials"?o.credentials="include":l.crossOrigin==="anonymous"?o.credentials="omit":o.credentials="same-origin",o}function r(l){if(l.ep)return;l.ep=!0;const o=t(l);fetch(l.href,o)}})();function lc(e){return e&&e.__esModule&&Object.prototype.hasOwnProperty.call(e,"default")?e.default:e}var Wi={exports:{}},el={},Qi={exports:{}},T={};/** - * @license React - * react.production.min.js - * - * Copyright (c) Facebook, Inc. and its affiliates. - * - * This source code is licensed under the MIT license found in the - * LICENSE file in the root directory of this source tree. 
- */var Yt=Symbol.for("react.element"),oc=Symbol.for("react.portal"),uc=Symbol.for("react.fragment"),ic=Symbol.for("react.strict_mode"),sc=Symbol.for("react.profiler"),ac=Symbol.for("react.provider"),cc=Symbol.for("react.context"),fc=Symbol.for("react.forward_ref"),dc=Symbol.for("react.suspense"),pc=Symbol.for("react.memo"),mc=Symbol.for("react.lazy"),Mu=Symbol.iterator;function hc(e){return e===null||typeof e!="object"?null:(e=Mu&&e[Mu]||e["@@iterator"],typeof e=="function"?e:null)}var Ki={isMounted:function(){return!1},enqueueForceUpdate:function(){},enqueueReplaceState:function(){},enqueueSetState:function(){}},Xi=Object.assign,Yi={};function ot(e,n,t){this.props=e,this.context=n,this.refs=Yi,this.updater=t||Ki}ot.prototype.isReactComponent={};ot.prototype.setState=function(e,n){if(typeof e!="object"&&typeof e!="function"&&e!=null)throw Error("setState(...): takes an object of state variables to update or a function which returns an object of state variables.");this.updater.enqueueSetState(this,e,n,"setState")};ot.prototype.forceUpdate=function(e){this.updater.enqueueForceUpdate(this,e,"forceUpdate")};function Gi(){}Gi.prototype=ot.prototype;function $o(e,n,t){this.props=e,this.context=n,this.refs=Yi,this.updater=t||Ki}var Ao=$o.prototype=new Gi;Ao.constructor=$o;Xi(Ao,ot.prototype);Ao.isPureReactComponent=!0;var Du=Array.isArray,Zi=Object.prototype.hasOwnProperty,Vo={current:null},Ji={key:!0,ref:!0,__self:!0,__source:!0};function qi(e,n,t){var r,l={},o=null,u=null;if(n!=null)for(r in n.ref!==void 0&&(u=n.ref),n.key!==void 0&&(o=""+n.key),n)Zi.call(n,r)&&!Ji.hasOwnProperty(r)&&(l[r]=n[r]);var i=arguments.length-2;if(i===1)l.children=t;else if(1>>1,G=E[W];if(0>>1;Wl(gl,z))gnl(er,gl)?(E[W]=er,E[gn]=z,W=gn):(E[W]=gl,E[yn]=z,W=yn);else if(gnl(er,z))E[W]=er,E[gn]=z,W=gn;else break e}}return P}function l(E,P){var z=E.sortIndex-P.sortIndex;return z!==0?z:E.id-P.id}if(typeof performance=="object"&&typeof performance.now=="function"){var o=performance;e.unstable_now=function(){return o.now()}}else{var u=Date,i=u.now();e.unstable_now=function(){return u.now()-i}}var s=[],f=[],h=1,m=null,p=3,g=!1,w=!1,k=!1,j=typeof setTimeout=="function"?setTimeout:null,c=typeof clearTimeout=="function"?clearTimeout:null,a=typeof setImmediate<"u"?setImmediate:null;typeof navigator<"u"&&navigator.scheduling!==void 0&&navigator.scheduling.isInputPending!==void 0&&navigator.scheduling.isInputPending.bind(navigator.scheduling);function d(E){for(var P=t(f);P!==null;){if(P.callback===null)r(f);else if(P.startTime<=E)r(f),P.sortIndex=P.expirationTime,n(s,P);else break;P=t(f)}}function v(E){if(k=!1,d(E),!w)if(t(s)!==null)w=!0,vl(x);else{var P=t(f);P!==null&&yl(v,P.startTime-E)}}function x(E,P){w=!1,k&&(k=!1,c(N),N=-1),g=!0;var z=p;try{for(d(P),m=t(s);m!==null&&(!(m.expirationTime>P)||E&&!Pe());){var W=m.callback;if(typeof W=="function"){m.callback=null,p=m.priorityLevel;var G=W(m.expirationTime<=P);P=e.unstable_now(),typeof G=="function"?m.callback=G:m===t(s)&&r(s),d(P)}else r(s);m=t(s)}if(m!==null)var bt=!0;else{var yn=t(f);yn!==null&&yl(v,yn.startTime-P),bt=!1}return bt}finally{m=null,p=z,g=!1}}var C=!1,_=null,N=-1,H=5,R=-1;function Pe(){return!(e.unstable_now()-RE||125W?(E.sortIndex=z,n(f,E),t(s)===null&&E===t(f)&&(k?(c(N),N=-1):k=!0,yl(v,z-W))):(E.sortIndex=G,n(s,E),w||g||(w=!0,vl(x))),E},e.unstable_shouldYield=Pe,e.unstable_wrapCallback=function(E){var P=p;return function(){var z=p;p=P;try{return E.apply(this,arguments)}finally{p=z}}}})(ts);ns.exports=ts;var Pc=ns.exports;/** - * @license React - * 
react-dom.production.min.js - * - * Copyright (c) Facebook, Inc. and its affiliates. - * - * This source code is licensed under the MIT license found in the - * LICENSE file in the root directory of this source tree. - */var rs=ae,ge=Pc;function y(e){for(var n="https://reactjs.org/docs/error-decoder.html?invariant="+e,t=1;t"u"||typeof window.document>"u"||typeof window.document.createElement>"u"),Kl=Object.prototype.hasOwnProperty,zc=/^[:A-Z_a-z\u00C0-\u00D6\u00D8-\u00F6\u00F8-\u02FF\u0370-\u037D\u037F-\u1FFF\u200C-\u200D\u2070-\u218F\u2C00-\u2FEF\u3001-\uD7FF\uF900-\uFDCF\uFDF0-\uFFFD][:A-Z_a-z\u00C0-\u00D6\u00D8-\u00F6\u00F8-\u02FF\u0370-\u037D\u037F-\u1FFF\u200C-\u200D\u2070-\u218F\u2C00-\u2FEF\u3001-\uD7FF\uF900-\uFDCF\uFDF0-\uFFFD\-.0-9\u00B7\u0300-\u036F\u203F-\u2040]*$/,Fu={},Uu={};function Lc(e){return Kl.call(Uu,e)?!0:Kl.call(Fu,e)?!1:zc.test(e)?Uu[e]=!0:(Fu[e]=!0,!1)}function Tc(e,n,t,r){if(t!==null&&t.type===0)return!1;switch(typeof n){case"function":case"symbol":return!0;case"boolean":return r?!1:t!==null?!t.acceptsBooleans:(e=e.toLowerCase().slice(0,5),e!=="data-"&&e!=="aria-");default:return!1}}function Rc(e,n,t,r){if(n===null||typeof n>"u"||Tc(e,n,t,r))return!0;if(r)return!1;if(t!==null)switch(t.type){case 3:return!n;case 4:return n===!1;case 5:return isNaN(n);case 6:return isNaN(n)||1>n}return!1}function se(e,n,t,r,l,o,u){this.acceptsBooleans=n===2||n===3||n===4,this.attributeName=r,this.attributeNamespace=l,this.mustUseProperty=t,this.propertyName=e,this.type=n,this.sanitizeURL=o,this.removeEmptyString=u}var ee={};"children dangerouslySetInnerHTML defaultValue defaultChecked innerHTML suppressContentEditableWarning suppressHydrationWarning style".split(" ").forEach(function(e){ee[e]=new se(e,0,!1,e,null,!1,!1)});[["acceptCharset","accept-charset"],["className","class"],["htmlFor","for"],["httpEquiv","http-equiv"]].forEach(function(e){var n=e[0];ee[n]=new se(n,1,!1,e[1],null,!1,!1)});["contentEditable","draggable","spellCheck","value"].forEach(function(e){ee[e]=new se(e,2,!1,e.toLowerCase(),null,!1,!1)});["autoReverse","externalResourcesRequired","focusable","preserveAlpha"].forEach(function(e){ee[e]=new se(e,2,!1,e,null,!1,!1)});"allowFullScreen async autoFocus autoPlay controls default defer disabled disablePictureInPicture disableRemotePlayback formNoValidate hidden loop noModule noValidate open playsInline readOnly required reversed scoped seamless itemScope".split(" ").forEach(function(e){ee[e]=new se(e,3,!1,e.toLowerCase(),null,!1,!1)});["checked","multiple","muted","selected"].forEach(function(e){ee[e]=new se(e,3,!0,e,null,!1,!1)});["capture","download"].forEach(function(e){ee[e]=new se(e,4,!1,e,null,!1,!1)});["cols","rows","size","span"].forEach(function(e){ee[e]=new se(e,6,!1,e,null,!1,!1)});["rowSpan","start"].forEach(function(e){ee[e]=new se(e,5,!1,e.toLowerCase(),null,!1,!1)});var Ho=/[\-:]([a-z])/g;function Wo(e){return e[1].toUpperCase()}"accent-height alignment-baseline arabic-form baseline-shift cap-height clip-path clip-rule color-interpolation color-interpolation-filters color-profile color-rendering dominant-baseline enable-background fill-opacity fill-rule flood-color flood-opacity font-family font-size font-size-adjust font-stretch font-style font-variant font-weight glyph-name glyph-orientation-horizontal glyph-orientation-vertical horiz-adv-x horiz-origin-x image-rendering letter-spacing lighting-color marker-end marker-mid marker-start overline-position overline-thickness paint-order panose-1 pointer-events rendering-intent shape-rendering stop-color 
stop-opacity strikethrough-position strikethrough-thickness stroke-dasharray stroke-dashoffset stroke-linecap stroke-linejoin stroke-miterlimit stroke-opacity stroke-width text-anchor text-decoration text-rendering underline-position underline-thickness unicode-bidi unicode-range units-per-em v-alphabetic v-hanging v-ideographic v-mathematical vector-effect vert-adv-y vert-origin-x vert-origin-y word-spacing writing-mode xmlns:xlink x-height".split(" ").forEach(function(e){var n=e.replace(Ho,Wo);ee[n]=new se(n,1,!1,e,null,!1,!1)});"xlink:actuate xlink:arcrole xlink:role xlink:show xlink:title xlink:type".split(" ").forEach(function(e){var n=e.replace(Ho,Wo);ee[n]=new se(n,1,!1,e,"http://www.w3.org/1999/xlink",!1,!1)});["xml:base","xml:lang","xml:space"].forEach(function(e){var n=e.replace(Ho,Wo);ee[n]=new se(n,1,!1,e,"http://www.w3.org/XML/1998/namespace",!1,!1)});["tabIndex","crossOrigin"].forEach(function(e){ee[e]=new se(e,1,!1,e.toLowerCase(),null,!1,!1)});ee.xlinkHref=new se("xlinkHref",1,!1,"xlink:href","http://www.w3.org/1999/xlink",!0,!1);["src","href","action","formAction"].forEach(function(e){ee[e]=new se(e,1,!1,e.toLowerCase(),null,!0,!0)});function Qo(e,n,t,r){var l=ee.hasOwnProperty(n)?ee[n]:null;(l!==null?l.type!==0:r||!(2i||l[u]!==o[i]){var s=` -`+l[u].replace(" at new "," at ");return e.displayName&&s.includes("")&&(s=s.replace("",e.displayName)),s}while(1<=u&&0<=i);break}}}finally{Sl=!1,Error.prepareStackTrace=t}return(e=e?e.displayName||e.name:"")?gt(e):""}function jc(e){switch(e.tag){case 5:return gt(e.type);case 16:return gt("Lazy");case 13:return gt("Suspense");case 19:return gt("SuspenseList");case 0:case 2:case 15:return e=xl(e.type,!1),e;case 11:return e=xl(e.type.render,!1),e;case 1:return e=xl(e.type,!0),e;default:return""}}function Zl(e){if(e==null)return null;if(typeof e=="function")return e.displayName||e.name||null;if(typeof e=="string")return e;switch(e){case Dn:return"Fragment";case Mn:return"Portal";case Xl:return"Profiler";case Ko:return"StrictMode";case Yl:return"Suspense";case Gl:return"SuspenseList"}if(typeof e=="object")switch(e.$$typeof){case us:return(e.displayName||"Context")+".Consumer";case os:return(e._context.displayName||"Context")+".Provider";case Xo:var n=e.render;return e=e.displayName,e||(e=n.displayName||n.name||"",e=e!==""?"ForwardRef("+e+")":"ForwardRef"),e;case Yo:return n=e.displayName||null,n!==null?n:Zl(e.type)||"Memo";case Je:n=e._payload,e=e._init;try{return Zl(e(n))}catch{}}return null}function Oc(e){var n=e.type;switch(e.tag){case 24:return"Cache";case 9:return(n.displayName||"Context")+".Consumer";case 10:return(n._context.displayName||"Context")+".Provider";case 18:return"DehydratedFragment";case 11:return e=n.render,e=e.displayName||e.name||"",n.displayName||(e!==""?"ForwardRef("+e+")":"ForwardRef");case 7:return"Fragment";case 5:return n;case 4:return"Portal";case 3:return"Root";case 6:return"Text";case 16:return Zl(n);case 8:return n===Ko?"StrictMode":"Mode";case 22:return"Offscreen";case 12:return"Profiler";case 21:return"Scope";case 13:return"Suspense";case 19:return"SuspenseList";case 25:return"TracingMarker";case 1:case 0:case 17:case 2:case 14:case 15:if(typeof n=="function")return n.displayName||n.name||null;if(typeof n=="string")return n}return null}function dn(e){switch(typeof e){case"boolean":case"number":case"string":case"undefined":return e;case"object":return e;default:return""}}function ss(e){var n=e.type;return(e=e.nodeName)&&e.toLowerCase()==="input"&&(n==="checkbox"||n==="radio")}function Mc(e){var 
n=ss(e)?"checked":"value",t=Object.getOwnPropertyDescriptor(e.constructor.prototype,n),r=""+e[n];if(!e.hasOwnProperty(n)&&typeof t<"u"&&typeof t.get=="function"&&typeof t.set=="function"){var l=t.get,o=t.set;return Object.defineProperty(e,n,{configurable:!0,get:function(){return l.call(this)},set:function(u){r=""+u,o.call(this,u)}}),Object.defineProperty(e,n,{enumerable:t.enumerable}),{getValue:function(){return r},setValue:function(u){r=""+u},stopTracking:function(){e._valueTracker=null,delete e[n]}}}}function rr(e){e._valueTracker||(e._valueTracker=Mc(e))}function as(e){if(!e)return!1;var n=e._valueTracker;if(!n)return!0;var t=n.getValue(),r="";return e&&(r=ss(e)?e.checked?"true":"false":e.value),e=r,e!==t?(n.setValue(e),!0):!1}function Tr(e){if(e=e||(typeof document<"u"?document:void 0),typeof e>"u")return null;try{return e.activeElement||e.body}catch{return e.body}}function Jl(e,n){var t=n.checked;return V({},n,{defaultChecked:void 0,defaultValue:void 0,value:void 0,checked:t??e._wrapperState.initialChecked})}function Au(e,n){var t=n.defaultValue==null?"":n.defaultValue,r=n.checked!=null?n.checked:n.defaultChecked;t=dn(n.value!=null?n.value:t),e._wrapperState={initialChecked:r,initialValue:t,controlled:n.type==="checkbox"||n.type==="radio"?n.checked!=null:n.value!=null}}function cs(e,n){n=n.checked,n!=null&&Qo(e,"checked",n,!1)}function ql(e,n){cs(e,n);var t=dn(n.value),r=n.type;if(t!=null)r==="number"?(t===0&&e.value===""||e.value!=t)&&(e.value=""+t):e.value!==""+t&&(e.value=""+t);else if(r==="submit"||r==="reset"){e.removeAttribute("value");return}n.hasOwnProperty("value")?bl(e,n.type,t):n.hasOwnProperty("defaultValue")&&bl(e,n.type,dn(n.defaultValue)),n.checked==null&&n.defaultChecked!=null&&(e.defaultChecked=!!n.defaultChecked)}function Vu(e,n,t){if(n.hasOwnProperty("value")||n.hasOwnProperty("defaultValue")){var r=n.type;if(!(r!=="submit"&&r!=="reset"||n.value!==void 0&&n.value!==null))return;n=""+e._wrapperState.initialValue,t||n===e.value||(e.value=n),e.defaultValue=n}t=e.name,t!==""&&(e.name=""),e.defaultChecked=!!e._wrapperState.initialChecked,t!==""&&(e.name=t)}function bl(e,n,t){(n!=="number"||Tr(e.ownerDocument)!==e)&&(t==null?e.defaultValue=""+e._wrapperState.initialValue:e.defaultValue!==""+t&&(e.defaultValue=""+t))}var wt=Array.isArray;function Kn(e,n,t,r){if(e=e.options,n){n={};for(var l=0;l"+n.valueOf().toString()+"",n=lr.firstChild;e.firstChild;)e.removeChild(e.firstChild);for(;n.firstChild;)e.appendChild(n.firstChild)}});function jt(e,n){if(n){var t=e.firstChild;if(t&&t===e.lastChild&&t.nodeType===3){t.nodeValue=n;return}}e.textContent=n}var xt={animationIterationCount:!0,aspectRatio:!0,borderImageOutset:!0,borderImageSlice:!0,borderImageWidth:!0,boxFlex:!0,boxFlexGroup:!0,boxOrdinalGroup:!0,columnCount:!0,columns:!0,flex:!0,flexGrow:!0,flexPositive:!0,flexShrink:!0,flexNegative:!0,flexOrder:!0,gridArea:!0,gridRow:!0,gridRowEnd:!0,gridRowSpan:!0,gridRowStart:!0,gridColumn:!0,gridColumnEnd:!0,gridColumnSpan:!0,gridColumnStart:!0,fontWeight:!0,lineClamp:!0,lineHeight:!0,opacity:!0,order:!0,orphans:!0,tabSize:!0,widows:!0,zIndex:!0,zoom:!0,fillOpacity:!0,floodOpacity:!0,stopOpacity:!0,strokeDasharray:!0,strokeDashoffset:!0,strokeMiterlimit:!0,strokeOpacity:!0,strokeWidth:!0},Dc=["Webkit","ms","Moz","O"];Object.keys(xt).forEach(function(e){Dc.forEach(function(n){n=n+e.charAt(0).toUpperCase()+e.substring(1),xt[n]=xt[e]})});function ms(e,n,t){return n==null||typeof n=="boolean"||n===""?"":t||typeof 
n!="number"||n===0||xt.hasOwnProperty(e)&&xt[e]?(""+n).trim():n+"px"}function hs(e,n){e=e.style;for(var t in n)if(n.hasOwnProperty(t)){var r=t.indexOf("--")===0,l=ms(t,n[t],r);t==="float"&&(t="cssFloat"),r?e.setProperty(t,l):e[t]=l}}var Ic=V({menuitem:!0},{area:!0,base:!0,br:!0,col:!0,embed:!0,hr:!0,img:!0,input:!0,keygen:!0,link:!0,meta:!0,param:!0,source:!0,track:!0,wbr:!0});function to(e,n){if(n){if(Ic[e]&&(n.children!=null||n.dangerouslySetInnerHTML!=null))throw Error(y(137,e));if(n.dangerouslySetInnerHTML!=null){if(n.children!=null)throw Error(y(60));if(typeof n.dangerouslySetInnerHTML!="object"||!("__html"in n.dangerouslySetInnerHTML))throw Error(y(61))}if(n.style!=null&&typeof n.style!="object")throw Error(y(62))}}function ro(e,n){if(e.indexOf("-")===-1)return typeof n.is=="string";switch(e){case"annotation-xml":case"color-profile":case"font-face":case"font-face-src":case"font-face-uri":case"font-face-format":case"font-face-name":case"missing-glyph":return!1;default:return!0}}var lo=null;function Go(e){return e=e.target||e.srcElement||window,e.correspondingUseElement&&(e=e.correspondingUseElement),e.nodeType===3?e.parentNode:e}var oo=null,Xn=null,Yn=null;function Wu(e){if(e=Jt(e)){if(typeof oo!="function")throw Error(y(280));var n=e.stateNode;n&&(n=ol(n),oo(e.stateNode,e.type,n))}}function vs(e){Xn?Yn?Yn.push(e):Yn=[e]:Xn=e}function ys(){if(Xn){var e=Xn,n=Yn;if(Yn=Xn=null,Wu(e),n)for(e=0;e>>=0,e===0?32:31-(Xc(e)/Yc|0)|0}var or=64,ur=4194304;function kt(e){switch(e&-e){case 1:return 1;case 2:return 2;case 4:return 4;case 8:return 8;case 16:return 16;case 32:return 32;case 64:case 128:case 256:case 512:case 1024:case 2048:case 4096:case 8192:case 16384:case 32768:case 65536:case 131072:case 262144:case 524288:case 1048576:case 2097152:return e&4194240;case 4194304:case 8388608:case 16777216:case 33554432:case 67108864:return e&130023424;case 134217728:return 134217728;case 268435456:return 268435456;case 536870912:return 536870912;case 1073741824:return 1073741824;default:return e}}function Mr(e,n){var t=e.pendingLanes;if(t===0)return 0;var r=0,l=e.suspendedLanes,o=e.pingedLanes,u=t&268435455;if(u!==0){var i=u&~l;i!==0?r=kt(i):(o&=u,o!==0&&(r=kt(o)))}else u=t&~l,u!==0?r=kt(u):o!==0&&(r=kt(o));if(r===0)return 0;if(n!==0&&n!==r&&!(n&l)&&(l=r&-r,o=n&-n,l>=o||l===16&&(o&4194240)!==0))return n;if(r&4&&(r|=t&16),n=e.entangledLanes,n!==0)for(e=e.entanglements,n&=r;0t;t++)n.push(e);return n}function Gt(e,n,t){e.pendingLanes|=n,n!==536870912&&(e.suspendedLanes=0,e.pingedLanes=0),e=e.eventTimes,n=31-je(n),e[n]=t}function qc(e,n){var t=e.pendingLanes&~n;e.pendingLanes=n,e.suspendedLanes=0,e.pingedLanes=0,e.expiredLanes&=n,e.mutableReadLanes&=n,e.entangledLanes&=n,n=e.entanglements;var r=e.eventTimes;for(e=e.expirationTimes;0=Ct),bu=String.fromCharCode(32),ei=!1;function Fs(e,n){switch(e){case"keyup":return Pf.indexOf(n.keyCode)!==-1;case"keydown":return n.keyCode!==229;case"keypress":case"mousedown":case"focusout":return!0;default:return!1}}function Us(e){return e=e.detail,typeof e=="object"&&"data"in e?e.data:null}var In=!1;function Lf(e,n){switch(e){case"compositionend":return Us(n);case"keypress":return n.which!==32?null:(ei=!0,bu);case"textInput":return e=n.data,e===bu&&ei?null:e;default:return null}}function Tf(e,n){if(In)return e==="compositionend"||!ru&&Fs(e,n)?(e=Ds(),Sr=eu=nn=null,In=!1,e):null;switch(e){case"paste":return 
null;case"keypress":if(!(n.ctrlKey||n.altKey||n.metaKey)||n.ctrlKey&&n.altKey){if(n.char&&1=n)return{node:t,offset:n-e};e=r}e:{for(;t;){if(t.nextSibling){t=t.nextSibling;break e}t=t.parentNode}t=void 0}t=li(t)}}function Bs(e,n){return e&&n?e===n?!0:e&&e.nodeType===3?!1:n&&n.nodeType===3?Bs(e,n.parentNode):"contains"in e?e.contains(n):e.compareDocumentPosition?!!(e.compareDocumentPosition(n)&16):!1:!1}function Hs(){for(var e=window,n=Tr();n instanceof e.HTMLIFrameElement;){try{var t=typeof n.contentWindow.location.href=="string"}catch{t=!1}if(t)e=n.contentWindow;else break;n=Tr(e.document)}return n}function lu(e){var n=e&&e.nodeName&&e.nodeName.toLowerCase();return n&&(n==="input"&&(e.type==="text"||e.type==="search"||e.type==="tel"||e.type==="url"||e.type==="password")||n==="textarea"||e.contentEditable==="true")}function $f(e){var n=Hs(),t=e.focusedElem,r=e.selectionRange;if(n!==t&&t&&t.ownerDocument&&Bs(t.ownerDocument.documentElement,t)){if(r!==null&&lu(t)){if(n=r.start,e=r.end,e===void 0&&(e=n),"selectionStart"in t)t.selectionStart=n,t.selectionEnd=Math.min(e,t.value.length);else if(e=(n=t.ownerDocument||document)&&n.defaultView||window,e.getSelection){e=e.getSelection();var l=t.textContent.length,o=Math.min(r.start,l);r=r.end===void 0?o:Math.min(r.end,l),!e.extend&&o>r&&(l=r,r=o,o=l),l=oi(t,o);var u=oi(t,r);l&&u&&(e.rangeCount!==1||e.anchorNode!==l.node||e.anchorOffset!==l.offset||e.focusNode!==u.node||e.focusOffset!==u.offset)&&(n=n.createRange(),n.setStart(l.node,l.offset),e.removeAllRanges(),o>r?(e.addRange(n),e.extend(u.node,u.offset)):(n.setEnd(u.node,u.offset),e.addRange(n)))}}for(n=[],e=t;e=e.parentNode;)e.nodeType===1&&n.push({element:e,left:e.scrollLeft,top:e.scrollTop});for(typeof t.focus=="function"&&t.focus(),t=0;t=document.documentMode,Fn=null,fo=null,Nt=null,po=!1;function ui(e,n,t){var r=t.window===t?t.document:t.nodeType===9?t:t.ownerDocument;po||Fn==null||Fn!==Tr(r)||(r=Fn,"selectionStart"in r&&lu(r)?r={start:r.selectionStart,end:r.selectionEnd}:(r=(r.ownerDocument&&r.ownerDocument.defaultView||window).getSelection(),r={anchorNode:r.anchorNode,anchorOffset:r.anchorOffset,focusNode:r.focusNode,focusOffset:r.focusOffset}),Nt&&Ut(Nt,r)||(Nt=r,r=Fr(fo,"onSelect"),0An||(e.current=wo[An],wo[An]=null,An--)}function D(e,n){An++,wo[An]=e.current,e.current=n}var pn={},le=hn(pn),de=hn(!1),Nn=pn;function bn(e,n){var t=e.type.contextTypes;if(!t)return pn;var r=e.stateNode;if(r&&r.__reactInternalMemoizedUnmaskedChildContext===n)return r.__reactInternalMemoizedMaskedChildContext;var l={},o;for(o in t)l[o]=n[o];return r&&(e=e.stateNode,e.__reactInternalMemoizedUnmaskedChildContext=n,e.__reactInternalMemoizedMaskedChildContext=l),l}function pe(e){return e=e.childContextTypes,e!=null}function $r(){F(de),F(le)}function pi(e,n,t){if(le.current!==pn)throw Error(y(168));D(le,n),D(de,t)}function qs(e,n,t){var r=e.stateNode;if(n=n.childContextTypes,typeof r.getChildContext!="function")return t;r=r.getChildContext();for(var l in r)if(!(l in n))throw Error(y(108,Oc(e)||"Unknown",l));return V({},t,r)}function Ar(e){return e=(e=e.stateNode)&&e.__reactInternalMemoizedMergedChildContext||pn,Nn=le.current,D(le,e),D(de,de.current),!0}function mi(e,n,t){var r=e.stateNode;if(!r)throw Error(y(169));t?(e=qs(e,n,Nn),r.__reactInternalMemoizedMergedChildContext=e,F(de),F(le),D(le,e)):F(de),D(de,t)}var Ve=null,ul=!1,Il=!1;function bs(e){Ve===null?Ve=[e]:Ve.push(e)}function Jf(e){ul=!0,bs(e)}function vn(){if(!Il&&Ve!==null){Il=!0;var e=0,n=M;try{var 
t=Ve;for(M=1;e>=u,l-=u,Be=1<<32-je(n)+l|t<N?(H=_,_=null):H=_.sibling;var R=p(c,_,d[N],v);if(R===null){_===null&&(_=H);break}e&&_&&R.alternate===null&&n(c,_),a=o(R,a,N),C===null?x=R:C.sibling=R,C=R,_=H}if(N===d.length)return t(c,_),U&&wn(c,N),x;if(_===null){for(;NN?(H=_,_=null):H=_.sibling;var Pe=p(c,_,R.value,v);if(Pe===null){_===null&&(_=H);break}e&&_&&Pe.alternate===null&&n(c,_),a=o(Pe,a,N),C===null?x=Pe:C.sibling=Pe,C=Pe,_=H}if(R.done)return t(c,_),U&&wn(c,N),x;if(_===null){for(;!R.done;N++,R=d.next())R=m(c,R.value,v),R!==null&&(a=o(R,a,N),C===null?x=R:C.sibling=R,C=R);return U&&wn(c,N),x}for(_=r(c,_);!R.done;N++,R=d.next())R=g(_,c,N,R.value,v),R!==null&&(e&&R.alternate!==null&&_.delete(R.key===null?N:R.key),a=o(R,a,N),C===null?x=R:C.sibling=R,C=R);return e&&_.forEach(function(st){return n(c,st)}),U&&wn(c,N),x}function j(c,a,d,v){if(typeof d=="object"&&d!==null&&d.type===Dn&&d.key===null&&(d=d.props.children),typeof d=="object"&&d!==null){switch(d.$$typeof){case tr:e:{for(var x=d.key,C=a;C!==null;){if(C.key===x){if(x=d.type,x===Dn){if(C.tag===7){t(c,C.sibling),a=l(C,d.props.children),a.return=c,c=a;break e}}else if(C.elementType===x||typeof x=="object"&&x!==null&&x.$$typeof===Je&&Si(x)===C.type){t(c,C.sibling),a=l(C,d.props),a.ref=ht(c,C,d),a.return=c,c=a;break e}t(c,C);break}else n(c,C);C=C.sibling}d.type===Dn?(a=_n(d.props.children,c.mode,v,d.key),a.return=c,c=a):(v=Lr(d.type,d.key,d.props,null,c.mode,v),v.ref=ht(c,a,d),v.return=c,c=v)}return u(c);case Mn:e:{for(C=d.key;a!==null;){if(a.key===C)if(a.tag===4&&a.stateNode.containerInfo===d.containerInfo&&a.stateNode.implementation===d.implementation){t(c,a.sibling),a=l(a,d.children||[]),a.return=c,c=a;break e}else{t(c,a);break}else n(c,a);a=a.sibling}a=Wl(d,c.mode,v),a.return=c,c=a}return u(c);case Je:return C=d._init,j(c,a,C(d._payload),v)}if(wt(d))return w(c,a,d,v);if(ct(d))return k(c,a,d,v);pr(c,d)}return typeof d=="string"&&d!==""||typeof d=="number"?(d=""+d,a!==null&&a.tag===6?(t(c,a.sibling),a=l(a,d),a.return=c,c=a):(t(c,a),a=Hl(d,c.mode,v),a.return=c,c=a),u(c)):t(c,a)}return j}var nt=ia(!0),sa=ia(!1),qt={},$e=hn(qt),Bt=hn(qt),Ht=hn(qt);function En(e){if(e===qt)throw Error(y(174));return e}function pu(e,n){switch(D(Ht,n),D(Bt,e),D($e,qt),e=n.nodeType,e){case 9:case 11:n=(n=n.documentElement)?n.namespaceURI:no(null,"");break;default:e=e===8?n.parentNode:n,n=e.namespaceURI||null,e=e.tagName,n=no(n,e)}F($e),D($e,n)}function tt(){F($e),F(Bt),F(Ht)}function aa(e){En(Ht.current);var n=En($e.current),t=no(n,e.type);n!==t&&(D(Bt,e),D($e,t))}function mu(e){Bt.current===e&&(F($e),F(Bt))}var $=hn(0);function Kr(e){for(var n=e;n!==null;){if(n.tag===13){var t=n.memoizedState;if(t!==null&&(t=t.dehydrated,t===null||t.data==="$?"||t.data==="$!"))return n}else if(n.tag===19&&n.memoizedProps.revealOrder!==void 0){if(n.flags&128)return n}else if(n.child!==null){n.child.return=n,n=n.child;continue}if(n===e)break;for(;n.sibling===null;){if(n.return===null||n.return===e)return null;n=n.return}n.sibling.return=n.return,n=n.sibling}return null}var Fl=[];function hu(){for(var e=0;et?t:4,e(!0);var r=Ul.transition;Ul.transition={};try{e(!1),n()}finally{M=t,Ul.transition=r}}function _a(){return Ne().memoizedState}function nd(e,n,t){var r=cn(e);if(t={lane:r,action:t,hasEagerState:!1,eagerState:null,next:null},Na(e))Pa(n,t);else if(t=ra(e,n,t,r),t!==null){var l=ue();Oe(t,e,r,l),za(t,n,r)}}function td(e,n,t){var r=cn(e),l={lane:r,action:t,hasEagerState:!1,eagerState:null,next:null};if(Na(e))Pa(n,l);else{var 
o=e.alternate;if(e.lanes===0&&(o===null||o.lanes===0)&&(o=n.lastRenderedReducer,o!==null))try{var u=n.lastRenderedState,i=o(u,t);if(l.hasEagerState=!0,l.eagerState=i,Me(i,u)){var s=n.interleaved;s===null?(l.next=l,fu(n)):(l.next=s.next,s.next=l),n.interleaved=l;return}}catch{}finally{}t=ra(e,n,l,r),t!==null&&(l=ue(),Oe(t,e,r,l),za(t,n,r))}}function Na(e){var n=e.alternate;return e===A||n!==null&&n===A}function Pa(e,n){Pt=Xr=!0;var t=e.pending;t===null?n.next=n:(n.next=t.next,t.next=n),e.pending=n}function za(e,n,t){if(t&4194240){var r=n.lanes;r&=e.pendingLanes,t|=r,n.lanes=t,Jo(e,t)}}var Yr={readContext:_e,useCallback:ne,useContext:ne,useEffect:ne,useImperativeHandle:ne,useInsertionEffect:ne,useLayoutEffect:ne,useMemo:ne,useReducer:ne,useRef:ne,useState:ne,useDebugValue:ne,useDeferredValue:ne,useTransition:ne,useMutableSource:ne,useSyncExternalStore:ne,useId:ne,unstable_isNewReconciler:!1},rd={readContext:_e,useCallback:function(e,n){return Ie().memoizedState=[e,n===void 0?null:n],e},useContext:_e,useEffect:Ei,useImperativeHandle:function(e,n,t){return t=t!=null?t.concat([e]):null,_r(4194308,4,ka.bind(null,n,e),t)},useLayoutEffect:function(e,n){return _r(4194308,4,e,n)},useInsertionEffect:function(e,n){return _r(4,2,e,n)},useMemo:function(e,n){var t=Ie();return n=n===void 0?null:n,e=e(),t.memoizedState=[e,n],e},useReducer:function(e,n,t){var r=Ie();return n=t!==void 0?t(n):n,r.memoizedState=r.baseState=n,e={pending:null,interleaved:null,lanes:0,dispatch:null,lastRenderedReducer:e,lastRenderedState:n},r.queue=e,e=e.dispatch=nd.bind(null,A,e),[r.memoizedState,e]},useRef:function(e){var n=Ie();return e={current:e},n.memoizedState=e},useState:xi,useDebugValue:ku,useDeferredValue:function(e){return Ie().memoizedState=e},useTransition:function(){var e=xi(!1),n=e[0];return e=ed.bind(null,e[1]),Ie().memoizedState=e,[n,e]},useMutableSource:function(){},useSyncExternalStore:function(e,n,t){var r=A,l=Ie();if(U){if(t===void 0)throw Error(y(407));t=t()}else{if(t=n(),J===null)throw Error(y(349));zn&30||da(r,n,t)}l.memoizedState=t;var o={value:t,getSnapshot:n};return l.queue=o,Ei(ma.bind(null,r,o,e),[e]),r.flags|=2048,Kt(9,pa.bind(null,r,o,t,n),void 0,null),t},useId:function(){var e=Ie(),n=J.identifierPrefix;if(U){var t=He,r=Be;t=(r&~(1<<32-je(r)-1)).toString(32)+t,n=":"+n+"R"+t,t=Wt++,0<\/script>",e=e.removeChild(e.firstChild)):typeof r.is=="string"?e=u.createElement(t,{is:r.is}):(e=u.createElement(t),t==="select"&&(u=e,r.multiple?u.multiple=!0:r.size&&(u.size=r.size))):e=u.createElementNS(e,t),e[Fe]=n,e[Vt]=r,Fa(e,n,!1,!1),n.stateNode=e;e:{switch(u=ro(t,r),t){case"dialog":I("cancel",e),I("close",e),l=r;break;case"iframe":case"object":case"embed":I("load",e),l=r;break;case"video":case"audio":for(l=0;llt&&(n.flags|=128,r=!0,vt(o,!1),n.lanes=4194304)}else{if(!r)if(e=Kr(u),e!==null){if(n.flags|=128,r=!0,t=e.updateQueue,t!==null&&(n.updateQueue=t,n.flags|=4),vt(o,!0),o.tail===null&&o.tailMode==="hidden"&&!u.alternate&&!U)return te(n),null}else 2*Q()-o.renderingStartTime>lt&&t!==1073741824&&(n.flags|=128,r=!0,vt(o,!1),n.lanes=4194304);o.isBackwards?(u.sibling=n.child,n.child=u):(t=o.last,t!==null?t.sibling=u:n.child=u,o.last=u)}return o.tail!==null?(n=o.tail,o.rendering=n,o.tail=n.sibling,o.renderingStartTime=Q(),n.sibling=null,t=$.current,D($,r?t&1|2:t&1),n):(te(n),null);case 22:case 23:return Nu(),r=n.memoizedState!==null,e!==null&&e.memoizedState!==null!==r&&(n.flags|=8192),r&&n.mode&1?he&1073741824&&(te(n),n.subtreeFlags&6&&(n.flags|=8192)):te(n),null;case 24:return null;case 25:return null}throw 
Error(y(156,n.tag))}function fd(e,n){switch(uu(n),n.tag){case 1:return pe(n.type)&&$r(),e=n.flags,e&65536?(n.flags=e&-65537|128,n):null;case 3:return tt(),F(de),F(le),hu(),e=n.flags,e&65536&&!(e&128)?(n.flags=e&-65537|128,n):null;case 5:return mu(n),null;case 13:if(F($),e=n.memoizedState,e!==null&&e.dehydrated!==null){if(n.alternate===null)throw Error(y(340));et()}return e=n.flags,e&65536?(n.flags=e&-65537|128,n):null;case 19:return F($),null;case 4:return tt(),null;case 10:return cu(n.type._context),null;case 22:case 23:return Nu(),null;case 24:return null;default:return null}}var hr=!1,re=!1,dd=typeof WeakSet=="function"?WeakSet:Set,S=null;function Wn(e,n){var t=e.ref;if(t!==null)if(typeof t=="function")try{t(null)}catch(r){B(e,n,r)}else t.current=null}function Ro(e,n,t){try{t()}catch(r){B(e,n,r)}}var ji=!1;function pd(e,n){if(mo=Dr,e=Hs(),lu(e)){if("selectionStart"in e)var t={start:e.selectionStart,end:e.selectionEnd};else e:{t=(t=e.ownerDocument)&&t.defaultView||window;var r=t.getSelection&&t.getSelection();if(r&&r.rangeCount!==0){t=r.anchorNode;var l=r.anchorOffset,o=r.focusNode;r=r.focusOffset;try{t.nodeType,o.nodeType}catch{t=null;break e}var u=0,i=-1,s=-1,f=0,h=0,m=e,p=null;n:for(;;){for(var g;m!==t||l!==0&&m.nodeType!==3||(i=u+l),m!==o||r!==0&&m.nodeType!==3||(s=u+r),m.nodeType===3&&(u+=m.nodeValue.length),(g=m.firstChild)!==null;)p=m,m=g;for(;;){if(m===e)break n;if(p===t&&++f===l&&(i=u),p===o&&++h===r&&(s=u),(g=m.nextSibling)!==null)break;m=p,p=m.parentNode}m=g}t=i===-1||s===-1?null:{start:i,end:s}}else t=null}t=t||{start:0,end:0}}else t=null;for(ho={focusedElem:e,selectionRange:t},Dr=!1,S=n;S!==null;)if(n=S,e=n.child,(n.subtreeFlags&1028)!==0&&e!==null)e.return=n,S=e;else for(;S!==null;){n=S;try{var w=n.alternate;if(n.flags&1024)switch(n.tag){case 0:case 11:case 15:break;case 1:if(w!==null){var k=w.memoizedProps,j=w.memoizedState,c=n.stateNode,a=c.getSnapshotBeforeUpdate(n.elementType===n.type?k:Le(n.type,k),j);c.__reactInternalSnapshotBeforeUpdate=a}break;case 3:var d=n.stateNode.containerInfo;d.nodeType===1?d.textContent="":d.nodeType===9&&d.documentElement&&d.removeChild(d.documentElement);break;case 5:case 6:case 4:case 17:break;default:throw Error(y(163))}}catch(v){B(n,n.return,v)}if(e=n.sibling,e!==null){e.return=n.return,S=e;break}S=n.return}return w=ji,ji=!1,w}function zt(e,n,t){var r=n.updateQueue;if(r=r!==null?r.lastEffect:null,r!==null){var l=r=r.next;do{if((l.tag&e)===e){var o=l.destroy;l.destroy=void 0,o!==void 0&&Ro(n,t,o)}l=l.next}while(l!==r)}}function al(e,n){if(n=n.updateQueue,n=n!==null?n.lastEffect:null,n!==null){var t=n=n.next;do{if((t.tag&e)===e){var r=t.create;t.destroy=r()}t=t.next}while(t!==n)}}function jo(e){var n=e.ref;if(n!==null){var t=e.stateNode;switch(e.tag){case 5:e=t;break;default:e=t}typeof n=="function"?n(e):n.current=e}}function Aa(e){var n=e.alternate;n!==null&&(e.alternate=null,Aa(n)),e.child=null,e.deletions=null,e.sibling=null,e.tag===5&&(n=e.stateNode,n!==null&&(delete n[Fe],delete n[Vt],delete n[go],delete n[Gf],delete n[Zf])),e.stateNode=null,e.return=null,e.dependencies=null,e.memoizedProps=null,e.memoizedState=null,e.pendingProps=null,e.stateNode=null,e.updateQueue=null}function Va(e){return e.tag===5||e.tag===3||e.tag===4}function Oi(e){e:for(;;){for(;e.sibling===null;){if(e.return===null||Va(e.return))return null;e=e.return}for(e.sibling.return=e.return,e=e.sibling;e.tag!==5&&e.tag!==6&&e.tag!==18;){if(e.flags&2||e.child===null||e.tag===4)continue e;e.child.return=e,e=e.child}if(!(e.flags&2))return e.stateNode}}function 
Oo(e,n,t){var r=e.tag;if(r===5||r===6)e=e.stateNode,n?t.nodeType===8?t.parentNode.insertBefore(e,n):t.insertBefore(e,n):(t.nodeType===8?(n=t.parentNode,n.insertBefore(e,t)):(n=t,n.appendChild(e)),t=t._reactRootContainer,t!=null||n.onclick!==null||(n.onclick=Ur));else if(r!==4&&(e=e.child,e!==null))for(Oo(e,n,t),e=e.sibling;e!==null;)Oo(e,n,t),e=e.sibling}function Mo(e,n,t){var r=e.tag;if(r===5||r===6)e=e.stateNode,n?t.insertBefore(e,n):t.appendChild(e);else if(r!==4&&(e=e.child,e!==null))for(Mo(e,n,t),e=e.sibling;e!==null;)Mo(e,n,t),e=e.sibling}var q=null,Te=!1;function Ze(e,n,t){for(t=t.child;t!==null;)Ba(e,n,t),t=t.sibling}function Ba(e,n,t){if(Ue&&typeof Ue.onCommitFiberUnmount=="function")try{Ue.onCommitFiberUnmount(nl,t)}catch{}switch(t.tag){case 5:re||Wn(t,n);case 6:var r=q,l=Te;q=null,Ze(e,n,t),q=r,Te=l,q!==null&&(Te?(e=q,t=t.stateNode,e.nodeType===8?e.parentNode.removeChild(t):e.removeChild(t)):q.removeChild(t.stateNode));break;case 18:q!==null&&(Te?(e=q,t=t.stateNode,e.nodeType===8?Dl(e.parentNode,t):e.nodeType===1&&Dl(e,t),It(e)):Dl(q,t.stateNode));break;case 4:r=q,l=Te,q=t.stateNode.containerInfo,Te=!0,Ze(e,n,t),q=r,Te=l;break;case 0:case 11:case 14:case 15:if(!re&&(r=t.updateQueue,r!==null&&(r=r.lastEffect,r!==null))){l=r=r.next;do{var o=l,u=o.destroy;o=o.tag,u!==void 0&&(o&2||o&4)&&Ro(t,n,u),l=l.next}while(l!==r)}Ze(e,n,t);break;case 1:if(!re&&(Wn(t,n),r=t.stateNode,typeof r.componentWillUnmount=="function"))try{r.props=t.memoizedProps,r.state=t.memoizedState,r.componentWillUnmount()}catch(i){B(t,n,i)}Ze(e,n,t);break;case 21:Ze(e,n,t);break;case 22:t.mode&1?(re=(r=re)||t.memoizedState!==null,Ze(e,n,t),re=r):Ze(e,n,t);break;default:Ze(e,n,t)}}function Mi(e){var n=e.updateQueue;if(n!==null){e.updateQueue=null;var t=e.stateNode;t===null&&(t=e.stateNode=new dd),n.forEach(function(r){var l=xd.bind(null,e,r);t.has(r)||(t.add(r),r.then(l,l))})}}function ze(e,n){var t=n.deletions;if(t!==null)for(var r=0;rl&&(l=u),r&=~o}if(r=l,r=Q()-r,r=(120>r?120:480>r?480:1080>r?1080:1920>r?1920:3e3>r?3e3:4320>r?4320:1960*hd(r/1960))-r,10e?16:e,tn===null)var r=!1;else{if(e=tn,tn=null,Jr=0,O&6)throw Error(y(331));var l=O;for(O|=4,S=e.current;S!==null;){var o=S,u=o.child;if(S.flags&16){var i=o.deletions;if(i!==null){for(var s=0;sQ()-Cu?Cn(e,0):Eu|=t),me(e,n)}function Za(e,n){n===0&&(e.mode&1?(n=ur,ur<<=1,!(ur&130023424)&&(ur=4194304)):n=1);var t=ue();e=Xe(e,n),e!==null&&(Gt(e,n,t),me(e,t))}function Sd(e){var n=e.memoizedState,t=0;n!==null&&(t=n.retryLane),Za(e,t)}function xd(e,n){var t=0;switch(e.tag){case 13:var r=e.stateNode,l=e.memoizedState;l!==null&&(t=l.retryLane);break;case 19:r=e.stateNode;break;default:throw Error(y(314))}r!==null&&r.delete(n),Za(e,t)}var Ja;Ja=function(e,n,t){if(e!==null)if(e.memoizedProps!==n.pendingProps||de.current)fe=!0;else{if(!(e.lanes&t)&&!(n.flags&128))return fe=!1,ad(e,n,t);fe=!!(e.flags&131072)}else fe=!1,U&&n.flags&1048576&&ea(n,Br,n.index);switch(n.lanes=0,n.tag){case 2:var r=n.type;Nr(e,n),e=n.pendingProps;var l=bn(n,le.current);Zn(n,t),l=yu(null,n,r,e,l,t);var o=gu();return n.flags|=1,typeof l=="object"&&l!==null&&typeof l.render=="function"&&l.$$typeof===void 0?(n.tag=1,n.memoizedState=null,n.updateQueue=null,pe(r)?(o=!0,Ar(n)):o=!1,n.memoizedState=l.state!==null&&l.state!==void 0?l.state:null,du(n),l.updater=il,n.stateNode=l,l._reactInternals=n,Co(n,r,e,t),n=Po(null,n,r,!0,o,t)):(n.tag=0,U&&o&&ou(n),oe(null,n,l,t),n=n.child),n;case 16:r=n.elementType;e:{switch(Nr(e,n),e=n.pendingProps,l=r._init,r=l(r._payload),n.type=r,l=n.tag=Cd(r),e=Le(r,e),l){case 
0:n=No(null,n,r,e,t);break e;case 1:n=Li(null,n,r,e,t);break e;case 11:n=Pi(null,n,r,e,t);break e;case 14:n=zi(null,n,r,Le(r.type,e),t);break e}throw Error(y(306,r,""))}return n;case 0:return r=n.type,l=n.pendingProps,l=n.elementType===r?l:Le(r,l),No(e,n,r,l,t);case 1:return r=n.type,l=n.pendingProps,l=n.elementType===r?l:Le(r,l),Li(e,n,r,l,t);case 3:e:{if(Ma(n),e===null)throw Error(y(387));r=n.pendingProps,o=n.memoizedState,l=o.element,la(e,n),Qr(n,r,null,t);var u=n.memoizedState;if(r=u.element,o.isDehydrated)if(o={element:r,isDehydrated:!1,cache:u.cache,pendingSuspenseBoundaries:u.pendingSuspenseBoundaries,transitions:u.transitions},n.updateQueue.baseState=o,n.memoizedState=o,n.flags&256){l=rt(Error(y(423)),n),n=Ti(e,n,r,t,l);break e}else if(r!==l){l=rt(Error(y(424)),n),n=Ti(e,n,r,t,l);break e}else for(ve=un(n.stateNode.containerInfo.firstChild),ye=n,U=!0,Re=null,t=sa(n,null,r,t),n.child=t;t;)t.flags=t.flags&-3|4096,t=t.sibling;else{if(et(),r===l){n=Ye(e,n,t);break e}oe(e,n,r,t)}n=n.child}return n;case 5:return aa(n),e===null&&So(n),r=n.type,l=n.pendingProps,o=e!==null?e.memoizedProps:null,u=l.children,vo(r,l)?u=null:o!==null&&vo(r,o)&&(n.flags|=32),Oa(e,n),oe(e,n,u,t),n.child;case 6:return e===null&&So(n),null;case 13:return Da(e,n,t);case 4:return pu(n,n.stateNode.containerInfo),r=n.pendingProps,e===null?n.child=nt(n,null,r,t):oe(e,n,r,t),n.child;case 11:return r=n.type,l=n.pendingProps,l=n.elementType===r?l:Le(r,l),Pi(e,n,r,l,t);case 7:return oe(e,n,n.pendingProps,t),n.child;case 8:return oe(e,n,n.pendingProps.children,t),n.child;case 12:return oe(e,n,n.pendingProps.children,t),n.child;case 10:e:{if(r=n.type._context,l=n.pendingProps,o=n.memoizedProps,u=l.value,D(Hr,r._currentValue),r._currentValue=u,o!==null)if(Me(o.value,u)){if(o.children===l.children&&!de.current){n=Ye(e,n,t);break e}}else for(o=n.child,o!==null&&(o.return=n);o!==null;){var i=o.dependencies;if(i!==null){u=o.child;for(var s=i.firstContext;s!==null;){if(s.context===r){if(o.tag===1){s=We(-1,t&-t),s.tag=2;var f=o.updateQueue;if(f!==null){f=f.shared;var h=f.pending;h===null?s.next=s:(s.next=h.next,h.next=s),f.pending=s}}o.lanes|=t,s=o.alternate,s!==null&&(s.lanes|=t),xo(o.return,t,n),i.lanes|=t;break}s=s.next}}else if(o.tag===10)u=o.type===n.type?null:o.child;else if(o.tag===18){if(u=o.return,u===null)throw Error(y(341));u.lanes|=t,i=u.alternate,i!==null&&(i.lanes|=t),xo(u,t,n),u=o.sibling}else u=o.child;if(u!==null)u.return=o;else for(u=o;u!==null;){if(u===n){u=null;break}if(o=u.sibling,o!==null){o.return=u.return,u=o;break}u=u.return}o=u}oe(e,n,l.children,t),n=n.child}return n;case 9:return l=n.type,r=n.pendingProps.children,Zn(n,t),l=_e(l),r=r(l),n.flags|=1,oe(e,n,r,t),n.child;case 14:return r=n.type,l=Le(r,n.pendingProps),l=Le(r.type,l),zi(e,n,r,l,t);case 15:return Ra(e,n,n.type,n.pendingProps,t);case 17:return r=n.type,l=n.pendingProps,l=n.elementType===r?l:Le(r,l),Nr(e,n),n.tag=1,pe(r)?(e=!0,Ar(n)):e=!1,Zn(n,t),ua(n,r,l),Co(n,r,l,t),Po(null,n,r,!0,e,t);case 19:return Ia(e,n,t);case 22:return ja(e,n,t)}throw Error(y(156,n.tag))};function qa(e,n){return Cs(e,n)}function Ed(e,n,t,r){this.tag=e,this.key=t,this.sibling=this.child=this.return=this.stateNode=this.type=this.elementType=null,this.index=0,this.ref=null,this.pendingProps=n,this.dependencies=this.memoizedState=this.updateQueue=this.memoizedProps=null,this.mode=r,this.subtreeFlags=this.flags=0,this.deletions=null,this.childLanes=this.lanes=0,this.alternate=null}function Ee(e,n,t,r){return new Ed(e,n,t,r)}function zu(e){return 
e=e.prototype,!(!e||!e.isReactComponent)}function Cd(e){if(typeof e=="function")return zu(e)?1:0;if(e!=null){if(e=e.$$typeof,e===Xo)return 11;if(e===Yo)return 14}return 2}function fn(e,n){var t=e.alternate;return t===null?(t=Ee(e.tag,n,e.key,e.mode),t.elementType=e.elementType,t.type=e.type,t.stateNode=e.stateNode,t.alternate=e,e.alternate=t):(t.pendingProps=n,t.type=e.type,t.flags=0,t.subtreeFlags=0,t.deletions=null),t.flags=e.flags&14680064,t.childLanes=e.childLanes,t.lanes=e.lanes,t.child=e.child,t.memoizedProps=e.memoizedProps,t.memoizedState=e.memoizedState,t.updateQueue=e.updateQueue,n=e.dependencies,t.dependencies=n===null?null:{lanes:n.lanes,firstContext:n.firstContext},t.sibling=e.sibling,t.index=e.index,t.ref=e.ref,t}function Lr(e,n,t,r,l,o){var u=2;if(r=e,typeof e=="function")zu(e)&&(u=1);else if(typeof e=="string")u=5;else e:switch(e){case Dn:return _n(t.children,l,o,n);case Ko:u=8,l|=8;break;case Xl:return e=Ee(12,t,n,l|2),e.elementType=Xl,e.lanes=o,e;case Yl:return e=Ee(13,t,n,l),e.elementType=Yl,e.lanes=o,e;case Gl:return e=Ee(19,t,n,l),e.elementType=Gl,e.lanes=o,e;case is:return fl(t,l,o,n);default:if(typeof e=="object"&&e!==null)switch(e.$$typeof){case os:u=10;break e;case us:u=9;break e;case Xo:u=11;break e;case Yo:u=14;break e;case Je:u=16,r=null;break e}throw Error(y(130,e==null?e:typeof e,""))}return n=Ee(u,t,n,l),n.elementType=e,n.type=r,n.lanes=o,n}function _n(e,n,t,r){return e=Ee(7,e,r,n),e.lanes=t,e}function fl(e,n,t,r){return e=Ee(22,e,r,n),e.elementType=is,e.lanes=t,e.stateNode={isHidden:!1},e}function Hl(e,n,t){return e=Ee(6,e,null,n),e.lanes=t,e}function Wl(e,n,t){return n=Ee(4,e.children!==null?e.children:[],e.key,n),n.lanes=t,n.stateNode={containerInfo:e.containerInfo,pendingChildren:null,implementation:e.implementation},n}function _d(e,n,t,r,l){this.tag=n,this.containerInfo=e,this.finishedWork=this.pingCache=this.current=this.pendingChildren=null,this.timeoutHandle=-1,this.callbackNode=this.pendingContext=this.context=null,this.callbackPriority=0,this.eventTimes=Cl(0),this.expirationTimes=Cl(-1),this.entangledLanes=this.finishedLanes=this.mutableReadLanes=this.expiredLanes=this.pingedLanes=this.suspendedLanes=this.pendingLanes=0,this.entanglements=Cl(0),this.identifierPrefix=r,this.onRecoverableError=l,this.mutableSourceEagerHydrationData=null}function Lu(e,n,t,r,l,o,u,i,s){return e=new _d(e,n,t,i,s),n===1?(n=1,o===!0&&(n|=8)):n=0,o=Ee(3,null,null,n),e.current=o,o.stateNode=e,o.memoizedState={element:r,isDehydrated:t,cache:null,transitions:null,pendingSuspenseBoundaries:null},du(o),e}function Nd(e,n,t){var r=3"u"||typeof __REACT_DEVTOOLS_GLOBAL_HOOK__.checkDCE!="function"))try{__REACT_DEVTOOLS_GLOBAL_HOOK__.checkDCE(tc)}catch(e){console.error(e)}}tc(),es.exports=we;var Rd=es.exports,Bi=Rd;Ql.createRoot=Bi.createRoot,Ql.hydrateRoot=Bi.hydrateRoot;const Hi=["bg-purple-300","bg-green-300","bg-yellow-300","bg-red-300","bg-blue-300"];function jd({text:e,position:n,margin:t}){return e!==` -`?L.jsx("span",{style:{marginLeft:t},className:`leading-5 inline-block ${Hi[n%Hi.length]}`,children:e}):L.jsx("br",{})}function Od(){var k;const[e,n]=ae.useState([]),[t,r]=ae.useState([]),[l,o]=ae.useState([]),[u,i]=ae.useState("text"),[s,f]=ae.useState("Xenova/gpt-4"),h=ae.useRef(null),m=ae.useRef(null),p=ae.useRef(null);ae.useEffect(()=>{p.current||(p.current=new Worker(new URL("/assets/worker-d3671fec.js",self.location),{type:"module"}));const j=c=>{n(c.data.token_ids),r(c.data.decoded),o(c.data.margins)};return 
p.current.addEventListener("message",j),()=>p.current.removeEventListener("message",j)},[]);const g=ae.useCallback(j=>{const c=s,a=j.target.value;a.length>1e4&&(i(null),console.log("User most likely pasted in a large body of text (> 10k chars), so we hide the output (until specifically requested by the user).")),p.current.postMessage({model_id:c,text:a})},[s]),w=ae.useCallback(j=>{const c=j.target.value;f(c),p.current.postMessage({model_id:c,text:h.current.value})},[]);return L.jsxs("div",{className:"w-full max-w-[720px] flex flex-col gap-4 items-center",children:[L.jsxs("div",{children:[L.jsx("h1",{className:"text-5xl font-bold mb-2",children:"The Tokenizer Playground"}),L.jsxs("h2",{className:"text-lg font-normal",children:["Experiment with different tokenizers (running ",L.jsx("a",{className:"text-gray-900 underline",href:"https://github.com/xenova/transformers.js",children:"locally"})," in your browser)."]})]}),L.jsx("div",{children:L.jsxs("select",{value:s,onChange:w,className:"bg-gray-50 border border-gray-300 text-gray-900 text-sm rounded-lg focus:ring-blue-500 focus:border-blue-500 block w-full p-2",children:[L.jsx("option",{value:"Xenova/gpt-4",children:"gpt-4 / gpt-3.5-turbo / text-embedding-ada-002"}),L.jsx("option",{value:"Xenova/text-davinci-003",children:"text-davinci-003 / text-davinci-002"}),L.jsx("option",{value:"Xenova/gpt-3",children:"gpt-3"}),L.jsx("option",{value:"hf-internal-testing/llama-tokenizer",children:"LLaMA / Llama 2"}),L.jsx("option",{value:"Xenova/t5-small",children:"T5"}),L.jsx("option",{value:"Xenova/bert-base-cased",children:"bert-base-cased"})]})}),L.jsx("textarea",{ref:h,onChange:g,rows:"8",className:"font-mono text-lg block w-full p-2.5 text-gray-900 bg-gray-50 rounded-lg border border-gray-200",placeholder:"Enter some text"}),L.jsxs("div",{className:"flex justify-center gap-5",children:[L.jsxs("div",{className:"flex flex-col",children:[L.jsx("h2",{className:"font-semibold uppercase leading-4",children:"Tokens"}),L.jsx("h3",{className:"font-semibold text-3xl",children:e.length.toLocaleString()})]}),L.jsxs("div",{className:"flex flex-col",children:[L.jsx("h2",{className:"font-semibold uppercase leading-4",children:"Characters"}),L.jsx("h3",{className:"font-semibold text-3xl",children:(((k=h.current)==null?void 0:k.value.length)??0).toLocaleString()})]})]}),L.jsx("div",{ref:m,className:"font-mono text-lg p-2.5 w-full bg-gray-100 rounded-lg border border-gray-200 whitespace-pre-wrap text-left h-[200px] overflow-y-auto",children:u==="text"?t.map((j,c)=>L.jsx(jd,{text:j,position:c,margin:l[c]},c)):u==="token_ids"?`[${e.join(", ")}]`:null}),L.jsxs("div",{className:"flex items-center gap-2 self-end",children:[L.jsxs("div",{className:"flex items-center",children:[L.jsx("input",{checked:u==="text",onChange:()=>i("text"),id:"output-radio-1",type:"radio",value:"",name:"output-radio",className:"w-4 h-4 text-blue-600 bg-gray-100 border-gray-300 focus:ring-blue-500"}),L.jsx("label",{htmlFor:"output-radio-1",className:"ml-1 text-sm font-medium text-gray-900 dark:text-gray-300",children:"Text"})]}),L.jsxs("div",{className:"flex items-center",children:[L.jsx("input",{checked:u==="token_ids",onChange:()=>i("token_ids"),id:"output-radio-2",type:"radio",value:"",name:"output-radio",className:"w-4 h-4 text-blue-600 bg-gray-100 border-gray-300 focus:ring-blue-500"}),L.jsx("label",{htmlFor:"output-radio-2",className:"ml-1 text-sm font-medium text-gray-900 dark:text-gray-300",children:"Token IDs"})]}),L.jsxs("div",{className:"flex 
items-center",children:[L.jsx("input",{checked:u===null,onChange:()=>i(null),id:"output-radio-3",type:"radio",value:"",name:"output-radio",className:"w-4 h-4 text-blue-600 bg-gray-100 border-gray-300 focus:ring-blue-500"}),L.jsx("label",{htmlFor:"output-radio-3",className:"ml-1 text-sm font-medium text-gray-900 dark:text-gray-300",children:"Hide"})]})]})]})}Ql.createRoot(document.getElementById("root")).render(L.jsx(kc.StrictMode,{children:L.jsx(Od,{})})); diff --git a/spaces/Xenova/the-tokenizer-playground/assets/worker-d3671fec.js b/spaces/Xenova/the-tokenizer-playground/assets/worker-d3671fec.js deleted file mode 100644 index d0be752ae92493a24289512157dd9b2529f248bd..0000000000000000000000000000000000000000 --- a/spaces/Xenova/the-tokenizer-playground/assets/worker-d3671fec.js +++ /dev/null @@ -1,1790 +0,0 @@ -var fn=Object.defineProperty;var gn=(tt,y,n)=>y in tt?fn(tt,y,{enumerable:!0,configurable:!0,writable:!0,value:n}):tt[y]=n;var jt=(tt,y,n)=>(gn(tt,typeof y!="symbol"?y+"":y,n),n);(function(){var tt;"use strict";function _mergeNamespaces(y,n){return n.forEach(function(u){u&&typeof u!="string"&&!Array.isArray(u)&&Object.keys(u).forEach(function(d){if(d!=="default"&&!(d in y)){var l=Object.getOwnPropertyDescriptor(u,d);Object.defineProperty(y,d,l.get?l:{enumerable:!0,get:function(){return u[d]}})}})}),Object.freeze(y)}function dispatchCallback(y,n){y!==null&&y(n)}function reverseDictionary(y){return Object.fromEntries(Object.entries(y).map(([n,u])=>[u,n]))}function escapeRegExp(y){return y.replace(/[.*+?^${}()|[\]\\]/g,"\\$&")}const Callable=class{constructor(){let y=function(...n){return y._call(...n)};return Object.setPrototypeOf(y,new.target.prototype)}_call(...y){throw Error("Must implement _call method in subclass")}};function isTypedArray(y){var n,u,d;return((d=(u=(n=y==null?void 0:y.prototype)==null?void 0:n.__proto__)==null?void 0:u.constructor)==null?void 0:d.name)==="TypedArray"}function isIntegralNumber(y){return Number.isInteger(y)||typeof y=="bigint"}function exists(y){return y!=null}function mergeArrays(...y){return Array.prototype.concat.apply([],y)}var sharp={},ONNX_NODE=Object.freeze({__proto__:null,default:sharp});function getDefaultExportFromCjs(y){return y&&y.__esModule&&Object.prototype.hasOwnProperty.call(y,"default")?y.default:y}function getAugmentedNamespace(y){if(y.__esModule)return y;var n=y.default;if(typeof n=="function"){var u=function d(){return this instanceof d?Reflect.construct(n,arguments,this.constructor):n.apply(this,arguments)};u.prototype=n.prototype}else u={};return Object.defineProperty(u,"__esModule",{value:!0}),Object.keys(y).forEach(function(d){var l=Object.getOwnPropertyDescriptor(y,d);Object.defineProperty(u,d,l.get?l:{enumerable:!0,get:function(){return y[d]}})}),u}var ortWeb_min$1={exports:{}};const backends={},backendsSortedByPriority=[],registerBackend=(y,n,u)=>{if(n&&typeof n.init=="function"&&typeof n.createSessionHandler=="function"){const d=backends[y];if(d===void 0)backends[y]={backend:n,priority:u};else{if(d.priority>u)return;if(d.priority===u&&d.backend!==n)throw new Error(`cannot register backend "${y}" using priority ${u}`)}if(u>=0){const l=backendsSortedByPriority.indexOf(y);l!==-1&&backendsSortedByPriority.splice(l,1);for(let p=0;p{const n=y.length===0?backendsSortedByPriority:y,u=[];for(const d of n){const l=backends[d];if(l){if(l.initialized)return l.backend;if(l.aborted)continue;const p=!!l.initPromise;try{return p||(l.initPromise=l.backend.init()),await 
l.initPromise,l.initialized=!0,l.backend}catch(s){p||u.push({name:d,err:s}),l.aborted=!0}finally{delete l.initPromise}}}throw new Error(`no available backend found. ERR: ${u.map(d=>`[${d.name}] ${d.err}`).join(", ")}`)};class EnvImpl{constructor(){this.wasm={},this.webgl={},this.logLevelInternal="warning"}set logLevel(n){if(n!==void 0){if(typeof n!="string"||["verbose","info","warning","error","fatal"].indexOf(n)===-1)throw new Error(`Unsupported logging level: ${n}`);this.logLevelInternal=n}}get logLevel(){return this.logLevelInternal}}const env$1=new EnvImpl,isBigInt64ArrayAvailable=typeof BigInt64Array<"u"&&typeof BigInt64Array.from=="function",isBigUint64ArrayAvailable=typeof BigUint64Array<"u"&&typeof BigUint64Array.from=="function",NUMERIC_TENSOR_TYPE_TO_TYPEDARRAY_MAP=new Map([["float32",Float32Array],["uint8",Uint8Array],["int8",Int8Array],["uint16",Uint16Array],["int16",Int16Array],["int32",Int32Array],["bool",Uint8Array],["float64",Float64Array],["uint32",Uint32Array]]),NUMERIC_TENSOR_TYPEDARRAY_TO_TYPE_MAP=new Map([[Float32Array,"float32"],[Uint8Array,"uint8"],[Int8Array,"int8"],[Uint16Array,"uint16"],[Int16Array,"int16"],[Int32Array,"int32"],[Float64Array,"float64"],[Uint32Array,"uint32"]]);isBigInt64ArrayAvailable&&(NUMERIC_TENSOR_TYPE_TO_TYPEDARRAY_MAP.set("int64",BigInt64Array),NUMERIC_TENSOR_TYPEDARRAY_TO_TYPE_MAP.set(BigInt64Array,"int64")),isBigUint64ArrayAvailable&&(NUMERIC_TENSOR_TYPE_TO_TYPEDARRAY_MAP.set("uint64",BigUint64Array),NUMERIC_TENSOR_TYPEDARRAY_TO_TYPE_MAP.set(BigUint64Array,"uint64"));const calculateSize=y=>{let n=1;for(let u=0;u{const t=document.createElement("canvas"),e=t.getContext("2d");if(!n||!e)return o();const r=new Image;r.crossOrigin="Anonymous",r.src=n,r.onload=()=>{t.width=r.width,t.height=r.height,e.drawImage(r,0,0,t.width,t.height);const i=e.getImageData(0,0,t.width,t.height);if(u!==void 0){if(u.height!==void 0&&u.height!==t.height)throw new Error("Image input config height doesn't match ImageBitmap height");if(f.height=t.height,u.width!==void 0&&u.width!==t.width)throw new Error("Image input config width doesn't match ImageBitmap width");f.width=t.width}else f.height=t.height,f.width=t.width;a(at.bufferToTensor(i.data,f))}});throw new Error("Input data provided is not supported - aborted tensor creation")}if(h!==void 0)return at.bufferToTensor(h,f);throw new Error("Input data provided is not supported - aborted tensor creation")}toImageData(n){var u,d;const l=document.createElement("canvas").getContext("2d");let p;if(l!=null){const s=this.dims[3],h=this.dims[2],f=this.dims[1],a=n!==void 0&&n.format!==void 0?n.format:"RGB",o=n!==void 0&&((u=n.norm)===null||u===void 0?void 0:u.mean)!==void 0?n.norm.mean:255,t=n!==void 0&&((d=n.norm)===null||d===void 0?void 0:d.bias)!==void 0?n.norm.bias:0,e=h*s;if(n!==void 0){if(n.height!==void 0&&n.height!==h)throw new Error("Image output config height doesn't match tensor height");if(n.width!==void 0&&n.width!==s)throw new Error("Image output config width doesn't match tensor width");if(n.format!==void 0&&f===4&&n.format!=="RGBA"||f===3&&n.format!=="RGB"&&n.format!=="BGR")throw new Error("Tensor format doesn't match input tensor dims")}const r=4;let i=0,c=1,g=2,m=3,b=0,_=e,w=e*2,v=-1;a==="RGBA"?(b=0,_=e,w=e*2,v=e*3):a==="RGB"?(b=0,_=e,w=e*2):a==="RBG"&&(b=0,w=e,_=e*2),p=l.createImageData(s,h);for(let S=0;S"u")throw new Error(`input '${a}' is missing in 'feeds'.`);if(s)for(const a of this.outputNames)l[a]=null;const h=await this.handler.run(n,l,p),f={};for(const a in h)Object.hasOwnProperty.call(h,a)&&(f[a]=new 
Tensor$1(h[a].type,h[a].data,h[a].dims));return f}static async create(n,u,d,l){let p,s={};if(typeof n=="string"){if(p=n,typeof u=="object"&&u!==null)s=u;else if(typeof u<"u")throw new TypeError("'options' must be an object.")}else if(n instanceof Uint8Array){if(p=n,typeof u=="object"&&u!==null)s=u;else if(typeof u<"u")throw new TypeError("'options' must be an object.")}else if(n instanceof ArrayBuffer||typeof SharedArrayBuffer<"u"&&n instanceof SharedArrayBuffer){const t=n;let e=0,r=n.byteLength;if(typeof u=="object"&&u!==null)s=u;else if(typeof u=="number"){if(e=u,!Number.isSafeInteger(e))throw new RangeError("'byteOffset' must be an integer.");if(e<0||e>=t.byteLength)throw new RangeError(`'byteOffset' is out of range [0, ${t.byteLength}).`);if(r=n.byteLength-e,typeof d=="number"){if(r=d,!Number.isSafeInteger(r))throw new RangeError("'byteLength' must be an integer.");if(r<=0||e+r>t.byteLength)throw new RangeError(`'byteLength' is out of range (0, ${t.byteLength-e}].`);if(typeof l=="object"&&l!==null)s=l;else if(typeof l<"u")throw new TypeError("'options' must be an object.")}else if(typeof d<"u")throw new TypeError("'byteLength' must be a number.")}else if(typeof u<"u")throw new TypeError("'options' must be an object.");p=new Uint8Array(t,e,r)}else throw new TypeError("Unexpected argument[0]: must be 'path' or 'buffer'.");const f=(s.executionProviders||[]).map(t=>typeof t=="string"?t:t.name),o=await(await resolveBackend(f)).createSessionHandler(p,s);return new dn(o)}startProfiling(){this.handler.startProfiling()}endProfiling(){this.handler.endProfiling()}get inputNames(){return this.handler.inputNames}get outputNames(){return this.handler.outputNames}};const InferenceSession$1=InferenceSession$2;var lib=Object.freeze({__proto__:null,InferenceSession:InferenceSession$1,Tensor:Tensor$1,env:env$1,registerBackend}),require$$0=getAugmentedNamespace(lib);/*! -* ONNX Runtime Web v1.14.0 -* Copyright (c) Microsoft Corporation. All rights reserved. -* Licensed under the MIT License. 
-*/(function(module,exports){(function(y,n){module.exports=n(require$$0)})(self,__WEBPACK_EXTERNAL_MODULE__1670__=>(()=>{var __webpack_modules__={3474:(y,n,u)=>{var d,l=(d=(d=typeof document<"u"&&document.currentScript?document.currentScript.src:void 0)||"/index.js",function(p){function s(){return X.buffer!=ne&&Ee(X.buffer),me}function h(){return X.buffer!=ne&&Ee(X.buffer),Ie}function f(){return X.buffer!=ne&&Ee(X.buffer),Oe}function a(){return X.buffer!=ne&&Ee(X.buffer),ce}function o(){return X.buffer!=ne&&Ee(X.buffer),Te}var t,e,r;p=p||{},t||(t=p!==void 0?p:{}),t.ready=new Promise(function(x,A){e=x,r=A});var i,c,g,m,b,_,w=Object.assign({},t),v="./this.program",S=(x,A)=>{throw A},O=typeof window=="object",E=typeof importScripts=="function",T=typeof process=="object"&&typeof process.versions=="object"&&typeof process.versions.node=="string",I=t.ENVIRONMENT_IS_PTHREAD||!1,C="";function B(x){return t.locateFile?t.locateFile(x,C):C+x}if(T){let x;C=E?u(908).dirname(C)+"/":"//",_=()=>{b||(m=u(1384),b=u(908))},i=function(A,k){return _(),A=b.normalize(A),m.readFileSync(A,k?void 0:"utf8")},g=A=>((A=i(A,!0)).buffer||(A=new Uint8Array(A)),A),c=(A,k,M)=>{_(),A=b.normalize(A),m.readFile(A,function(j,V){j?M(j):k(V.buffer)})},1{if(Ve())throw process.exitCode=A,k;k instanceof Ze||z("exiting due to exception: "+k),process.exit(A)},t.inspect=function(){return"[Emscripten Module object]"};try{x=u(9925)}catch(A){throw console.error('The "worker_threads" module is not supported in this node.js build - perhaps a newer version is needed?'),A}u.g.Worker=x.Worker}else(O||E)&&(E?C=self.location.href:typeof document<"u"&&document.currentScript&&(C=document.currentScript.src),d&&(C=d),C=C.indexOf("blob:")!==0?C.substr(0,C.replace(/[?#].*/,"").lastIndexOf("/")+1):"",T||(i=x=>{var A=new XMLHttpRequest;return A.open("GET",x,!1),A.send(null),A.responseText},E&&(g=x=>{var A=new XMLHttpRequest;return A.open("GET",x,!1),A.responseType="arraybuffer",A.send(null),new Uint8Array(A.response)}),c=(x,A,k)=>{var M=new XMLHttpRequest;M.open("GET",x,!0),M.responseType="arraybuffer",M.onload=()=>{M.status==200||M.status==0&&M.response?A(M.response):k()},M.onerror=k,M.send(null)}));T&&typeof performance>"u"&&(u.g.performance=u(6953).performance);var F=console.log.bind(console),N=console.warn.bind(console);T&&(_(),F=x=>m.writeSync(1,x+` -`),N=x=>m.writeSync(2,x+` -`));var H,$=t.print||F,z=t.printErr||N;Object.assign(t,w),w=null,t.thisProgram&&(v=t.thisProgram),t.quit&&(S=t.quit),t.wasmBinary&&(H=t.wasmBinary);var J=t.noExitRuntime||!1;typeof WebAssembly!="object"&&pe("no native wasm support detected");var X,te,ne,me,Ie,Oe,ce,Te,_e=!1,Le=typeof TextDecoder<"u"?new TextDecoder("utf8"):void 0;function We(x,A,k){var M=(A>>>=0)+k;for(k=A;x[k]&&!(k>=M);)++k;if(16(j=(240&j)==224?(15&j)<<12|V<<6|K:(7&j)<<18|V<<12|K<<6|63&x[A++])?M+=String.fromCharCode(j):(j-=65536,M+=String.fromCharCode(55296|j>>10,56320|1023&j))}}else M+=String.fromCharCode(j)}return M}function Ae(x,A){return(x>>>=0)?We(h(),x,A):""}function Ce(x,A,k,M){if(!(0>>=0;M=k+M-1;for(var V=0;V=K&&(K=65536+((1023&K)<<10)|1023&x.charCodeAt(++V)),127>=K){if(k>=M)break;A[k++>>>0]=K}else{if(2047>=K){if(k+1>=M)break;A[k++>>>0]=192|K>>6}else{if(65535>=K){if(k+2>=M)break;A[k++>>>0]=224|K>>12}else{if(k+3>=M)break;A[k++>>>0]=240|K>>18,A[k++>>>0]=128|K>>12&63}A[k++>>>0]=128|K>>6&63}A[k++>>>0]=128|63&K}}return A[k>>>0]=0,k-j}function Me(x){for(var A=0,k=0;k=M?A++:2047>=M?A+=2:55296<=M&&57343>=M?(A+=4,++k):A+=3}return A}function Ee(x){ne=x,t.HEAP8=me=new Int8Array(x),t.HEAP16=new 
Int16Array(x),t.HEAP32=Oe=new Int32Array(x),t.HEAPU8=Ie=new Uint8Array(x),t.HEAPU16=new Uint16Array(x),t.HEAPU32=ce=new Uint32Array(x),t.HEAPF32=new Float32Array(x),t.HEAPF64=Te=new Float64Array(x)}I&&(ne=t.buffer);var ve=t.INITIAL_MEMORY||16777216;if(I)X=t.wasmMemory,ne=t.buffer;else if(t.wasmMemory)X=t.wasmMemory;else if(!((X=new WebAssembly.Memory({initial:ve/65536,maximum:65536,shared:!0})).buffer instanceof SharedArrayBuffer))throw z("requested a shared WebAssembly.Memory but the returned buffer is not a SharedArrayBuffer, indicating that while the browser has SharedArrayBuffer it does not have WebAssembly threads support - you may need to set a flag"),T&&console.log("(on node you may need: --experimental-wasm-threads --experimental-wasm-bulk-memory and also use a recent version)"),Error("bad memory");X&&(ne=X.buffer),ve=ne.byteLength,Ee(ne);var je,ze=[],Ue=[],He=[],Ke=[];function Ve(){return J||!1}function Be(){var x=t.preRun.shift();ze.unshift(x)}var Se,Fe=0,Xe=null;function pe(x){throw I?postMessage({cmd:"onAbort",arg:x}):t.onAbort&&t.onAbort(x),z(x="Aborted("+x+")"),_e=!0,x=new WebAssembly.RuntimeError(x+". Build with -sASSERTIONS for more info."),r(x),x}function ht(){return Se.startsWith("data:application/octet-stream;base64,")}function ut(){var x=Se;try{if(x==Se&&H)return new Uint8Array(H);if(g)return g(x);throw"both async and sync fetching of the wasm failed"}catch(A){pe(A)}}Se="ort-wasm-threaded.wasm",ht()||(Se=B(Se));var Et={};function Ze(x){this.name="ExitStatus",this.message="Program terminated with exit("+x+")",this.status=x}function lt(x){(x=re.Vb[x])||pe(),re.mc(x)}function ct(x){var A=re.Cc();if(!A)return 6;re.ac.push(A),re.Vb[x.Ub]=A,A.Ub=x.Ub;var k={cmd:"run",start_routine:x.Ic,arg:x.zc,pthread_ptr:x.Ub};return A.$b=()=>{k.time=performance.now(),A.postMessage(k,x.Nc)},A.loaded&&(A.$b(),delete A.$b),0}function Ne(x){if(I)return Z(1,1,x);Ve()||(re.oc(),t.onExit&&t.onExit(x),_e=!0),S(x,new Ze(x))}function rt(x,A){if(!A&&I)throw It(x),"unwind";Ve()||I||(Wt(),nt(He),qt(0),Ct[1].length&&Nt(1,10),Ct[2].length&&Nt(2,10),re.oc()),Ne(x)}var re={Yb:[],ac:[],qc:[],Vb:{},fc:function(){I&&re.Ec()},Pc:function(){},Ec:function(){re.receiveObjectTransfer=re.Gc,re.threadInitTLS=re.pc,re.setExitStatus=re.nc,J=!1},nc:function(){},oc:function(){for(var x of Object.values(re.Vb))re.mc(x);for(x of re.Yb)x.terminate();re.Yb=[]},mc:function(x){var A=x.Ub;delete re.Vb[A],re.Yb.push(x),re.ac.splice(re.ac.indexOf(x),1),x.Ub=0,Ft(A)},Gc:function(){},pc:function(){re.qc.forEach(x=>x())},Fc:function(x,A){x.onmessage=k=>{var M=(k=k.data).cmd;if(x.Ub&&(re.Bc=x.Ub),k.targetThread&&k.targetThread!=Dt()){var j=re.Vb[k.Qc];j?j.postMessage(k,k.transferList):z('Internal error! Worker sent a message "'+M+'" to target pthread '+k.targetThread+", but that thread no longer exists!")}else M==="processProxyingQueue"?L(k.queue):M==="spawnThread"?ct(k):M==="cleanupThread"?lt(k.thread):M==="killThread"?(k=k.thread,M=re.Vb[k],delete re.Vb[k],M.terminate(),Ft(k),re.ac.splice(re.ac.indexOf(M),1),M.Ub=0):M==="cancelThread"?re.Vb[k.thread].postMessage({cmd:"cancel"}):M==="loaded"?(x.loaded=!0,A&&A(x),x.$b&&(x.$b(),delete x.$b)):M==="print"?$("Thread "+k.threadId+": "+k.text):M==="printErr"?z("Thread "+k.threadId+": "+k.text):M==="alert"?alert("Thread "+k.threadId+": "+k.text):k.target==="setimmediate"?x.postMessage(k):M==="onAbort"?t.onAbort&&t.onAbort(k.arg):M&&z("worker sent an unknown command "+M);re.Bc=void 0},x.onerror=k=>{throw z("worker sent an error! 
"+k.filename+":"+k.lineno+": "+k.message),k},T&&(x.on("message",function(k){x.onmessage({data:k})}),x.on("error",function(k){x.onerror(k)}),x.on("detachedExit",function(){})),x.postMessage({cmd:"load",urlOrBlob:t.mainScriptUrlOrBlob||d,wasmMemory:X,wasmModule:te})},yc:function(){var x=B("ort-wasm-threaded.worker.js");re.Yb.push(new Worker(x))},Cc:function(){return re.Yb.length==0&&(re.yc(),re.Fc(re.Yb[0])),re.Yb.pop()}};function nt(x){for(;0>2>>>0];x=f()[x+48>>2>>>0],Zt(A,A-x),ue(A)};var Je=[];function ye(x){var A=Je[x];return A||(x>=Je.length&&(Je.length=x+1),Je[x]=A=je.get(x)),A}t.invokeEntryPoint=function(x,A){x=ye(x)(A),Ve()?re.nc(x):Kt(x)};var it,pt,ot=[],se=0,ie=0;function oe(x){this.Zb=x,this.Sb=x-24,this.xc=function(A){a()[this.Sb+4>>2>>>0]=A},this.bc=function(){return a()[this.Sb+4>>2>>>0]},this.wc=function(A){a()[this.Sb+8>>2>>>0]=A},this.Dc=function(){return a()[this.Sb+8>>2>>>0]},this.rc=function(){f()[this.Sb>>2>>>0]=0},this.hc=function(A){A=A?1:0,s()[this.Sb+12>>0>>>0]=A},this.uc=function(){return s()[this.Sb+12>>0>>>0]!=0},this.ic=function(A){A=A?1:0,s()[this.Sb+13>>0>>>0]=A},this.kc=function(){return s()[this.Sb+13>>0>>>0]!=0},this.fc=function(A,k){this.cc(0),this.xc(A),this.wc(k),this.rc(),this.hc(!1),this.ic(!1)},this.sc=function(){Atomics.add(f(),this.Sb>>2,1)},this.Hc=function(){return Atomics.sub(f(),this.Sb>>2,1)===1},this.cc=function(A){a()[this.Sb+16>>2>>>0]=A},this.tc=function(){return a()[this.Sb+16>>2>>>0]},this.vc=function(){if(Jt(this.bc()))return a()[this.Zb>>2>>>0];var A=this.tc();return A!==0?A:this.Zb}}function ft(x){return Gt(new oe(x).Sb)}function st(x,A,k,M){return I?Z(3,1,x,A,k,M):gt(x,A,k,M)}function gt(x,A,k,M){if(typeof SharedArrayBuffer>"u")return z("Current environment does not support SharedArrayBuffer, pthreads are not available!"),6;var j=[];return I&&j.length===0?st(x,A,k,M):(x={Ic:k,Ub:x,zc:M,Nc:j},I?(x.Oc="spawnThread",postMessage(x,j),0):ct(x))}function mt(x,A,k){return I?Z(4,1,x,A,k):0}function bt(x,A){if(I)return Z(5,1,x,A)}function yt(x,A){if(I)return Z(6,1,x,A)}function _t(x,A,k){if(I)return Z(7,1,x,A,k)}function wt(x,A,k){return I?Z(8,1,x,A,k):0}function vt(x,A){if(I)return Z(9,1,x,A)}function Tt(x,A,k){if(I)return Z(10,1,x,A,k)}function xt(x,A,k,M){if(I)return Z(11,1,x,A,k,M)}function St(x,A,k,M){if(I)return Z(12,1,x,A,k,M)}function Ot(x,A,k,M){if(I)return Z(13,1,x,A,k,M)}function At(x){if(I)return Z(14,1,x)}function P(x,A){if(I)return Z(15,1,x,A)}function D(x,A,k){if(I)return Z(16,1,x,A,k)}function L(x){Atomics.store(f(),x>>2,1),Dt()&&Yt(x),Atomics.compareExchange(f(),x>>2,1,0)}function R(x){return a()[x>>>2]+4294967296*f()[x+4>>>2]}function U(x,A,k,M,j,V){return I?Z(17,1,x,A,k,M,j,V):-52}function W(x,A,k,M,j,V){if(I)return Z(18,1,x,A,k,M,j,V)}function Y(x){var A=Me(x)+1,k=Lt(A);return k&&Ce(x,s(),k,A),k}function Q(x,A,k){function M(fe){return(fe=fe.toTimeString().match(/\(([A-Za-z ]+)\)$/))?fe[1]:"GMT"}if(I)return Z(19,1,x,A,k);var j=new Date().getFullYear(),V=new Date(j,0,1),K=new Date(j,6,1);j=V.getTimezoneOffset();var ee=K.getTimezoneOffset(),he=Math.max(j,ee);f()[x>>2>>>0]=60*he,f()[A>>2>>>0]=+(j!=ee),x=M(V),A=M(K),x=Y(x),A=Y(A),ee>2>>>0]=x,a()[k+4>>2>>>0]=A):(a()[k>>2>>>0]=A,a()[k+4>>2>>>0]=x)}function Z(x,A){var k=arguments.length-2,M=arguments;return Pt(()=>{for(var j=Rt(8*k),V=j>>3,K=0;K>>0]=ee}return Xt(x,k,j,A)})}t.executeNotifiedProxyingQueue=L,pt=T?()=>{var x=process.hrtime();return 1e3*x[0]+x[1]/1e6}:I?()=>performance.now()-t.__performance_now_clock_drift:()=>performance.now();var ae,we=[],$e={};function De(){if(!ae){var 
x,A={USER:"web_user",LOGNAME:"web_user",PATH:"/",PWD:"/",HOME:"/home/web_user",LANG:(typeof navigator=="object"&&navigator.languages&&navigator.languages[0]||"C").replace("-","_")+".UTF-8",_:v||"./this.program"};for(x in $e)$e[x]===void 0?delete A[x]:A[x]=$e[x];var k=[];for(x in A)k.push(x+"="+A[x]);ae=k}return ae}function G(x,A){if(I)return Z(20,1,x,A);var k=0;return De().forEach(function(M,j){var V=A+k;for(j=a()[x+4*j>>2>>>0]=V,V=0;V>0>>>0]=M.charCodeAt(V);s()[j>>0>>>0]=0,k+=M.length+1}),0}function ge(x,A){if(I)return Z(21,1,x,A);var k=De();a()[x>>2>>>0]=k.length;var M=0;return k.forEach(function(j){M+=j.length+1}),a()[A>>2>>>0]=M,0}function xe(x){return I?Z(22,1,x):52}function Ge(x,A,k,M){return I?Z(23,1,x,A,k,M):52}function Qe(x,A,k,M,j){return I?Z(24,1,x,A,k,M,j):70}var Ct=[null,[],[]];function Nt(x,A){var k=Ct[x];A===0||A===10?((x===1?$:z)(We(k,0)),k.length=0):k.push(A)}function Bt(x,A,k,M){if(I)return Z(25,1,x,A,k,M);for(var j=0,V=0;V>2>>>0],ee=a()[A+4>>2>>>0];A+=8;for(var he=0;he>>0]);j+=ee}return a()[M>>2>>>0]=j,0}var Re=0;function kt(x){return x%4==0&&(x%100!=0||x%400==0)}var zt=[31,29,31,30,31,30,31,31,30,31,30,31],Ut=[31,28,31,30,31,30,31,31,30,31,30,31];function Vt(x,A,k,M){function j(q,be,Pe){for(q=typeof q=="number"?q.toString():q||"";q.lengthdt?-1:0et-q.getDate())){q.setDate(q.getDate()+be);break}be-=et-q.getDate()+1,q.setDate(1),11>Pe?q.setMonth(Pe+1):(q.setMonth(0),q.setFullYear(q.getFullYear()+1))}return Pe=new Date(q.getFullYear()+1,0,4),be=ee(new Date(q.getFullYear(),0,4)),Pe=ee(Pe),0>=K(be,q)?0>=K(Pe,q)?q.getFullYear()+1:q.getFullYear():q.getFullYear()-1}var fe=f()[M+40>>2>>>0];for(var ke in M={Lc:f()[M>>2>>>0],Kc:f()[M+4>>2>>>0],dc:f()[M+8>>2>>>0],jc:f()[M+12>>2>>>0],ec:f()[M+16>>2>>>0],Xb:f()[M+20>>2>>>0],Tb:f()[M+24>>2>>>0],Wb:f()[M+28>>2>>>0],Rc:f()[M+32>>2>>>0],Jc:f()[M+36>>2>>>0],Mc:fe?Ae(fe):""},k=Ae(k),fe={"%c":"%a %b %d %H:%M:%S %Y","%D":"%m/%d/%y","%F":"%Y-%m-%d","%h":"%b","%r":"%I:%M:%S %p","%R":"%H:%M","%T":"%H:%M:%S","%x":"%m/%d/%y","%X":"%H:%M:%S","%Ec":"%c","%EC":"%C","%Ex":"%m/%d/%y","%EX":"%H:%M:%S","%Ey":"%y","%EY":"%Y","%Od":"%d","%Oe":"%e","%OH":"%H","%OI":"%I","%Om":"%m","%OM":"%M","%OS":"%S","%Ou":"%u","%OU":"%U","%OV":"%V","%Ow":"%w","%OW":"%W","%Oy":"%y"})k=k.replace(new RegExp(ke,"g"),fe[ke]);var Ye="Sunday Monday Tuesday Wednesday Thursday Friday Saturday".split(" "),qe="January February March April May June July August September October November December".split(" ");for(ke in fe={"%a":function(q){return Ye[q.Tb].substring(0,3)},"%A":function(q){return Ye[q.Tb]},"%b":function(q){return qe[q.ec].substring(0,3)},"%B":function(q){return qe[q.ec]},"%C":function(q){return V((q.Xb+1900)/100|0,2)},"%d":function(q){return V(q.jc,2)},"%e":function(q){return j(q.jc,2," ")},"%g":function(q){return he(q).toString().substring(2)},"%G":function(q){return he(q)},"%H":function(q){return V(q.dc,2)},"%I":function(q){return(q=q.dc)==0?q=12:12q.dc?"AM":"PM"},"%S":function(q){return V(q.Lc,2)},"%t":function(){return" "},"%u":function(q){return q.Tb||7},"%U":function(q){return V(Math.floor((q.Wb+7-q.Tb)/7),2)},"%V":function(q){var be=Math.floor((q.Wb+7-(q.Tb+6)%7)/7);if(2>=(q.Tb+371-q.Wb-2)%7&&be++,be)be==53&&((Pe=(q.Tb+371-q.Wb)%7)==4||Pe==3&&kt(q.Xb)||(be=1));else{be=52;var Pe=(q.Tb+7-q.Wb-1)%7;(Pe==4||Pe==5&&kt(q.Xb%400-1))&&be++}return V(be,2)},"%w":function(q){return q.Tb},"%W":function(q){return V(Math.floor((q.Wb+7-(q.Tb+6)%7)/7),2)},"%y":function(q){return(q.Xb+1900).toString().substring(2)},"%Y":function(q){return q.Xb+1900},"%z":function(q){var 
be=0<=(q=q.Jc);return q=Math.abs(q)/60,(be?"+":"-")+("0000"+(q/60*100+q%60)).slice(-4)},"%Z":function(q){return q.Mc},"%%":function(){return"%"}},k=k.replace(/%%/g,"\0\0"),fe)k.includes(ke)&&(k=k.replace(new RegExp(ke,"g"),fe[ke](M)));return ke=function(q){var be=Array(Me(q)+1);return Ce(q,be,0,be.length),be}(k=k.replace(/\0\0/g,"%")),ke.length>A?0:(function(q,be){s().set(q,be>>>0)}(ke,x),ke.length-1)}re.fc();var hn=[null,Ne,It,st,mt,bt,yt,_t,wt,vt,Tt,xt,St,Ot,At,P,D,U,W,Q,G,ge,xe,Ge,Qe,Bt],pn={b:function(x){return Lt(x+24)+24},n:function(x){return(x=new oe(x)).uc()||(x.hc(!0),se--),x.ic(!1),ot.push(x),x.sc(),x.vc()},ma:function(x){throw z("Unexpected exception thrown, this is not properly supported - aborting"),_e=!0,x},x:function(){de(0);var x=ot.pop();if(x.Hc()&&!x.kc()){var A=x.Dc();A&&ye(A)(x.Zb),ft(x.Zb)}ie=0},e:function(){var x=ie;if(!x)return Re=0;var A=new oe(x);A.cc(x);var k=A.bc();if(!k)return Re=0,x;for(var M=Array.prototype.slice.call(arguments),j=0;jL(M));else if(I)postMessage({targetThread:x,cmd:"processProxyingQueue",queue:M});else{if(!(x=re.Vb[x]))return;x.postMessage({cmd:"processProxyingQueue",queue:M})}return 1},Ea:function(){return-1},Pa:function(x,A){x=new Date(1e3*R(x)),f()[A>>2>>>0]=x.getUTCSeconds(),f()[A+4>>2>>>0]=x.getUTCMinutes(),f()[A+8>>2>>>0]=x.getUTCHours(),f()[A+12>>2>>>0]=x.getUTCDate(),f()[A+16>>2>>>0]=x.getUTCMonth(),f()[A+20>>2>>>0]=x.getUTCFullYear()-1900,f()[A+24>>2>>>0]=x.getUTCDay(),x=(x.getTime()-Date.UTC(x.getUTCFullYear(),0,1,0,0,0,0))/864e5|0,f()[A+28>>2>>>0]=x},Qa:function(x,A){x=new Date(1e3*R(x)),f()[A>>2>>>0]=x.getSeconds(),f()[A+4>>2>>>0]=x.getMinutes(),f()[A+8>>2>>>0]=x.getHours(),f()[A+12>>2>>>0]=x.getDate(),f()[A+16>>2>>>0]=x.getMonth(),f()[A+20>>2>>>0]=x.getFullYear()-1900,f()[A+24>>2>>>0]=x.getDay();var k=new Date(x.getFullYear(),0,1),M=(x.getTime()-k.getTime())/864e5|0;f()[A+28>>2>>>0]=M,f()[A+36>>2>>>0]=-60*x.getTimezoneOffset(),M=new Date(x.getFullYear(),6,1).getTimezoneOffset(),x=0|(M!=(k=k.getTimezoneOffset())&&x.getTimezoneOffset()==Math.min(k,M)),f()[A+32>>2>>>0]=x},Ra:function(x){var A=new Date(f()[x+20>>2>>>0]+1900,f()[x+16>>2>>>0],f()[x+12>>2>>>0],f()[x+8>>2>>>0],f()[x+4>>2>>>0],f()[x>>2>>>0],0),k=f()[x+32>>2>>>0],M=A.getTimezoneOffset(),j=new Date(A.getFullYear(),0,1),V=new Date(A.getFullYear(),6,1).getTimezoneOffset(),K=j.getTimezoneOffset(),ee=Math.min(K,V);return 0>k?f()[x+32>>2>>>0]=+(V!=K&&ee==M):0>2>>>0]=A.getDay(),k=(A.getTime()-j.getTime())/864e5|0,f()[x+28>>2>>>0]=k,f()[x>>2>>>0]=A.getSeconds(),f()[x+4>>2>>>0]=A.getMinutes(),f()[x+8>>2>>>0]=A.getHours(),f()[x+12>>2>>>0]=A.getDate(),f()[x+16>>2>>>0]=A.getMonth(),A.getTime()/1e3|0},Aa:U,Ba:W,Sa:function x(A,k,M){x.Ac||(x.Ac=!0,Q(A,k,M))},y:function(){pe("")},U:function(){if(!T&&!E){var x="Blocking on the main thread is very dangerous, see https://emscripten.org/docs/porting/pthreads.html#blocking-on-the-main-browser-thread";it||(it={}),it[x]||(it[x]=1,T&&(x="warning: "+x),z(x))}},ra:function(){return 4294901760},B:pt,Ia:function(x,A,k){h().copyWithin(x>>>0,A>>>0,A+k>>>0)},F:function(){return T?u(3993).cpus().length:navigator.hardwareConcurrency},Da:function(x,A,k){we.length=A,k>>=3;for(var M=0;M>>0];return(0>x?Et[-x-1]:hn[x]).apply(null,we)},qa:function(x){var A=h().length;if((x>>>=0)<=A||4294901760=k;k*=2){var M=A*(1+.2/k);M=Math.min(M,x+100663296);var j=Math;M=Math.max(x,M),j=j.min.call(j,4294901760,M+(65536-M%65536)%65536);e:{try{X.grow(j-ne.byteLength+65535>>>16),Ee(X.buffer);var V=1;break e}catch{}V=void 
0}if(V)return!0}return!1},Na:function(){throw"unwind"},Ga:G,Ha:ge,J:rt,I:xe,S:Ge,ga:Qe,R:Bt,d:function(){return Re},na:function x(A,k){x.lc||(x.lc=function(){if(typeof crypto=="object"&&typeof crypto.getRandomValues=="function"){var j=new Uint8Array(1);return()=>(crypto.getRandomValues(j),j[0])}if(T)try{var V=u(Object(function(){var K=new Error("Cannot find module 'crypto'");throw K.code="MODULE_NOT_FOUND",K}()));return()=>V.randomBytes(1)[0]}catch{}return()=>pe("randomDevice")}());for(var M=0;M>0>>>0]=x.lc();return 0},ia:function(x,A,k){var M=le();try{return ye(x)(A,k)}catch(j){if(ue(M),j!==j+0)throw j;de(1,0)}},ja:function(x,A,k){var M=le();try{return ye(x)(A,k)}catch(j){if(ue(M),j!==j+0)throw j;de(1,0)}},K:function(x){var A=le();try{return ye(x)()}catch(k){if(ue(A),k!==k+0)throw k;de(1,0)}},f:function(x,A){var k=le();try{return ye(x)(A)}catch(M){if(ue(k),M!==M+0)throw M;de(1,0)}},P:function(x,A,k){var M=le();try{return ye(x)(A,k)}catch(j){if(ue(M),j!==j+0)throw j;de(1,0)}},Q:function(x,A,k){var M=le();try{return ye(x)(A,k)}catch(j){if(ue(M),j!==j+0)throw j;de(1,0)}},k:function(x,A,k){var M=le();try{return ye(x)(A,k)}catch(j){if(ue(M),j!==j+0)throw j;de(1,0)}},p:function(x,A,k,M){var j=le();try{return ye(x)(A,k,M)}catch(V){if(ue(j),V!==V+0)throw V;de(1,0)}},q:function(x,A,k,M,j){var V=le();try{return ye(x)(A,k,M,j)}catch(K){if(ue(V),K!==K+0)throw K;de(1,0)}},N:function(x,A,k,M,j,V){var K=le();try{return ye(x)(A,k,M,j,V)}catch(ee){if(ue(K),ee!==ee+0)throw ee;de(1,0)}},s:function(x,A,k,M,j,V){var K=le();try{return ye(x)(A,k,M,j,V)}catch(ee){if(ue(K),ee!==ee+0)throw ee;de(1,0)}},w:function(x,A,k,M,j,V,K){var ee=le();try{return ye(x)(A,k,M,j,V,K)}catch(he){if(ue(ee),he!==he+0)throw he;de(1,0)}},L:function(x,A,k,M,j,V,K,ee){var he=le();try{return ye(x)(A,k,M,j,V,K,ee)}catch(fe){if(ue(he),fe!==fe+0)throw fe;de(1,0)}},E:function(x,A,k,M,j,V,K,ee,he,fe,ke,Ye){var qe=le();try{return ye(x)(A,k,M,j,V,K,ee,he,fe,ke,Ye)}catch(q){if(ue(qe),q!==q+0)throw q;de(1,0)}},aa:function(x,A,k,M,j,V,K,ee){var he=le();try{return un(x,A,k,M,j,V,K,ee)}catch(fe){if(ue(he),fe!==fe+0)throw fe;de(1,0)}},_:function(x,A,k,M,j,V,K){var ee=le();try{return en(x,A,k,M,j,V,K)}catch(he){if(ue(ee),he!==he+0)throw he;de(1,0)}},Z:function(x,A,k,M,j){var V=le();try{return ln(x,A,k,M,j)}catch(K){if(ue(V),K!==K+0)throw K;de(1,0)}},ca:function(x,A,k,M){var j=le();try{return sn(x,A,k,M)}catch(V){if(ue(j),V!==V+0)throw V;de(1,0)}},$:function(x){var A=le();try{return Qt(x)}catch(k){if(ue(A),k!==k+0)throw k;de(1,0)}},ba:function(x,A){var k=le();try{return an(x,A)}catch(M){if(ue(k),M!==M+0)throw M;de(1,0)}},Y:function(x,A,k){var M=le();try{return tn(x,A,k)}catch(j){if(ue(M),j!==j+0)throw j;de(1,0)}},g:function(x){var A=le();try{ye(x)()}catch(k){if(ue(A),k!==k+0)throw k;de(1,0)}},r:function(x,A){var k=le();try{ye(x)(A)}catch(M){if(ue(k),M!==M+0)throw M;de(1,0)}},i:function(x,A,k){var M=le();try{ye(x)(A,k)}catch(j){if(ue(M),j!==j+0)throw j;de(1,0)}},ha:function(x,A,k,M){var j=le();try{ye(x)(A,k,M)}catch(V){if(ue(j),V!==V+0)throw V;de(1,0)}},m:function(x,A,k,M){var j=le();try{ye(x)(A,k,M)}catch(V){if(ue(j),V!==V+0)throw V;de(1,0)}},v:function(x,A,k,M,j){var V=le();try{ye(x)(A,k,M,j)}catch(K){if(ue(V),K!==K+0)throw K;de(1,0)}},u:function(x,A,k,M,j,V){var K=le();try{ye(x)(A,k,M,j,V)}catch(ee){if(ue(K),ee!==ee+0)throw ee;de(1,0)}},O:function(x,A,k,M,j,V,K){var ee=le();try{ye(x)(A,k,M,j,V,K)}catch(he){if(ue(ee),he!==he+0)throw he;de(1,0)}},A:function(x,A,k,M,j,V,K,ee){var he=le();try{ye(x)(A,k,M,j,V,K,ee)}catch(fe){if(ue(he),fe!==fe+0)throw 
fe;de(1,0)}},ka:function(x,A,k,M,j,V,K,ee,he){var fe=le();try{ye(x)(A,k,M,j,V,K,ee,he)}catch(ke){if(ue(fe),ke!==ke+0)throw ke;de(1,0)}},C:function(x,A,k,M,j,V,K,ee,he,fe,ke){var Ye=le();try{ye(x)(A,k,M,j,V,K,ee,he,fe,ke)}catch(qe){if(ue(Ye),qe!==qe+0)throw qe;de(1,0)}},D:function(x,A,k,M,j,V,K,ee,he,fe,ke,Ye,qe,q,be,Pe){var et=le();try{ye(x)(A,k,M,j,V,K,ee,he,fe,ke,Ye,qe,q,be,Pe)}catch(dt){if(ue(et),dt!==dt+0)throw dt;de(1,0)}},fa:function(x,A,k,M,j,V,K,ee){var he=le();try{nn(x,A,k,M,j,V,K,ee)}catch(fe){if(ue(he),fe!==fe+0)throw fe;de(1,0)}},da:function(x,A,k,M,j,V,K,ee,he,fe,ke,Ye){var qe=le();try{on(x,A,k,M,j,V,K,ee,he,fe,ke,Ye)}catch(q){if(ue(qe),q!==q+0)throw q;de(1,0)}},ea:function(x,A,k,M,j,V){var K=le();try{rn(x,A,k,M,j,V)}catch(ee){if(ue(K),ee!==ee+0)throw ee;de(1,0)}},o:function(x){return x},a:X||t.wasmMemory,G:function(x){Re=x},la:Vt,z:function(x,A,k,M){return Vt(x,A,k,M)}};(function(){function x(j,V){t.asm=j.exports,re.qc.push(t.asm.sb),je=t.asm.ub,Ue.unshift(t.asm.Va),te=V,I||(Fe--,t.monitorRunDependencies&&t.monitorRunDependencies(Fe),Fe==0&&Xe&&(j=Xe,Xe=null,j()))}function A(j){x(j.instance,j.module)}function k(j){return function(){if(!H&&(O||E)){if(typeof fetch=="function"&&!Se.startsWith("file://"))return fetch(Se,{credentials:"same-origin"}).then(function(V){if(!V.ok)throw"failed to load wasm binary file at '"+Se+"'";return V.arrayBuffer()}).catch(function(){return ut()});if(c)return new Promise(function(V,K){c(Se,function(ee){V(new Uint8Array(ee))},K)})}return Promise.resolve().then(function(){return ut()})}().then(function(V){return WebAssembly.instantiate(V,M)}).then(function(V){return V}).then(j,function(V){z("failed to asynchronously prepare wasm: "+V),pe(V)})}var M={a:pn};if(I||(Fe++,t.monitorRunDependencies&&t.monitorRunDependencies(Fe)),t.instantiateWasm)try{return t.instantiateWasm(M,x)}catch(j){return z("Module.instantiateWasm callback failed with error: "+j),!1}(H||typeof WebAssembly.instantiateStreaming!="function"||ht()||Se.startsWith("file://")||T||typeof fetch!="function"?k(A):fetch(Se,{credentials:"same-origin"}).then(function(j){return WebAssembly.instantiateStreaming(j,M).then(A,function(V){return z("wasm streaming compile failed: "+V),z("falling back to ArrayBuffer 
instantiation"),k(A)})})).catch(r)})(),t.___wasm_call_ctors=function(){return(t.___wasm_call_ctors=t.asm.Va).apply(null,arguments)},t._OrtInit=function(){return(t._OrtInit=t.asm.Wa).apply(null,arguments)},t._OrtCreateSessionOptions=function(){return(t._OrtCreateSessionOptions=t.asm.Xa).apply(null,arguments)},t._OrtAppendExecutionProvider=function(){return(t._OrtAppendExecutionProvider=t.asm.Ya).apply(null,arguments)},t._OrtAddSessionConfigEntry=function(){return(t._OrtAddSessionConfigEntry=t.asm.Za).apply(null,arguments)},t._OrtReleaseSessionOptions=function(){return(t._OrtReleaseSessionOptions=t.asm._a).apply(null,arguments)},t._OrtCreateSession=function(){return(t._OrtCreateSession=t.asm.$a).apply(null,arguments)},t._OrtReleaseSession=function(){return(t._OrtReleaseSession=t.asm.ab).apply(null,arguments)},t._OrtGetInputCount=function(){return(t._OrtGetInputCount=t.asm.bb).apply(null,arguments)},t._OrtGetOutputCount=function(){return(t._OrtGetOutputCount=t.asm.cb).apply(null,arguments)},t._OrtGetInputName=function(){return(t._OrtGetInputName=t.asm.db).apply(null,arguments)},t._OrtGetOutputName=function(){return(t._OrtGetOutputName=t.asm.eb).apply(null,arguments)},t._OrtFree=function(){return(t._OrtFree=t.asm.fb).apply(null,arguments)},t._OrtCreateTensor=function(){return(t._OrtCreateTensor=t.asm.gb).apply(null,arguments)},t._OrtGetTensorData=function(){return(t._OrtGetTensorData=t.asm.hb).apply(null,arguments)},t._OrtReleaseTensor=function(){return(t._OrtReleaseTensor=t.asm.ib).apply(null,arguments)},t._OrtCreateRunOptions=function(){return(t._OrtCreateRunOptions=t.asm.jb).apply(null,arguments)},t._OrtAddRunConfigEntry=function(){return(t._OrtAddRunConfigEntry=t.asm.kb).apply(null,arguments)},t._OrtReleaseRunOptions=function(){return(t._OrtReleaseRunOptions=t.asm.lb).apply(null,arguments)},t._OrtRun=function(){return(t._OrtRun=t.asm.mb).apply(null,arguments)},t._OrtEndProfiling=function(){return(t._OrtEndProfiling=t.asm.nb).apply(null,arguments)};var Dt=t._pthread_self=function(){return(Dt=t._pthread_self=t.asm.ob).apply(null,arguments)},Lt=t._malloc=function(){return(Lt=t._malloc=t.asm.pb).apply(null,arguments)},Gt=t._free=function(){return(Gt=t._free=t.asm.qb).apply(null,arguments)},qt=t._fflush=function(){return(qt=t._fflush=t.asm.rb).apply(null,arguments)};t.__emscripten_tls_init=function(){return(t.__emscripten_tls_init=t.asm.sb).apply(null,arguments)};var Wt=t.___funcs_on_exit=function(){return(Wt=t.___funcs_on_exit=t.asm.tb).apply(null,arguments)},Ht=t.__emscripten_thread_init=function(){return(Ht=t.__emscripten_thread_init=t.asm.vb).apply(null,arguments)};t.__emscripten_thread_crashed=function(){return(t.__emscripten_thread_crashed=t.asm.wb).apply(null,arguments)};var 
Mt,Xt=t._emscripten_run_in_main_runtime_thread_js=function(){return(Xt=t._emscripten_run_in_main_runtime_thread_js=t.asm.xb).apply(null,arguments)},Yt=t.__emscripten_proxy_execute_task_queue=function(){return(Yt=t.__emscripten_proxy_execute_task_queue=t.asm.yb).apply(null,arguments)},Ft=t.__emscripten_thread_free_data=function(){return(Ft=t.__emscripten_thread_free_data=t.asm.zb).apply(null,arguments)},Kt=t.__emscripten_thread_exit=function(){return(Kt=t.__emscripten_thread_exit=t.asm.Ab).apply(null,arguments)},de=t._setThrew=function(){return(de=t._setThrew=t.asm.Bb).apply(null,arguments)},Zt=t._emscripten_stack_set_limits=function(){return(Zt=t._emscripten_stack_set_limits=t.asm.Cb).apply(null,arguments)},le=t.stackSave=function(){return(le=t.stackSave=t.asm.Db).apply(null,arguments)},ue=t.stackRestore=function(){return(ue=t.stackRestore=t.asm.Eb).apply(null,arguments)},Rt=t.stackAlloc=function(){return(Rt=t.stackAlloc=t.asm.Fb).apply(null,arguments)},$t=t.___cxa_can_catch=function(){return($t=t.___cxa_can_catch=t.asm.Gb).apply(null,arguments)},Jt=t.___cxa_is_pointer_type=function(){return(Jt=t.___cxa_is_pointer_type=t.asm.Hb).apply(null,arguments)},Qt=t.dynCall_j=function(){return(Qt=t.dynCall_j=t.asm.Ib).apply(null,arguments)},en=t.dynCall_iiiiij=function(){return(en=t.dynCall_iiiiij=t.asm.Jb).apply(null,arguments)},tn=t.dynCall_jii=function(){return(tn=t.dynCall_jii=t.asm.Kb).apply(null,arguments)},nn=t.dynCall_viiiiij=function(){return(nn=t.dynCall_viiiiij=t.asm.Lb).apply(null,arguments)},rn=t.dynCall_vjji=function(){return(rn=t.dynCall_vjji=t.asm.Mb).apply(null,arguments)},on=t.dynCall_viiijjjii=function(){return(on=t.dynCall_viiijjjii=t.asm.Nb).apply(null,arguments)},sn=t.dynCall_iij=function(){return(sn=t.dynCall_iij=t.asm.Ob).apply(null,arguments)},an=t.dynCall_ji=function(){return(an=t.dynCall_ji=t.asm.Pb).apply(null,arguments)},un=t.dynCall_iiiiiij=function(){return(un=t.dynCall_iiiiiij=t.asm.Qb).apply(null,arguments)},ln=t.dynCall_iiij=function(){return(ln=t.dynCall_iiij=t.asm.Rb).apply(null,arguments)};function cn(){function x(){if(!Mt&&(Mt=!0,t.calledRun=!0,!_e)&&(I||nt(Ue),e(t),t.onRuntimeInitialized&&t.onRuntimeInitialized(),!I)){if(t.postRun)for(typeof t.postRun=="function"&&(t.postRun=[t.postRun]);t.postRun.length;){var A=t.postRun.shift();Ke.unshift(A)}nt(Ke)}}if(!(0{var d,l=(d=(d=typeof document<"u"&&document.currentScript?document.currentScript.src:void 0)||"/index.js",function(p){var s,h,f;p=p||{},s||(s=p!==void 0?p:{}),s.ready=new Promise(function(P,D){h=P,f=D});var a,o,t,e,r,i,c=Object.assign({},s),g="./this.program",m=(P,D)=>{throw D},b=typeof window=="object",_=typeof importScripts=="function",w=typeof process=="object"&&typeof process.versions=="object"&&typeof process.versions.node=="string",v="";w?(v=_?u(908).dirname(v)+"/":"//",i=()=>{r||(e=u(1384),r=u(908))},a=function(P,D){return i(),P=r.normalize(P),e.readFileSync(P,D?void 0:"utf8")},t=P=>((P=a(P,!0)).buffer||(P=new Uint8Array(P)),P),o=(P,D,L)=>{i(),P=r.normalize(P),e.readFile(P,function(R,U){R?L(R):D(U.buffer)})},1{if(T||0{var D=new XMLHttpRequest;return D.open("GET",P,!1),D.send(null),D.responseText},_&&(t=P=>{var D=new XMLHttpRequest;return D.open("GET",P,!1),D.responseType="arraybuffer",D.send(null),new Uint8Array(D.response)}),o=(P,D,L)=>{var R=new XMLHttpRequest;R.open("GET",P,!0),R.responseType="arraybuffer",R.onload=()=>{R.status==200||R.status==0&&R.response?D(R.response):L()},R.onerror=L,R.send(null)});var 
S,O=s.print||console.log.bind(console),E=s.printErr||console.warn.bind(console);Object.assign(s,c),c=null,s.thisProgram&&(g=s.thisProgram),s.quit&&(m=s.quit),s.wasmBinary&&(S=s.wasmBinary);var T=s.noExitRuntime||!1;typeof WebAssembly!="object"&&Ee("no native wasm support detected");var I,C,B,F,N,H,$=!1,z=typeof TextDecoder<"u"?new TextDecoder("utf8"):void 0;function J(P,D,L){var R=(D>>>=0)+L;for(L=D;P[L]&&!(L>=R);)++L;if(16(U=(240&U)==224?(15&U)<<12|W<<6|Y:(7&U)<<18|W<<12|Y<<6|63&P[D++])?R+=String.fromCharCode(U):(U-=65536,R+=String.fromCharCode(55296|U>>10,56320|1023&U))}}else R+=String.fromCharCode(U)}return R}function X(P,D){return(P>>>=0)?J(F,P,D):""}function te(P,D,L,R){if(!(0>>=0;R=L+R-1;for(var W=0;W=Y&&(Y=65536+((1023&Y)<<10)|1023&P.charCodeAt(++W)),127>=Y){if(L>=R)break;D[L++>>>0]=Y}else{if(2047>=Y){if(L+1>=R)break;D[L++>>>0]=192|Y>>6}else{if(65535>=Y){if(L+2>=R)break;D[L++>>>0]=224|Y>>12}else{if(L+3>=R)break;D[L++>>>0]=240|Y>>18,D[L++>>>0]=128|Y>>12&63}D[L++>>>0]=128|Y>>6&63}D[L++>>>0]=128|63&Y}}return D[L>>>0]=0,L-U}function ne(P){for(var D=0,L=0;L=R?D++:2047>=R?D+=2:55296<=R&&57343>=R?(D+=4,++L):D+=3}return D}function me(){var P=I.buffer;C=P,s.HEAP8=B=new Int8Array(P),s.HEAP16=new Int16Array(P),s.HEAP32=N=new Int32Array(P),s.HEAPU8=F=new Uint8Array(P),s.HEAPU16=new Uint16Array(P),s.HEAPU32=H=new Uint32Array(P),s.HEAPF32=new Float32Array(P),s.HEAPF64=new Float64Array(P)}var Ie,Oe=[],ce=[],Te=[],_e=[],Le=0;function We(){var P=s.preRun.shift();Oe.unshift(P)}var Ae,Ce=0,Me=null;function Ee(P){throw s.onAbort&&s.onAbort(P),E(P="Aborted("+P+")"),$=!0,P=new WebAssembly.RuntimeError(P+". Build with -sASSERTIONS for more info."),f(P),P}function ve(){return Ae.startsWith("data:application/octet-stream;base64,")}if(Ae="ort-wasm.wasm",!ve()){var je=Ae;Ae=s.locateFile?s.locateFile(je,v):v+je}function ze(){var P=Ae;try{if(P==Ae&&S)return new Uint8Array(S);if(t)return t(P);throw"both async and sync fetching of the wasm failed"}catch(D){Ee(D)}}function Ue(P){this.name="ExitStatus",this.message="Program terminated with exit("+P+")",this.status=P}function He(P){for(;0>2>>>0]=D},this.Eb=function(){return H[this.zb+4>>2>>>0]},this.Sb=function(D){H[this.zb+8>>2>>>0]=D},this.Wb=function(){return H[this.zb+8>>2>>>0]},this.Tb=function(){N[this.zb>>2>>>0]=0},this.Ib=function(D){B[this.zb+12>>0>>>0]=D?1:0},this.Pb=function(){return B[this.zb+12>>0>>>0]!=0},this.Jb=function(D){B[this.zb+13>>0>>>0]=D?1:0},this.Lb=function(){return B[this.zb+13>>0>>>0]!=0},this.Rb=function(D,L){this.Fb(0),this.Ub(D),this.Sb(L),this.Tb(),this.Ib(!1),this.Jb(!1)},this.Nb=function(){N[this.zb>>2>>>0]+=1},this.Xb=function(){var D=N[this.zb>>2>>>0];return N[this.zb>>2>>>0]=D-1,D===1},this.Fb=function(D){H[this.zb+16>>2>>>0]=D},this.Ob=function(){return H[this.zb+16>>2>>>0]},this.Qb=function(){if(gt(this.Eb()))return H[this.Db>>2>>>0];var D=this.Ob();return D!==0?D:this.Db}}function Fe(P){return it(new Se(P).zb)}var Xe=[];function pe(P){var D=Xe[P];return D||(P>=Xe.length&&(Xe.length=P+1),Xe[P]=D=Ie.get(P)),D}function ht(P){var D=ne(P)+1,L=ye(D);return L&&te(P,B,L,D),L}var ut={};function Et(){if(!Ze){var P,D={USER:"web_user",LOGNAME:"web_user",PATH:"/",PWD:"/",HOME:"/home/web_user",LANG:(typeof navigator=="object"&&navigator.languages&&navigator.languages[0]||"C").replace("-","_")+".UTF-8",_:g||"./this.program"};for(P in ut)ut[P]===void 0?delete D[P]:D[P]=ut[P];var L=[];for(P in D)L.push(P+"="+D[P]);Ze=L}return Ze}var Ze,lt=[null,[],[]];function ct(P,D){var L=lt[P];D===0||D===10?((P===1?O:E)(J(L,0)),L.length=0):L.push(D)}var 
Ne=0;function rt(P){return P%4==0&&(P%100!=0||P%400==0)}var re=[31,29,31,30,31,30,31,31,30,31,30,31],nt=[31,28,31,30,31,30,31,31,30,31,30,31];function Pt(P,D,L,R){function U(G,ge,xe){for(G=typeof G=="number"?G.toString():G||"";G.lengthQe?-1:0Ge-G.getDate())){G.setDate(G.getDate()+ge);break}ge-=Ge-G.getDate()+1,G.setDate(1),11>xe?G.setMonth(xe+1):(G.setMonth(0),G.setFullYear(G.getFullYear()+1))}return xe=new Date(G.getFullYear()+1,0,4),ge=Q(new Date(G.getFullYear(),0,4)),xe=Q(xe),0>=Y(ge,G)?0>=Y(xe,G)?G.getFullYear()+1:G.getFullYear():G.getFullYear()-1}var ae=N[R+40>>2>>>0];for(var we in R={$b:N[R>>2>>>0],Zb:N[R+4>>2>>>0],Gb:N[R+8>>2>>>0],Kb:N[R+12>>2>>>0],Hb:N[R+16>>2>>>0],Cb:N[R+20>>2>>>0],Ab:N[R+24>>2>>>0],Bb:N[R+28>>2>>>0],bc:N[R+32>>2>>>0],Yb:N[R+36>>2>>>0],ac:ae?X(ae):""},L=X(L),ae={"%c":"%a %b %d %H:%M:%S %Y","%D":"%m/%d/%y","%F":"%Y-%m-%d","%h":"%b","%r":"%I:%M:%S %p","%R":"%H:%M","%T":"%H:%M:%S","%x":"%m/%d/%y","%X":"%H:%M:%S","%Ec":"%c","%EC":"%C","%Ex":"%m/%d/%y","%EX":"%H:%M:%S","%Ey":"%y","%EY":"%Y","%Od":"%d","%Oe":"%e","%OH":"%H","%OI":"%I","%Om":"%m","%OM":"%M","%OS":"%S","%Ou":"%u","%OU":"%U","%OV":"%V","%Ow":"%w","%OW":"%W","%Oy":"%y"})L=L.replace(new RegExp(we,"g"),ae[we]);var $e="Sunday Monday Tuesday Wednesday Thursday Friday Saturday".split(" "),De="January February March April May June July August September October November December".split(" ");for(we in ae={"%a":function(G){return $e[G.Ab].substring(0,3)},"%A":function(G){return $e[G.Ab]},"%b":function(G){return De[G.Hb].substring(0,3)},"%B":function(G){return De[G.Hb]},"%C":function(G){return W((G.Cb+1900)/100|0,2)},"%d":function(G){return W(G.Kb,2)},"%e":function(G){return U(G.Kb,2," ")},"%g":function(G){return Z(G).toString().substring(2)},"%G":function(G){return Z(G)},"%H":function(G){return W(G.Gb,2)},"%I":function(G){return(G=G.Gb)==0?G=12:12G.Gb?"AM":"PM"},"%S":function(G){return W(G.$b,2)},"%t":function(){return" "},"%u":function(G){return G.Ab||7},"%U":function(G){return W(Math.floor((G.Bb+7-G.Ab)/7),2)},"%V":function(G){var ge=Math.floor((G.Bb+7-(G.Ab+6)%7)/7);if(2>=(G.Ab+371-G.Bb-2)%7&&ge++,ge)ge==53&&((xe=(G.Ab+371-G.Bb)%7)==4||xe==3&&rt(G.Cb)||(ge=1));else{ge=52;var xe=(G.Ab+7-G.Bb-1)%7;(xe==4||xe==5&&rt(G.Cb%400-1))&&ge++}return W(ge,2)},"%w":function(G){return G.Ab},"%W":function(G){return W(Math.floor((G.Bb+7-(G.Ab+6)%7)/7),2)},"%y":function(G){return(G.Cb+1900).toString().substring(2)},"%Y":function(G){return G.Cb+1900},"%z":function(G){var ge=0<=(G=G.Yb);return G=Math.abs(G)/60,(ge?"+":"-")+("0000"+(G/60*100+G%60)).slice(-4)},"%Z":function(G){return G.ac},"%%":function(){return"%"}},L=L.replace(/%%/g,"\0\0"),ae)L.includes(we)&&(L=L.replace(new RegExp(we,"g"),ae[we](R)));return we=function(G){var ge=Array(ne(G)+1);return te(G,ge,0,ge.length),ge}(L=L.replace(/\0\0/g,"%")),we.length>D?0:(B.set(we,P>>>0),we.length-1)}var It={a:function(P){return ye(P+24)+24},m:function(P){return(P=new Se(P)).Pb()||(P.Ib(!0),Ve--),P.Jb(!1),Ke.push(P),P.Nb(),P.Qb()},ia:function(P){throw E("Unexpected exception thrown, this is not properly supported - aborting"),$=!0,P},w:function(){se(0);var P=Ke.pop();if(P.Xb()&&!P.Lb()){var D=P.Wb();D&&pe(D)(P.Db),Fe(P.Db)}Be=0},d:function(){var P=Be;if(!P)return Ne=0;var D=new Se(P);D.Fb(P);var L=D.Eb();if(!L)return Ne=0,P;for(var 
R=Array.prototype.slice.call(arguments),U=0;U>>2]+4294967296*N[P+4>>>2])),N[D>>2>>>0]=P.getUTCSeconds(),N[D+4>>2>>>0]=P.getUTCMinutes(),N[D+8>>2>>>0]=P.getUTCHours(),N[D+12>>2>>>0]=P.getUTCDate(),N[D+16>>2>>>0]=P.getUTCMonth(),N[D+20>>2>>>0]=P.getUTCFullYear()-1900,N[D+24>>2>>>0]=P.getUTCDay(),N[D+28>>2>>>0]=(P.getTime()-Date.UTC(P.getUTCFullYear(),0,1,0,0,0,0))/864e5|0},Ea:function(P,D){P=new Date(1e3*(H[P>>>2]+4294967296*N[P+4>>>2])),N[D>>2>>>0]=P.getSeconds(),N[D+4>>2>>>0]=P.getMinutes(),N[D+8>>2>>>0]=P.getHours(),N[D+12>>2>>>0]=P.getDate(),N[D+16>>2>>>0]=P.getMonth(),N[D+20>>2>>>0]=P.getFullYear()-1900,N[D+24>>2>>>0]=P.getDay();var L=new Date(P.getFullYear(),0,1);N[D+28>>2>>>0]=(P.getTime()-L.getTime())/864e5|0,N[D+36>>2>>>0]=-60*P.getTimezoneOffset();var R=new Date(P.getFullYear(),6,1).getTimezoneOffset();L=L.getTimezoneOffset(),N[D+32>>2>>>0]=0|(R!=L&&P.getTimezoneOffset()==Math.min(L,R))},Fa:function(P){var D=new Date(N[P+20>>2>>>0]+1900,N[P+16>>2>>>0],N[P+12>>2>>>0],N[P+8>>2>>>0],N[P+4>>2>>>0],N[P>>2>>>0],0),L=N[P+32>>2>>>0],R=D.getTimezoneOffset(),U=new Date(D.getFullYear(),0,1),W=new Date(D.getFullYear(),6,1).getTimezoneOffset(),Y=U.getTimezoneOffset(),Q=Math.min(Y,W);return 0>L?N[P+32>>2>>>0]=+(W!=Y&&Q==R):0>2>>>0]=D.getDay(),N[P+28>>2>>>0]=(D.getTime()-U.getTime())/864e5|0,N[P>>2>>>0]=D.getSeconds(),N[P+4>>2>>>0]=D.getMinutes(),N[P+8>>2>>>0]=D.getHours(),N[P+12>>2>>>0]=D.getDate(),N[P+16>>2>>>0]=D.getMonth(),D.getTime()/1e3|0},sa:function(){return-52},ta:function(){},Ga:function P(D,L,R){P.Vb||(P.Vb=!0,function(U,W,Y){function Q(De){return(De=De.toTimeString().match(/\(([A-Za-z ]+)\)$/))?De[1]:"GMT"}var Z=new Date().getFullYear(),ae=new Date(Z,0,1),we=new Date(Z,6,1);Z=ae.getTimezoneOffset();var $e=we.getTimezoneOffset();N[U>>2>>>0]=60*Math.max(Z,$e),N[W>>2>>>0]=+(Z!=$e),U=Q(ae),W=Q(we),U=ht(U),W=ht(W),$e>2>>>0]=U,H[Y+4>>2>>>0]=W):(H[Y>>2>>>0]=W,H[Y+4>>2>>>0]=U)}(D,L,R))},B:function(){Ee("")},ma:function(){return 4294901760},I:w?()=>{var P=process.hrtime();return 1e3*P[0]+P[1]/1e6}:()=>performance.now(),xa:function(P,D,L){F.copyWithin(P>>>0,D>>>0,D+L>>>0)},G:function(P){var D=F.length;if(4294901760<(P>>>=0))return!1;for(var L=1;4>=L;L*=2){var R=D*(1+.2/L);R=Math.min(R,P+100663296);var U=Math;R=Math.max(P,R),U=U.min.call(U,4294901760,R+(65536-R%65536)%65536);e:{try{I.grow(U-C.byteLength+65535>>>16),me();var W=1;break e}catch{}W=void 0}if(W)return!0}return!1},va:function(P,D){var L=0;return Et().forEach(function(R,U){var W=D+L;for(U=H[P+4*U>>2>>>0]=W,W=0;W>0>>>0]=R.charCodeAt(W);B[U>>0>>>0]=0,L+=R.length+1}),0},wa:function(P,D){var L=Et();H[P>>2>>>0]=L.length;var R=0;return L.forEach(function(U){R+=U.length+1}),H[D>>2>>>0]=R,0},ba:function(P){T||0>2>>>0],Q=H[D+4>>2>>>0];D+=8;for(var Z=0;Z>>0]);U+=Q}return H[R>>2>>>0]=U,0},c:function(){return Ne},ja:function P(D,L){P.Mb||(P.Mb=function(){if(typeof crypto=="object"&&typeof crypto.getRandomValues=="function"){var U=new Uint8Array(1);return()=>(crypto.getRandomValues(U),U[0])}if(w)try{var W=u(Object(function(){var Y=new Error("Cannot find module 'crypto'");throw Y.code="MODULE_NOT_FOUND",Y}()));return()=>W.randomBytes(1)[0]}catch{}return()=>Ee("randomDevice")}());for(var R=0;R>0>>>0]=P.Mb();return 0},ea:function(P,D,L){var R=ie();try{return pe(P)(D,L)}catch(U){if(oe(R),U!==U+0)throw U;se(1,0)}},fa:function(P,D,L){var R=ie();try{return pe(P)(D,L)}catch(U){if(oe(R),U!==U+0)throw U;se(1,0)}},J:function(P){var D=ie();try{return pe(P)()}catch(L){if(oe(D),L!==L+0)throw L;se(1,0)}},e:function(P,D){var L=ie();try{return 
pe(P)(D)}catch(R){if(oe(L),R!==R+0)throw R;se(1,0)}},N:function(P,D,L){var R=ie();try{return pe(P)(D,L)}catch(U){if(oe(R),U!==U+0)throw U;se(1,0)}},O:function(P,D,L){var R=ie();try{return pe(P)(D,L)}catch(U){if(oe(R),U!==U+0)throw U;se(1,0)}},j:function(P,D,L){var R=ie();try{return pe(P)(D,L)}catch(U){if(oe(R),U!==U+0)throw U;se(1,0)}},o:function(P,D,L,R){var U=ie();try{return pe(P)(D,L,R)}catch(W){if(oe(U),W!==W+0)throw W;se(1,0)}},p:function(P,D,L,R,U){var W=ie();try{return pe(P)(D,L,R,U)}catch(Y){if(oe(W),Y!==Y+0)throw Y;se(1,0)}},M:function(P,D,L,R,U,W){var Y=ie();try{return pe(P)(D,L,R,U,W)}catch(Q){if(oe(Y),Q!==Q+0)throw Q;se(1,0)}},r:function(P,D,L,R,U,W){var Y=ie();try{return pe(P)(D,L,R,U,W)}catch(Q){if(oe(Y),Q!==Q+0)throw Q;se(1,0)}},v:function(P,D,L,R,U,W,Y){var Q=ie();try{return pe(P)(D,L,R,U,W,Y)}catch(Z){if(oe(Q),Z!==Z+0)throw Z;se(1,0)}},K:function(P,D,L,R,U,W,Y,Q){var Z=ie();try{return pe(P)(D,L,R,U,W,Y,Q)}catch(ae){if(oe(Z),ae!==ae+0)throw ae;se(1,0)}},D:function(P,D,L,R,U,W,Y,Q,Z,ae,we,$e){var De=ie();try{return pe(P)(D,L,R,U,W,Y,Q,Z,ae,we,$e)}catch(G){if(oe(De),G!==G+0)throw G;se(1,0)}},X:function(P,D,L,R,U,W,Y,Q){var Z=ie();try{return St(P,D,L,R,U,W,Y,Q)}catch(ae){if(oe(Z),ae!==ae+0)throw ae;se(1,0)}},V:function(P,D,L,R,U,W,Y){var Q=ie();try{return bt(P,D,L,R,U,W,Y)}catch(Z){if(oe(Q),Z!==Z+0)throw Z;se(1,0)}},U:function(P,D,L,R,U){var W=ie();try{return Ot(P,D,L,R,U)}catch(Y){if(oe(W),Y!==Y+0)throw Y;se(1,0)}},Z:function(P,D,L,R){var U=ie();try{return Tt(P,D,L,R)}catch(W){if(oe(U),W!==W+0)throw W;se(1,0)}},W:function(P){var D=ie();try{return mt(P)}catch(L){if(oe(D),L!==L+0)throw L;se(1,0)}},Y:function(P,D){var L=ie();try{return xt(P,D)}catch(R){if(oe(L),R!==R+0)throw R;se(1,0)}},T:function(P,D,L){var R=ie();try{return yt(P,D,L)}catch(U){if(oe(R),U!==U+0)throw U;se(1,0)}},f:function(P){var D=ie();try{pe(P)()}catch(L){if(oe(D),L!==L+0)throw L;se(1,0)}},q:function(P,D){var L=ie();try{pe(P)(D)}catch(R){if(oe(L),R!==R+0)throw R;se(1,0)}},h:function(P,D,L){var R=ie();try{pe(P)(D,L)}catch(U){if(oe(R),U!==U+0)throw U;se(1,0)}},da:function(P,D,L,R){var U=ie();try{pe(P)(D,L,R)}catch(W){if(oe(U),W!==W+0)throw W;se(1,0)}},l:function(P,D,L,R){var U=ie();try{pe(P)(D,L,R)}catch(W){if(oe(U),W!==W+0)throw W;se(1,0)}},t:function(P,D,L,R,U){var W=ie();try{pe(P)(D,L,R,U)}catch(Y){if(oe(W),Y!==Y+0)throw Y;se(1,0)}},u:function(P,D,L,R,U,W){var Y=ie();try{pe(P)(D,L,R,U,W)}catch(Q){if(oe(Y),Q!==Q+0)throw Q;se(1,0)}},x:function(P,D,L,R,U,W,Y){var Q=ie();try{pe(P)(D,L,R,U,W,Y)}catch(Z){if(oe(Q),Z!==Z+0)throw Z;se(1,0)}},z:function(P,D,L,R,U,W,Y,Q){var Z=ie();try{pe(P)(D,L,R,U,W,Y,Q)}catch(ae){if(oe(Z),ae!==ae+0)throw ae;se(1,0)}},ga:function(P,D,L,R,U,W,Y,Q,Z){var ae=ie();try{pe(P)(D,L,R,U,W,Y,Q,Z)}catch(we){if(oe(ae),we!==we+0)throw we;se(1,0)}},A:function(P,D,L,R,U,W,Y,Q,Z,ae,we){var $e=ie();try{pe(P)(D,L,R,U,W,Y,Q,Z,ae,we)}catch(De){if(oe($e),De!==De+0)throw De;se(1,0)}},C:function(P,D,L,R,U,W,Y,Q,Z,ae,we,$e,De,G,ge,xe){var Ge=ie();try{pe(P)(D,L,R,U,W,Y,Q,Z,ae,we,$e,De,G,ge,xe)}catch(Qe){if(oe(Ge),Qe!==Qe+0)throw Qe;se(1,0)}},aa:function(P,D,L,R,U,W,Y,Q){var Z=ie();try{_t(P,D,L,R,U,W,Y,Q)}catch(ae){if(oe(Z),ae!==ae+0)throw ae;se(1,0)}},_:function(P,D,L,R,U,W,Y,Q,Z,ae,we,$e){var De=ie();try{vt(P,D,L,R,U,W,Y,Q,Z,ae,we,$e)}catch(G){if(oe(De),G!==G+0)throw G;se(1,0)}},$:function(P,D,L,R,U,W){var Y=ie();try{wt(P,D,L,R,U,W)}catch(Q){if(oe(Y),Q!==Q+0)throw Q;se(1,0)}},n:function(P){return P},F:function(P){Ne=P},ha:Pt,y:function(P,D,L,R){return Pt(P,D,L,R)}};(function(){function 
P(U){s.asm=U.exports,I=s.asm.Ka,me(),Ie=s.asm.ib,ce.unshift(s.asm.La),Ce--,s.monitorRunDependencies&&s.monitorRunDependencies(Ce),Ce==0&&Me&&(U=Me,Me=null,U())}function D(U){P(U.instance)}function L(U){return function(){if(!S&&(b||_)){if(typeof fetch=="function"&&!Ae.startsWith("file://"))return fetch(Ae,{credentials:"same-origin"}).then(function(W){if(!W.ok)throw"failed to load wasm binary file at '"+Ae+"'";return W.arrayBuffer()}).catch(function(){return ze()});if(o)return new Promise(function(W,Y){o(Ae,function(Q){W(new Uint8Array(Q))},Y)})}return Promise.resolve().then(function(){return ze()})}().then(function(W){return WebAssembly.instantiate(W,R)}).then(function(W){return W}).then(U,function(W){E("failed to asynchronously prepare wasm: "+W),Ee(W)})}var R={a:It};if(Ce++,s.monitorRunDependencies&&s.monitorRunDependencies(Ce),s.instantiateWasm)try{return s.instantiateWasm(R,P)}catch(U){return E("Module.instantiateWasm callback failed with error: "+U),!1}(S||typeof WebAssembly.instantiateStreaming!="function"||ve()||Ae.startsWith("file://")||w||typeof fetch!="function"?L(D):fetch(Ae,{credentials:"same-origin"}).then(function(U){return WebAssembly.instantiateStreaming(U,R).then(D,function(W){return E("wasm streaming compile failed: "+W),E("falling back to ArrayBuffer instantiation"),L(D)})})).catch(f)})(),s.___wasm_call_ctors=function(){return(s.___wasm_call_ctors=s.asm.La).apply(null,arguments)},s._OrtInit=function(){return(s._OrtInit=s.asm.Ma).apply(null,arguments)},s._OrtCreateSessionOptions=function(){return(s._OrtCreateSessionOptions=s.asm.Na).apply(null,arguments)},s._OrtAppendExecutionProvider=function(){return(s._OrtAppendExecutionProvider=s.asm.Oa).apply(null,arguments)},s._OrtAddSessionConfigEntry=function(){return(s._OrtAddSessionConfigEntry=s.asm.Pa).apply(null,arguments)},s._OrtReleaseSessionOptions=function(){return(s._OrtReleaseSessionOptions=s.asm.Qa).apply(null,arguments)},s._OrtCreateSession=function(){return(s._OrtCreateSession=s.asm.Ra).apply(null,arguments)},s._OrtReleaseSession=function(){return(s._OrtReleaseSession=s.asm.Sa).apply(null,arguments)},s._OrtGetInputCount=function(){return(s._OrtGetInputCount=s.asm.Ta).apply(null,arguments)},s._OrtGetOutputCount=function(){return(s._OrtGetOutputCount=s.asm.Ua).apply(null,arguments)},s._OrtGetInputName=function(){return(s._OrtGetInputName=s.asm.Va).apply(null,arguments)},s._OrtGetOutputName=function(){return(s._OrtGetOutputName=s.asm.Wa).apply(null,arguments)},s._OrtFree=function(){return(s._OrtFree=s.asm.Xa).apply(null,arguments)},s._OrtCreateTensor=function(){return(s._OrtCreateTensor=s.asm.Ya).apply(null,arguments)},s._OrtGetTensorData=function(){return(s._OrtGetTensorData=s.asm.Za).apply(null,arguments)},s._OrtReleaseTensor=function(){return(s._OrtReleaseTensor=s.asm._a).apply(null,arguments)},s._OrtCreateRunOptions=function(){return(s._OrtCreateRunOptions=s.asm.$a).apply(null,arguments)},s._OrtAddRunConfigEntry=function(){return(s._OrtAddRunConfigEntry=s.asm.ab).apply(null,arguments)},s._OrtReleaseRunOptions=function(){return(s._OrtReleaseRunOptions=s.asm.bb).apply(null,arguments)},s._OrtRun=function(){return(s._OrtRun=s.asm.cb).apply(null,arguments)},s._OrtEndProfiling=function(){return(s._OrtEndProfiling=s.asm.db).apply(null,arguments)};var 
Je,ye=s._malloc=function(){return(ye=s._malloc=s.asm.eb).apply(null,arguments)},it=s._free=function(){return(it=s._free=s.asm.fb).apply(null,arguments)},pt=s._fflush=function(){return(pt=s._fflush=s.asm.gb).apply(null,arguments)},ot=s.___funcs_on_exit=function(){return(ot=s.___funcs_on_exit=s.asm.hb).apply(null,arguments)},se=s._setThrew=function(){return(se=s._setThrew=s.asm.jb).apply(null,arguments)},ie=s.stackSave=function(){return(ie=s.stackSave=s.asm.kb).apply(null,arguments)},oe=s.stackRestore=function(){return(oe=s.stackRestore=s.asm.lb).apply(null,arguments)},ft=s.stackAlloc=function(){return(ft=s.stackAlloc=s.asm.mb).apply(null,arguments)},st=s.___cxa_can_catch=function(){return(st=s.___cxa_can_catch=s.asm.nb).apply(null,arguments)},gt=s.___cxa_is_pointer_type=function(){return(gt=s.___cxa_is_pointer_type=s.asm.ob).apply(null,arguments)},mt=s.dynCall_j=function(){return(mt=s.dynCall_j=s.asm.pb).apply(null,arguments)},bt=s.dynCall_iiiiij=function(){return(bt=s.dynCall_iiiiij=s.asm.qb).apply(null,arguments)},yt=s.dynCall_jii=function(){return(yt=s.dynCall_jii=s.asm.rb).apply(null,arguments)},_t=s.dynCall_viiiiij=function(){return(_t=s.dynCall_viiiiij=s.asm.sb).apply(null,arguments)},wt=s.dynCall_vjji=function(){return(wt=s.dynCall_vjji=s.asm.tb).apply(null,arguments)},vt=s.dynCall_viiijjjii=function(){return(vt=s.dynCall_viiijjjii=s.asm.ub).apply(null,arguments)},Tt=s.dynCall_iij=function(){return(Tt=s.dynCall_iij=s.asm.vb).apply(null,arguments)},xt=s.dynCall_ji=function(){return(xt=s.dynCall_ji=s.asm.wb).apply(null,arguments)},St=s.dynCall_iiiiiij=function(){return(St=s.dynCall_iiiiiij=s.asm.xb).apply(null,arguments)},Ot=s.dynCall_iiij=function(){return(Ot=s.dynCall_iiij=s.asm.yb).apply(null,arguments)};function At(){function P(){if(!Je&&(Je=!0,s.calledRun=!0,!$)){if(He(ce),h(s),s.onRuntimeInitialized&&s.onRuntimeInitialized(),s.postRun)for(typeof s.postRun=="function"&&(s.postRun=[s.postRun]);s.postRun.length;){var D=s.postRun.shift();_e.unshift(D)}He(_e)}}if(!(0{y.exports=function(n,u){for(var d=new Array(arguments.length-1),l=0,p=2,s=!0;p{var u=n;u.length=function(h){var f=h.length;if(!f)return 0;for(var a=0;--f%4>1&&h.charAt(f)==="=";)++a;return Math.ceil(3*h.length)/4-a};for(var d=new Array(64),l=new Array(123),p=0;p<64;)l[d[p]=p<26?p+65:p<52?p+71:p<62?p-4:p-59|43]=p++;u.encode=function(h,f,a){for(var o,t=null,e=[],r=0,i=0;f>2],o=(3&c)<<4,i=1;break;case 1:e[r++]=d[o|c>>4],o=(15&c)<<2,i=2;break;case 2:e[r++]=d[o|c>>6],e[r++]=d[63&c],i=0}r>8191&&((t||(t=[])).push(String.fromCharCode.apply(String,e)),r=0)}return i&&(e[r++]=d[o],e[r++]=61,i===1&&(e[r++]=61)),t?(r&&t.push(String.fromCharCode.apply(String,e.slice(0,r))),t.join("")):String.fromCharCode.apply(String,e.slice(0,r))};var s="invalid encoding";u.decode=function(h,f,a){for(var o,t=a,e=0,r=0;r1)break;if((i=l[i])===void 0)throw Error(s);switch(e){case 0:o=i,e=1;break;case 1:f[a++]=o<<2|(48&i)>>4,o=i,e=2;break;case 2:f[a++]=(15&o)<<4|(60&i)>>2,o=i,e=3;break;case 3:f[a++]=(3&o)<<6|i,e=0}}if(e===1)throw Error(s);return a-t},u.test=function(h){return/^(?:[A-Za-z0-9+/]{4})*(?:[A-Za-z0-9+/]{2}==|[A-Za-z0-9+/]{3}=)?$/.test(h)}},9211:y=>{function n(){this._listeners={}}y.exports=n,n.prototype.on=function(u,d,l){return(this._listeners[u]||(this._listeners[u]=[])).push({fn:d,ctx:l||this}),this},n.prototype.off=function(u,d){if(u===void 0)this._listeners={};else if(d===void 0)this._listeners[u]=[];else for(var l=this._listeners[u],p=0;p{function n(s){return typeof Float32Array<"u"?function(){var h=new Float32Array([-0]),f=new 
Uint8Array(h.buffer),a=f[3]===128;function o(i,c,g){h[0]=i,c[g]=f[0],c[g+1]=f[1],c[g+2]=f[2],c[g+3]=f[3]}function t(i,c,g){h[0]=i,c[g]=f[3],c[g+1]=f[2],c[g+2]=f[1],c[g+3]=f[0]}function e(i,c){return f[0]=i[c],f[1]=i[c+1],f[2]=i[c+2],f[3]=i[c+3],h[0]}function r(i,c){return f[3]=i[c],f[2]=i[c+1],f[1]=i[c+2],f[0]=i[c+3],h[0]}s.writeFloatLE=a?o:t,s.writeFloatBE=a?t:o,s.readFloatLE=a?e:r,s.readFloatBE=a?r:e}():function(){function h(a,o,t,e){var r=o<0?1:0;if(r&&(o=-o),o===0)a(1/o>0?0:2147483648,t,e);else if(isNaN(o))a(2143289344,t,e);else if(o>34028234663852886e22)a((r<<31|2139095040)>>>0,t,e);else if(o<11754943508222875e-54)a((r<<31|Math.round(o/1401298464324817e-60))>>>0,t,e);else{var i=Math.floor(Math.log(o)/Math.LN2);a((r<<31|i+127<<23|8388607&Math.round(o*Math.pow(2,-i)*8388608))>>>0,t,e)}}function f(a,o,t){var e=a(o,t),r=2*(e>>31)+1,i=e>>>23&255,c=8388607&e;return i===255?c?NaN:r*(1/0):i===0?1401298464324817e-60*r*c:r*Math.pow(2,i-150)*(c+8388608)}s.writeFloatLE=h.bind(null,u),s.writeFloatBE=h.bind(null,d),s.readFloatLE=f.bind(null,l),s.readFloatBE=f.bind(null,p)}(),typeof Float64Array<"u"?function(){var h=new Float64Array([-0]),f=new Uint8Array(h.buffer),a=f[7]===128;function o(i,c,g){h[0]=i,c[g]=f[0],c[g+1]=f[1],c[g+2]=f[2],c[g+3]=f[3],c[g+4]=f[4],c[g+5]=f[5],c[g+6]=f[6],c[g+7]=f[7]}function t(i,c,g){h[0]=i,c[g]=f[7],c[g+1]=f[6],c[g+2]=f[5],c[g+3]=f[4],c[g+4]=f[3],c[g+5]=f[2],c[g+6]=f[1],c[g+7]=f[0]}function e(i,c){return f[0]=i[c],f[1]=i[c+1],f[2]=i[c+2],f[3]=i[c+3],f[4]=i[c+4],f[5]=i[c+5],f[6]=i[c+6],f[7]=i[c+7],h[0]}function r(i,c){return f[7]=i[c],f[6]=i[c+1],f[5]=i[c+2],f[4]=i[c+3],f[3]=i[c+4],f[2]=i[c+5],f[1]=i[c+6],f[0]=i[c+7],h[0]}s.writeDoubleLE=a?o:t,s.writeDoubleBE=a?t:o,s.readDoubleLE=a?e:r,s.readDoubleBE=a?r:e}():function(){function h(a,o,t,e,r,i){var c=e<0?1:0;if(c&&(e=-e),e===0)a(0,r,i+o),a(1/e>0?0:2147483648,r,i+t);else if(isNaN(e))a(0,r,i+o),a(2146959360,r,i+t);else if(e>17976931348623157e292)a(0,r,i+o),a((c<<31|2146435072)>>>0,r,i+t);else{var g;if(e<22250738585072014e-324)a((g=e/5e-324)>>>0,r,i+o),a((c<<31|g/4294967296)>>>0,r,i+t);else{var m=Math.floor(Math.log(e)/Math.LN2);m===1024&&(m=1023),a(4503599627370496*(g=e*Math.pow(2,-m))>>>0,r,i+o),a((c<<31|m+1023<<20|1048576*g&1048575)>>>0,r,i+t)}}}function f(a,o,t,e,r){var i=a(e,r+o),c=a(e,r+t),g=2*(c>>31)+1,m=c>>>20&2047,b=4294967296*(1048575&c)+i;return m===2047?b?NaN:g*(1/0):m===0?5e-324*g*b:g*Math.pow(2,m-1075)*(b+4503599627370496)}s.writeDoubleLE=h.bind(null,u,0,4),s.writeDoubleBE=h.bind(null,d,4,0),s.readDoubleLE=f.bind(null,l,0,4),s.readDoubleBE=f.bind(null,p,4,0)}(),s}function u(s,h,f){h[f]=255&s,h[f+1]=s>>>8&255,h[f+2]=s>>>16&255,h[f+3]=s>>>24}function d(s,h,f){h[f]=s>>>24,h[f+1]=s>>>16&255,h[f+2]=s>>>8&255,h[f+3]=255&s}function l(s,h){return(s[h]|s[h+1]<<8|s[h+2]<<16|s[h+3]<<24)>>>0}function p(s,h){return(s[h]<<24|s[h+1]<<16|s[h+2]<<8|s[h+3])>>>0}y.exports=n(n)},7199:module=>{function inquire(moduleName){try{var mod=eval("quire".replace(/^/,"re"))(moduleName);if(mod&&(mod.length||Object.keys(mod).length))return mod}catch(y){}return null}module.exports=inquire},6662:y=>{y.exports=function(n,u,d){var l=d||8192,p=l>>>1,s=null,h=l;return function(f){if(f<1||f>p)return n(f);h+f>l&&(s=n(l),h=0);var a=u.call(s,h,h+=f);return 7&h&&(h=1+(7|h)),a}}},4997:(y,n)=>{var u=n;u.length=function(d){for(var 
l=0,p=0,s=0;s191&&s<224?f[a++]=(31&s)<<6|63&d[l++]:s>239&&s<365?(s=((7&s)<<18|(63&d[l++])<<12|(63&d[l++])<<6|63&d[l++])-65536,f[a++]=55296+(s>>10),f[a++]=56320+(1023&s)):f[a++]=(15&s)<<12|(63&d[l++])<<6|63&d[l++],a>8191&&((h||(h=[])).push(String.fromCharCode.apply(String,f)),a=0);return h?(a&&h.push(String.fromCharCode.apply(String,f.slice(0,a))),h.join("")):String.fromCharCode.apply(String,f.slice(0,a))},u.write=function(d,l,p){for(var s,h,f=p,a=0;a>6|192,l[p++]=63&s|128):(64512&s)==55296&&(64512&(h=d.charCodeAt(a+1)))==56320?(s=65536+((1023&s)<<10)+(1023&h),++a,l[p++]=s>>18|240,l[p++]=s>>12&63|128,l[p++]=s>>6&63|128,l[p++]=63&s|128):(l[p++]=s>>12|224,l[p++]=s>>6&63|128,l[p++]=63&s|128);return p-f}},3442:(y,n)=>{n.__esModule=!0;var u=function(){function d(l){if(!l)throw new TypeError("Invalid argument; `value` has no value.");this.value=d.EMPTY,l&&d.isGuid(l)&&(this.value=l)}return d.isGuid=function(l){var p=l.toString();return l&&(l instanceof d||d.validator.test(p))},d.create=function(){return new d([d.gen(2),d.gen(1),d.gen(1),d.gen(1),d.gen(3)].join("-"))},d.createEmpty=function(){return new d("emptyguid")},d.parse=function(l){return new d(l)},d.raw=function(){return[d.gen(2),d.gen(1),d.gen(1),d.gen(1),d.gen(3)].join("-")},d.gen=function(l){for(var p="",s=0;s{y.exports=u;var n=null;try{n=new WebAssembly.Instance(new WebAssembly.Module(new Uint8Array([0,97,115,109,1,0,0,0,1,13,2,96,0,1,127,96,4,127,127,127,127,1,127,3,7,6,0,1,1,1,1,1,6,6,1,127,1,65,0,11,7,50,6,3,109,117,108,0,1,5,100,105,118,95,115,0,2,5,100,105,118,95,117,0,3,5,114,101,109,95,115,0,4,5,114,101,109,95,117,0,5,8,103,101,116,95,104,105,103,104,0,0,10,191,1,6,4,0,35,0,11,36,1,1,126,32,0,173,32,1,173,66,32,134,132,32,2,173,32,3,173,66,32,134,132,126,34,4,66,32,135,167,36,0,32,4,167,11,36,1,1,126,32,0,173,32,1,173,66,32,134,132,32,2,173,32,3,173,66,32,134,132,127,34,4,66,32,135,167,36,0,32,4,167,11,36,1,1,126,32,0,173,32,1,173,66,32,134,132,32,2,173,32,3,173,66,32,134,132,128,34,4,66,32,135,167,36,0,32,4,167,11,36,1,1,126,32,0,173,32,1,173,66,32,134,132,32,2,173,32,3,173,66,32,134,132,129,34,4,66,32,135,167,36,0,32,4,167,11,36,1,1,126,32,0,173,32,1,173,66,32,134,132,32,2,173,32,3,173,66,32,134,132,130,34,4,66,32,135,167,36,0,32,4,167,11])),{}).exports}catch{}function u(T,I,C){this.low=0|T,this.high=0|I,this.unsigned=!!C}function d(T){return(T&&T.__isLong__)===!0}u.prototype.__isLong__,Object.defineProperty(u.prototype,"__isLong__",{value:!0}),u.isLong=d;var l={},p={};function s(T,I){var C,B,F;return I?(F=0<=(T>>>=0)&&T<256)&&(B=p[T])?B:(C=f(T,(0|T)<0?-1:0,!0),F&&(p[T]=C),C):(F=-128<=(T|=0)&&T<128)&&(B=l[T])?B:(C=f(T,T<0?-1:0,!1),F&&(l[T]=C),C)}function h(T,I){if(isNaN(T))return I?m:g;if(I){if(T<0)return m;if(T>=r)return S}else{if(T<=-i)return O;if(T+1>=i)return v}return T<0?h(-T,I).neg():f(T%e|0,T/e|0,I)}function f(T,I,C){return new u(T,I,C)}u.fromInt=s,u.fromNumber=h,u.fromBits=f;var a=Math.pow;function o(T,I,C){if(T.length===0)throw Error("empty string");if(T==="NaN"||T==="Infinity"||T==="+Infinity"||T==="-Infinity")return g;if(typeof I=="number"?(C=I,I=!1):I=!!I,(C=C||10)<2||360)throw Error("interior hyphen");if(B===0)return o(T.substring(1),I,C).neg();for(var F=h(a(C,8)),N=g,H=0;H>>0:this.low},E.toNumber=function(){return this.unsigned?(this.high>>>0)*e+(this.low>>>0):this.high*e+(this.low>>>0)},E.toString=function(T){if((T=T||10)<2||36>>0).toString(T);if((N=$).isZero())return z+H;for(;z.length<6;)z="0"+z;H=""+z+H}},E.getHighBits=function(){return this.high},E.getHighBitsUnsigned=function(){return 
this.high>>>0},E.getLowBits=function(){return this.low},E.getLowBitsUnsigned=function(){return this.low>>>0},E.getNumBitsAbs=function(){if(this.isNegative())return this.eq(O)?64:this.neg().getNumBitsAbs();for(var T=this.high!=0?this.high:this.low,I=31;I>0&&!(T&1<=0},E.isOdd=function(){return(1&this.low)==1},E.isEven=function(){return(1&this.low)==0},E.equals=function(T){return d(T)||(T=t(T)),(this.unsigned===T.unsigned||this.high>>>31!=1||T.high>>>31!=1)&&this.high===T.high&&this.low===T.low},E.eq=E.equals,E.notEquals=function(T){return!this.eq(T)},E.neq=E.notEquals,E.ne=E.notEquals,E.lessThan=function(T){return this.comp(T)<0},E.lt=E.lessThan,E.lessThanOrEqual=function(T){return this.comp(T)<=0},E.lte=E.lessThanOrEqual,E.le=E.lessThanOrEqual,E.greaterThan=function(T){return this.comp(T)>0},E.gt=E.greaterThan,E.greaterThanOrEqual=function(T){return this.comp(T)>=0},E.gte=E.greaterThanOrEqual,E.ge=E.greaterThanOrEqual,E.compare=function(T){if(d(T)||(T=t(T)),this.eq(T))return 0;var I=this.isNegative(),C=T.isNegative();return I&&!C?-1:!I&&C?1:this.unsigned?T.high>>>0>this.high>>>0||T.high===this.high&&T.low>>>0>this.low>>>0?-1:1:this.sub(T).isNegative()?-1:1},E.comp=E.compare,E.negate=function(){return!this.unsigned&&this.eq(O)?O:this.not().add(b)},E.neg=E.negate,E.add=function(T){d(T)||(T=t(T));var I=this.high>>>16,C=65535&this.high,B=this.low>>>16,F=65535&this.low,N=T.high>>>16,H=65535&T.high,$=T.low>>>16,z=0,J=0,X=0,te=0;return X+=(te+=F+(65535&T.low))>>>16,J+=(X+=B+$)>>>16,z+=(J+=C+H)>>>16,z+=I+N,f((X&=65535)<<16|(te&=65535),(z&=65535)<<16|(J&=65535),this.unsigned)},E.subtract=function(T){return d(T)||(T=t(T)),this.add(T.neg())},E.sub=E.subtract,E.multiply=function(T){if(this.isZero())return g;if(d(T)||(T=t(T)),n)return f(n.mul(this.low,this.high,T.low,T.high),n.get_high(),this.unsigned);if(T.isZero())return g;if(this.eq(O))return T.isOdd()?O:g;if(T.eq(O))return this.isOdd()?O:g;if(this.isNegative())return T.isNegative()?this.neg().mul(T.neg()):this.neg().mul(T).neg();if(T.isNegative())return this.mul(T.neg()).neg();if(this.lt(c)&&T.lt(c))return h(this.toNumber()*T.toNumber(),this.unsigned);var I=this.high>>>16,C=65535&this.high,B=this.low>>>16,F=65535&this.low,N=T.high>>>16,H=65535&T.high,$=T.low>>>16,z=65535&T.low,J=0,X=0,te=0,ne=0;return te+=(ne+=F*z)>>>16,X+=(te+=B*z)>>>16,te&=65535,X+=(te+=F*$)>>>16,J+=(X+=C*z)>>>16,X&=65535,J+=(X+=B*$)>>>16,X&=65535,J+=(X+=F*H)>>>16,J+=I*z+C*$+B*H+F*N,f((te&=65535)<<16|(ne&=65535),(J&=65535)<<16|(X&=65535),this.unsigned)},E.mul=E.multiply,E.divide=function(T){if(d(T)||(T=t(T)),T.isZero())throw Error("division by zero");var I,C,B;if(n)return this.unsigned||this.high!==-2147483648||T.low!==-1||T.high!==-1?f((this.unsigned?n.div_u:n.div_s)(this.low,this.high,T.low,T.high),n.get_high(),this.unsigned):this;if(this.isZero())return this.unsigned?m:g;if(this.unsigned){if(T.unsigned||(T=T.toUnsigned()),T.gt(this))return m;if(T.gt(this.shru(1)))return _;B=m}else{if(this.eq(O))return T.eq(b)||T.eq(w)?O:T.eq(O)?b:(I=this.shr(1).div(T).shl(1)).eq(g)?T.isNegative()?b:w:(C=this.sub(T.mul(I)),B=I.add(C.div(T)));if(T.eq(O))return this.unsigned?m:g;if(this.isNegative())return T.isNegative()?this.neg().div(T.neg()):this.neg().div(T).neg();if(T.isNegative())return this.div(T.neg()).neg();B=g}for(C=this;C.gte(T);){I=Math.max(1,Math.floor(C.toNumber()/T.toNumber()));for(var F=Math.ceil(Math.log(I)/Math.LN2),N=F<=48?1:a(2,F-48),H=h(I),$=H.mul(T);$.isNegative()||$.gt(C);)$=(H=h(I-=N,this.unsigned)).mul(T);H.isZero()&&(H=b),B=B.add(H),C=C.sub($)}return 
B},E.div=E.divide,E.modulo=function(T){return d(T)||(T=t(T)),n?f((this.unsigned?n.rem_u:n.rem_s)(this.low,this.high,T.low,T.high),n.get_high(),this.unsigned):this.sub(this.div(T).mul(T))},E.mod=E.modulo,E.rem=E.modulo,E.not=function(){return f(~this.low,~this.high,this.unsigned)},E.and=function(T){return d(T)||(T=t(T)),f(this.low&T.low,this.high&T.high,this.unsigned)},E.or=function(T){return d(T)||(T=t(T)),f(this.low|T.low,this.high|T.high,this.unsigned)},E.xor=function(T){return d(T)||(T=t(T)),f(this.low^T.low,this.high^T.high,this.unsigned)},E.shiftLeft=function(T){return d(T)&&(T=T.toInt()),(T&=63)==0?this:T<32?f(this.low<>>32-T,this.unsigned):f(0,this.low<>>T|this.high<<32-T,this.high>>T,this.unsigned):f(this.high>>T-32,this.high>=0?0:-1,this.unsigned)},E.shr=E.shiftRight,E.shiftRightUnsigned=function(T){if(d(T)&&(T=T.toInt()),(T&=63)==0)return this;var I=this.high;return T<32?f(this.low>>>T|I<<32-T,I>>>T,this.unsigned):f(T===32?I:I>>>T-32,0,this.unsigned)},E.shru=E.shiftRightUnsigned,E.shr_u=E.shiftRightUnsigned,E.toSigned=function(){return this.unsigned?f(this.low,this.high,!1):this},E.toUnsigned=function(){return this.unsigned?this:f(this.low,this.high,!0)},E.toBytes=function(T){return T?this.toBytesLE():this.toBytesBE()},E.toBytesLE=function(){var T=this.high,I=this.low;return[255&I,I>>>8&255,I>>>16&255,I>>>24,255&T,T>>>8&255,T>>>16&255,T>>>24]},E.toBytesBE=function(){var T=this.high,I=this.low;return[T>>>24,T>>>16&255,T>>>8&255,255&T,I>>>24,I>>>16&255,I>>>8&255,255&I]},u.fromBytes=function(T,I,C){return C?u.fromBytesLE(T,I):u.fromBytesBE(T,I)},u.fromBytesLE=function(T,I){return new u(T[0]|T[1]<<8|T[2]<<16|T[3]<<24,T[4]|T[5]<<8|T[6]<<16|T[7]<<24,I)},u.fromBytesBE=function(T,I){return new u(T[4]<<24|T[5]<<16|T[6]<<8|T[7],T[0]<<24|T[1]<<16|T[2]<<8|T[3],I)}},1446:(y,n,u)=>{var d,l,p,s=u(2100),h=s.Reader,f=s.Writer,a=s.util,o=s.roots.default||(s.roots.default={});o.onnx=((p={}).Version=(d={},(l=Object.create(d))[d[0]="_START_VERSION"]=0,l[d[1]="IR_VERSION_2017_10_10"]=1,l[d[2]="IR_VERSION_2017_10_30"]=2,l[d[3]="IR_VERSION_2017_11_3"]=3,l[d[4]="IR_VERSION_2019_1_22"]=4,l[d[5]="IR_VERSION"]=5,l),p.AttributeProto=function(){function t(e){if(this.floats=[],this.ints=[],this.strings=[],this.tensors=[],this.graphs=[],e)for(var r=Object.keys(e),i=0;i>>3){case 1:c.name=e.string();break;case 21:c.refAttrName=e.string();break;case 13:c.docString=e.string();break;case 20:c.type=e.int32();break;case 2:c.f=e.float();break;case 3:c.i=e.int64();break;case 4:c.s=e.bytes();break;case 5:c.t=o.onnx.TensorProto.decode(e,e.uint32());break;case 6:c.g=o.onnx.GraphProto.decode(e,e.uint32());break;case 7:if(c.floats&&c.floats.length||(c.floats=[]),(7&g)==2)for(var m=e.uint32()+e.pos;e.pos>>0,e.i.high>>>0).toNumber())),e.s!=null&&(typeof e.s=="string"?a.base64.decode(e.s,r.s=a.newBuffer(a.base64.length(e.s)),0):e.s.length&&(r.s=e.s)),e.t!=null){if(typeof e.t!="object")throw TypeError(".onnx.AttributeProto.t: object expected");r.t=o.onnx.TensorProto.fromObject(e.t)}if(e.g!=null){if(typeof e.g!="object")throw TypeError(".onnx.AttributeProto.g: object expected");r.g=o.onnx.GraphProto.fromObject(e.g)}if(e.floats){if(!Array.isArray(e.floats))throw TypeError(".onnx.AttributeProto.floats: array expected");r.floats=[];for(var i=0;i>>0,e.ints[i].high>>>0).toNumber())}if(e.strings){if(!Array.isArray(e.strings))throw TypeError(".onnx.AttributeProto.strings: array 
expected");for(r.strings=[],i=0;i>>0,e.i.high>>>0).toNumber():e.i),e.s!=null&&e.hasOwnProperty("s")&&(i.s=r.bytes===String?a.base64.encode(e.s,0,e.s.length):r.bytes===Array?Array.prototype.slice.call(e.s):e.s),e.t!=null&&e.hasOwnProperty("t")&&(i.t=o.onnx.TensorProto.toObject(e.t,r)),e.g!=null&&e.hasOwnProperty("g")&&(i.g=o.onnx.GraphProto.toObject(e.g,r)),e.floats&&e.floats.length){i.floats=[];for(var g=0;g>>0,e.ints[g].high>>>0).toNumber():e.ints[g];if(e.strings&&e.strings.length)for(i.strings=[],g=0;g>>3){case 1:c.name=e.string();break;case 2:c.type=o.onnx.TypeProto.decode(e,e.uint32());break;case 3:c.docString=e.string();break;default:e.skipType(7&g)}}return c},t.decodeDelimited=function(e){return e instanceof h||(e=new h(e)),this.decode(e,e.uint32())},t.verify=function(e){if(typeof e!="object"||e===null)return"object expected";if(e.name!=null&&e.hasOwnProperty("name")&&!a.isString(e.name))return"name: string expected";if(e.type!=null&&e.hasOwnProperty("type")){var r=o.onnx.TypeProto.verify(e.type);if(r)return"type."+r}return e.docString!=null&&e.hasOwnProperty("docString")&&!a.isString(e.docString)?"docString: string expected":null},t.fromObject=function(e){if(e instanceof o.onnx.ValueInfoProto)return e;var r=new o.onnx.ValueInfoProto;if(e.name!=null&&(r.name=String(e.name)),e.type!=null){if(typeof e.type!="object")throw TypeError(".onnx.ValueInfoProto.type: object expected");r.type=o.onnx.TypeProto.fromObject(e.type)}return e.docString!=null&&(r.docString=String(e.docString)),r},t.toObject=function(e,r){r||(r={});var i={};return r.defaults&&(i.name="",i.type=null,i.docString=""),e.name!=null&&e.hasOwnProperty("name")&&(i.name=e.name),e.type!=null&&e.hasOwnProperty("type")&&(i.type=o.onnx.TypeProto.toObject(e.type,r)),e.docString!=null&&e.hasOwnProperty("docString")&&(i.docString=e.docString),i},t.prototype.toJSON=function(){return this.constructor.toObject(this,s.util.toJSONOptions)},t}(),p.NodeProto=function(){function t(e){if(this.input=[],this.output=[],this.attribute=[],e)for(var r=Object.keys(e),i=0;i>>3){case 1:c.input&&c.input.length||(c.input=[]),c.input.push(e.string());break;case 2:c.output&&c.output.length||(c.output=[]),c.output.push(e.string());break;case 3:c.name=e.string();break;case 4:c.opType=e.string();break;case 7:c.domain=e.string();break;case 5:c.attribute&&c.attribute.length||(c.attribute=[]),c.attribute.push(o.onnx.AttributeProto.decode(e,e.uint32()));break;case 6:c.docString=e.string();break;default:e.skipType(7&g)}}return c},t.decodeDelimited=function(e){return e instanceof h||(e=new h(e)),this.decode(e,e.uint32())},t.verify=function(e){if(typeof e!="object"||e===null)return"object expected";if(e.input!=null&&e.hasOwnProperty("input")){if(!Array.isArray(e.input))return"input: array expected";for(var r=0;r>>3){case 1:c.irVersion=e.int64();break;case 8:c.opsetImport&&c.opsetImport.length||(c.opsetImport=[]),c.opsetImport.push(o.onnx.OperatorSetIdProto.decode(e,e.uint32()));break;case 2:c.producerName=e.string();break;case 3:c.producerVersion=e.string();break;case 4:c.domain=e.string();break;case 5:c.modelVersion=e.int64();break;case 6:c.docString=e.string();break;case 7:c.graph=o.onnx.GraphProto.decode(e,e.uint32());break;case 14:c.metadataProps&&c.metadataProps.length||(c.metadataProps=[]),c.metadataProps.push(o.onnx.StringStringEntryProto.decode(e,e.uint32()));break;default:e.skipType(7&g)}}return c},t.decodeDelimited=function(e){return e instanceof h||(e=new h(e)),this.decode(e,e.uint32())},t.verify=function(e){if(typeof e!="object"||e===null)return"object 
expected";if(e.irVersion!=null&&e.hasOwnProperty("irVersion")&&!(a.isInteger(e.irVersion)||e.irVersion&&a.isInteger(e.irVersion.low)&&a.isInteger(e.irVersion.high)))return"irVersion: integer|Long expected";if(e.opsetImport!=null&&e.hasOwnProperty("opsetImport")){if(!Array.isArray(e.opsetImport))return"opsetImport: array expected";for(var r=0;r>>0,e.irVersion.high>>>0).toNumber())),e.opsetImport){if(!Array.isArray(e.opsetImport))throw TypeError(".onnx.ModelProto.opsetImport: array expected");r.opsetImport=[];for(var i=0;i>>0,e.modelVersion.high>>>0).toNumber())),e.docString!=null&&(r.docString=String(e.docString)),e.graph!=null){if(typeof e.graph!="object")throw TypeError(".onnx.ModelProto.graph: object expected");r.graph=o.onnx.GraphProto.fromObject(e.graph)}if(e.metadataProps){if(!Array.isArray(e.metadataProps))throw TypeError(".onnx.ModelProto.metadataProps: array expected");for(r.metadataProps=[],i=0;i>>0,e.irVersion.high>>>0).toNumber():e.irVersion),e.producerName!=null&&e.hasOwnProperty("producerName")&&(i.producerName=e.producerName),e.producerVersion!=null&&e.hasOwnProperty("producerVersion")&&(i.producerVersion=e.producerVersion),e.domain!=null&&e.hasOwnProperty("domain")&&(i.domain=e.domain),e.modelVersion!=null&&e.hasOwnProperty("modelVersion")&&(typeof e.modelVersion=="number"?i.modelVersion=r.longs===String?String(e.modelVersion):e.modelVersion:i.modelVersion=r.longs===String?a.Long.prototype.toString.call(e.modelVersion):r.longs===Number?new a.LongBits(e.modelVersion.low>>>0,e.modelVersion.high>>>0).toNumber():e.modelVersion),e.docString!=null&&e.hasOwnProperty("docString")&&(i.docString=e.docString),e.graph!=null&&e.hasOwnProperty("graph")&&(i.graph=o.onnx.GraphProto.toObject(e.graph,r)),e.opsetImport&&e.opsetImport.length){i.opsetImport=[];for(var g=0;g>>3){case 1:c.key=e.string();break;case 2:c.value=e.string();break;default:e.skipType(7&g)}}return c},t.decodeDelimited=function(e){return e instanceof h||(e=new h(e)),this.decode(e,e.uint32())},t.verify=function(e){return typeof e!="object"||e===null?"object expected":e.key!=null&&e.hasOwnProperty("key")&&!a.isString(e.key)?"key: string expected":e.value!=null&&e.hasOwnProperty("value")&&!a.isString(e.value)?"value: string expected":null},t.fromObject=function(e){if(e instanceof o.onnx.StringStringEntryProto)return e;var r=new o.onnx.StringStringEntryProto;return e.key!=null&&(r.key=String(e.key)),e.value!=null&&(r.value=String(e.value)),r},t.toObject=function(e,r){r||(r={});var i={};return r.defaults&&(i.key="",i.value=""),e.key!=null&&e.hasOwnProperty("key")&&(i.key=e.key),e.value!=null&&e.hasOwnProperty("value")&&(i.value=e.value),i},t.prototype.toJSON=function(){return this.constructor.toObject(this,s.util.toJSONOptions)},t}(),p.TensorAnnotation=function(){function t(e){if(this.quantParameterTensorNames=[],e)for(var r=Object.keys(e),i=0;i>>3){case 1:c.tensorName=e.string();break;case 2:c.quantParameterTensorNames&&c.quantParameterTensorNames.length||(c.quantParameterTensorNames=[]),c.quantParameterTensorNames.push(o.onnx.StringStringEntryProto.decode(e,e.uint32()));break;default:e.skipType(7&g)}}return c},t.decodeDelimited=function(e){return e instanceof h||(e=new h(e)),this.decode(e,e.uint32())},t.verify=function(e){if(typeof e!="object"||e===null)return"object expected";if(e.tensorName!=null&&e.hasOwnProperty("tensorName")&&!a.isString(e.tensorName))return"tensorName: string 
expected";if(e.quantParameterTensorNames!=null&&e.hasOwnProperty("quantParameterTensorNames")){if(!Array.isArray(e.quantParameterTensorNames))return"quantParameterTensorNames: array expected";for(var r=0;r>>3){case 1:c.node&&c.node.length||(c.node=[]),c.node.push(o.onnx.NodeProto.decode(e,e.uint32()));break;case 2:c.name=e.string();break;case 5:c.initializer&&c.initializer.length||(c.initializer=[]),c.initializer.push(o.onnx.TensorProto.decode(e,e.uint32()));break;case 10:c.docString=e.string();break;case 11:c.input&&c.input.length||(c.input=[]),c.input.push(o.onnx.ValueInfoProto.decode(e,e.uint32()));break;case 12:c.output&&c.output.length||(c.output=[]),c.output.push(o.onnx.ValueInfoProto.decode(e,e.uint32()));break;case 13:c.valueInfo&&c.valueInfo.length||(c.valueInfo=[]),c.valueInfo.push(o.onnx.ValueInfoProto.decode(e,e.uint32()));break;case 14:c.quantizationAnnotation&&c.quantizationAnnotation.length||(c.quantizationAnnotation=[]),c.quantizationAnnotation.push(o.onnx.TensorAnnotation.decode(e,e.uint32()));break;default:e.skipType(7&g)}}return c},t.decodeDelimited=function(e){return e instanceof h||(e=new h(e)),this.decode(e,e.uint32())},t.verify=function(e){if(typeof e!="object"||e===null)return"object expected";if(e.node!=null&&e.hasOwnProperty("node")){if(!Array.isArray(e.node))return"node: array expected";for(var r=0;r>>3){case 1:if(c.dims&&c.dims.length||(c.dims=[]),(7&g)==2)for(var m=e.uint32()+e.pos;e.pos>>0,e.dims[i].high>>>0).toNumber())}if(e.dataType!=null&&(r.dataType=0|e.dataType),e.segment!=null){if(typeof e.segment!="object")throw TypeError(".onnx.TensorProto.segment: object expected");r.segment=o.onnx.TensorProto.Segment.fromObject(e.segment)}if(e.floatData){if(!Array.isArray(e.floatData))throw TypeError(".onnx.TensorProto.floatData: array expected");for(r.floatData=[],i=0;i>>0,e.int64Data[i].high>>>0).toNumber())}if(e.name!=null&&(r.name=String(e.name)),e.docString!=null&&(r.docString=String(e.docString)),e.rawData!=null&&(typeof e.rawData=="string"?a.base64.decode(e.rawData,r.rawData=a.newBuffer(a.base64.length(e.rawData)),0):e.rawData.length&&(r.rawData=e.rawData)),e.externalData){if(!Array.isArray(e.externalData))throw TypeError(".onnx.TensorProto.externalData: array expected");for(r.externalData=[],i=0;i>>0,e.uint64Data[i].high>>>0).toNumber(!0))}return r},t.toObject=function(e,r){r||(r={});var i={};if((r.arrays||r.defaults)&&(i.dims=[],i.floatData=[],i.int32Data=[],i.stringData=[],i.int64Data=[],i.doubleData=[],i.uint64Data=[],i.externalData=[]),r.defaults&&(i.dataType=0,i.segment=null,i.name="",r.bytes===String?i.rawData="":(i.rawData=[],r.bytes!==Array&&(i.rawData=a.newBuffer(i.rawData))),i.docString="",i.dataLocation=r.enums===String?"DEFAULT":0),e.dims&&e.dims.length){i.dims=[];for(var 
c=0;c>>0,e.dims[c].high>>>0).toNumber():e.dims[c]}if(e.dataType!=null&&e.hasOwnProperty("dataType")&&(i.dataType=e.dataType),e.segment!=null&&e.hasOwnProperty("segment")&&(i.segment=o.onnx.TensorProto.Segment.toObject(e.segment,r)),e.floatData&&e.floatData.length)for(i.floatData=[],c=0;c>>0,e.int64Data[c].high>>>0).toNumber():e.int64Data[c];if(e.name!=null&&e.hasOwnProperty("name")&&(i.name=e.name),e.rawData!=null&&e.hasOwnProperty("rawData")&&(i.rawData=r.bytes===String?a.base64.encode(e.rawData,0,e.rawData.length):r.bytes===Array?Array.prototype.slice.call(e.rawData):e.rawData),e.doubleData&&e.doubleData.length)for(i.doubleData=[],c=0;c>>0,e.uint64Data[c].high>>>0).toNumber(!0):e.uint64Data[c];if(e.docString!=null&&e.hasOwnProperty("docString")&&(i.docString=e.docString),e.externalData&&e.externalData.length)for(i.externalData=[],c=0;c>>3){case 1:g.begin=r.int64();break;case 2:g.end=r.int64();break;default:r.skipType(7&m)}}return g},e.decodeDelimited=function(r){return r instanceof h||(r=new h(r)),this.decode(r,r.uint32())},e.verify=function(r){return typeof r!="object"||r===null?"object expected":r.begin!=null&&r.hasOwnProperty("begin")&&!(a.isInteger(r.begin)||r.begin&&a.isInteger(r.begin.low)&&a.isInteger(r.begin.high))?"begin: integer|Long expected":r.end!=null&&r.hasOwnProperty("end")&&!(a.isInteger(r.end)||r.end&&a.isInteger(r.end.low)&&a.isInteger(r.end.high))?"end: integer|Long expected":null},e.fromObject=function(r){if(r instanceof o.onnx.TensorProto.Segment)return r;var i=new o.onnx.TensorProto.Segment;return r.begin!=null&&(a.Long?(i.begin=a.Long.fromValue(r.begin)).unsigned=!1:typeof r.begin=="string"?i.begin=parseInt(r.begin,10):typeof r.begin=="number"?i.begin=r.begin:typeof r.begin=="object"&&(i.begin=new a.LongBits(r.begin.low>>>0,r.begin.high>>>0).toNumber())),r.end!=null&&(a.Long?(i.end=a.Long.fromValue(r.end)).unsigned=!1:typeof r.end=="string"?i.end=parseInt(r.end,10):typeof r.end=="number"?i.end=r.end:typeof r.end=="object"&&(i.end=new a.LongBits(r.end.low>>>0,r.end.high>>>0).toNumber())),i},e.toObject=function(r,i){i||(i={});var c={};if(i.defaults){if(a.Long){var g=new a.Long(0,0,!1);c.begin=i.longs===String?g.toString():i.longs===Number?g.toNumber():g}else c.begin=i.longs===String?"0":0;a.Long?(g=new a.Long(0,0,!1),c.end=i.longs===String?g.toString():i.longs===Number?g.toNumber():g):c.end=i.longs===String?"0":0}return r.begin!=null&&r.hasOwnProperty("begin")&&(typeof r.begin=="number"?c.begin=i.longs===String?String(r.begin):r.begin:c.begin=i.longs===String?a.Long.prototype.toString.call(r.begin):i.longs===Number?new a.LongBits(r.begin.low>>>0,r.begin.high>>>0).toNumber():r.begin),r.end!=null&&r.hasOwnProperty("end")&&(typeof r.end=="number"?c.end=i.longs===String?String(r.end):r.end:c.end=i.longs===String?a.Long.prototype.toString.call(r.end):i.longs===Number?new a.LongBits(r.end.low>>>0,r.end.high>>>0).toNumber():r.end),c},e.prototype.toJSON=function(){return this.constructor.toObject(this,s.util.toJSONOptions)},e}(),t.DataLocation=function(){var e={},r=Object.create(e);return r[e[0]="DEFAULT"]=0,r[e[1]="EXTERNAL"]=1,r}(),t}(),p.TensorShapeProto=function(){function t(e){if(this.dim=[],e)for(var r=Object.keys(e),i=0;i>>3==1?(c.dim&&c.dim.length||(c.dim=[]),c.dim.push(o.onnx.TensorShapeProto.Dimension.decode(e,e.uint32()))):e.skipType(7&g)}return c},t.decodeDelimited=function(e){return e instanceof h||(e=new h(e)),this.decode(e,e.uint32())},t.verify=function(e){if(typeof e!="object"||e===null)return"object 
expected";if(e.dim!=null&&e.hasOwnProperty("dim")){if(!Array.isArray(e.dim))return"dim: array expected";for(var r=0;r>>3){case 1:m.dimValue=i.int64();break;case 2:m.dimParam=i.string();break;case 3:m.denotation=i.string();break;default:i.skipType(7&b)}}return m},e.decodeDelimited=function(i){return i instanceof h||(i=new h(i)),this.decode(i,i.uint32())},e.verify=function(i){if(typeof i!="object"||i===null)return"object expected";var c={};if(i.dimValue!=null&&i.hasOwnProperty("dimValue")&&(c.value=1,!(a.isInteger(i.dimValue)||i.dimValue&&a.isInteger(i.dimValue.low)&&a.isInteger(i.dimValue.high))))return"dimValue: integer|Long expected";if(i.dimParam!=null&&i.hasOwnProperty("dimParam")){if(c.value===1)return"value: multiple values";if(c.value=1,!a.isString(i.dimParam))return"dimParam: string expected"}return i.denotation!=null&&i.hasOwnProperty("denotation")&&!a.isString(i.denotation)?"denotation: string expected":null},e.fromObject=function(i){if(i instanceof o.onnx.TensorShapeProto.Dimension)return i;var c=new o.onnx.TensorShapeProto.Dimension;return i.dimValue!=null&&(a.Long?(c.dimValue=a.Long.fromValue(i.dimValue)).unsigned=!1:typeof i.dimValue=="string"?c.dimValue=parseInt(i.dimValue,10):typeof i.dimValue=="number"?c.dimValue=i.dimValue:typeof i.dimValue=="object"&&(c.dimValue=new a.LongBits(i.dimValue.low>>>0,i.dimValue.high>>>0).toNumber())),i.dimParam!=null&&(c.dimParam=String(i.dimParam)),i.denotation!=null&&(c.denotation=String(i.denotation)),c},e.toObject=function(i,c){c||(c={});var g={};return c.defaults&&(g.denotation=""),i.dimValue!=null&&i.hasOwnProperty("dimValue")&&(typeof i.dimValue=="number"?g.dimValue=c.longs===String?String(i.dimValue):i.dimValue:g.dimValue=c.longs===String?a.Long.prototype.toString.call(i.dimValue):c.longs===Number?new a.LongBits(i.dimValue.low>>>0,i.dimValue.high>>>0).toNumber():i.dimValue,c.oneofs&&(g.value="dimValue")),i.dimParam!=null&&i.hasOwnProperty("dimParam")&&(g.dimParam=i.dimParam,c.oneofs&&(g.value="dimParam")),i.denotation!=null&&i.hasOwnProperty("denotation")&&(g.denotation=i.denotation),g},e.prototype.toJSON=function(){return this.constructor.toObject(this,s.util.toJSONOptions)},e}(),t}(),p.TypeProto=function(){function t(r){if(r)for(var i=Object.keys(r),c=0;c>>3){case 1:g.tensorType=o.onnx.TypeProto.Tensor.decode(r,r.uint32());break;case 6:g.denotation=r.string();break;default:r.skipType(7&m)}}return g},t.decodeDelimited=function(r){return r instanceof h||(r=new h(r)),this.decode(r,r.uint32())},t.verify=function(r){if(typeof r!="object"||r===null)return"object expected";if(r.tensorType!=null&&r.hasOwnProperty("tensorType")){var i=o.onnx.TypeProto.Tensor.verify(r.tensorType);if(i)return"tensorType."+i}return r.denotation!=null&&r.hasOwnProperty("denotation")&&!a.isString(r.denotation)?"denotation: string expected":null},t.fromObject=function(r){if(r instanceof o.onnx.TypeProto)return r;var i=new o.onnx.TypeProto;if(r.tensorType!=null){if(typeof r.tensorType!="object")throw TypeError(".onnx.TypeProto.tensorType: object expected");i.tensorType=o.onnx.TypeProto.Tensor.fromObject(r.tensorType)}return r.denotation!=null&&(i.denotation=String(r.denotation)),i},t.toObject=function(r,i){i||(i={});var c={};return i.defaults&&(c.denotation=""),r.tensorType!=null&&r.hasOwnProperty("tensorType")&&(c.tensorType=o.onnx.TypeProto.Tensor.toObject(r.tensorType,i),i.oneofs&&(c.value="tensorType")),r.denotation!=null&&r.hasOwnProperty("denotation")&&(c.denotation=r.denotation),c},t.prototype.toJSON=function(){return 
this.constructor.toObject(this,s.util.toJSONOptions)},t.Tensor=function(){function r(i){if(i)for(var c=Object.keys(i),g=0;g>>3){case 1:m.elemType=i.int32();break;case 2:m.shape=o.onnx.TensorShapeProto.decode(i,i.uint32());break;default:i.skipType(7&b)}}return m},r.decodeDelimited=function(i){return i instanceof h||(i=new h(i)),this.decode(i,i.uint32())},r.verify=function(i){if(typeof i!="object"||i===null)return"object expected";if(i.elemType!=null&&i.hasOwnProperty("elemType")&&!a.isInteger(i.elemType))return"elemType: integer expected";if(i.shape!=null&&i.hasOwnProperty("shape")){var c=o.onnx.TensorShapeProto.verify(i.shape);if(c)return"shape."+c}return null},r.fromObject=function(i){if(i instanceof o.onnx.TypeProto.Tensor)return i;var c=new o.onnx.TypeProto.Tensor;if(i.elemType!=null&&(c.elemType=0|i.elemType),i.shape!=null){if(typeof i.shape!="object")throw TypeError(".onnx.TypeProto.Tensor.shape: object expected");c.shape=o.onnx.TensorShapeProto.fromObject(i.shape)}return c},r.toObject=function(i,c){c||(c={});var g={};return c.defaults&&(g.elemType=0,g.shape=null),i.elemType!=null&&i.hasOwnProperty("elemType")&&(g.elemType=i.elemType),i.shape!=null&&i.hasOwnProperty("shape")&&(g.shape=o.onnx.TensorShapeProto.toObject(i.shape,c)),g},r.prototype.toJSON=function(){return this.constructor.toObject(this,s.util.toJSONOptions)},r}(),t}(),p.OperatorSetIdProto=function(){function t(e){if(e)for(var r=Object.keys(e),i=0;i>>3){case 1:c.domain=e.string();break;case 2:c.version=e.int64();break;default:e.skipType(7&g)}}return c},t.decodeDelimited=function(e){return e instanceof h||(e=new h(e)),this.decode(e,e.uint32())},t.verify=function(e){return typeof e!="object"||e===null?"object expected":e.domain!=null&&e.hasOwnProperty("domain")&&!a.isString(e.domain)?"domain: string expected":e.version!=null&&e.hasOwnProperty("version")&&!(a.isInteger(e.version)||e.version&&a.isInteger(e.version.low)&&a.isInteger(e.version.high))?"version: integer|Long expected":null},t.fromObject=function(e){if(e instanceof o.onnx.OperatorSetIdProto)return e;var r=new o.onnx.OperatorSetIdProto;return e.domain!=null&&(r.domain=String(e.domain)),e.version!=null&&(a.Long?(r.version=a.Long.fromValue(e.version)).unsigned=!1:typeof e.version=="string"?r.version=parseInt(e.version,10):typeof e.version=="number"?r.version=e.version:typeof e.version=="object"&&(r.version=new a.LongBits(e.version.low>>>0,e.version.high>>>0).toNumber())),r},t.toObject=function(e,r){r||(r={});var i={};if(r.defaults)if(i.domain="",a.Long){var c=new a.Long(0,0,!1);i.version=r.longs===String?c.toString():r.longs===Number?c.toNumber():c}else i.version=r.longs===String?"0":0;return e.domain!=null&&e.hasOwnProperty("domain")&&(i.domain=e.domain),e.version!=null&&e.hasOwnProperty("version")&&(typeof e.version=="number"?i.version=r.longs===String?String(e.version):e.version:i.version=r.longs===String?a.Long.prototype.toString.call(e.version):r.longs===Number?new a.LongBits(e.version.low>>>0,e.version.high>>>0).toNumber():e.version),i},t.prototype.toJSON=function(){return this.constructor.toObject(this,s.util.toJSONOptions)},t}(),p),y.exports=o},2100:(y,n,u)=>{y.exports=u(9482)},9482:(y,n,u)=>{var d=n;function l(){d.util._configure(),d.Writer._configure(d.BufferWriter),d.Reader._configure(d.BufferReader)}d.build="minimal",d.Writer=u(1173),d.BufferWriter=u(3155),d.Reader=u(1408),d.BufferReader=u(593),d.util=u(9693),d.rpc=u(5994),d.roots=u(5054),d.configure=l,l()},1408:(y,n,u)=>{y.exports=f;var d,l=u(9693),p=l.LongBits,s=l.utf8;function h(c,g){return 
RangeError("index out of range: "+c.pos+" + "+(g||1)+" > "+c.len)}function f(c){this.buf=c,this.pos=0,this.len=c.length}var a,o=typeof Uint8Array<"u"?function(c){if(c instanceof Uint8Array||Array.isArray(c))return new f(c);throw Error("illegal buffer")}:function(c){if(Array.isArray(c))return new f(c);throw Error("illegal buffer")},t=function(){return l.Buffer?function(c){return(f.create=function(g){return l.Buffer.isBuffer(g)?new d(g):o(g)})(c)}:o};function e(){var c=new p(0,0),g=0;if(!(this.len-this.pos>4)){for(;g<3;++g){if(this.pos>=this.len)throw h(this);if(c.lo=(c.lo|(127&this.buf[this.pos])<<7*g)>>>0,this.buf[this.pos++]<128)return c}return c.lo=(c.lo|(127&this.buf[this.pos++])<<7*g)>>>0,c}for(;g<4;++g)if(c.lo=(c.lo|(127&this.buf[this.pos])<<7*g)>>>0,this.buf[this.pos++]<128)return c;if(c.lo=(c.lo|(127&this.buf[this.pos])<<28)>>>0,c.hi=(c.hi|(127&this.buf[this.pos])>>4)>>>0,this.buf[this.pos++]<128)return c;if(g=0,this.len-this.pos>4){for(;g<5;++g)if(c.hi=(c.hi|(127&this.buf[this.pos])<<7*g+3)>>>0,this.buf[this.pos++]<128)return c}else for(;g<5;++g){if(this.pos>=this.len)throw h(this);if(c.hi=(c.hi|(127&this.buf[this.pos])<<7*g+3)>>>0,this.buf[this.pos++]<128)return c}throw Error("invalid varint encoding")}function r(c,g){return(c[g-4]|c[g-3]<<8|c[g-2]<<16|c[g-1]<<24)>>>0}function i(){if(this.pos+8>this.len)throw h(this,8);return new p(r(this.buf,this.pos+=4),r(this.buf,this.pos+=4))}f.create=t(),f.prototype._slice=l.Array.prototype.subarray||l.Array.prototype.slice,f.prototype.uint32=(a=4294967295,function(){if(a=(127&this.buf[this.pos])>>>0,this.buf[this.pos++]<128||(a=(a|(127&this.buf[this.pos])<<7)>>>0,this.buf[this.pos++]<128)||(a=(a|(127&this.buf[this.pos])<<14)>>>0,this.buf[this.pos++]<128)||(a=(a|(127&this.buf[this.pos])<<21)>>>0,this.buf[this.pos++]<128)||(a=(a|(15&this.buf[this.pos])<<28)>>>0,this.buf[this.pos++]<128))return a;if((this.pos+=5)>this.len)throw this.pos=this.len,h(this,10);return a}),f.prototype.int32=function(){return 0|this.uint32()},f.prototype.sint32=function(){var c=this.uint32();return c>>>1^-(1&c)|0},f.prototype.bool=function(){return this.uint32()!==0},f.prototype.fixed32=function(){if(this.pos+4>this.len)throw h(this,4);return r(this.buf,this.pos+=4)},f.prototype.sfixed32=function(){if(this.pos+4>this.len)throw h(this,4);return 0|r(this.buf,this.pos+=4)},f.prototype.float=function(){if(this.pos+4>this.len)throw h(this,4);var c=l.float.readFloatLE(this.buf,this.pos);return this.pos+=4,c},f.prototype.double=function(){if(this.pos+8>this.len)throw h(this,4);var c=l.float.readDoubleLE(this.buf,this.pos);return this.pos+=8,c},f.prototype.bytes=function(){var c=this.uint32(),g=this.pos,m=this.pos+c;if(m>this.len)throw h(this,c);return this.pos+=c,Array.isArray(this.buf)?this.buf.slice(g,m):g===m?new this.buf.constructor(0):this._slice.call(this.buf,g,m)},f.prototype.string=function(){var c=this.bytes();return s.read(c,0,c.length)},f.prototype.skip=function(c){if(typeof c=="number"){if(this.pos+c>this.len)throw h(this,c);this.pos+=c}else do if(this.pos>=this.len)throw h(this);while(128&this.buf[this.pos++]);return this},f.prototype.skipType=function(c){switch(c){case 0:this.skip();break;case 1:this.skip(8);break;case 2:this.skip(this.uint32());break;case 3:for(;(c=7&this.uint32())!=4;)this.skipType(c);break;case 5:this.skip(4);break;default:throw Error("invalid wire type "+c+" at offset "+this.pos)}return this},f._configure=function(c){d=c,f.create=t(),d._configure();var g=l.Long?"toLong":"toNumber";l.merge(f.prototype,{int64:function(){return 
e.call(this)[g](!1)},uint64:function(){return e.call(this)[g](!0)},sint64:function(){return e.call(this).zzDecode()[g](!1)},fixed64:function(){return i.call(this)[g](!0)},sfixed64:function(){return i.call(this)[g](!1)}})}},593:(y,n,u)=>{y.exports=p;var d=u(1408);(p.prototype=Object.create(d.prototype)).constructor=p;var l=u(9693);function p(s){d.call(this,s)}p._configure=function(){l.Buffer&&(p.prototype._slice=l.Buffer.prototype.slice)},p.prototype.string=function(){var s=this.uint32();return this.buf.utf8Slice?this.buf.utf8Slice(this.pos,this.pos=Math.min(this.pos+s,this.len)):this.buf.toString("utf-8",this.pos,this.pos=Math.min(this.pos+s,this.len))},p._configure()},5054:y=>{y.exports={}},5994:(y,n,u)=>{n.Service=u(7948)},7948:(y,n,u)=>{y.exports=l;var d=u(9693);function l(p,s,h){if(typeof p!="function")throw TypeError("rpcImpl must be a function");d.EventEmitter.call(this),this.rpcImpl=p,this.requestDelimited=!!s,this.responseDelimited=!!h}(l.prototype=Object.create(d.EventEmitter.prototype)).constructor=l,l.prototype.rpcCall=function p(s,h,f,a,o){if(!a)throw TypeError("request must be specified");var t=this;if(!o)return d.asPromise(p,t,s,h,f,a);if(t.rpcImpl)try{return t.rpcImpl(s,h[t.requestDelimited?"encodeDelimited":"encode"](a).finish(),function(e,r){if(e)return t.emit("error",e,s),o(e);if(r!==null){if(!(r instanceof f))try{r=f[t.responseDelimited?"decodeDelimited":"decode"](r)}catch(i){return t.emit("error",i,s),o(i)}return t.emit("data",r,s),o(null,r)}t.end(!0)})}catch(e){return t.emit("error",e,s),void setTimeout(function(){o(e)},0)}else setTimeout(function(){o(Error("already ended"))},0)},l.prototype.end=function(p){return this.rpcImpl&&(p||this.rpcImpl(null,null,null),this.rpcImpl=null,this.emit("end").off()),this}},1945:(y,n,u)=>{y.exports=l;var d=u(9693);function l(f,a){this.lo=f>>>0,this.hi=a>>>0}var p=l.zero=new l(0,0);p.toNumber=function(){return 0},p.zzEncode=p.zzDecode=function(){return this},p.length=function(){return 1};var s=l.zeroHash="\0\0\0\0\0\0\0\0";l.fromNumber=function(f){if(f===0)return p;var a=f<0;a&&(f=-f);var o=f>>>0,t=(f-o)/4294967296>>>0;return a&&(t=~t>>>0,o=~o>>>0,++o>4294967295&&(o=0,++t>4294967295&&(t=0))),new l(o,t)},l.from=function(f){if(typeof f=="number")return l.fromNumber(f);if(d.isString(f)){if(!d.Long)return l.fromNumber(parseInt(f,10));f=d.Long.fromString(f)}return f.low||f.high?new l(f.low>>>0,f.high>>>0):p},l.prototype.toNumber=function(f){if(!f&&this.hi>>>31){var a=1+~this.lo>>>0,o=~this.hi>>>0;return a||(o=o+1>>>0),-(a+4294967296*o)}return this.lo+4294967296*this.hi},l.prototype.toLong=function(f){return d.Long?new d.Long(0|this.lo,0|this.hi,!!f):{low:0|this.lo,high:0|this.hi,unsigned:!!f}};var h=String.prototype.charCodeAt;l.fromHash=function(f){return f===s?p:new l((h.call(f,0)|h.call(f,1)<<8|h.call(f,2)<<16|h.call(f,3)<<24)>>>0,(h.call(f,4)|h.call(f,5)<<8|h.call(f,6)<<16|h.call(f,7)<<24)>>>0)},l.prototype.toHash=function(){return String.fromCharCode(255&this.lo,this.lo>>>8&255,this.lo>>>16&255,this.lo>>>24,255&this.hi,this.hi>>>8&255,this.hi>>>16&255,this.hi>>>24)},l.prototype.zzEncode=function(){var f=this.hi>>31;return this.hi=((this.hi<<1|this.lo>>>31)^f)>>>0,this.lo=(this.lo<<1^f)>>>0,this},l.prototype.zzDecode=function(){var f=-(1&this.lo);return this.lo=((this.lo>>>1|this.hi<<31)^f)>>>0,this.hi=(this.hi>>>1^f)>>>0,this},l.prototype.length=function(){var f=this.lo,a=(this.lo>>>28|this.hi<<4)>>>0,o=this.hi>>>24;return o===0?a===0?f<16384?f<128?1:2:f<2097152?3:4:a<16384?a<128?5:6:a<2097152?7:8:o<128?9:10}},9693:function(y,n,u){var 
d=n;function l(s,h,f){for(var a=Object.keys(h),o=0;o0)},d.Buffer=function(){try{var s=d.inquire("buffer").Buffer;return s.prototype.utf8Write?s:null}catch{return null}}(),d._Buffer_from=null,d._Buffer_allocUnsafe=null,d.newBuffer=function(s){return typeof s=="number"?d.Buffer?d._Buffer_allocUnsafe(s):new d.Array(s):d.Buffer?d._Buffer_from(s):typeof Uint8Array>"u"?s:new Uint8Array(s)},d.Array=typeof Uint8Array<"u"?Uint8Array:Array,d.Long=d.global.dcodeIO&&d.global.dcodeIO.Long||d.global.Long||d.inquire("long"),d.key2Re=/^true|false|0|1$/,d.key32Re=/^-?(?:0|[1-9][0-9]*)$/,d.key64Re=/^(?:[\\x00-\\xff]{8}|-?(?:0|[1-9][0-9]*))$/,d.longToHash=function(s){return s?d.LongBits.from(s).toHash():d.LongBits.zeroHash},d.longFromHash=function(s,h){var f=d.LongBits.fromHash(s);return d.Long?d.Long.fromBits(f.lo,f.hi,h):f.toNumber(!!h)},d.merge=l,d.lcFirst=function(s){return s.charAt(0).toLowerCase()+s.substring(1)},d.newError=p,d.ProtocolError=p("ProtocolError"),d.oneOfGetter=function(s){for(var h={},f=0;f-1;--o)if(h[a[o]]===1&&this[a[o]]!==void 0&&this[a[o]]!==null)return a[o]}},d.oneOfSetter=function(s){return function(h){for(var f=0;f{y.exports=t;var d,l=u(9693),p=l.LongBits,s=l.base64,h=l.utf8;function f(b,_,w){this.fn=b,this.len=_,this.next=void 0,this.val=w}function a(){}function o(b){this.head=b.head,this.tail=b.tail,this.len=b.len,this.next=b.states}function t(){this.len=0,this.head=new f(a,0,0),this.tail=this.head,this.states=null}var e=function(){return l.Buffer?function(){return(t.create=function(){return new d})()}:function(){return new t}};function r(b,_,w){_[w]=255&b}function i(b,_){this.len=b,this.next=void 0,this.val=_}function c(b,_,w){for(;b.hi;)_[w++]=127&b.lo|128,b.lo=(b.lo>>>7|b.hi<<25)>>>0,b.hi>>>=7;for(;b.lo>127;)_[w++]=127&b.lo|128,b.lo=b.lo>>>7;_[w++]=b.lo}function g(b,_,w){_[w]=255&b,_[w+1]=b>>>8&255,_[w+2]=b>>>16&255,_[w+3]=b>>>24}t.create=e(),t.alloc=function(b){return new l.Array(b)},l.Array!==Array&&(t.alloc=l.pool(t.alloc,l.Array.prototype.subarray)),t.prototype._push=function(b,_,w){return this.tail=this.tail.next=new f(b,_,w),this.len+=_,this},i.prototype=Object.create(f.prototype),i.prototype.fn=function(b,_,w){for(;b>127;)_[w++]=127&b|128,b>>>=7;_[w]=b},t.prototype.uint32=function(b){return this.len+=(this.tail=this.tail.next=new i((b>>>=0)<128?1:b<16384?2:b<2097152?3:b<268435456?4:5,b)).len,this},t.prototype.int32=function(b){return b<0?this._push(c,10,p.fromNumber(b)):this.uint32(b)},t.prototype.sint32=function(b){return this.uint32((b<<1^b>>31)>>>0)},t.prototype.uint64=function(b){var _=p.from(b);return this._push(c,_.length(),_)},t.prototype.int64=t.prototype.uint64,t.prototype.sint64=function(b){var _=p.from(b).zzEncode();return this._push(c,_.length(),_)},t.prototype.bool=function(b){return this._push(r,1,b?1:0)},t.prototype.fixed32=function(b){return this._push(g,4,b>>>0)},t.prototype.sfixed32=t.prototype.fixed32,t.prototype.fixed64=function(b){var _=p.from(b);return this._push(g,4,_.lo)._push(g,4,_.hi)},t.prototype.sfixed64=t.prototype.fixed64,t.prototype.float=function(b){return this._push(l.float.writeFloatLE,4,b)},t.prototype.double=function(b){return this._push(l.float.writeDoubleLE,8,b)};var m=l.Array.prototype.set?function(b,_,w){_.set(b,w)}:function(b,_,w){for(var v=0;v>>0;if(!_)return this._push(r,1,0);if(l.isString(b)){var w=t.alloc(_=s.length(b));s.decode(b,w,0),b=w}return this.uint32(_)._push(m,_,b)},t.prototype.string=function(b){var _=h.length(b);return _?this.uint32(_)._push(h.write,_,b):this._push(r,1,0)},t.prototype.fork=function(){return 
this.states=new o(this),this.head=this.tail=new f(a,0,0),this.len=0,this},t.prototype.reset=function(){return this.states?(this.head=this.states.head,this.tail=this.states.tail,this.len=this.states.len,this.states=this.states.next):(this.head=this.tail=new f(a,0,0),this.len=0),this},t.prototype.ldelim=function(){var b=this.head,_=this.tail,w=this.len;return this.reset().uint32(w),w&&(this.tail.next=b.next,this.tail=_,this.len+=w),this},t.prototype.finish=function(){for(var b=this.head.next,_=this.constructor.alloc(this.len),w=0;b;)b.fn(b.val,_,w),w+=b.len,b=b.next;return _},t._configure=function(b){d=b,t.create=e(),d._configure()}},3155:(y,n,u)=>{y.exports=p;var d=u(1173);(p.prototype=Object.create(d.prototype)).constructor=p;var l=u(9693);function p(){d.call(this)}function s(h,f,a){h.length<40?l.utf8.write(h,f,a):f.utf8Write?f.utf8Write(h,a):f.write(h,a)}p._configure=function(){p.alloc=l._Buffer_allocUnsafe,p.writeBytesBuffer=l.Buffer&&l.Buffer.prototype instanceof Uint8Array&&l.Buffer.prototype.set.name==="set"?function(h,f,a){f.set(h,a)}:function(h,f,a){if(h.copy)h.copy(f,a,0,h.length);else for(var o=0;o>>0;return this.uint32(f),f&&this._push(p.writeBytesBuffer,f,h),this},p.prototype.string=function(h){var f=l.Buffer.byteLength(h);return this.uint32(f),f&&this._push(s,f,h),this},p._configure()},7714:(y,n,u)=>{n.R=void 0;const d=u(6919),l=u(7448);n.R=new class{async init(){}async createSessionHandler(p,s){const h=new d.Session(s);return await h.loadModel(p),new l.OnnxjsSessionHandler(h)}}},4200:(y,n,u)=>{n.c8=n.rX=void 0;const d=u(1670),l=u(5381),p=u(2157),s=u(2306);n.rX=()=>{if((typeof d.env.wasm.initTimeout!="number"||d.env.wasm.initTimeout<0)&&(d.env.wasm.initTimeout=0),typeof d.env.wasm.simd!="boolean"&&(d.env.wasm.simd=!0),typeof d.env.wasm.proxy!="boolean"&&(d.env.wasm.proxy=!1),typeof d.env.wasm.numThreads!="number"||!Number.isInteger(d.env.wasm.numThreads)||d.env.wasm.numThreads<=0){const h=typeof navigator>"u"?(0,l.cpus)().length:navigator.hardwareConcurrency;d.env.wasm.numThreads=Math.min(4,Math.ceil((h||1)/2))}},n.c8=new class{async init(){(0,n.rX)(),await(0,p.initWasm)()}async createSessionHandler(h,f){const a=new s.OnnxruntimeWebAssemblySessionHandler;return await a.loadModel(h,f),Promise.resolve(a)}}},6018:function(y,n,u){var d=this&&this.__createBinding||(Object.create?function(s,h,f,a){a===void 0&&(a=f);var o=Object.getOwnPropertyDescriptor(h,f);o&&!("get"in o?!h.__esModule:o.writable||o.configurable)||(o={enumerable:!0,get:function(){return h[f]}}),Object.defineProperty(s,a,o)}:function(s,h,f,a){a===void 0&&(a=f),s[a]=h[f]}),l=this&&this.__exportStar||function(s,h){for(var f in s)f==="default"||Object.prototype.hasOwnProperty.call(h,f)||d(h,s,f)};Object.defineProperty(n,"__esModule",{value:!0}),l(u(1670),n);const p=u(1670);{const s=u(7714).R;(0,p.registerBackend)("webgl",s,-10)}{const s=u(4200).c8;(0,p.registerBackend)("cpu",s,10),(0,p.registerBackend)("wasm",s,10),(0,p.registerBackend)("xnnpack",s,9)}},246:(y,n)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.createAttributeWithCacheKey=void 0;class u{constructor(l){Object.assign(this,l)}get cacheKey(){return this._cacheKey||(this._cacheKey=Object.getOwnPropertyNames(this).sort().map(l=>`${this[l]}`).join(";")),this._cacheKey}}n.createAttributeWithCacheKey=d=>new u(d)},7778:(y,n,u)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.Attribute=void 0;const d=u(1446),l=u(9395),p=u(9162),s=u(2517);var h=l.onnxruntime.experimental.fbs;class f{constructor(o){if(this._attributes=new Map,o!=null){for(const t of o)t 
instanceof d.onnx.AttributeProto?this._attributes.set(t.name,[f.getValue(t),f.getType(t)]):t instanceof h.Attribute&&this._attributes.set(t.name(),[f.getValue(t),f.getType(t)]);if(this._attributes.sizep.Tensor.fromProto(r));if(o instanceof h.Attribute)return e.map(r=>p.Tensor.fromOrtTensor(r))}if(t===d.onnx.AttributeProto.AttributeType.STRING&&o instanceof d.onnx.AttributeProto){const r=e;return(0,s.decodeUtf8String)(r)}return t===d.onnx.AttributeProto.AttributeType.STRINGS&&o instanceof d.onnx.AttributeProto?e.map(s.decodeUtf8String):e}static getValueNoCheck(o){return o instanceof d.onnx.AttributeProto?this.getValueNoCheckFromOnnxFormat(o):this.getValueNoCheckFromOrtFormat(o)}static getValueNoCheckFromOnnxFormat(o){switch(o.type){case d.onnx.AttributeProto.AttributeType.FLOAT:return o.f;case d.onnx.AttributeProto.AttributeType.INT:return o.i;case d.onnx.AttributeProto.AttributeType.STRING:return o.s;case d.onnx.AttributeProto.AttributeType.TENSOR:return o.t;case d.onnx.AttributeProto.AttributeType.GRAPH:return o.g;case d.onnx.AttributeProto.AttributeType.FLOATS:return o.floats;case d.onnx.AttributeProto.AttributeType.INTS:return o.ints;case d.onnx.AttributeProto.AttributeType.STRINGS:return o.strings;case d.onnx.AttributeProto.AttributeType.TENSORS:return o.tensors;case d.onnx.AttributeProto.AttributeType.GRAPHS:return o.graphs;default:throw new Error(`unsupported attribute type: ${d.onnx.AttributeProto.AttributeType[o.type]}`)}}static getValueNoCheckFromOrtFormat(o){switch(o.type()){case h.AttributeType.FLOAT:return o.f();case h.AttributeType.INT:return o.i();case h.AttributeType.STRING:return o.s();case h.AttributeType.TENSOR:return o.t();case h.AttributeType.GRAPH:return o.g();case h.AttributeType.FLOATS:return o.floatsArray();case h.AttributeType.INTS:{const t=[];for(let e=0;e{Object.defineProperty(n,"__esModule",{value:!0}),n.resolveBackend=n.backend=void 0;const d=u(5038),l=new Map;async function p(s){const h=n.backend;if(h[s]!==void 0&&function(f){const a=f;return"initialize"in a&&typeof a.initialize=="function"&&"createSessionHandler"in a&&typeof a.createSessionHandler=="function"&&"dispose"in a&&typeof a.dispose=="function"}(h[s])){const f=h[s];let a=f.initialize();if(typeof a=="object"&&"then"in a&&(a=await a),a)return l.set(s,f),f}}n.backend={webgl:new d.WebGLBackend},n.resolveBackend=async function s(h){if(!h)return s(["webgl"]);{const f=typeof h=="string"?[h]:h;for(const a of f){const o=l.get(a);if(o)return o;const t=await p(a);if(t)return t}}throw new Error("no available backend to use")}},5038:(y,n,u)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.WebGLBackend=void 0;const d=u(1670),l=u(6231),p=u(6416),s=u(7305);n.WebGLBackend=class{get contextId(){return d.env.webgl.contextId}set contextId(h){d.env.webgl.contextId=h}get matmulMaxBatchSize(){return d.env.webgl.matmulMaxBatchSize}set matmulMaxBatchSize(h){d.env.webgl.matmulMaxBatchSize=h}get textureCacheMode(){return d.env.webgl.textureCacheMode}set textureCacheMode(h){d.env.webgl.textureCacheMode=h}get pack(){return d.env.webgl.pack}set pack(h){d.env.webgl.pack=h}get async(){return d.env.webgl.async}set async(h){d.env.webgl.async=h}initialize(){try{return this.glContext=(0,s.createWebGLContext)(this.contextId),typeof this.matmulMaxBatchSize!="number"&&(this.matmulMaxBatchSize=16),typeof this.textureCacheMode!="string"&&(this.textureCacheMode="full"),typeof this.pack!="boolean"&&(this.pack=!1),typeof this.async!="boolean"&&(this.async=!1),l.Logger.setWithEnv(d.env),l.Logger.verbose("WebGLBackend",`Created WebGLContext: 
${typeof this.glContext} with matmulMaxBatchSize: ${this.matmulMaxBatchSize}; textureCacheMode: ${this.textureCacheMode}; pack: ${this.pack}; async: ${this.async}.`),!0}catch(h){return l.Logger.warning("WebGLBackend",`Unable to initialize WebGLBackend. ${h}`),!1}}createSessionHandler(h){return new p.WebGLSessionHandler(this,h)}dispose(){this.glContext.dispose()}}},5107:(y,n,u)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.CoordsGlslLib=void 0;const d=u(2517),l=u(8520),p=u(5060),s=u(7859),h=u(9390);class f extends l.GlslLib{constructor(o){super(o)}getFunctions(){return Object.assign(Object.assign(Object.assign(Object.assign(Object.assign(Object.assign(Object.assign({},this.offsetToCoords()),this.coordsToOffset()),this.toVec()),this.valueFrom()),this.getCommonUtilFuncs()),this.getInputsSamplingSnippets()),this.getOutputSamplingSnippet())}getCustomTypes(){return{}}offsetToCoords(){return{offsetToCoords:new l.GlslLibRoutine(` - vec2 offsetToCoords(int offset, int width, int height) { - int t = offset / width; - int s = offset - t*width; - vec2 coords = (vec2(s,t) + vec2(0.5,0.5)) / vec2(width, height); - return coords; - } - `)}}coordsToOffset(){return{coordsToOffset:new l.GlslLibRoutine(` - int coordsToOffset(vec2 coords, int width, int height) { - float s = coords.s * float(width); - float t = coords.t * float(height); - int offset = int(t) * width + int(s); - return offset; - } - `)}}getOutputSamplingSnippet(){const o=this.context.outputTextureLayout;return o.isPacked?this.getPackedOutputSamplingSnippet(o):this.getUnpackedOutputSamplingSnippet(o)}getPackedOutputSamplingSnippet(o){const t=o.unpackedShape,e=[o.width,o.height],r={},i="getOutputCoords";switch(t.length){case 0:r[i]=this.getOutputScalarCoords();break;case 1:r[i]=this.getOutputPacked1DCoords(t,e);break;case 2:r[i]=this.getOutputPacked2DCoords(t,e);break;case 3:r[i]=this.getOutputPacked3DCoords(t,e);break;default:r[i]=this.getOutputPackedNDCoords(t,e)}const c=` - void setOutput(vec4 val) { - ${(0,p.getGlsl)(this.context.glContext.version).output} = val; - } - `;return r.floatTextureSetRGBA=new l.GlslLibRoutine(c),r}getUnpackedOutputSamplingSnippet(o){const t=o.unpackedShape,e=[o.width,o.height],r={},i="getOutputCoords";switch(t.length){case 0:r[i]=this.getOutputScalarCoords();break;case 1:r[i]=this.getOutputUnpacked1DCoords(t,e);break;case 2:r[i]=this.getOutputUnpacked2DCoords(t,e);break;case 3:r[i]=this.getOutputUnpacked3DCoords(t,e);break;case 4:r[i]=this.getOutputUnpacked4DCoords(t,e);break;case 5:r[i]=this.getOutputUnpacked5DCoords(t,e);break;case 6:r[i]=this.getOutputUnpacked6DCoords(t,e);break;default:throw new Error(`Unsupported output dimensionality: ${t.length}`)}const c=` - void setOutput(float val) { - ${(0,p.getGlsl)(this.context.glContext.version).output} = vec4(val, 0, 0, 0); - } - `;return r.floatTextureSetR=new l.GlslLibRoutine(c),r}getOutputScalarCoords(){return new l.GlslLibRoutine(` - int getOutputCoords() { - return 0; - } - `)}getOutputPacked1DCoords(o,t){const e=t;let r="";return e[0]===1?(r=` - int getOutputCoords() { - return 2 * int(TexCoords.y * ${e[1]}.0); - } - `,new l.GlslLibRoutine(r)):e[1]===1?(r=` - int getOutputCoords() { - return 2 * int(TexCoords.x * ${e[0]}.0); - } - `,new l.GlslLibRoutine(r)):(r=` - int getOutputCoords() { - ivec2 resTexRC = ivec2(TexCoords.xy * - vec2(${e[0]}, ${e[1]})); - return 2 * (resTexRC.y * ${e[0]} + resTexRC.x); - } - `,new l.GlslLibRoutine(r))}getOutputPacked2DCoords(o,t){let e="";if(d.ArrayUtil.arraysEqual(o,t))return e=` - ivec2 getOutputCoords() { - return 2 
* ivec2(TexCoords.xy * vec2(${t[0]}, ${t[1]})); - } - `,new l.GlslLibRoutine(e);const r=t,i=Math.ceil(o[1]/2);return e=` - ivec2 getOutputCoords() { - ivec2 resTexRC = ivec2(TexCoords.xy * - vec2(${r[0]}, ${r[1]})); - - int index = resTexRC.y * ${r[0]} + resTexRC.x; - - // reverse r and c order for packed texture - int r = imod(index, ${i}) * 2; - int c = 2 * (index / ${i}); - - return ivec2(r, c); - } - `,new l.GlslLibRoutine(e)}getOutputPacked3DCoords(o,t){const e=[t[0],t[1]],r=Math.ceil(o[2]/2),i=r*Math.ceil(o[1]/2),c=` - ivec3 getOutputCoords() { - ivec2 resTexRC = ivec2(TexCoords.xy * - vec2(${e[0]}, ${e[1]})); - int index = resTexRC.y * ${e[0]} + resTexRC.x; - - int b = index / ${i}; - index -= b * ${i}; - - // reverse r and c order for packed texture - int r = imod(index, ${r}) * 2; - int c = 2 * (index / ${r}); - - return ivec3(b, r, c); - } - `;return new l.GlslLibRoutine(c)}getOutputPackedNDCoords(o,t){const e=[t[0],t[1]],r=Math.ceil(o[o.length-1]/2),i=r*Math.ceil(o[o.length-2]/2);let c=i,g="",m="b, r, c";for(let _=2;_=0;--m)i[m]=i[m+1]*o[m+1];const c=["r","c","d"],g=i.map((m,b)=>`int ${c[b]} = index / ${m}; ${b===i.length-1?`int ${c[b+1]} = index - ${c[b]} * ${m}`:`index -= ${c[b]} * ${m}`};`).join("");return e=` - ivec3 getOutputCoords() { - ivec2 resTexRC = ivec2(TexCoords.xy * - vec2(${t[0]}, ${t[1]})); - int index = resTexRC.y * ${t[0]} + resTexRC.x; - ${g} - return ivec3(r, c, d); - } - `,new l.GlslLibRoutine(e)}getOutputUnpacked4DCoords(o,t){let e="";const r=o.length;let i=null;r<2&&(i=[]),i=new Array(r-1),i[r-2]=o[r-1];for(let m=r-3;m>=0;--m)i[m]=i[m+1]*o[m+1];const c=["r","c","d","d2"],g=i.map((m,b)=>`int ${c[b]} = index / ${m}; ${b===i.length-1?`int ${c[b+1]} = index - ${c[b]} * ${m}`:`index -= ${c[b]} * ${m}`};`).join("");return e=` - ivec4 getOutputCoords() { - ivec2 resTexRC = ivec2(TexCoords.xy * - vec2(${t[0]}, ${t[1]})); - int index = resTexRC.y * ${t[0]} + resTexRC.x; - ${g} - return ivec4(r, c, d, d2); - } - `,new l.GlslLibRoutine(e)}getOutputUnpacked5DCoords(o,t){let e="";const r=o.length;let i=null;r<2&&(i=[]),i=new Array(r-1),i[r-2]=o[r-1];for(let m=r-3;m>=0;--m)i[m]=i[m+1]*o[m+1];const c=["r","c","d","d2","d3"],g=i.map((m,b)=>`int ${c[b]} = index / ${m}; ${b===i.length-1?`int ${c[b+1]} = index - ${c[b]} * ${m}`:`index -= ${c[b]} * ${m}`};`).join("");return e=` - ivec5 getOutputCoords() { - ivec2 resTexRC = ivec2(TexCoords.xy * - vec2(${t[0]}, ${t[1]})); - int index = resTexRC.y * ${t[0]} + resTexRC.x; - ${g} - return ivec5(r, c, d, d2, d3); - } - `,new l.GlslLibRoutine(e)}getOutputUnpacked6DCoords(o,t){let e="";const r=o.length;let i=null;r<2&&(i=[]),i=new Array(r-1),i[r-2]=o[r-1];for(let m=r-3;m>=0;--m)i[m]=i[m+1]*o[m+1];const c=["r","c","d","d2","d3","d4"],g=i.map((m,b)=>`int ${c[b]} = index / ${m}; ${b===i.length-1?`int ${c[b+1]} = index - ${c[b]} * ${m}`:`index -= ${c[b]} * ${m}`};`).join("");return e=` - ivec6 getOutputCoords() { - ivec2 resTexRC = ivec2(TexCoords.xy * - vec2(${t[0]}, ${t[1]})); - int index = resTexRC.y * ${t[0]} + resTexRC.x; - ${g} - return ivec6(r, c, d, d2, d3, d4); - } - `,new l.GlslLibRoutine(e)}getCommonUtilFuncs(){const o={};let t="uvFromFlat";o[t]=new l.GlslLibRoutine(` - vec2 uvFromFlat(int texNumR, int texNumC, int index) { - int texC = index / texNumR; - int texR = index - texC * texNumR; - // TODO: swap texR, texC order in following function so row is corresponding to u and column is corresponding to - // v. 
- return (vec2(texR, texC) + halfCR) / vec2(texNumR, texNumC); - } - `),t="packedUVfrom1D",o[t]=new l.GlslLibRoutine(` - vec2 packedUVfrom1D(int texNumR, int texNumC, int index) { - int texelIndex = index / 2; - int texR = texelIndex / texNumC; - int texC = texelIndex - texR * texNumC; - return (vec2(texC, texR) + halfCR) / vec2(texNumC, texNumR); - } - `),t="packedUVfrom2D",o[t]=new l.GlslLibRoutine(` - vec2 packedUVfrom2D(int texNumR, int texNumC, int texelsInLogicalRow, int row, int col) { - int texelIndex = (row / 2) * texelsInLogicalRow + (col / 2); - int texR = texelIndex / texNumC; - int texC = texelIndex - texR * texNumC; - return (vec2(texC, texR) + halfCR) / vec2(texNumC, texNumR); - } - `),t="packedUVfrom3D",o[t]=new l.GlslLibRoutine(` - vec2 packedUVfrom3D(int texNumR, int texNumC, - int texelsInBatch, int texelsInLogicalRow, int b, - int row, int col) { - int index = b * texelsInBatch + (row / 2) * texelsInLogicalRow + (col / 2); - int texR = index / texNumC; - int texC = index - texR * texNumC; - return (vec2(texC, texR) + halfCR) / vec2(texNumC, texNumR); - } - `),t="sampleTexture";const e=(0,p.getGlsl)(this.context.glContext.version);return o[t]=new l.GlslLibRoutine(` - float sampleTexture(sampler2D textureSampler, vec2 uv) { - return ${e.texture2D}(textureSampler, uv).r; - }`),o}getInputsSamplingSnippets(){const o={},t=this.context.outputTextureLayout;return this.context.programInfo.inputNames.forEach((e,r)=>{const i=this.context.inputTextureLayouts[r],c=(0,h.generateShaderFuncNameFromInputSamplerName)(e);i.isPacked?o[c]=this.getPackedSamplerFromInput(c,e,i):o[c]=this.getUnpackedSamplerFromInput(c,e,i);const g=(0,h.generateShaderFuncNameFromInputSamplerNameAtOutCoords)(e);i.unpackedShape.length<=t.unpackedShape.length&&(i.isPacked?o[g]=this.getPackedSamplerAtOutputCoords(g,i,t,e):o[g]=this.getUnpackedSamplerAtOutputCoords(g,i,t,e))}),o}getPackedSamplerAtOutputCoords(o,t,e,r){const i=t.unpackedShape,c=e.unpackedShape,g=r,m=(0,h.generateShaderFuncNameFromInputSamplerName)(g),b=i.length,_=c.length,w=d.BroadcastUtil.getBroadcastDims(i,c),v=(0,h.getCoordsDataType)(_),S=_-b;let O;const E=(0,h.getGlChannels)();O=b===0?"":_<2&&w.length>=1?"coords = 0;":w.map(N=>`coords.${E[N+S]} = 0;`).join(` -`);let T="";T=_<2&&b>0?"coords":i.map((N,H)=>`coords.${E[H+S]}`).join(", ");let I="return outputValue;";const C=d.ShapeUtil.size(i)===1,B=d.ShapeUtil.size(c)===1;if(b!==1||C||B){if(C&&!B)I=_===1?` - return vec4(outputValue.x, outputValue.x, 0., 0.); - `:` - return vec4(outputValue.x); - `;else if(w.length){const N=b-2,H=b-1;w.indexOf(N)>-1&&w.indexOf(H)>-1?I="return vec4(outputValue.x);":w.indexOf(N)>-1?I="return vec4(outputValue.x, outputValue.y, outputValue.x, outputValue.y);":w.indexOf(H)>-1&&(I="return vec4(outputValue.xx, outputValue.zz);")}}else I=` - return vec4(outputValue.xy, outputValue.xy); - `;const F=` - vec4 ${o}() { - ${v} coords = getOutputCoords(); - - int lastDim = coords.${E[_-1]}; - coords.${E[_-1]} = coords.${E[_-2]}; - coords.${E[_-2]} = lastDim; - - ${O} - vec4 outputValue = ${m}(${T}); - ${I} - } - `;return new l.GlslLibRoutine(F,["coordinates.getOutputCoords"])}getUnpackedSamplerAtOutputCoords(o,t,e,r){const i=[e.width,e.height],c=[t.width,t.height],g=t.unpackedShape.length,m=e.unpackedShape.length,b=t.unpackedShape,_=e.unpackedShape,w=(0,h.generateShaderFuncNameFromInputSamplerName)(r);if(g===m&&d.ArrayUtil.arraysEqual(c,i)){const B=` - float ${o}() { - return sampleTexture(${r}, TexCoords); - } - `;return new l.GlslLibRoutine(B,["coordinates.sampleTexture"])}const 
v=(0,h.getCoordsDataType)(m),S=d.BroadcastUtil.getBroadcastDims(b,_),O=m-g;let E;const T=(0,h.getGlChannels)();E=g===0?"":m<2&&S.length>=1?"coords = 0;":S.map(B=>`coords.${T[B+O]} = 0;`).join(` -`);let I="";I=m<2&&g>0?"coords":t.unpackedShape.map((B,F)=>`coords.${T[F+O]}`).join(", ");const C=` - float ${o}() { - ${v} coords = getOutputCoords(); - ${E} - return ${w}(${I}); - } - `;return new l.GlslLibRoutine(C,["coordinates.getOutputCoords"])}getPackedSamplerFromInput(o,t,e){switch(e.unpackedShape.length){case 0:return this.getPackedSamplerScalar(o,t);case 1:return this.getPackedSampler1D(o,t,e);case 2:return this.getPackedSampler2D(o,t,e);case 3:return this.getPackedSampler3D(o,t,e);default:return this.getPackedSamplerND(o,t,e)}}getUnpackedSamplerFromInput(o,t,e){const r=e.unpackedShape;switch(r.length){case 0:return this.getUnpackedSamplerScalar(o,t,e);case 1:return this.getUnpackedSampler1D(o,t,e);case 2:return this.getUnpackedSampler2D(o,t,e);case 3:return this.getUnpackedSampler3D(o,t,e);case 4:return this.getUnpackedSampler4D(o,t,e);case 5:return this.getUnpackedSampler5D(o,t,e);case 6:return this.getUnpackedSampler6D(o,t,e);default:throw new Error(`Unsupported dimension ${r.length}-D`)}}getPackedSamplerScalar(o,t){const e=` - vec4 ${o}() { - return ${(0,p.getGlsl)(this.context.glContext.version).texture2D}(${t}, halfCR); - } - `;return new l.GlslLibRoutine(e)}getPackedSampler1D(o,t,e){const r=[e.width,e.height],i=[r[1],r[0]],c=(0,p.getGlsl)(this.context.glContext.version),g=`vec4 ${o}(int index) { - vec2 uv = packedUVfrom1D( - ${i[0]}, ${i[1]}, index); - return ${c.texture2D}(${t}, uv); - }`;return new l.GlslLibRoutine(g,["coordinates.packedUVfrom1D"])}getPackedSampler2D(o,t,e){const r=e.unpackedShape,i=[e.width,e.height],c=(0,p.getGlsl)(this.context.glContext.version),g=i[0],m=i[1];if(i!=null&&d.ArrayUtil.arraysEqual(r,i)){const v=`vec4 ${o}(int row, int col) { - vec2 uv = (vec2(col, row) + halfCR) / vec2(${m}.0, ${g}.0); - return ${c.texture2D}(${t}, uv); - }`;return new l.GlslLibRoutine(v)}const b=i,_=Math.ceil(r[1]/2),w=`vec4 ${o}(int row, int col) { - vec2 uv = packedUVfrom2D(${b[1]}, ${b[0]}, ${_}, row, col); - return ${c.texture2D}(${t}, uv); - }`;return new l.GlslLibRoutine(w,["coordinates.packedUVfrom2D"])}getPackedSampler3D(o,t,e){const r=e.unpackedShape,i=[e.width,e.height],c=[i[0],i[1]],g=(0,p.getGlsl)(this.context.glContext.version);if(r[0]===1){const v=r.slice(1),S=[1,2],O=(0,h.squeezeInputShape)(r,v),E=["b","row","col"],T=JSON.parse(JSON.stringify(e));T.unpackedShape=O;const I=this.getPackedSamplerFromInput(o,t,T),C=`${I.routineBody} - vec4 ${o}(int b, int row, int col) { - return ${o}(${(0,h.getSqueezedParams)(E,S)}); - } `;return new l.GlslLibRoutine(C,I.dependencies)}const m=c[0],b=c[1],_=Math.ceil(r[2]/2),w=`vec4 ${o}(int b, int row, int col) { - vec2 uv = packedUVfrom3D( - ${b}, ${m}, ${_*Math.ceil(r[1]/2)}, ${_}, b, row, col); - return ${g.texture2D}(${t}, uv);}`;return new l.GlslLibRoutine(w,["coordinates.packedUVfrom3D"])}getPackedSamplerND(o,t,e){const r=e.unpackedShape,i=r.length,c=[e.width,e.height],g=(0,p.getGlsl)(this.context.glContext.version),m=[c[0],c[1]],b=m[1],_=m[0],w=Math.ceil(r[i-1]/2);let v=w*Math.ceil(r[i-2]/2),S="int b, int row, int col",O=`b * ${v} + (row / 2) * ${w} + (col / 2)`;for(let T=2;T{const r=this.context.inputTextureLayouts[e],i=(r.unpackedShape.length>0?r.unpackedShape:r.shape).length;let c=`_${t}`;o[c]=new 
l.GlslLibRoutine(this.getValueFromSingle(t,i,r.width,r.height,!1),[`shapeUtils.indicesToOffset${c}`,"coordinates.offsetToCoords","fragcolor.getColorAsFloat"]),c+="_T",o[c]=new l.GlslLibRoutine(this.getValueFromSingle(t,i,r.width,r.height,!0),[`shapeUtils.indicesToOffset${c}`,"coordinates.offsetToCoords","fragcolor.getColorAsFloat"])}),o}getValueFromSingle(o,t,e,r,i){let c=`_${o}`;return i&&(c+="_T"),` - float ${c}(int m[${t}]) { - int offset = indicesToOffset${c}(m); - vec2 coords = offsetToCoords(offset, ${e}, ${r}); - float value = getColorAsFloat(${(0,p.getGlsl)(this.context.glContext.version).texture2D}(${o}, coords)); - return value; - } - `}getPackedValueFrom(o,t,e,r,i){let c=`_${o}_Pack`;return i&&(c+="_T"),` - vec4 ${c}(int m[${t}]) { - int offset = indicesToOffset_${o}(m); - vec2 coords = offsetToCoords(offset, ${e}, ${r}); - return ${(0,p.getGlsl)(this.context.glContext.version).texture2D}(${o}, coords); - } - `}}n.CoordsGlslLib=f},8520:(y,n)=>{var u;Object.defineProperty(n,"__esModule",{value:!0}),n.TopologicalSortGlslRoutines=n.GlslLibRoutineNode=n.GlslLibRoutine=n.GlslLib=n.GlslContext=n.FunctionType=void 0,(u=n.FunctionType||(n.FunctionType={}))[u.ValueBased=0]="ValueBased",u[u.Positional=1]="Positional",n.GlslContext=class{constructor(d,l,p,s){this.glContext=d,this.programInfo=l,this.inputTextureLayouts=p,this.outputTextureLayout=s}},n.GlslLib=class{constructor(d){this.context=d}},n.GlslLibRoutine=class{constructor(d,l){this.routineBody=d,this.dependencies=l}},n.GlslLibRoutineNode=class{constructor(d,l,p){this.name=d,this.dependencies=p||[],l&&(this.routineBody=l)}addDependency(d){d&&this.dependencies.push(d)}},n.TopologicalSortGlslRoutines=class{static returnOrderedNodes(d){if(!d||d.length===0)return[];if(d.length===1)return d;const l=new Set,p=new Set,s=new Array;return this.createOrderedNodes(d,l,p,s),s}static createOrderedNodes(d,l,p,s){for(let h=0;h0)for(let f=0;f{Object.defineProperty(n,"__esModule",{value:!0}),n.EncodingGlslLib=void 0;const d=u(8520);class l extends d.GlslLib{constructor(s){super(s)}getFunctions(){return Object.assign(Object.assign({},this.encodeFloat32()),this.decodeFloat32())}getCustomTypes(){return{}}encodeFloat32(){return{encode:new d.GlslLibRoutine(`highp vec4 encode(highp float f) { - return vec4(f, 0.0, 0.0, 0.0); - } - `)}}decodeFloat32(){return{decode:new d.GlslLibRoutine(`highp float decode(highp vec4 rgba) { - return rgba.r; - } - `)}}encodeUint8(){const s=l.isLittleEndian()?"rgba.rgba=rgba.abgr;":"";return{encode:new d.GlslLibRoutine(` - highp vec4 encode(highp float f) { - highp float F = abs(f); - highp float Sign = step(0.0,-f); - highp float Exponent = floor(log2(F)); - highp float Mantissa = (exp2(- Exponent) * F); - Exponent = floor(log2(F) + 127.0) + floor(log2(Mantissa)); - highp vec4 rgba; - rgba[0] = 128.0 * Sign + floor(Exponent*exp2(-1.0)); - rgba[1] = 128.0 * mod(Exponent,2.0) + mod(floor(Mantissa*128.0),128.0); - rgba[2] = floor(mod(floor(Mantissa*exp2(23.0 -8.0)),exp2(8.0))); - rgba[3] = floor(exp2(23.0)*mod(Mantissa,exp2(-15.0))); - ${s} - rgba = rgba / 255.0; // values need to be normalized to [0,1] - return rgba; - } - `)}}decodeUint8(){const s=l.isLittleEndian()?"rgba.rgba=rgba.abgr;":"";return{decode:new d.GlslLibRoutine(` - highp float decode(highp vec4 rgba) { - rgba = rgba * 255.0; // values need to be de-normalized from [0,1] to [0,255] - ${s} - highp float Sign = 1.0 - step(128.0,rgba[0])*2.0; - highp float Exponent = 2.0 * mod(rgba[0],128.0) + step(128.0,rgba[1]) - 127.0; - highp float Mantissa = 
mod(rgba[1],128.0)*65536.0 + rgba[2]*256.0 +rgba[3] + float(0x800000); - highp float Result = Sign * exp2(Exponent) * (Mantissa * exp2(-23.0 )); - return Result; - } - `)}}static isLittleEndian(){const s=new ArrayBuffer(4),h=new Uint32Array(s),f=new Uint8Array(s);if(h[0]=3735928559,f[0]===239)return!0;if(f[0]===222)return!1;throw new Error("unknown endianness")}}n.EncodingGlslLib=l},9894:(y,n,u)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.FragColorGlslLib=void 0;const d=u(8520),l=u(5060);class p extends d.GlslLib{constructor(h){super(h)}getFunctions(){return Object.assign(Object.assign({},this.setFragColor()),this.getColorAsFloat())}getCustomTypes(){return{}}setFragColor(){const h=(0,l.getGlsl)(this.context.glContext.version);return{setFragColor:new d.GlslLibRoutine(` - void setFragColor(float value) { - ${h.output} = encode(value); - } - `,["encoding.encode"])}}getColorAsFloat(){return{getColorAsFloat:new d.GlslLibRoutine(` - float getColorAsFloat(vec4 color) { - return decode(color); - } - `,["encoding.decode"])}}}n.FragColorGlslLib=p},2848:(y,n)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.replaceInlines=void 0;const u=/@inline[\s\n\r]+(\w+)[\s\n\r]+([0-9a-zA-Z_]+)\s*\(([^)]*)\)\s*{(([^}]|[\n\r])*)}/gm;n.replaceInlines=function(d){const l={};let p;for(;(p=u.exec(d))!==null;){const s=p[3].split(",").map(h=>{const f=h.trim().split(" ");return f&&f.length===2?{type:f[0],name:f[1]}:null}).filter(h=>h!==null);l[p[2]]={params:s,body:p[4]}}for(const s in l){const h="(\\w+)?\\s+([_0-9a-zA-Z]+)\\s+=\\s+__FUNC__\\((.*)\\)\\s*;".replace("__FUNC__",s),f=new RegExp(h,"gm");for(;(p=f.exec(d))!==null;){const a=p[1],o=p[2],t=p[3].split(","),e=a?`${a} ${o};`:"";let r=l[s].body,i="";l[s].params.forEach((g,m)=>{g&&(i+=`${g.type} ${g.name} = ${t[m]}; -`)}),r=`${i} - ${r}`,r=r.replace("return",`${o} = `);const c=` - ${e} - { - ${r} - } - `;d=d.replace(p[0],c)}}return d.replace(u,"")}},8879:(y,n,u)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.GlslPreprocessor=void 0;const d=u(8520),l=u(2848),p=u(5483),s=u(5060);n.GlslPreprocessor=class{constructor(h,f,a,o){this.libs={},this.glslLibRoutineDependencyGraph={},this.context=new d.GlslContext(h,f,a,o),Object.keys(p.glslRegistry).forEach(e=>{const r=new p.glslRegistry[e](this.context);this.libs[e]=r});const t=this.glslLibRoutineDependencyGraph;for(const e in this.libs){const r=this.libs[e].getFunctions();for(const i in r){const c=e+"."+i;let g;t[c]?(g=t[c],g.routineBody=r[i].routineBody):(g=new d.GlslLibRoutineNode(c,r[i].routineBody),t[c]=g);const m=r[i].dependencies;if(m)for(let b=0;b{const o=a.split(".")[1];h.indexOf(o)!==-1&&f.push(this.glslLibRoutineDependencyGraph[a])}),d.TopologicalSortGlslRoutines.returnOrderedNodes(f)}getUniforms(h,f){const a=[];if(h)for(const o of h)a.push(`uniform sampler2D ${o};`);if(f)for(const o of f)a.push(`uniform ${o.type} ${o.name}${o.arrayLength?`[${o.arrayLength}]`:""};`);return a.join(` -`)}}},5483:(y,n,u)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.glslRegistry=void 0;const d=u(5107),l=u(7341),p=u(9894),s=u(2655),h=u(3891);n.glslRegistry={encoding:l.EncodingGlslLib,fragcolor:p.FragColorGlslLib,vec:h.VecGlslLib,shapeUtils:s.ShapeUtilsGlslLib,coordinates:d.CoordsGlslLib}},2655:(y,n,u)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.ShapeUtilsGlslLib=void 0;const d=u(8520);class l extends d.GlslLib{constructor(s){super(s)}getFunctions(){return 
Object.assign(Object.assign(Object.assign(Object.assign(Object.assign({},this.bcastIndex()),this.bcastMatmulIndex()),this.offsetToIndices()),this.indicesToOffset()),this.incrementIndices())}getCustomTypes(){return{}}bcastIndex(){const s=this.context.outputTextureLayout.shape.length,h={};return this.context.programInfo.inputNames.forEach((f,a)=>{const o=this.context.inputTextureLayouts[a].unpackedShape;if(o.length<=s){const t=o.length,e=s-t,r=`bcastIndices_${f}`;let i="";for(let g=0;g{const o=this.context.inputTextureLayouts[a].shape;if(!(o.length<2||o.length>s)){const t=o.length,e=s-t,r=`bcastMatmulIndices_${f}`;let i="";for(let g=0;g{const a=this.context.inputTextureLayouts[f].shape,o=this.context.inputTextureLayouts[f].strides,t=a.length;let e=`indicesToOffset_${h}`;s[e]=new d.GlslLibRoutine(l.indexToOffsetSingle(e,t,o)),e=`indicesToOffset_${h}_T`,s[e]=new d.GlslLibRoutine(l.indexToOffsetSingle(e,t,o.slice().reverse()))}),s}static indexToOffsetSingle(s,h,f){let a="";for(let o=h-1;o>=0;--o)a+=` - offset += indices[${o}] * ${f[o]}; - `;return` - int ${s}(int indices[${h}]) { - int offset = 0; - ${a} - return offset; - } - `}offsetToIndices(){const s={};return this.context.programInfo.inputNames.forEach((h,f)=>{const a=this.context.inputTextureLayouts[f].shape,o=this.context.inputTextureLayouts[f].strides,t=a.length;let e=`offsetToIndices_${h}`;s[e]=new d.GlslLibRoutine(l.offsetToIndicesSingle(e,t,o)),e=`offsetToIndices_${h}_T`,s[e]=new d.GlslLibRoutine(l.offsetToIndicesSingle(e,t,o.slice().reverse()))}),s}static offsetToIndicesSingle(s,h,f){const a=[];for(let o=0;o{const a=this.context.inputTextureLayouts[f].shape,o=a.length,t=`incrementIndices_${h}`;let e="";for(let i=0;i= 0; --i) { - if(i > axis) continue; - indices[i] += 1; - if(indices[i] < shape[i]) { - break; - } - indices[i] = 0; - } - } - `;s[t]=new d.GlslLibRoutine(r)}),s}}n.ShapeUtilsGlslLib=l},5060:(y,n)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.getDefaultFragShaderMain=n.getFragShaderPreamble=n.getVertexShaderSource=n.getGlsl=void 0;const u={version:"",attribute:"attribute",varyingVertex:"varying",varyingFrag:"varying",texture2D:"texture2D",output:"gl_FragColor",outputDeclaration:""},d={version:"#version 300 es",attribute:"in",varyingVertex:"out",varyingFrag:"in",texture2D:"texture",output:"outputColor",outputDeclaration:"out vec4 outputColor;"};function l(p){return p===1?u:d}n.getGlsl=l,n.getVertexShaderSource=function(p){const s=l(p);return`${s.version} - precision highp float; - ${s.attribute} vec3 position; - ${s.attribute} vec2 textureCoord; - - ${s.varyingVertex} vec2 TexCoords; - - void main() - { - gl_Position = vec4(position, 1.0); - TexCoords = textureCoord; - }`},n.getFragShaderPreamble=function(p){const s=l(p);return`${s.version} - precision highp float; - precision highp int; - precision highp sampler2D; - ${s.varyingFrag} vec2 TexCoords; - ${s.outputDeclaration} - const vec2 halfCR = vec2(0.5, 0.5); - - // Custom vector types to handle higher dimenalities. 
- struct ivec5 - { - int x; - int y; - int z; - int w; - int u; - }; - - struct ivec6 - { - int x; - int y; - int z; - int w; - int u; - int v; - }; - - int imod(int x, int y) { - return x - y * (x / y); - } - - `},n.getDefaultFragShaderMain=function(p,s){return` - void main() { - int indices[${s}]; - toVec(TexCoords, indices); - vec4 result = vec4(process(indices)); - ${l(p).output} = result; - } - `}},3891:(y,n,u)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.VecGlslLib=void 0;const d=u(8520);class l extends d.GlslLib{constructor(s){super(s)}getCustomTypes(){return{}}getFunctions(){return Object.assign(Object.assign(Object.assign(Object.assign({},this.binaryVecFunctions()),this.copyVec()),this.setVecItem()),this.getVecItem())}binaryVecFunctions(){const s=this.context.outputTextureLayout.shape.length,h={add:"+=",sub:"-=",mul:"*=",div:"/="},f={};for(const a in h){const o=`${a}Vec`;let t="";for(let r=0;r{Object.defineProperty(n,"__esModule",{value:!0}),n.WebGLInferenceHandler=void 0;const d=u(6231),l=u(9162),p=u(2517),s=u(2403),h=u(7019),f=u(8710),a=u(5611),o=u(4057),t=u(2039);n.WebGLInferenceHandler=class{constructor(e){this.session=e,this.packedTextureDataCache=new Map,this.unpackedTextureDataCache=new Map}calculateTextureWidthAndHeight(e,r){return(0,o.calculateTextureWidthAndHeight)(this.session.layoutStrategy,e,r)}executeProgram(e,r){if(r.length{const S=v.map(E=>`${E.unpackedShape.join(",")};${E.width}x${E.height}`).join("_");let O=w.name;return w.cacheHint&&(O+="["+w.cacheHint+"]"),O+=":"+S,O})(e,i);let g=this.session.programManager.getArtifact(c);const m=g?g.programInfo:typeof e.get=="function"?e.get():e,b=(0,o.createTextureLayoutFromTextureType)(this.session.layoutStrategy,m.output.dims,m.output.textureType),_=this.createTextureData(b,m.output.type);return g||(g=this.session.programManager.build(m,i,_),this.session.programManager.setArtifact(c,g)),this.runProgram(g,i,_),_}run(e,r){return this.executeProgram(e,r).tensor}runProgram(e,r,i){for(let c=0;cthis.readTexture(m),async b=>this.readTextureAsync(m),void 0,g),texture:i});return this.setTextureData(m.tensor.dataId,m,e.isPacked),m}getTextureData(e,r=!1){return this.session.isInitializer(e)?this.session.getTextureData(e,r):r?this.packedTextureDataCache.get(e):this.unpackedTextureDataCache.get(e)}setTextureData(e,r,i=!1){this.session.isInitializer(e)?this.session.setTextureData(e,r,i):(i?this.packedTextureDataCache:this.unpackedTextureDataCache).set(e,r)}isTextureLayoutCached(e,r=!1){return!!this.getTextureData(e.dataId,r)}dispose(){this.session.textureManager.clearActiveTextures(),this.packedTextureDataCache.forEach(e=>this.session.textureManager.releaseTexture(e)),this.packedTextureDataCache=new Map,this.unpackedTextureDataCache.forEach(e=>this.session.textureManager.releaseTexture(e)),this.unpackedTextureDataCache=new Map}readTexture(e){return e.isPacked?this.readTexture(this.unpack(e)):this.session.backend.glContext.isFloat32DownloadSupported?this.session.textureManager.readTexture(e,e.tensor.type,e.channels):this.session.textureManager.readUint8TextureAsFloat((0,f.encodeAsUint8)(this,e))}async readTextureAsync(e){return e.isPacked?this.readTextureAsync(this.unpack(e)):this.session.backend.glContext.isFloat32DownloadSupported?this.session.textureManager.readTextureAsync(e,e.tensor.type,e.channels):this.session.textureManager.readUint8TextureAsFloat((0,f.encodeAsUint8)(this,e))}pack(e){return this.executeProgram((0,s.createPackProgramInfoLoader)(this,e.tensor),[e.tensor])}unpack(e){return 
this.executeProgram((0,a.createUnpackProgramInfoLoader)(this,e.tensor),[e.tensor])}}},1640:function(y,n,u){var d=this&&this.__createBinding||(Object.create?function(X,te,ne,me){me===void 0&&(me=ne);var Ie=Object.getOwnPropertyDescriptor(te,ne);Ie&&!("get"in Ie?!te.__esModule:Ie.writable||Ie.configurable)||(Ie={enumerable:!0,get:function(){return te[ne]}}),Object.defineProperty(X,me,Ie)}:function(X,te,ne,me){me===void 0&&(me=ne),X[me]=te[ne]}),l=this&&this.__setModuleDefault||(Object.create?function(X,te){Object.defineProperty(X,"default",{enumerable:!0,value:te})}:function(X,te){X.default=te}),p=this&&this.__importStar||function(X){if(X&&X.__esModule)return X;var te={};if(X!=null)for(var ne in X)ne!=="default"&&Object.prototype.hasOwnProperty.call(X,ne)&&d(te,X,ne);return l(te,X),te};Object.defineProperty(n,"__esModule",{value:!0}),n.WEBGL_OP_RESOLVE_RULES=void 0;const s=u(2898),h=p(u(7839)),f=u(4196),a=u(2069),o=u(8138),t=u(9663),e=u(5193),r=u(7992),i=u(1253),c=u(4776),g=u(6572),m=u(3346),b=u(5623),_=u(2870),w=u(2143),v=u(4939),S=u(718),O=u(2268),E=u(8117),T=u(2278),I=u(5524),C=u(5975),B=u(3933),F=u(6558),N=u(5723),H=u(3738),$=p(u(4909)),z=u(8428),J=u(9793);n.WEBGL_OP_RESOLVE_RULES=[["Abs","","6+",$.abs],["Acos","","7+",$.acos],["Add","","7+",h.add],["And","","7+",h.and],["Asin","","7+",$.asin],["Atan","","7+",$.atan],["AveragePool","","7+",w.averagePool,w.parseAveragePoolAttributes],["BatchNormalization","","7+",s.batchNormalization,s.parseBatchNormalizationAttributes],["Cast","","6+",f.cast,f.parseCastAttributes],["Ceil","","6+",$.ceil],["Clip","","6-10",$.clip,$.parseClipAttributes],["Clip","","11+",$.clipV11],["Concat","","4+",a.concat,a.parseConcatAttributes],["Conv","","1+",o.conv,o.parseConvAttributes],["ConvTranspose","","1+",t.convTranspose,t.parseConvTransposeAttributes],["Cos","","7+",$.cos],["Div","","7+",h.div],["Dropout","","7+",$.identity],["DepthToSpace","","1+",e.depthToSpace,e.parseDepthToSpaceAttributes],["Equal","","7+",h.equal],["Elu","","6+",$.elu,$.parseEluAttributes],["Exp","","6+",$.exp],["Flatten","","1+",r.flatten,r.parseFlattenAttributes],["Floor","","6+",$.floor],["FusedConv","com.microsoft","1+",o.conv,o.parseConvAttributes],["Gather","","1+",i.gather,i.parseGatherAttributes],["Gemm","","7-10",c.gemm,c.parseGemmAttributesV7],["Gemm","","11+",c.gemm,c.parseGemmAttributesV11],["GlobalAveragePool","","1+",w.globalAveragePool,w.parseGlobalAveragePoolAttributes],["GlobalMaxPool","","1+",w.globalMaxPool],["Greater","","7+",h.greater],["Identity","","1+",$.identity],["ImageScaler","","1+",g.imageScaler,g.parseImageScalerAttributes],["InstanceNormalization","","6+",m.instanceNormalization,m.parseInstanceNormalizationAttributes],["LeakyRelu","","6+",$.leakyRelu,$.parseLeakyReluAttributes],["Less","","7+",h.less],["Log","","6+",$.log],["MatMul","","1+",b.matMul,b.parseMatMulAttributes],["MaxPool","","1+",w.maxPool,w.parseMaxPoolAttributes],["Mul","","7+",h.mul],["Neg","","6+",$.neg],["Not","","1+",$.not],["Or","","7+",h.or],["Pad","","2-10",_.padV2,_.parsePadAttributesV2],["Pad","","11+",_.padV11,_.parsePadAttributesV11],["Pow","","7+",h.pow],["PRelu","","7+",h.pRelu],["ReduceLogSum","","1+",v.reduceLogSum,v.parseReduceAttributes],["ReduceMax","","1+",v.reduceMax,v.parseReduceAttributes],["ReduceMean","","1+",v.reduceMean,v.parseReduceAttributes],["ReduceMin","","1+",v.reduceMin,v.parseReduceAttributes],["ReduceProd","","1+",v.reduceProd,v.parseReduceAttributes],["ReduceSum","","1-12",v.reduceSum,v.parseReduceAttributes],["ReduceSumSquare","","1+",v.reduceLogSumSquare,v
.parseReduceAttributes],["Relu","","6+",$.relu],["Reshape","","5+",S.reshape],["Resize","","10",O.resize,O.parseResizeAttributesV10],["Resize","","11+",O.resize,O.parseResizeAttributesV11],["Shape","","1+",E.shape],["Sigmoid","","6+",$.sigmoid],["Sin","","7+",$.sin],["Slice","","10+",T.sliceV10],["Slice","","1-9",T.slice,T.parseSliceAttributes],["Softmax","","1-12",I.softmax,I.parseSoftmaxAttributes],["Softmax","","13+",I.softmaxV13,I.parseSoftmaxAttributesV13],["Split","","2-12",C.split,C.parseSplitAttributes],["Sqrt","","6+",$.sqrt],["Squeeze","","1-12",B.squeeze,B.parseSqueezeAttributes],["Squeeze","","13+",B.squeezeV13],["Sub","","7+",h.sub],["Sum","","6+",F.sum],["Tan","","7+",$.tan],["Tanh","","6+",$.tanh],["Tile","","6+",N.tile],["Transpose","","1+",H.transpose,H.parseTransposeAttributes],["Upsample","","7-8",J.upsample,J.parseUpsampleAttributesV7],["Upsample","","9",J.upsample,J.parseUpsampleAttributesV9],["Unsqueeze","","1-12",z.unsqueeze,z.parseUnsqueezeAttributes],["Unsqueeze","","13+",z.unsqueezeV13],["Xor","","7+",h.xor]]},2898:(y,n,u)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.parseBatchNormalizationAttributes=n.batchNormalization=void 0;const d=u(246),l=u(5060),p=u(2039),s={name:"BatchNormalization",inputNames:["A","Scale","B","Mean","Variance"],inputTypes:[p.TextureType.unpacked,p.TextureType.unpacked,p.TextureType.unpacked,p.TextureType.unpacked,p.TextureType.unpacked]};n.batchNormalization=(a,o,t)=>(f(o),[a.run(Object.assign(Object.assign({},s),{cacheHint:t.cacheKey,get:()=>h(a,o,t)}),o)]),n.parseBatchNormalizationAttributes=a=>{const o=a.attributes.getFloat("epsilon",1e-5),t=a.attributes.getFloat("momentum",.9),e=a.attributes.getInt("spatial",1);return(0,d.createAttributeWithCacheKey)({epsilon:o,momentum:t,spatial:e})};const h=(a,o,t)=>{const e=(0,l.getGlsl)(a.session.backend.glContext.version),r=o[0].dims.length,[i,c]=a.calculateTextureWidthAndHeight(o[1].dims,p.TextureType.unpacked),g=` - float process(int[${r}] indices) { - vec2 position = offsetToCoords(indices[1], ${i}, ${c}); - float scale = getColorAsFloat(${e.texture2D}(Scale, position)); - float mean = getColorAsFloat(${e.texture2D}(Mean, position)); - float variance = getColorAsFloat(${e.texture2D}(Variance, position)); - float b = getColorAsFloat(${e.texture2D}(B, position)); - - return scale * ( (_A(indices) - mean) / sqrt(variance + float(${t.epsilon})) ) + b; - }`;return Object.assign(Object.assign({},s),{output:{dims:o[0].dims,type:o[0].type,textureType:p.TextureType.unpacked},shaderSource:g})},f=a=>{if(!a||a.length!==5)throw new Error("BatchNormalization requires 5 inputs.");const o=a[0],t=a[1],e=a[2],r=a[3],i=a[4];if(o.dims.length<3||t.dims.length!==1||e.dims.length!==1||r.dims.length!==1||i.dims.length!==1)throw new Error("invalid input shape.");if(t.dims[0]!==o.dims[1]||e.dims[0]!==o.dims[1]||r.dims[0]!==o.dims[1]||i.dims[0]!==o.dims[1])throw new Error("invalid input shape.");if(o.type!=="float32"&&o.type!=="float64"||t.type!=="float32"&&t.type!=="float64"||e.type!=="float32"&&e.type!=="float64"||r.type!=="float32"&&r.type!=="float64"||i.type!=="float32"&&i.type!=="float64")throw new Error("invalid input tensor types.")}},7839:(y,n,u)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.xor=n.sub=n.pRelu=n.pow=n.or=n.mul=n.less=n.greater=n.equal=n.div=n.and=n.add=n.glslPRelu=n.glslPow=n.glslXor=n.glslOr=n.glslAnd=n.glslLess=n.glslGreater=n.glslEqual=n.glslSub=n.glslMul=n.glslDiv=n.glslAdd=void 0;const d=u(2517),l=u(8520),p=u(5060),s=u(2039);function h(){const v="add_";return{body:` - float 
${v}(float a, float b) { - return a + b; - } - vec4 ${v}(vec4 v1, vec4 v2) { - return v1 + v2; - } - `,name:v,type:l.FunctionType.ValueBased}}function f(){const v="div_";return{body:` - float ${v}(float a, float b) { - return a / b; - } - vec4 ${v}(vec4 v1, vec4 v2) { - return v1 / v2; - } - `,name:v,type:l.FunctionType.ValueBased}}function a(){const v="mul_";return{body:` - float ${v}(float a, float b) { - return a * b; - } - vec4 ${v}(vec4 v1, vec4 v2) { - return v1 * v2; - } - `,name:v,type:l.FunctionType.ValueBased}}function o(){const v="sub_";return{body:` - float ${v}(float a, float b) { - return a - b; - } - vec4 ${v}(vec4 v1, vec4 v2) { - return v1 - v2; - } - `,name:v,type:l.FunctionType.ValueBased}}function t(){const v="equal_";return{body:` - float ${v}(float a, float b) { - return float(a == b); - } - vec4 ${v}(vec4 v1, vec4 v2) { - return vec4(equal(v1, v2)); - } - `,name:v,type:l.FunctionType.ValueBased}}function e(){const v="greater_";return{body:` - float ${v}(float a, float b) { - return float(a > b); - } - vec4 ${v}(vec4 v1, vec4 v2) { - return vec4( v1.r > v2.r , - v1.g > v2.g, - v1.b > v2.b, - v1.a > v2.a ); - } - `,name:v,type:l.FunctionType.ValueBased}}function r(){const v="less_";return{body:` - float ${v}(float a, float b) { - return float(a < b); - } - vec4 ${v}(vec4 v1, vec4 v2) { - return vec4( v1.r < v2.r , - v1.g < v2.g, - v1.b < v2.b, - v1.a < v2.a ); - } - `,name:v,type:l.FunctionType.ValueBased}}function i(){const v="and_";return{body:` - float ${v}(float a, float b) { - return float( bool(a) && bool(b) ); - } - vec4 ${v}(vec4 v1, vec4 v2) { - bvec4 b1 = bvec4(v1); - bvec4 b2 = bvec4(v2); - return vec4( b1.r && b2.r , - b1.g && b2.g, - b1.b && b2.b, - b1.a && b2.a ); - } - `,name:v,type:l.FunctionType.ValueBased}}function c(){const v="or_";return{body:` - float ${v}(float a, float b) { - return float( bool(a) || bool(b) ); - } - vec4 ${v}(vec4 v1, vec4 v2) { - bvec4 b1 = bvec4(v1); - bvec4 b2 = bvec4(v2); - return vec4( b1.r || b2.r , - b1.g || b2.g, - b1.b || b2.b, - b1.a || b2.a ); - } - `,name:v,type:l.FunctionType.ValueBased}}function g(){const v="xor_";return{body:` - float ${v}(float a, float b) { - return float( bool(a) ^^ bool(b) ); - } - vec4 ${v}(vec4 v1, vec4 v2) { - bvec4 b1 = bvec4(v1); - bvec4 b2 = bvec4(v2); - return vec4( b1.r ^^ b2.r , - b1.g ^^ b2.g, - b1.b ^^ b2.b, - b1.a ^^ b2.a ); - } - `,name:v,type:l.FunctionType.ValueBased}}function m(){return function(v){const S=`${v}_`;return{body:` - float ${S}(float a, float b) { - return ${v}(a, b); - } - vec4 ${S}(vec4 v1, vec4 v2) { - return ${v}(v1, v2); - } - `,name:S,type:l.FunctionType.ValueBased}}("pow")}function b(){const v="prelu_";return{body:` - float ${v}(float a, float b) { - return a < 0.0 ? a * b: a; - } - vec4 ${v}(vec4 v1, vec4 v2) { - return vec4( - v1.r < 0.0 ? v1.r * v2.r: v1.r, - v1.g < 0.0 ? v1.g * v2.g: v1.g, - v1.b < 0.0 ? v1.b * v2.b: v1.b, - v1.a < 0.0 ? 
v1.a * v2.a: v1.a - ); - } - `,name:v,type:l.FunctionType.ValueBased}}n.glslAdd=h,n.glslDiv=f,n.glslMul=a,n.glslSub=o,n.glslEqual=t,n.glslGreater=e,n.glslLess=r,n.glslAnd=i,n.glslOr=c,n.glslXor=g,n.glslPow=m,n.glslPRelu=b;const _=(v,S,O,E=S[0].type,T)=>{const I=v.session.pack?s.TextureType.packed:s.TextureType.unpacked;return{name:O.name,inputNames:["A","B"],inputTypes:[I,I],cacheHint:T,get:()=>w(v,S,O,E)}},w=(v,S,O,E=S[0].type)=>{const T=v.session.pack?s.TextureType.packed:s.TextureType.unpacked,I=!d.ShapeUtil.areEqual(S[0].dims,S[1].dims);let C=S[0].dims;const B=v.session.pack;if(I){const H=d.BroadcastUtil.calcShape(S[0].dims,S[1].dims,!1);if(!H)throw new Error("Can't perform binary op on the given tensors");C=H;const $=C.length,z=S[0].dims.length!==0?S[0].dims.length:1,J=S[1].dims.length!==0?S[1].dims.length:1,X=S[0].dims.length!==0?"bcastIndices_A(indices, aindices);":"aindices[0] = 0;",te=S[1].dims.length!==0?"bcastIndices_B(indices, bindices);":"bindices[0] = 0;",ne=(0,p.getGlsl)(v.session.backend.glContext.version),me=B?` - ${O.body} - void main() { - vec4 a = getAAtOutCoords(); - vec4 b = getBAtOutCoords(); - vec4 result = ${O.name}(a, b); - ${ne.output} = result; - }`:` - ${O.body} - float process(int indices[${$}]) { - int aindices[${z}]; - int bindices[${J}]; - ${X} - ${te} - return ${O.name}(_A(aindices), _B(bindices)); - }`;return{name:O.name,inputNames:["A","B"],inputTypes:[T,T],output:{dims:C,type:E,textureType:T},shaderSource:me,hasMain:B}}const F=(0,p.getGlsl)(v.session.backend.glContext.version),N=` - ${O.body} - void main() { - vec4 v1 = ${F.texture2D}(A, TexCoords); - vec4 v2 = ${F.texture2D}(B, TexCoords); - vec4 result = ${O.name}(v1, v2); - ${F.output} = result; - } - `;return{name:O.name,inputNames:["A","B"],inputTypes:[T,T],output:{dims:S[0].dims,type:E,textureType:T},shaderSource:N,hasMain:!0}};n.add=(v,S)=>[v.run(_(v,S,h()),S)],n.and=(v,S)=>[v.run(_(v,S,i(),"bool"),S)],n.div=(v,S)=>[v.run(_(v,S,f()),S)],n.equal=(v,S)=>[v.run(_(v,S,t(),"bool"),S)],n.greater=(v,S)=>[v.run(_(v,S,e(),"bool"),S)],n.less=(v,S)=>[v.run(_(v,S,r(),"bool"),S)],n.mul=(v,S)=>[v.run(_(v,S,a()),S)],n.or=(v,S)=>[v.run(_(v,S,c(),"bool"),S)],n.pow=(v,S)=>[v.run(_(v,S,m()),S)],n.pRelu=(v,S)=>[v.run(_(v,S,b()),S)],n.sub=(v,S)=>[v.run(_(v,S,o()),S)],n.xor=(v,S)=>[v.run(_(v,S,g(),"bool"),S)]},4196:(y,n,u)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.parseCastAttributes=n.cast=void 0;const d=u(2517);n.cast=(p,s,h)=>(l(s),[p.cast(s[0],h)]),n.parseCastAttributes=p=>d.ProtoUtil.tensorDataTypeFromProto(p.attributes.getInt("to"));const l=p=>{if(!p||p.length!==1)throw new Error("Cast requires 1 input.");if(p[0].type==="string")throw new Error("Invalid input type.")}},1163:(y,n,u)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.createPackedConcatProgramInfoLoader=void 0;const d=u(5060),l=u(2039),p=u(9390),s=u(2827);n.createPackedConcatProgramInfoLoader=(f,a,o)=>{const t=(e=a.length,r=o.cacheKey,{name:"Concat (packed)",inputNames:Array.from({length:e},(i,c)=>`X${c}`),inputTypes:Array(e).fill(l.TextureType.packed),cacheHint:r});var e,r;return Object.assign(Object.assign({},t),{get:()=>((i,c,g,m)=>{const b=g[0].dims.slice();if(m>=b.length||m<-1*b.length)throw new Error("axis specified for concat doesn't match input dimensionality");m<0&&(m=b.length+m);const _=b.slice(0);for(let X=1;XX.dims),T=(0,p.getGlChannels)(w),I=new Array(E.length-1);I[0]=E[0][m];for(let X=1;X= ${I[X-1]}) { - return getChannel( - getX${X}(${h(T,C,te)}), - vec2(${h(B,C,te)})); - }`}const H=I.length,$=I[I.length-1];N+=` - return 
getChannel( - getX${H}(${h(T,C,$)}), - vec2(${h(B,C,$)}));`;const z=(0,d.getGlsl)(i.session.backend.glContext.version),J=` - ${O} - float getValue(${T.map(X=>"int "+X)}) { - ${N} - } - - void main() { - ${S} coords = getOutputCoords(); - int lastDim = coords.${T[w-1]}; - coords.${T[w-1]} = coords.${T[w-2]}; - coords.${T[w-2]} = lastDim; - - vec4 result = vec4(getValue(${v}), 0., 0., 0.); - - ${v[w-1]} = ${v[w-1]} + 1; - if (${v[w-1]} < ${_[w-1]}) { - result.g = getValue(${v}); - } - - ${v[w-2]} = ${v[w-2]} + 1; - if (${v[w-2]} < ${_[w-2]}) { - result.a = getValue(${v}); - } - - ${v[w-1]} = ${v[w-1]} - 1; - if (${v[w-2]} < ${_[w-2]} && - ${v[w-1]} < ${_[w-1]}) { - result.b = getValue(${v}); - } - ${z.output} = result; - } - `;return Object.assign(Object.assign({},c),{output:{dims:_,type:g[0].type,textureType:l.TextureType.packed},shaderSource:J,hasMain:!0})})(f,t,a,o.axis)})};const h=(f,a,o)=>{const t=f.indexOf(a);return f.map((e,r)=>r===t?`${e} - ${o}`:e).join()}},2069:(y,n,u)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.parseConcatAttributes=n.concat=void 0;const d=u(246),l=u(2039),p=u(1163);n.concat=(e,r,i)=>(t(r),e.session.pack&&r[0].dims.length>1?[e.run((0,p.createPackedConcatProgramInfoLoader)(e,r,i),r)]:[e.run(s(e,r,i),r)]);const s=(e,r,i)=>{const c=(g=r.length,m=i.cacheKey,{name:"Concat",inputNames:Array.from({length:g},(b,_)=>`X${_}`),inputTypes:Array(g).fill(l.TextureType.unpacked),cacheHint:m});var g,m;return Object.assign(Object.assign({},c),{get:()=>((b,_,w,v)=>{const S=w[0].dims.slice();if(v>=S.length||v<-1*S.length)throw new Error("axis specified for concat doesn't match input dimensionality");v<0&&(v=S.length+v);const O=S.slice(0);for(let F=1;F`int getTextureWhereDataResides(int index) { - ${e.map((r,i)=>`if(index<${r}) {return ${i};} -`).join("")} - }`,f=e=>h(e),a=(e,r)=>{const i=[`float fetchDataFromCorrectTexture(int textureIndex, int indices[${r}]) {`];for(let c=0;c{const r=["int getSizeInConcatAxisValueFromIndex(int index) {"];for(let i=0;i(0,d.createAttributeWithCacheKey)({axis:e.attributes.getInt("axis")});const t=e=>{if(!e||e.length<1)throw new Error("too few inputs");const r=e[0].type,i=e[0].dims.length;if(r==="string")throw new Error("string tensor is not supported yet");for(const c of e){if(c.type!==r)throw new Error("input tensors should be one type");if(c.dims.length!==i)throw new Error("input tensors should have the same shape")}}},4770:(y,n,u)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.createUnpackedGroupedConvProgramInfoLoader=void 0;const d=u(6231),l=u(5060),p=u(2039),s=u(8138),h=u(2823);n.createUnpackedGroupedConvProgramInfoLoader=(f,a,o)=>{const t=(e=a.length>2,r=o.cacheKey,{name:"GroupedConv",inputNames:e?["X","W","Bias"]:["X","W"],inputTypes:e?[p.TextureType.unpacked,p.TextureType.unpacked,p.TextureType.unpacked]:[p.TextureType.unpacked,p.TextureType.unpacked],cacheHint:r});var e,r;return Object.assign(Object.assign({},t),{get:()=>((i,c,g,m)=>{const b=c.length>2?"value += getBias(output_channel);":"",_=c[0].dims.slice(),w=c[1].dims.slice(),v=w[0]/m.group;d.Logger.verbose("GroupedConv",`autpPad:${m.autoPad}, dilations:${m.dilations}, group:${m.group}, kernelShape:${m.kernelShape}, pads:${m.pads}, strides:${m.strides}`);const S=(0,s.calculateOutputShape)(_,w,m.dilations,m.pads,m.strides),O=(0,l.getGlsl)(i.session.backend.glContext.version),{activationFunction:E,applyActivation:T}=(0,h.getActivationSnippet)(m),I=` - const ivec2 strides = ivec2(${m.strides[0]}, ${m.strides[1]}); - const ivec2 pads = ivec2(${m.pads[0]}, ${m.pads[1]}); - ${E} 
- void main() { - ivec4 coords = getOutputCoords(); - int batch = coords.x; - int output_channel = coords.y; - ivec2 xRCCorner = coords.zw * strides - pads; - int group_id = output_channel / ${v}; - - float value = 0.0; - for (int wInChannel = 0; wInChannel < ${w[1]}; wInChannel++) { - int input_channel = group_id * ${w[1]} + wInChannel; - for (int wHeight = 0; wHeight < ${w[2]}; wHeight++) { - int xHeight = xRCCorner.x + wHeight * ${m.dilations[0]}; - - if (xHeight < 0 || xHeight >= ${_[2]}) { - continue; - } - - for (int wWidth = 0; wWidth < ${w[3]}; wWidth++) { - int xWidth = xRCCorner.y + wWidth * ${m.dilations[1]}; - if (xWidth < 0 || xWidth >= ${_[3]}) { - continue; - } - - float xVal = getX(batch, input_channel, xWidth, xHeight); - float wVal = getW(output_channel, wInChannel, wWidth, wHeight); - value += xVal*wVal; - } - } - } - ${b} - ${T} - ${O.output} = vec4(value, .0, .0, .0); - } -`;return Object.assign(Object.assign({},g),{output:{dims:S,type:c[0].type,textureType:p.TextureType.unpacked},shaderSource:I,hasMain:!0})})(f,a,t,o)})}},1386:(y,n,u)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.conv2DPacked=n.conv2DPackedPointwise=void 0;const d=u(8138),l=u(8555),p=u(708);n.conv2DPackedPointwise=(s,h,f)=>{const a=h[0].dims,o=h[1].dims,t=(0,d.calculateOutputShape)(a,o,f.dilations,f.pads,f.strides),e=s.reshapePacked(h[0],[a[1],a[2]*a[3]]),r=s.reshapePacked(h[1],[o[0],o[1]]),i=h.length>2?[r,e,h[2]]:[r,e],c=s.run((0,p.createPackedMatmulProgramInfoLoader)(s,i,f),i);return s.reshapePacked(c,t)},n.conv2DPacked=(s,h,f)=>{const a=h[0].dims,o=h[1].dims,t=(0,d.calculateOutputShape)(a,o,f.dilations,f.pads,f.strides),e=s.run((0,l.createPackedIm2ColProgramInfoLoader)(s,h[0],h[1],t,f),[h[0]]),r=s.reshapePacked(h[1],[o[0],o[1]*o[2]*o[3]]),i=h.length===3?[r,e,h[2]]:[r,e],c=s.run((0,p.createPackedMatmulProgramInfoLoader)(s,i,f),i);return s.reshapePacked(c,t)}},9663:(y,n,u)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.parseConvTransposeAttributes=n.convTranspose=void 0;const d=u(246),l=u(5060),p=u(2039),s=u(2823),h=(r,i,c,g,m,b)=>(r-1)*i+c+(g-1)*m+1-b,f=(r,i,c,g,m)=>{const b=Math.floor(r/2);i==="SAME_UPPER"?(c[g]=b,c[m]=r-b):i==="SAME_LOWER"&&(c[g]=r-b,c[m]=b)};n.convTranspose=(r,i,c)=>(e(i,c),a(r,i,c));const a=(r,i,c)=>{const g=t(c,i);return[o(r,i,g)]},o=(r,i,c)=>r.run(((g,m,b)=>{const _=(w=m.length>2,v=b.cacheKey,{name:"ConvTranspose",inputNames:w?["X","W","B"]:["X","W"],inputTypes:w?[p.TextureType.unpacked,p.TextureType.unpacked,p.TextureType.unpacked]:[p.TextureType.unpacked,p.TextureType.unpacked],cacheHint:v});var w,v;return Object.assign(Object.assign({},_),{get:()=>((S,O,E,T)=>{const I=O.length>2?"getB(output_channel)":"0.0",C=O[0].dims,B=O[1].dims,F=B[1],N=B[0]/T.group,H=[O[0].dims[0],O[1].dims[1]*T.group,...T.outputShape],$=(0,l.getGlsl)(S.session.backend.glContext.version),{activationFunction:z,applyActivation:J}=(0,s.getActivationSnippet)(T),X=` - const ivec2 strides = ivec2(${T.strides[0]}, ${T.strides[1]}); - const ivec2 pads = ivec2(${T.pads[0]}, ${T.pads[1]}); - ${z} - void main() { - ivec4 coords = getOutputCoords(); - int batch = coords.x; - int output_channel = coords.y; - - ivec2 loc = coords.zw + pads; - - int group_id = output_channel / ${F}; - int wOutChannel = output_channel - group_id * ${F}; - - float value = ${I}; - for (int inChannelOffset = 0; inChannelOffset < ${N}; inChannelOffset++) { - int input_channel = group_id * ${N} + inChannelOffset; - for (int wWOff = 0; wWOff < ${B[2]}; wWOff++) { - for (int wHOff = 0; wHOff < ${B[3]}; wHOff++) { - ivec2 wOff = 
ivec2(wWOff * ${T.dilations[0]}, wHOff * ${T.dilations[1]}); - ivec2 wLoc = loc - wOff; - ivec2 wLocIn = wLoc / strides; - if ( - wLocIn * strides == wLoc && - wLocIn.x >= 0 && wLocIn.x < ${C[2]} && - wLocIn.y >= 0 && wLocIn.y < ${C[3]} - ) { - float xVal = getX(batch, input_channel, wLocIn.y, wLocIn.x); - float wVal = getW(input_channel, wOutChannel, wHOff, wWOff); - value += xVal * wVal; - } - } - } - } - ${J} - ${$.output} = vec4(value, .0, .0, .0); - } -`;return Object.assign(Object.assign({},E),{output:{dims:H,type:O[0].type,textureType:p.TextureType.unpacked},shaderSource:X,hasMain:!0})})(g,m,_,b)})})(r,i,c),i),t=(r,i)=>{const c=r.kernelShape.slice();if(r.kernelShape.length===0)for(let _=2;_{const C=_.length-2,B=I.length===0;for(let F=0;F{const i=r.attributes,c=(0,s.parseInternalActivationAttributes)(i),g=i.getString("auto_pad","NOTSET"),m=i.getInts("dilations",[1,1]),b=i.getInt("group",1),_=i.getInts("kernel_shape",[]),w=i.getInts("output_padding",[0,0]),v=i.getInts("output_shape",[]),S=i.getInts("pads",[0,0,0,0]),O=i.getInts("strides",[1,1]);return(0,d.createAttributeWithCacheKey)(Object.assign({autoPad:g,dilations:m,group:b,kernelShape:_,outputPadding:w,outputShape:v,pads:S,strides:O},c))};const e=(r,i)=>{if(!r||r.length!==2&&r.length!==3)throw new Error("Conv requires 2 or 3 inputs");if(r[0].dims.length!==4||r[1].dims.length!==4)throw new Error("currently only support 2-dimensional conv");if(r[0].dims[1]!==r[1].dims[0])throw new Error("FILTER_IN_CHANNEL should be equal to DATA_CHANNEL");const c=r[1].dims[1]*i.group;if(r.length===3&&(r[2].dims.length!==1||r[2].dims[0]!==c))throw new Error("invalid bias");const g=r[0].dims.length-2;if(i.dilations.length!==g)throw new Error(`dilations should be ${g}D`);if(i.strides.length!==g)throw new Error(`strides should be ${g}D`);if(i.pads.length!==2*g)throw new Error(`pads should be ${2*g}D`);if(i.outputPadding.length!==g)throw new Error(`output_padding should be ${g}D`);if(i.kernelShape.length!==0&&i.kernelShape.length!==r[1].dims.length-2)throw new Error("invalid kernel shape");if(i.outputShape.length!==0&&i.outputShape.length!==r[0].dims.length-2)throw new Error("invalid output shape");if(r[0].type!=="float32"||r[1].type!=="float32")throw new Error("ConvTranspose input(X,W) should be float tensor");if(r.length===3&&r[2].type!=="float32")throw new Error("ConvTranspose input(bias) should be float tensor")}},8138:(y,n,u)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.parseConvAttributes=n.conv=n.calculateOutputShape=void 0;const d=u(246),l=u(2517),p=u(4770),s=u(1386),h=u(9828),f=u(2823),a=u(3248),o=u(5623);n.calculateOutputShape=(g,m,b,_,w)=>{const v=g[0],S=g.slice(2),O=S.length,E=m[0],T=m.slice(2).map((C,B)=>C+(C-1)*(b[B]-1)),I=S.map((C,B)=>C+_[B]+_[B+O]).map((C,B)=>Math.floor((C-T[B]+w[B])/w[B]));return[v,E].concat(...I)},n.conv=(g,m,b)=>(c(m,b),t(g,m,b));const t=(g,m,b)=>{const _=i(b,m),w=g.session.pack,v=_.kernelShape[0]===1&&_.kernelShape[1]===1;return _.group>1?[g.run((0,p.createUnpackedGroupedConvProgramInfoLoader)(g,m,_),m)]:v&&w?[e(g,m,_)]:w&&m[0].dims.length===4&&m[0].dims[0]===1&&!v?[(0,s.conv2DPacked)(g,m,_)]:[r(g,m,_)]},e=(g,m,b)=>{const _=m[0].dims,w=m[1].dims,v=(0,n.calculateOutputShape)(_,w,b.dilations,b.pads,b.strides),S=g.reshapeUnpacked(m[0],[_[1],_[2]*_[3]]),O=g.reshapeUnpacked(m[1],[w[0],w[1]]),E=m.length>2?[O,S,m[2]]:[O,S],T=g.run((0,o.createMatmulProgramInfoLoader)(E,b),E);return g.reshapeUnpacked(T,v)},r=(g,m,b)=>{const 
_=m[0].dims,w=m[1].dims,v=(0,n.calculateOutputShape)(_,w,b.dilations,b.pads,b.strides),S=g.run((0,a.createIm2ColProgramInfoLoader)(g,m[0],m[1],v,b),[m[0]]),O=m.length===3?[S,m[1],m[2]]:[S,m[1]];return g.run((0,h.createDotProductProgramInfoLoader)(g,m,v,b),O)},i=(g,m)=>{const b=g.kernelShape.slice();if(g.kernelShape.length===0)for(let v=2;v{const m=g.attributes,b=(0,f.parseInternalActivationAttributes)(m),_=m.getString("auto_pad","NOTSET"),w=m.getInts("dilations",[1,1]),v=m.getInt("group",1),S=m.getInts("kernel_shape",[]),O=m.getInts("pads",[0,0,0,0]),E=m.getInts("strides",[1,1]);return(0,d.createAttributeWithCacheKey)(Object.assign({autoPad:_,dilations:w,group:v,kernelShape:S,pads:O,strides:E},b))};const c=(g,m)=>{if(!g||g.length!==2&&g.length!==3)throw new Error("Conv requires 2 or 3 inputs");if(g[0].dims.length!==4||g[1].dims.length!==4)throw new Error("currently only support 2-dimensional conv");if(g[0].dims[1]!==g[1].dims[1]*m.group)throw new Error("FILTER_IN_CHANNEL should be equal to DATA_CHANNEL");if(g.length===3&&(g[2].dims.length!==1||g[1].dims[0]!==g[2].dims[0]))throw new Error("invalid bias");const b=g[0].dims.length-2;if(m.dilations.length!==b)throw new Error(`dilations should be ${b}D`);if(m.strides.length!==b)throw new Error(`strides should be ${b}D`);if(m.pads.length!==2*b)throw new Error(`pads should be ${2*b}D`);if(m.kernelShape.length!==0&&m.kernelShape.length!==g[1].dims.length-2)throw new Error("invalid kernel shape");if(g[0].type!=="float32"||g[1].type!=="float32")throw new Error("Conv input(X,W) should be float tensor");if(g.length===3&&g[2].type!=="float32")throw new Error("Conv input(bias) should be float tensor")}},5193:(y,n,u)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.parseDepthToSpaceAttributes=n.depthToSpace=void 0;const d=u(3738);n.depthToSpace=(p,s,h)=>{l(s);const f=h.blocksize,a=f*f,o=h.mode==="DCR"?[0,3,4,1,5,2]:[0,1,4,2,5,3],t=h.mode==="DCR"?[s[0].dims[0],f,f,s[0].dims[1]/a,s[0].dims[2],s[0].dims[3]]:[s[0].dims[0],s[0].dims[1]/a,f,f,s[0].dims[2],s[0].dims[3]],e=p.reshapeUnpacked(s[0],t),r={perm:o,cacheKey:`${o}`},[i]=(0,d.transpose)(p,[e],r),c=[s[0].dims[0],s[0].dims[1]/a,s[0].dims[2]*f,s[0].dims[3]*f];return[p.reshapeUnpacked(i,c)]},n.parseDepthToSpaceAttributes=p=>{const s=p.attributes.getInt("blocksize");if(s<1)throw new Error(`blocksize must be >= 1, but got : ${s} for DepthToSpace`);const h=p.attributes.getString("mode","DCR");if(h!=="DCR"&&h!=="CRD")throw new Error(`unrecognized mode: ${h} for DepthToSpace`);return{mode:h,blocksize:s}};const l=p=>{if(p.length!==1)throw new Error(`DepthToSpace expect 1 inputs, but got ${p.length}`);if(p[0].type==="string"||p[0].dims.length!==4)throw new TypeError("DepthToSpace input should be a 4-D numeric tensor")}},9828:(y,n,u)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.createDotProductProgramInfoLoader=void 0;const d=u(2517),l=u(5060),p=u(2039),s=u(2823),h=u(3248);n.createDotProductProgramInfoLoader=(f,a,o,t)=>{const e=((r,i)=>({name:"ConvDotProduct",inputNames:r?["Im2Col","K","B"]:["Im2Col","K"],inputTypes:r?[p.TextureType.unpacked,p.TextureType.packedLastDimension,p.TextureType.unpacked]:[p.TextureType.unpacked,p.TextureType.packedLastDimension],cacheKey:i.activationCacheKey}))(a.length>2,t);return Object.assign(Object.assign({},e),{get:()=>((r,i,c,g,m)=>{const 
b=c[0].dims,_=c[1].dims,w=[_[0],Math.ceil(b[1]*_[2]*_[3]/4)],v=(0,h.calculateIm2ColDims)(b,_,g),[S,O]=r.calculateTextureWidthAndHeight(w,p.TextureType.packedLastDimension),E=d.ShapeUtil.computeStrides(v),[T,I]=r.calculateTextureWidthAndHeight(v,p.TextureType.packedLastDimension),C=g.length,B=c.length<3?"0.0":"_B(b)",F=Math.ceil(b[1]*_[2]*_[3]/4),{activationFunction:N,applyActivation:H}=(0,s.getActivationSnippet)(m),$=(0,l.getGlsl)(r.session.backend.glContext.version),z=` -${N} -float process(int indices[${C}]) { - int b[1]; - b[0] = indices[1]; - int im2col[4]; - im2col[0] = indices[0]; - im2col[1] = indices[2]; - im2col[2] = indices[3]; - int im2colOffset = im2col[0] * ${E[0]} + im2col[1] * ${E[1]} + im2col[2] * ${E[2]}; - int kernelOffset = indices[1] * ${w[1]}; - float value = ${B}; - for (int i = 0; i < ${F}; ++i) { - vec2 im2colCoords = offsetToCoords(im2colOffset, ${T}, ${I}); - vec2 kernelCoords = offsetToCoords(kernelOffset, ${S}, ${O}); - value += dot(${$.texture2D}(Im2Col, im2colCoords), ${$.texture2D}(K, kernelCoords)); - ++im2colOffset; - ++kernelOffset; - } - ${H} - return value; -}`;return Object.assign(Object.assign({},i),{output:{dims:g,type:c[0].type,textureType:p.TextureType.unpacked},shaderSource:z})})(f,e,a,o,t)})}},7992:(y,n,u)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.parseFlattenAttributes=n.flatten=void 0;const d=u(2517);n.flatten=(p,s,h)=>{l(s,h);const f=d.ShapeUtil.flattenShape(s[0].dims,h);return[p.reshapeUnpacked(s[0],f)]},n.parseFlattenAttributes=p=>p.attributes.getInt("axis",1);const l=(p,s)=>{if(!p||p.length!==1)throw new Error("Flatten requires 1 input.");const h=p[0].dims.length;if(h===0)throw new Error("scalar tensor is not supported.");if(s<-h||s>h)throw new Error("Invalid axis");if(p[0].type==="string")throw new Error("string tensor is not supported.")}},2823:(y,n,u)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.parseInternalActivationAttributes=n.getActivationSnippet=void 0;const d=u(2517),l=u(4909);n.getActivationSnippet=function(p){let s;switch(p.activation){case"Relu":s=(0,l.glslRelu)();break;case"Sigmoid":s=(0,l.glslSigmoid)();break;case"Clip":s=(0,l.glslClip)(p.clipMin,p.clipMax);break;default:return{activationFunction:"",applyActivation:""}}const h=s.name;return{activationFunction:s.body,applyActivation:`value = ${h}_(value);`}},n.parseInternalActivationAttributes=p=>{const s=p.getString("activation","");if(s==="Clip"){const[h,f]=p.getFloats("activation_params",[d.MIN_CLIP,d.MAX_CLIP]);return{activation:s,clipMax:f,clipMin:h,activationCacheKey:`${s}:${h},${f}`}}return{activation:s,activationCacheKey:s}}},1253:(y,n,u)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.parseGatherAttributes=n.gather=void 0;const d=u(246),l=u(782),p=u(2517),s=u(2039);n.gather=(o,t,e)=>(a(t,e.axis),[o.run(f(o,t,e),t)]),n.parseGatherAttributes=o=>(0,d.createAttributeWithCacheKey)({axis:o.attributes.getInt("axis",0)});const h={name:"Gather",inputNames:["A","B"],inputTypes:[s.TextureType.unpacked,s.TextureType.unpacked]},f=(o,t,e)=>{const r=Object.assign(Object.assign({},h),{cacheHint:e.cacheKey});return Object.assign(Object.assign({},r),{get:()=>((i,c,g,m)=>{const b=g[0].dims.slice(),_=g[1].dims.slice(),w=new Array(b.length+_.length-1);m=p.ShapeUtil.normalizeAxis(m,b.length);const v=[];for(let O=0;O{if(!o||o.length!==2)throw new Error("Gather requires 2 inputs.");const e=o[0].dims.length;if(e<1)throw new Error("Invalid input shape.");if(t<-e||t>e-1)throw new Error("Invalid axis.");if(l.NUMBER_TYPES.indexOf(o[0].type)===-1)throw new Error("Invaid 
input type.");if(o[1].type!=="int32"&&o[1].type!=="int16")throw new Error("Invaid input type.")}},4776:(y,n,u)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.parseGemmAttributesV11=n.parseGemmAttributesV7=n.gemm=void 0;const d=u(246),l=u(2517),p=u(2039);n.gemm=(o,t,e)=>(a(t,e),[o.run(h(t,e),t)]);const s=(o,t)=>{const e=o.attributes.getInt("transA",0)!==0,r=o.attributes.getInt("transB",0)!==0,i=o.attributes.getFloat("alpha",1),c=o.attributes.getFloat("beta",1);return(0,d.createAttributeWithCacheKey)({transA:e,transB:r,alpha:i,beta:c,isOptionalC:t})};n.parseGemmAttributesV7=o=>s(o,!1),n.parseGemmAttributesV11=o=>s(o,!0);const h=(o,t)=>{const e={name:"Gemm",inputNames:o.length===3?["A","B","C"]:["A","B"],inputTypes:o.length===3?[p.TextureType.unpacked,p.TextureType.unpacked,p.TextureType.unpacked]:[p.TextureType.unpacked,p.TextureType.unpacked],key:t.cacheKey};return Object.assign(Object.assign({},e),{get:()=>f(e,o,t)})},f=(o,t,e)=>{const r=t[0].dims.slice(),i=t[1].dims.slice(),[c,g]=l.GemmUtil.getShapeOfGemmResult(r,e.transA,i,e.transB,t.length===3?t[2].dims:void 0),m=[c,g];if(!m)throw new Error("Can't use gemm on the given tensors");let b=r[r.length-1],_="";e.transA&&(b=r[0]),e.transA&&e.transB?_="value += _A_T(a) * _B_T(b);":e.transA&&!e.transB?_="value += _A_T(a) * _B(b);":!e.transA&&e.transB?_="value += _A(a) * _B_T(b);":e.transA||e.transB||(_="value += _A(a) * _B(b);");const w=m.length,v=` - float process(int indices[${w}]) { - int a[${w}]; - int b[${w}]; - ${t.length===3?`int c[${t[2].dims.length}];`:""} - - copyVec(indices, a); - copyVec(indices, b); - ${t.length===3?"bcastIndices_C(indices, c);":""} - - float value = 0.0; - for (int k=0; k<${b}; ++k) { - a[${w-1}] = k; - b[${w-2}] = k; - ${_} - } - - value = value * alpha; - ${t.length===3?"value += beta * _C(c);":""} - return value; - }`;return Object.assign(Object.assign({},o),{output:{dims:m,type:t[0].type,textureType:p.TextureType.unpacked},variables:[{name:"alpha",type:"float",data:e.alpha},{name:"beta",type:"float",data:e.beta}],shaderSource:v})},a=(o,t)=>{if(!o)throw new Error("Input is missing");if(t.isOptionalC&&(o.length<2||o.length>3))throw new Error("Invaid input shape.");if(!t.isOptionalC&&o.length!==3)throw new Error("Gemm requires 3 inputs");if(o.length===3&&o[2].dims.length!==1&&o[2].dims.length!==2)throw new Error("Invalid input shape of C");if(o[0].type!=="float32"&&o[0].type!=="float64"||o[1].type!=="float32"&&o[1].type!=="float64"||o.length===3&&o[2].type!=="float32"&&o[2].type!=="float64")throw new Error("Invalid input type.");if(o[0].type!==o[1].type||o.length===3&&o[0].type!==o[2].type)throw new Error("Input types are mismatched")}},8555:(y,n,u)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.createPackedIm2ColProgramInfoLoader=void 0;const d=u(5060),l=u(2039),p=u(2827);n.createPackedIm2ColProgramInfoLoader=(s,h,f,a,o)=>{const t=(e=o.cacheKey,{name:"Im2Col (packed)",inputNames:["A"],inputTypes:[l.TextureType.packed],cacheHint:e});var e;return Object.assign(Object.assign({},t),{get:()=>((r,i,c,g,m,b)=>{const _=c.dims,w=g.dims,v=m.length,S=[w[1]*w[2]*w[3],m[2]*m[3]],O=w[2]*w[3],E=(0,p.unpackFromChannel)(),T=(0,d.getGlsl)(r.session.backend.glContext.version);let I="";for(let B=0;B<=1;B++)for(let F=0;F<=1;F++)I+=` - blockIndex = rc.x + ${F}; - pos = rc.y + ${B}; - - if(blockIndex < ${S[1]} && pos < ${S[0]}) { - offsetY = int(blockIndex / (${m[v-1]})) * ${b.strides[0]} - - ${b.pads[0]}; - d0 = offsetY + ${b.dilations[0]} * (imod(pos, ${O}) / ${w[2]}); - - if(d0 < ${_[2]} && d0 >= 0) { - offsetX = 
imod(blockIndex, ${m[v-1]}) * ${b.strides[1]} - - ${b.pads[1]}; - d1 = offsetX + ${b.dilations[1]} * imod(imod(pos, ${O}), ${w[2]}); - - if(d1 < ${_[3]} && d1 >= 0) { - - ch = int(float(pos)/ ${O}.); - innerDims = vec2(d0, d1); - result[${2*B+F}] = getChannel( - getA(0, ch, int(innerDims.x), - int(innerDims.y)), innerDims); - } - } - } - - `;const C=` - ${E} - - void main() { - ivec2 rc = getOutputCoords(); - vec4 result = vec4(0.0); - int blockIndex, pos, offsetY, d0, offsetX, d1, ch; - vec2 innerDims; - ${I} - ${T.output} = result; - } - `;return Object.assign(Object.assign({},i),{output:{dims:S,type:c.type,textureType:l.TextureType.packed},shaderSource:C,hasMain:!0})})(s,t,h,f,a,o)})}},3248:(y,n,u)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.calculateIm2ColDims=n.createIm2ColProgramInfoLoader=void 0;const d=u(2039);n.createIm2ColProgramInfoLoader=(l,p,s,h,f)=>{const a=(o=f.cacheKey,{name:"Im2Col",inputNames:["X"],inputTypes:[d.TextureType.unpacked],cacheHint:o});var o;return Object.assign(Object.assign({},a),{get:()=>((t,e,r,i,c,g)=>{const m=r.dims,b=i.dims,_=c.length,w=(0,n.calculateIm2ColDims)(m,b,c,4),v=` - const int XC = ${m[1]}; - const int XH = ${m[2]}; - const int XW = ${m[3]}; - const int KH = ${g.kernelShape[0]}; - const int KW = ${g.kernelShape[1]}; - const int dilationH = ${g.dilations[0]}; - const int dilationW = ${g.dilations[1]}; - const int strideH = ${g.strides[0]}; - const int strideW = ${g.strides[1]}; - const int padH = ${g.pads[0]}; - const int padW = ${g.pads[1]}; - const int KHKW = KH*KW; - const int XCKHKW = XC * KHKW; - const int outputChannels = 4; - vec4 process(int indices[${_}]) { - int b = indices[0]; // batch size - int oh = indices[1] * strideH - padH; //output height - int ow = indices[2] * strideW - padW; //output width - int p = indices[3] * outputChannels; //patch - vec4 value = vec4(0.0); - for(int i=0; i < outputChannels; ++i) { - if(p < XCKHKW) { - int patchC = p / KHKW; - int patchH = (p - patchC*KHKW) / KW; - int patchW = (p - patchC*KHKW) - patchH * KW; - int xh2 = oh + patchH * dilationH; - int xw2 = ow + patchW * dilationW; - int x[${m.length}]; - x[0] = b; - x[1] = patchC; - x[2] = xh2; - x[3] = xw2; - if(xh2 >= 0 && - xh2 < XH && - xw2 >= 0 && - xw2 < XW) { - value[i] = _X(x); - } - } - ++p; - } - return value; - } - `;return Object.assign(Object.assign({},e),{output:{dims:w,type:r.type,textureType:d.TextureType.packedLastDimension},shaderSource:v})})(0,a,p,s,h,f)})},n.calculateIm2ColDims=(l,p,s,h=4)=>[s[0],s[2],s[3],Math.ceil(l[1]*p[2]*p[3]/h)]},6572:(y,n,u)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.parseImageScalerAttributes=n.imageScaler=void 0;const d=u(246),l=u(2039);n.imageScaler=(a,o,t)=>(f(o),[a.run(s(a,o,t),o)]),n.parseImageScalerAttributes=a=>{const o=a.attributes.getFloat("scale"),t=a.attributes.getFloats("bias");return(0,d.createAttributeWithCacheKey)({scale:o,bias:t})};const p={name:"ImageScaler",inputNames:["X"],inputTypes:[l.TextureType.unpacked]},s=(a,o,t)=>{const e=Object.assign(Object.assign({},p),{cacheHint:t.cacheKey});return Object.assign(Object.assign({},e),{get:()=>((r,i,c,g)=>{const m=c[0].dims.slice(),b=m.length,_=` - ${h(g.bias.length)} - float process(int indices[${b}]) { - return _X(indices) * scale + getBias(bias, indices[1]); - }`;return 
Object.assign(Object.assign({},i),{output:{dims:m,type:c[0].type,textureType:l.TextureType.unpacked},variables:[{name:"bias",type:"float",arrayLength:g.bias.length,data:g.bias},{name:"scale",type:"float",data:g.scale}],shaderSource:_})})(0,e,o,t)})},h=a=>{const o=[`float getBias(float bias[${a}], int channel) {`];for(let t=0;t{if(!a||a.length!==1)throw new Error("ImageScaler requires 1 input.");if(a[0].dims.length!==4)throw new Error("Invalid input shape.");if(a[0].type!=="float32"&&a[0].type!=="float64")throw new Error("Invalid input type.")}},3346:(y,n,u)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.parseInstanceNormalizationAttributes=n.instanceNormalization=void 0;const d=u(5060),l=u(2039);n.instanceNormalization=(o,t,e)=>{a(t);const r=o.run(s(t[0]),t);return[o.run(f(o,t[0],e,r.dims),[t[0],r,t[1],t[2]])]},n.parseInstanceNormalizationAttributes=o=>o.attributes.getFloat("epsilon",1e-5);const p={name:"InstanceNormalization_MeanAndVariance",inputNames:["X"],inputTypes:[l.TextureType.unpacked]},s=o=>Object.assign(Object.assign({},p),{get:()=>((t,e)=>{const r=e.dims.slice(),i=r[1],c=r[2]*r[3],g=[r[0],i],m=` - vec4 process(int[2] indices) { - vec4 v = vec4(0.0); - int a[4]; - a[0] = indices[0]; - a[1] = indices[1]; - float temp = 0.0; - for(int a2=0; a2<${r[2]}; a2++) { - a[2] = a2; - for(int a3=0; a3<${r[3]}; a3++) { - a[3] = a3; - float x = _X(a); - temp += x; - } - } - float mean = temp / float(${c}); - temp = 0.0; - for(int a2=0; a2<${r[2]}; a2++) { - a[2] = a2; - for(int a3=0; a3<${r[3]}; a3++) { - a[3] = a3; - float x = _X(a); - temp += (x - mean) * (x - mean); - } - } - v.r = mean; - v.g = temp / float(${c}); - - return v; - }`;return Object.assign(Object.assign({},t),{output:{dims:g,type:e.type,textureType:l.TextureType.packedLastDimension},shaderSource:m})})(p,o)}),h={name:"InstanceNormalization_ComputeOutput",inputNames:["X","MeanAndVariance","Scale","B"],inputTypes:[l.TextureType.unpacked,l.TextureType.packedLastDimension,l.TextureType.unpacked,l.TextureType.unpacked]},f=(o,t,e,r)=>{const i=Object.assign(Object.assign({},h),{cacheHint:`${e}`});return Object.assign(Object.assign({},i),{get:()=>((c,g,m,b,_)=>{const w=(0,d.getGlsl)(c.session.backend.glContext.version),[v,S]=c.calculateTextureWidthAndHeight(_,l.TextureType.packedLastDimension),[O,E]=[v/4,S],T=` - vec4 get_MeanAndVariance(int[2] mv) { - int offset = indicesToOffset_MeanAndVariance(mv); - vec2 coords = offsetToCoords(offset, ${O}, ${E}); - return ${w.texture2D}(MeanAndVariance, coords); - } - - float process(int[4] indices) { - int mv[2]; - mv[0] = indices[0]; - mv[1] = indices[1]; - vec4 mean_and_variance = get_MeanAndVariance(mv); - float mean = mean_and_variance.r; - float variance = mean_and_variance.g; - - int sb[1]; - sb[0] = indices[1]; - float scale = _Scale(sb); - float b = _B(sb); - - return scale * (_X(indices) - mean) / sqrt(variance + epsilon) + b; - }`;return Object.assign(Object.assign({},g),{output:{dims:m.dims,type:m.type,textureType:l.TextureType.unpacked},variables:[{name:"epsilon",type:"float",data:b}],shaderSource:T})})(o,i,t,e,r)})},a=o=>{if(!o||o.length!==3)throw new Error("InstanceNormalization requires 3 inputs.");const t=o[0],e=o[1],r=o[2];if(t.dims.length<3||e.dims.length!==1||r.dims.length!==1)throw new Error("Invalid input shape.");if(e.dims[0]!==t.dims[1]||r.dims[0]!==t.dims[1])throw new Error("Input shapes are mismatched.");if(t.type!=="float32"&&t.type!=="float64"||e.type!=="float32"&&e.type!=="float64"||r.type!=="float32"&&r.type!=="float64")throw new Error("Invalid input 
type.");if(o[0].dims.length!==4)throw new Error("Only support 4-D input shape.")}},708:(y,n,u)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.createPackedMatmulProgramInfoLoader=void 0;const d=u(2517),l=u(5060),p=u(2039),s=u(9390),h=u(2823),f=u(5623);n.createPackedMatmulProgramInfoLoader=(a,o,t)=>{const e=(r=o.length>2,i=t.activationCacheKey,{name:"MatMul (packed)",inputNames:r?["A","B","Bias"]:["A","B"],inputTypes:r?[p.TextureType.packed,p.TextureType.packed,p.TextureType.packed]:[p.TextureType.packed,p.TextureType.packed],cacheHint:i});var r,i;return Object.assign(Object.assign({},e),{get:()=>((c,g,m,b)=>{const _=m.length>2,w=_?"value += getBiasForMatmul();":"",v=m[0].dims,S=m[1].dims,O=d.BroadcastUtil.calcShape(v,S,!0),E=!d.ShapeUtil.areEqual(m[0].dims,m[1].dims);if(!O)throw new Error("Can't use matmul on the given tensors");const T=v[v.length-1],I=Math.ceil(T/2),C=v.length,B=S.length,F=(0,l.getGlsl)(c.session.backend.glContext.version),N=(0,s.getCoordsDataType)(O.length),H=O.length,$=(0,s.getGlChannels)(),{activationFunction:z,applyActivation:J}=(0,h.getActivationSnippet)(b),X=_?`${(0,f.getBiasForMatmul)(N,$,m[2].dims,O,!0)}`:"",te=E?`${function(Oe,ce,Te,_e){let Le=[],We=[];const Ae=Te[0].dims,Ce=Te[1].dims,Me=Ae.length,Ee=Ce.length,ve=_e.length,je=ve-Me,ze=ve-Ee;Le=Ae.map((Se,Fe)=>`coords.${ce[Fe+je]}`),Le[Me-1]="i*2",Le.join(", "),We=Ce.map((Se,Fe)=>`coords.${ce[Fe+ze]}`),We[Ee-2]="i*2",We.join(", ");const Ue=d.BroadcastUtil.getBroadcastDims(Ae,_e),He=d.BroadcastUtil.getBroadcastDims(Ce,_e),Ke=Ue.map(Se=>`coords.${ce[Se+je]} = 0;`).join(` -`),Ve=He.map(Se=>`coords.${ce[Se+ze]} = 0;`).join(` -`),Be=`int lastDim = coords.${ce[ve-1]}; - coords.${ce[ve-1]} = coords.${ce[ve-2]}; - coords.${ce[ve-2]} = lastDim;`;return` -vec4 getAAtOutCoordsMatmul(int i) { - ${Oe} coords = getOutputCoords(); - ${Be} - ${Ke} - vec4 outputValue = getA(${Le}); - return outputValue; -} - -vec4 getBAtOutCoordsMatmul(int i) { - ${Oe} coords = getOutputCoords(); - ${Be} - ${Ve} - vec4 outputValue = getB(${We}); - return outputValue; -}`}(N,$,m,O)}`:"",ne=E?"getAAtOutCoordsMatmul(i)":`getA(${function(Oe,ce){let Te="";for(let _e=0;_e{Object.defineProperty(n,"__esModule",{value:!0}),n.getBiasForMatmul=n.createMatmulProgramInfoLoader=n.parseMatMulAttributes=n.matMul=void 0;const d=u(2517),l=u(2039),p=u(9390),s=u(2823),h=u(708);function f(t,e){const r=(i=t.length>2,c=e.activationCacheKey,{name:"MatMul",inputNames:i?["A","B","Bias"]:["A","B"],inputTypes:i?[l.TextureType.unpacked,l.TextureType.unpacked,l.TextureType.unpacked]:[l.TextureType.unpacked,l.TextureType.unpacked],cacheHint:c});var i,c;return Object.assign(Object.assign({},r),{get:()=>function(g,m,b){const _=m[0].dims,w=m[1].dims,v=d.BroadcastUtil.calcShape(_,w,!0);if(!v)throw new Error("Can't use matmul on the given tensors");const S=(0,p.getCoordsDataType)(v.length),O=(0,p.getGlChannels)(),{activationFunction:E,applyActivation:T}=(0,s.getActivationSnippet)(b),I=m.length>2,C=I?"value += getBiasForMatmul();":"",B=I?`${o(S,O,m[2].dims,v,!1)}`:"",F=v.length,N=_.length,H=w.length,$=` - ${E} - ${B} - float process(int indices[${F}]) { - int a[${N}]; - int b[${H}]; - bcastMatmulIndices_A(indices, a); - bcastMatmulIndices_B(indices, b); - - float value; - for (int k=0; k<${_[_.length-1]}; ++k) { - a[${N-1}] = k; - b[${H-2}] = k; - value += _A(a) * _B(b); - } - ${C} - ${T} - return value; - }`;return 
Object.assign(Object.assign({},g),{output:{dims:v,type:m[0].type,textureType:l.TextureType.unpacked},shaderSource:$})}(r,t,e)})}n.matMul=(t,e,r)=>(a(e),t.session.pack?[t.run((0,h.createPackedMatmulProgramInfoLoader)(t,e,r),e)]:[t.run(f(e,r),e)]),n.parseMatMulAttributes=t=>(0,s.parseInternalActivationAttributes)(t.attributes),n.createMatmulProgramInfoLoader=f;const a=t=>{if(!t||t.length!==2)throw new Error("MatMul requires 2 inputs.");if(t[0].dims[t[0].dims.length-1]!==t[1].dims[t[1].dims.length-2])throw new Error("shared dimension does not match.");if(t[0].type!=="float32"&&t[0].type!=="float64"||t[1].type!=="float32"&&t[1].type!=="float64")throw new Error("inputs should be float type");if(t[0].type!==t[1].type)throw new Error("inputs types should match")};function o(t,e,r,i,c){let g="";const m=r.length,b=i.length,_=b-m;g=b<2&&m>0?"coords":r.map((S,O)=>`coords.${e[O+_]}`).join(", ");const w=d.BroadcastUtil.getBroadcastDims(r,i).map(S=>`coords.${e[S+_]} = 0;`).join(` -`);let v="vec4(outputValue.xx, outputValue.yy)";return d.ShapeUtil.size(r)===1&&(v="vec4(outputValue.x)"),c?` -vec4 getBiasForMatmul() { - ${t} coords = getOutputCoords(); - ${w} - vec4 outputValue = getBias(${g}); - return ${v}; -}`:` -float getBiasForMatmul() { - ${t} coords = getOutputCoords(); - ${w} - return getBias(coords.x); -}`}n.getBiasForMatmul=o},2403:(y,n,u)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.createPackProgramInfoLoader=void 0;const d=u(5060),l=u(2039),p=u(9390),s=u(2827),h={name:"pack",inputNames:["A"],inputTypes:[l.TextureType.unpackedReversed]};n.createPackProgramInfoLoader=(f,a)=>Object.assign(Object.assign({},h),{get:()=>((o,t)=>{const e=(0,d.getGlsl)(o.session.backend.glContext.version),r=t.dims,i=r.length,c=t.dims.length,g=(0,p.getCoordsDataType)(c),m=(0,s.getChannels)("rc",c),b=(_=c,w=m,v=r[r.length-2],S=r[r.length-1],_===0||_===1?"":` - int r = ${w[_-2]}; - int c = ${w[_-1]}; - int rp1 = ${w[_-2]} + 1; - int cp1 = ${w[_-1]} + 1; - bool rEdge = rp1 >= ${S}; - bool cEdge = cp1 >= ${v}; - `);var _,w,v,S;let O;O=i===0?[1,1]:i===1?[r[0],1]:[r[c-1],r[c-2]];const E=function(C,B,F){if(C===0)return"false";if(C===1)return`rc > ${B[0]}`;let N="";for(let H=C-2;H= ${B[H-C+2]}`,H= ${C[0]} ? 0. : getA(rc + 1), - 0, 0`;let N="";if(F>2)for(let H=0;H{Object.defineProperty(n,"__esModule",{value:!0}),n.unpackFromChannel=n.getChannels=n.getVecChannels=void 0;const d=u(9390);function l(p,s){return(0,d.getGlChannels)(s).map(h=>`${p}.${h}`)}n.getVecChannels=l,n.getChannels=function(p,s){return s===1?[p]:l(p,s)},n.unpackFromChannel=function(){return` - float getChannel(vec4 frag, int dim) { - int modCoord = imod(dim, 2); - return modCoord == 0 ? frag.r : frag.g; - } - - float getChannel(vec4 frag, vec2 innerDims) { - vec2 modCoord = mod(innerDims, 2.); - return modCoord.x == 0. ? - (modCoord.y == 0. ? frag.r : frag.g) : - (modCoord.y == 0. ? 
frag.b : frag.a); - } - `}},2870:(y,n,u)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.parsePadAttributesV11=n.padV11=n.parsePadAttributesV2=n.padV2=void 0;const d=u(246),l=u(2517),p=u(5060),s=u(2039),h={name:"Pad",inputNames:["A"],inputTypes:[s.TextureType.unpacked]};n.padV2=(g,m,b)=>(o(m),[g.run(Object.assign(Object.assign({},h),{cacheHint:b.cacheKey,get:()=>a(g,m[0],b)}),m)]),n.parsePadAttributesV2=g=>{const m=g.attributes.getString("mode","constant"),b=g.attributes.getFloat("value",0),_=g.attributes.getInts("pads");return(0,d.createAttributeWithCacheKey)({mode:m,value:b,pads:_})},n.padV11=(g,m,b)=>{t(m);const _=f(g,m,b);return(0,n.padV2)(g,[m[0]],_)},n.parsePadAttributesV11=g=>g.attributes.getString("mode","constant");const f=(g,m,b)=>{if(!g.session.isInitializer(m[1].dataId)||m.length>=3&&!g.session.isInitializer(m[2].dataId))throw new Error("dynamic pad attributes are not allowed");const _=Array.from(m[1].integerData),w=m.length>=3?m[2].floatData[0]:0;return(0,d.createAttributeWithCacheKey)({mode:b,pads:_,value:w})},a=(g,m,b)=>{const _=l.ShapeUtil.padShape(m.dims.slice(),b.pads),w=_.length,v=` - ${e(g,m,b)} - float process(int[${w}] indices) { - return padA(indices); - }`;return{name:"Pad",inputNames:["A"],inputTypes:[s.TextureType.unpacked],output:{dims:_,type:m.type,textureType:s.TextureType.unpacked},shaderSource:v}},o=g=>{if(!g||g.length!==1)throw new Error("Pad requires 1 input");if(g[0].type!=="float32"&&g[0].type!=="float64")throw new Error("Invalid input type.")},t=g=>{if(!g||g.length!==2&&g.length!==3)throw new Error("Pad requires 2 or 3 inputs");if(g[1].type!=="int32")throw new Error("Invalid input type.");if(g.length>=3&&g[2].type==="string")throw new Error("Invalid input type.")},e=(g,m,b)=>{const _=(0,p.getGlsl)(g.session.backend.glContext.version),[w,v]=g.calculateTextureWidthAndHeight(m.dims,s.TextureType.unpacked),S=l.ShapeUtil.computeStrides(m.dims);switch(b.mode){case"constant":return r(_,m.dims,S,w,v,b.pads,b.value);case"reflect":return i(_,m.dims,S,w,v,b.pads);case"edge":return c(_,m.dims,S,w,v,b.pads);default:throw new Error("Invalid mode")}},r=(g,m,b,_,w,v,S)=>{const O=m.length;let E="";for(let T=O-1;T>=0;--T)E+=` - k = m[${T}] - ${v[T]}; - if (k < 0) return constant; - if (k >= ${m[T]}) return constant; - offset += k * ${b[T]}; - `;return` - float padA(int m[${O}]) { - const float constant = float(${S}); - int offset = 0; - int k = 0; - ${E} - vec2 coords = offsetToCoords(offset, ${_}, ${w}); - float value = getColorAsFloat(${g.texture2D}(A, coords)); - return value; - } - `},i=(g,m,b,_,w,v)=>{const S=m.length;let O="";for(let E=S-1;E>=0;--E)O+=` - k = m[${E}] - ${v[E]}; - if (k < 0) { k = -k; } - { - const int _2n_1 = ${2*(m[E]-1)}; - k = int( mod( float(k), float(_2n_1) ) ) ; - if(k >= ${m[E]}) { k = _2n_1 - k; } - } - offset += k * ${b[E]}; - `;return` - float padA(int m[${S}]) { - int offset = 0; - int k = 0; - ${O} - vec2 coords = offsetToCoords(offset, ${_}, ${w}); - float value = getColorAsFloat(${g.texture2D}(A, coords)); - return value; - } - `},c=(g,m,b,_,w,v)=>{const S=m.length;let O="";for(let E=S-1;E>=0;--E)O+=` - k = m[${E}] - ${v[E]}; - if (k < 0) k = 0; - if (k >= ${m[E]}) k = ${m[E]-1}; - offset += k * ${b[E]}; - `;return` - float padA(int m[${S}]) { - int offset = 0; - int k = 0; - ${O} - vec2 coords = offsetToCoords(offset, ${_}, ${w}); - float value = getColorAsFloat(${g.texture2D}(A, coords)); - return value; - } - 
`}},2143:(y,n,u)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.globalMaxPool=n.parseMaxPoolAttributes=n.maxPool=n.parseGlobalAveragePoolAttributes=n.globalAveragePool=n.parseAveragePoolAttributes=n.averagePool=void 0;const d=u(246),l=u(2517),p=u(2039);n.averagePool=(c,g,m)=>{t(g);const b={name:"AveragePool",inputNames:["X"],inputTypes:[p.TextureType.unpacked],cacheHint:m.cacheKey};return[c.run(Object.assign(Object.assign({},b),{get:()=>s(g,b,!1,m)}),g)]},n.parseAveragePoolAttributes=c=>{const g=c.attributes.getString("auto_pad","NOTSET"),m=c.attributes.getInt("ceil_mode",0),b=c.attributes.getInt("count_include_pad",0)!==0,_=c.attributes.getInts("kernel_shape"),w=c.attributes.getInts("strides",[]),v=c.attributes.getInts("pads",[]);if(m!==0)throw new Error("using ceil() in shape computation is not yet supported for AveragePool");return(0,d.createAttributeWithCacheKey)({autoPad:g,ceilMode:m,countIncludePad:b,kernelShape:_,strides:w,pads:v})};const s=(c,g,m,b)=>{const[_,w]=f(c,b,m),v=l.ShapeUtil.size(_.kernelShape);let S="";_.countIncludePad?S+=`value /= float(${v});`:S+=`value /= float(${v} - pad);`;const O=` - ${e(c[0].dims,_,"value += _X(x);",S,"0.0")} - `;return Object.assign(Object.assign({},g),{output:{dims:w,type:c[0].type,textureType:p.TextureType.unpacked},shaderSource:O})};n.globalAveragePool=(c,g,m)=>{t(g);const b={name:"GlobalAveragePool",inputNames:["X"],inputTypes:[p.TextureType.unpacked],cacheHint:`${m.countIncludePad}`};return[c.run(Object.assign(Object.assign({},b),{get:()=>s(g,b,!0,m)}),g)]},n.parseGlobalAveragePoolAttributes=c=>{const g=c.attributes.getInt("count_include_pad",0)!==0;return(0,d.createAttributeWithCacheKey)({autoPad:"",ceilMode:0,countIncludePad:g,kernelShape:[],strides:[],pads:[]})},n.maxPool=(c,g,m)=>{t(g);const b={name:"MaxPool",inputNames:["X"],inputTypes:[p.TextureType.unpacked],cacheHint:m.cacheKey};return[c.run(Object.assign(Object.assign({},b),{get:()=>h(g,b,!1,m)}),g)]},n.parseMaxPoolAttributes=c=>{const g=c.attributes.getString("auto_pad","NOTSET"),m=c.attributes.getInt("ceil_mode",0),b=c.attributes.getInts("kernel_shape"),_=c.attributes.getInts("strides",[]),w=c.attributes.getInts("pads",[]),v=c.attributes.getInt("storage_order",0),S=c.attributes.getInts("dilations",[]);if(v!==0)throw new Error("column major storage order is not yet supported for MaxPool");if(m!==0)throw new Error("using ceil() in shape computation is not yet supported for MaxPool");return(0,d.createAttributeWithCacheKey)({autoPad:g,ceilMode:m,countIncludePad:!1,kernelShape:b,strides:_,pads:w,storageOrder:v,dilations:S})};const h=(c,g,m,b)=>{const[_,w]=f(c,b,m),v=` - ${e(c[0].dims,_,` - value = max(_X(x), value); - `,"","-1e5")} - `;return Object.assign(Object.assign({},g),{output:{dims:w,type:c[0].type,textureType:p.TextureType.unpacked},shaderSource:v})},f=(c,g,m)=>{const b=c[0].dims.slice(),_=Object.hasOwnProperty.call(g,"dilations"),w=g.kernelShape.slice(),v=g.strides.slice(),S=_?g.dilations.slice():[],O=g.pads.slice();l.PoolConvUtil.adjustPoolAttributes(m,b,w,v,S,O);const E=l.PoolConvUtil.computePoolOutputShape(m,b,v,S,w,O,g.autoPad),T=Object.assign({},g);return 
_?Object.assign(T,{kernelShape:w,strides:v,pads:O,dilations:S,cacheKey:g.cacheKey}):Object.assign(T,{kernelShape:w,strides:v,pads:O,cacheKey:g.cacheKey}),[T,E]},a={autoPad:"",ceilMode:0,countIncludePad:!1,kernelShape:[],strides:[],pads:[],storageOrder:0,dilations:[],cacheKey:""},o={name:"GlobalMaxPool",inputNames:["X"],inputTypes:[p.TextureType.unpacked]};n.globalMaxPool=(c,g)=>(t(g),[c.run(Object.assign(Object.assign({},o),{get:()=>h(g,o,!0,a)}),g)]);const t=c=>{if(!c||c.length!==1)throw new Error("Pool ops requires 1 input.");if(c[0].type!=="float32"&&c[0].type!=="float64")throw new Error("Invalid input type.")},e=(c,g,m,b,_)=>{const w=c.length;if(g.kernelShape.length<=2){const v=g.kernelShape[g.kernelShape.length-1],S=g.strides[g.strides.length-1],O=g.pads[g.pads.length/2-1],E=g.pads[g.pads.length-1],T=c[w-1];let I="",C="",B="";if(I=O+E!==0?` - for (int i = 0; i < ${v}; i++) { - x[${w} - 1] = indices[${w} - 1] * ${S} - ${O} + i; - if (x[${w} - 1] < 0 || x[${w} - 1] >= ${T}) { - pad++; - continue; - } - ${m} - }`:` - for (int i = 0; i < ${v}; i++) { - x[${w} - 1] = indices[${w} - 1] * ${S} - ${O} + i; - ${m} - }`,g.kernelShape.length===2){const F=g.kernelShape[g.kernelShape.length-2],N=g.strides[g.strides.length-2],H=g.pads[g.pads.length/2-2],$=g.pads[g.pads.length-2],z=c[w-2];C=H+$!==0?` - for (int j = 0; j < ${F}; j++) { - x[${w} - 2] = indices[${w} - 2] * ${N} - ${H} + j; - if (x[${w} - 2] < 0 || x[${w} - 2] >= ${z}) { - pad+= ${v}; - continue; - } - `:` - for (int j = 0; j < ${F}; j++) { - x[${w} - 2] = indices[${w} - 2] * ${N} - ${H} + j; - `,B=` - } - `}return` - float process(int indices[${w}]) { - int x[${w}]; - copyVec(indices, x); - - float value = ${_}; - int pad = 0; - ${C} - ${I} - ${B} - ${b} - return value; - } - `}{const v=l.ShapeUtil.size(g.kernelShape),S=l.ShapeUtil.computeStrides(g.kernelShape),O=S.length,E=g.pads.length,T=i(O),I=r(c,"inputDims"),C=r(g.pads,"pads"),B=r(S,"kernelStrides"),F=r(g.strides,"strides");let N="";return N=g.pads.reduce((H,$)=>H+$)?` - if (x[j] >= inputDims[j] || x[j] < 0) { - pad++; - isPad = true; - break; - } - } - if (!isPad) { - ${m} - }`:` - } - ${m} - `,` - ${T} - float process(int indices[${w}]) { - int x[${w}]; - copyVec(indices, x); - int offset[${O}]; - int pads[${E}]; - int inputDims[${w}]; - int kernelStrides[${O}]; - int strides[${O}]; - ${C} - ${I} - ${F} - ${B} - - float value = ${_}; - int pad = 0; - bool isPad = false; - for (int i = 0; i < ${v}; i++) { - offsetToIndices(i, kernelStrides, offset); - isPad = false; - for (int j = ${w} - ${O}; j < ${w}; j++) { - x[j] = indices[j] * strides[j - ${w} + ${O}] - + offset[j - ${w} + ${O}] - pads[j - 2]; - ${N} - } - ${b} - - return value; - } - `}},r=(c,g)=>{let m="";for(let b=0;b` - void offsetToIndices(int offset, int[${c}] strides, out int[${c}] indices) { - if (${c} == 0) { - return; - } - for (int i = 0; i < ${c} - 1; ++i) { - indices[i] = offset / strides[i]; - offset -= indices[i] * strides[i]; - } - indices[${c} - 1] = offset; - }`},4939:(y,n,u)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.reduceLogSumSquare=n.reduceLogSum=n.reduceProd=n.reduceMin=n.reduceMax=n.reduceMean=n.reduceSum=n.parseReduceAttributes=void 0;const d=u(246),l=u(782),p=u(2517),s=u(2039),h=(o,t,e,r,i)=>{a(t);const c={name:r,inputNames:["A"],inputTypes:[s.TextureType.unpacked]};return[o.run(Object.assign(Object.assign({},c),{cacheHint:e.cacheKey,get:()=>f(o,t,e,r,i,c)}),t)]};n.parseReduceAttributes=o=>{const 
t=o.attributes.getInts("axes",[]),e=o.attributes.getInt("keepdims",1)===1;return(0,d.createAttributeWithCacheKey)({axes:t,keepDims:e})};const f=(o,t,e,r,i,c)=>{const g=[],m=t[0].dims.length||1,b=[],_=p.ShapeUtil.normalizeAxes(e.axes,t[0].dims.length),w=i(t,_);let v=w[1];for(let O=0;O=0||_.length===0?(e.keepDims&&g.push(1),v=` - for(int j${O} = 0; j${O} < ${t[0].dims[O]}; j${O}++) { - inputIdx[${O}] = j${O}; - ${v} - }`):(b.push(`inputIdx[${O}] = outputIdx[${g.length}];`),g.push(t[0].dims[O]));const S=` - float process(int outputIdx[${g.length||1}]) { - float value; // final result - int inputIdx[${m}]; // addressing input data - ${b.join(` -`)} - ${w[0]} // init ops for reduce max/min - ${v} - ${w[2]} // final computation for reduce mean - return value; - }`;return Object.assign(Object.assign({},c),{output:{dims:g,type:t[0].type,textureType:s.TextureType.unpacked},shaderSource:S})},a=o=>{if(!o||o.length!==1)throw new Error("Reduce op requires 1 input.");if(l.NUMBER_TYPES.indexOf(o[0].type)===-1)throw new Error("Invalid input type.")};n.reduceSum=(o,t,e)=>h(o,t,e,"ReduceSum",()=>["value = 0.0;","value += _A(inputIdx);",""]),n.reduceMean=(o,t,e)=>h(o,t,e,"ReduceMean",(r,i)=>{let c=1;for(let g=0;g=0||i.length===0)&&(c*=r[0].dims[g]);return["value = 0.0;","value += _A(inputIdx);",`value /= ${c}.;`]}),n.reduceMax=(o,t,e)=>h(o,t,e,"ReduceMax",(r,i)=>{const c=[];for(let g=0;g=0||i.length===0)&&c.push(`inputIdx[${g}] = 0;`);return[`${c.join(` -`)} -value = _A(inputIdx);`,"value = max(value, _A(inputIdx));",""]}),n.reduceMin=(o,t,e)=>h(o,t,e,"ReduceMin",(r,i)=>{const c=[];for(let g=0;g=0||i.length===0)&&c.push(`inputIdx[${g}] = 0;`);return[`${c.join(` -`)} -value = _A(inputIdx);`,"value = min(value, _A(inputIdx));",""]}),n.reduceProd=(o,t,e)=>h(o,t,e,"ReduceProd",()=>["value = 1.0;","value *= _A(inputIdx);",""]),n.reduceLogSum=(o,t,e)=>h(o,t,e,"ReduceLogSum",()=>["value = 0.0;","value += _A(inputIdx);","value = log(value);"]),n.reduceLogSumSquare=(o,t,e)=>h(o,t,e,"ReduceLogSumSquare",()=>["float t; value = 0.0;","t = _A(inputIdx); value += t * t;",""])},7019:(y,n,u)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.isReshapeCheap=n.processDims3D=n.createPackedReshape3DProgramInfoLoader=void 0;const d=u(2517),l=u(5060),p=u(2039),s=u(2827);n.createPackedReshape3DProgramInfoLoader=(h,f,a)=>{const o=(t=>({name:"Reshape (packed)",inputTypes:[p.TextureType.packed],inputNames:["A"],cacheHint:`${t}`}))(a);return Object.assign(Object.assign({},o),{get:()=>((t,e,r,i)=>{const c=e.dims,g=i;let m="";for(let w=0;w<4;w++){let v="";switch(w){case 0:v="outputCoords = rc;";break;case 1:v="outputCoords = ivec3(rc.x, rc.y+1, rc.z);";break;case 2:v="outputCoords = ivec3(rc.x, rc.y, rc.z+1);";break;case 3:v="outputCoords = ivec3(rc.x, rc.y+1, rc.z+1);";break;default:throw new Error}m+=` - ${v} - ${w>0?"if(outputCoords.y < rows && outputCoords.z < cols){":""} - int flattenedIndex = getFlattenedIndex(outputCoords); - - ivec3 inputRC = inputCoordsFromReshapedOutCoords(flattenedIndex); - vec2 innerDims = vec2(float(inputRC.y),float(inputRC.z)); - - result[${w}] = getChannel(getA(inputRC.x, inputRC.y, inputRC.z), innerDims); - - ${w>0?"}":""} - `}const b=(0,l.getGlsl)(t.session.backend.glContext.version),_=` - ${function(w){const v=d.ShapeUtil.computeStrides(w),S=["b","r","c"],O="index";return` - ivec3 inputCoordsFromReshapedOutCoords(int index) { - ${v.map((E,T)=>`int ${S[T]} = ${O} / ${E}; ${T===v.length-1?`int ${S[T+1]} = ${O} - ${S[T]} * ${E}`:`index -= ${S[T]} * ${E}`};`).join("")} - return ivec3(b, r, c); - } - 
`}(c)} - ${function(w){const v=d.ShapeUtil.computeStrides(w);return` - int getFlattenedIndex(ivec3 coords) { - // reverse y, z order - return coords.x * ${v[0]} + coords.z * ${v[1]} + coords.y; - } -`}(g)} - ${(0,s.unpackFromChannel)()} - - void main() { - ivec3 rc = getOutputCoords(); - - vec4 result = vec4(0.0); - - ivec3 outputCoords; - int rows = ${g[2]}; - int cols = ${g[1]}; - - ${m} - ${b.output} = result; - } - `;return Object.assign(Object.assign({},r),{output:{dims:g,type:e.type,textureType:p.TextureType.packed},shaderSource:_,hasMain:!0})})(h,f,o,a)})},n.processDims3D=function(h){if(h.length===0)return[1,1,1];let f=1;for(let a=0;a1?h[h.length-2]:1,h[h.length-1]]},n.isReshapeCheap=function(h,f){let a=!1;return a=h.length===0||f.length===0||(h.length<2||f.length<2?h[h.length-1]===f[f.length-1]:h[h.length-1]===f[f.length-1]&&h[h.length-2]===f[f.length-2]),a}},718:(y,n,u)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.reshape=void 0;const d=u(2517);n.reshape=(l,p)=>{const s=d.ShapeUtil.calculateReshapedDims(p[0].dims,p[1].integerData);return l.session.pack?[l.reshapePacked(p[0],s)]:[l.reshapeUnpacked(p[0],s)]}},2268:(y,n,u)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.parseResizeAttributesV11=n.parseResizeAttributesV10=n.resize=void 0;const d=u(5060),l=u(2039),p=u(9390),s=u(2827),h=u(9793),f={name:"Resize",inputNames:["A"],inputTypes:[l.TextureType.packed]};n.resize=(r,i,c)=>((0,h.validateInputs)(i,c),[r.run(Object.assign(Object.assign({},f),{cacheHint:c.cacheKey,get:()=>a(r,i,c)}),i)]),n.parseResizeAttributesV10=r=>(0,h.parseUpsampleAttributes)(r,10),n.parseResizeAttributesV11=r=>(0,h.parseUpsampleAttributes)(r,11);const a=(r,i,c)=>{const g=(0,d.getGlsl)(r.session.backend.glContext.version),[m,b]=o(i,c);if(m.every(N=>N===1)&&c.coordinateTransformMode!=="tf_crop_and_resize")return Object.assign(Object.assign({},f),{output:{dims:b,type:i[0].type,textureType:l.TextureType.packed},hasMain:!0,shaderSource:`void main() { - vec4 v = ${g.texture2D}(X, TexCoords); - ${g.output} = v; - }`});const _=b.length;if(_<2)throw new Error(`output dimension should be at least 2, but got ${_}`);const w=b[_-2],v=b[_-1],S=i[0].dims;if(_!==S.length)throw new Error(`output dimension should match input ${S.length}, but got ${_}`);const O=S[_-2],E=S[_-1],T=m[_-2],I=m[_-1];let C="";if(c.mode!=="linear")throw new Error(`resize (packed) does not support mode: '${c.mode}'`);switch(c.coordinateTransformMode){case"asymmetric":C=` - vec4 getSourceFracIndex(ivec4 coords) { - return vec4(coords) / scaleWHWH; - } - `;break;case"half_pixel":C=` - vec4 getSourceFracIndex(ivec4 coords) { - return (vec4(coords) + 0.5) / scaleWHWH - 0.5; - } - `;break;case"pytorch_half_pixel":C=` - vec4 getSourceFracIndex(ivec4 coords) { - vec4 fcoords = vec4(coords); - return vec4( - ${v}.0 > 1.0 ? (fcoords.x + 0.5) / scaleWHWH.x - 0.5 : 0.0, - ${w}.0 > 1.0 ? (fcoords.y + 0.5) / scaleWHWH.y - 0.5 : 0.0, - ${v}.0 > 1.0 ? (fcoords.z + 0.5) / scaleWHWH.z - 0.5 : 0.0, - ${w}.0 > 1.0 ? 
(fcoords.w + 0.5) / scaleWHWH.w - 0.5 : 0.0 - ); - } - `;break;case"align_corners":C=` - vec4 getSourceFracIndex(ivec4 coords) { - vec4 resized = vec4(${v}.0 - 1.0, ${w}.0 - 1.0, ${v}.0 - 1.0, - ${w}.0 - 1.0); - vec4 original = vec4(${E}.0 - 1.0, ${O}.0 - 1.0, ${E}.0 - 1.0, - ${O}.0 - 1.0); - vec4 new_scale = original / resized; - return vec4(coords) * new_scale; - } - `;break;default:throw new Error(`resize (packed) does not support coordinateTransformMode: '${c.coordinateTransformMode}'`)}const B=(0,p.getCoordsDataType)(_),F=` - const vec2 inputWH = vec2(${O}.0, ${E}.0); - const vec4 scaleWHWH = vec4(float(${T}), float(${I}), float(${T}), float(${I})); - ${(0,s.unpackFromChannel)()} - ${C} - float getAValue(int x10, int r, int c, int d) { - return getChannel(getA(x10, r, c, d), vec2(c, d)); - } - void main() { - ${B} rc = getOutputCoords(); - - int batch = rc[0]; - int depth = rc[1]; - - // retrieve the 4 coordinates that is used in the 4 packed output values. - ivec4 coords = ivec4(rc.wz, rc.w + 1, rc.z + 1); - - // calculate the source index in fraction - vec4 sourceFrac = getSourceFracIndex(coords); - - // get the lower and upper bound of the 4 values that will be packed into one texel. - ivec4 x00 = ivec4(max(sourceFrac.xy, vec2(0.0)), min(inputWH - 1.0, ceil(sourceFrac.xy))); - ivec4 x01 = ivec4(max(sourceFrac.xw, vec2(0.0)), min(inputWH - 1.0, ceil(sourceFrac.xw))); - ivec4 x10 = ivec4(max(sourceFrac.zy, vec2(0.0)), min(inputWH - 1.0, ceil(sourceFrac.zy))); - ivec4 x11 = ivec4(max(sourceFrac.zw, vec2(0.0)), min(inputWH - 1.0, ceil(sourceFrac.zw))); - - bool hasNextRow = rc.w < ${w-1}; - bool hasNextCol = rc.z < ${v-1}; - - // pack x00, x01, x10, x11's top-left corner into one vec4 structure - vec4 topLeft = vec4( - getAValue(batch, depth, x00.x, x00.y), - hasNextCol ? getAValue(batch, depth, x01.x, x01.y) : 0.0, - hasNextRow ? getAValue(batch, depth, x10.x, x10.y) : 0.0, - (hasNextRow && hasNextCol) ? getAValue(batch, depth, x11.x, x11.y) : 0.0); - - // pack x00, x01, x10, x11's top-right corner into one vec4 structure - vec4 topRight = vec4( - getAValue(batch, depth, x00.x, x00.w), - hasNextCol ? getAValue(batch, depth, x01.x, x01.w) : 0.0, - hasNextRow ? getAValue(batch, depth, x10.x, x10.w) : 0.0, - (hasNextRow && hasNextCol) ? getAValue(batch, depth, x11.x, x11.w) : 0.0); - - // pack x00, x01, x10, x11's bottom-left corner into one vec4 structure - vec4 bottomLeft = vec4( - getAValue(batch, depth, x00.z, x00.y), - hasNextCol ? getAValue(batch, depth, x01.z, x01.y) : 0.0, - hasNextRow ? getAValue(batch, depth, x10.z, x10.y) : 0.0, - (hasNextRow && hasNextCol) ? getAValue(batch, depth, x11.z, x11.y) : 0.0); - - // pack x00, x01, x10, x11's bottom-right corner into one vec4 structure - vec4 bottomRight = vec4( - getAValue(batch, depth, x00.z, x00.w), - hasNextCol ? getAValue(batch, depth, x01.z, x01.w) : 0.0, - hasNextRow ? getAValue(batch, depth, x10.z, x10.w) : 0.0, - (hasNextRow && hasNextCol) ? 
getAValue(batch, depth, x11.z, x11.w) : 0.0); - - // calculate the interpolation fraction on u and v direction - vec4 frac = vec4(sourceFrac) - floor(sourceFrac); - vec4 clampFrac = clamp(frac, vec4(0.0), vec4(1.0)); - - vec4 top = mix(topLeft, topRight, clampFrac.ywyw); - vec4 bottom = mix(bottomLeft, bottomRight, clampFrac.ywyw); - vec4 newValue = mix(top, bottom, clampFrac.xxzz); - - ${g.output} = vec4(newValue); - } - `;return Object.assign(Object.assign({},f),{output:{dims:b,type:i[0].type,textureType:l.TextureType.packed},hasMain:!0,shaderSource:F})},o=(r,i)=>{const c=r[0].dims;let g,m=i.scales;if(m.length===0){const _=r[i.scalesInputIdx];if(_&&_.size!==0){if(r[i.sizesInputIdx])throw new Error("Only one of scales or sizes must be provided as input.");m=t(_,i.mode,i.isResize)}else{const w=r[i.sizesInputIdx];if(!w||w.size===0)throw new Error("Either scales or sizes MUST be provided as input.");g=Array.from(w.integerData),m=e(g,c,i.mode,i.isResize)}}else if(r[i.sizesInputIdx])throw new Error("Only one of scales or sizes must be provided as input.");const b=g||c.map((_,w)=>Math.floor(_*m[w]));return[m,b]},t=(r,i,c)=>{const g=Array.from(r.floatData);return(0,h.scalesValidation)(g,i,c),g},e=(r,i,c,g)=>{const m=i.length,b=new Array(m);for(let _=0,w=m;_{Object.defineProperty(n,"__esModule",{value:!0}),n.shape=void 0;const d=u(9162);n.shape=(p,s)=>(l(s),[new d.Tensor([s[0].dims.length],"int32",void 0,void 0,new Int32Array(s[0].dims))]);const l=p=>{if(!p||p.length!==1)throw new Error("Shape requires 1 input.")}},2278:(y,n,u)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.sliceV10=n.parseSliceAttributes=n.slice=void 0;const d=u(246),l=u(782),p=u(2517),s=u(2039),h={name:"Slice",inputNames:["A"],inputTypes:[s.TextureType.unpacked]};n.slice=(e,r,i)=>(a(r),[e.run(Object.assign(Object.assign({},h),{cacheHint:i.cacheKey,get:()=>f(e,r[0],i)}),r)]),n.parseSliceAttributes=e=>{const r=e.attributes.getInts("starts"),i=e.attributes.getInts("ends"),c=e.attributes.getInts("axes",[]);return(0,d.createAttributeWithCacheKey)({starts:r,ends:i,axes:c})};const f=(e,r,i)=>{const c=i.axes.length===0?r.dims.slice(0).map((S,O)=>O):i.axes,g=p.ShapeUtil.normalizeAxes(c,r.dims.length),m=i.starts.map((S,O)=>S>r.dims[g[O]]-1?r.dims[g[O]]:p.ShapeUtil.normalizeAxis(S,r.dims[g[O]])),b=i.ends.map((S,O)=>S>r.dims[g[O]]-1?r.dims[g[O]]:p.ShapeUtil.normalizeAxis(S,r.dims[g[O]])),_=r.dims.slice(),w=[];for(let S=0;S0&&w.push(`outputIdx[${g[S]}] += ${m[S]};`);const v=` - float process(int outputIdx[${_.length}]) { - ${w.join(` - `)} - return _A(outputIdx); - }`;return Object.assign(Object.assign({},h),{output:{dims:_,type:r.type,textureType:s.TextureType.unpacked},shaderSource:v})},a=e=>{if(!e||e.length!==1)throw new Error("Slice requires 1 input.");if(l.NUMBER_TYPES.indexOf(e[0].type)===-1)throw new Error("Invalid input type.")};n.sliceV10=(e,r)=>{t(r);const i=o(e,r);return[e.run(Object.assign(Object.assign({},h),{cacheHint:i.cacheKey,get:()=>f(e,r[0],i)}),[r[0]])]};const o=(e,r)=>{if(!e.session.isInitializer(r[1].dataId)||!e.session.isInitializer(r[2].dataId)||r.length>=4&&!e.session.isInitializer(r[3].dataId)||r.length>=5&&!e.session.isInitializer(r[4].dataId))throw new Error("dynamic slice attributes are not allowed");if(r.length>=5&&r[4].integerData.some(m=>m!==1))throw new Error("currently non-1 steps is not supported for Slice");const 
i=Array.from(r[1].integerData),c=Array.from(r[2].integerData),g=r.length>=4?Array.from(r[3].integerData):[];return{starts:i,ends:c,axes:g,cacheKey:`${g};${i};${c}`}},t=e=>{if(!e||e.length<3||e.length>5)throw new Error("Invalid input number.");if(e[1].type!=="int32"||e[1].dims.length!==1)throw new Error("Invalid input type.");if(e[2].type!=="int32"||e[2].dims.length!==1)throw new Error("Invalid input type.");if(e.length>=4&&(e[3].type!=="int32"||e[3].dims.length!==1))throw new Error("Invalid input type.");if(e.length>=5&&(e[4].type!=="int32"||e[4].dims.length!==1))throw new Error("Invalid input type.")}},5524:(y,n,u)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.softmaxV13=n.parseSoftmaxAttributesV13=n.parseSoftmaxAttributes=n.softmax=void 0;const d=u(246),l=u(2517),p=u(5060),s=u(2039),h=u(3738),f={name:"SoftmaxComputeMax",inputNames:["A"],inputTypes:[s.TextureType.unpacked]},a={name:"SoftmaxComputeScale",inputNames:["A","Max"],inputTypes:[s.TextureType.unpacked,s.TextureType.unpacked]},o={name:"SoftMax",inputNames:["A","Max","Norm"],inputTypes:[s.TextureType.unpacked,s.TextureType.unpacked,s.TextureType.unpacked]};n.softmax=(g,m,b)=>{c(m);const _=m[0].dims.slice(),w=l.ShapeUtil.normalizeAxis(b.axis,_.length),v=l.ShapeUtil.sizeToDimension(_,w),S=l.ShapeUtil.sizeFromDimension(_,w);return t(g,m,b,v,S)},n.parseSoftmaxAttributes=g=>(0,d.createAttributeWithCacheKey)({axis:g.attributes.getInt("axis",1)}),n.parseSoftmaxAttributesV13=g=>(0,d.createAttributeWithCacheKey)({axis:g.attributes.getInt("axis",-1)}),n.softmaxV13=(g,m,b)=>{c(m);const _=m[0].dims.slice(),w=l.ShapeUtil.normalizeAxis(b.axis,_.length),v=_.length,S=w!==v-1,O=[];let E,T=[],I=[];S&&(T=Array.from({length:v}).map((N,H)=>H),T[w]=v-1,T[v-1]=w,T.map(N=>O.push(_[N])),E=(0,d.createAttributeWithCacheKey)({perm:T}),I=(0,h.transpose)(g,m,E));const C=S?l.ShapeUtil.sizeToDimension(O,v-1):l.ShapeUtil.sizeToDimension(_,v-1),B=S?l.ShapeUtil.sizeFromDimension(O,v-1):l.ShapeUtil.sizeFromDimension(_,v-1),F=t(g,S?I:m,b,C,B);return S?(0,h.transpose)(g,F,E):F};const t=(g,m,b,_,w)=>{const v=e(g,m[0],_,w,[_]),S=g.run(Object.assign(Object.assign({},f),{cacheHint:b.cacheKey,get:()=>v}),m),O=r(g,m[0],_,w,v.output.dims,[_]),E=g.run(Object.assign(Object.assign({},a),{cacheHint:b.cacheKey,get:()=>O}),[m[0],S]),T=i(g,m[0],_,w,v.output.dims,O.output.dims);return[g.run(Object.assign(Object.assign({},o),{cacheHint:b.cacheKey,get:()=>T}),[m[0],S,E])]},e=(g,m,b,_,w)=>{const[v,S]=g.calculateTextureWidthAndHeight(m.dims,s.TextureType.unpacked),O=w.length;if(b<1||_<1)throw new Error("Logical row count N and feature count D must be greater than or equal to 1");if(w.length!==1)throw new Error("Dimensionality of the output should be 1");if(w[0]!==b)throw new Error("Shape of the output should be equal to logical row count");const E=(0,p.getGlsl)(g.session.backend.glContext.version),T=` - float process(int[${O}] indices) { - int logical_row_start_offset = indices[0] * ${_}; - - float max = getColorAsFloat(${E.texture2D}(A, offsetToCoords(logical_row_start_offset, ${v}, - ${S} ))); - for(int i=1; i<${_}; ++i) - { - float current = getColorAsFloat(${E.texture2D}(A, offsetToCoords(logical_row_start_offset + i, - ${v}, ${S}))); - if(current > max) - max = current; - } - - return max; - }`;return Object.assign(Object.assign({},f),{output:{dims:w,type:m.type,textureType:s.TextureType.unpacked},shaderSource:T})},r=(g,m,b,_,w,v)=>{const[S,O]=g.calculateTextureWidthAndHeight(m.dims,s.TextureType.unpacked),E=v.length;if(b<1||_<1)throw new Error("Logical row count N and 
feature count D must be greater than or equal to 1");if(v.length!==1)throw new Error("Dimensionality of the output should be 1");if(v[0]!==b)throw new Error("Shape of the output should be equal to logical row count");if(w.length!==1)throw new Error("Dimensionality of the intermediate results should be 1");if(w[0]!==b)throw new Error("Shape of the intermediate results should be equal to logical row count");const T=` - float process(int[${E}] indices) { - int logical_row_start_offset = indices[0] * ${_}; - - float norm_factor = 0.0; - float max = _Max(indices); - for(int i=0; i<${_}; ++i) - { - norm_factor += exp(getColorAsFloat(${(0,p.getGlsl)(g.session.backend.glContext.version).texture2D}(A, offsetToCoords(logical_row_start_offset + i, - ${S}, ${O}))) - max); - } - - return norm_factor; - }`;return Object.assign(Object.assign({},a),{output:{dims:v,type:m.type,textureType:s.TextureType.unpacked},shaderSource:T})},i=(g,m,b,_,w,v)=>{const[S,O]=g.calculateTextureWidthAndHeight(m.dims,s.TextureType.unpacked),E=m.dims.length;if(b<1||_<1)throw new Error("Logical row count N and feature count D must be greater than or equal to 1");if(w.length!==1||v.length!==1)throw new Error("Dimensionality of the intermediate results should be 1");if(w[0]!==b||v[0]!==b)throw new Error("Shape of the intermediate results should be equal to logical row count");const T=` - float process(int[${E}] indices) { - - // get offset of current logical tensor index from the 2-D texture coordinates (TexCoords) - int offset = coordsToOffset(TexCoords, ${S}, ${O}); - - //determine the logical row for this index - int logical_row_index[1]; - logical_row_index[0] = offset / ${_}; - - float norm_factor = _Norm(logical_row_index); - - // avoid possible division by 0 - // if norm_facor is 0, all elements are zero - // if so, return 0 - if(norm_factor == 0.0) - return 0.0; - - return exp(_A(indices) - _Max(logical_row_index)) / norm_factor; - }`;return Object.assign(Object.assign({},o),{output:{dims:m.dims,type:m.type,textureType:s.TextureType.unpacked},shaderSource:T})},c=g=>{if(!g||g.length!==1)throw new Error("Softmax requires 1 input.");if(g[0].type!=="float32"&&g[0].type!=="float64")throw new Error("Invalid input type")}},5975:(y,n,u)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.parseSplitAttributes=n.split=void 0;const d=u(246),l=u(2517),p=u(2039),s={name:"Split",inputNames:["A"],inputTypes:[p.TextureType.unpacked]};n.split=(o,t,e)=>{a(t);const r=l.ShapeUtil.normalizeAxis(e.axis,t[0].dims.length),i=h(o,t,r,e),c=[];for(let g=0;gf(o,t[0],e,r,g)}),t));return c},n.parseSplitAttributes=o=>{const t=o.attributes.getInt("axis",0),e=o.attributes.getInts("split",[]),r=o.outputs.length;return(0,d.createAttributeWithCacheKey)({axis:t,split:e,numOutputs:r})};const h=(o,t,e,r)=>{const[,i]=l.SplitUtil.splitShape(t[0].dims,e,r.split,r.numOutputs);return i.length},f=(o,t,e,r,i)=>{const[c,g]=l.SplitUtil.splitShape(t.dims,r,e.split,e.numOutputs),m=g[i],b=c[i],_=` - float process(int indices[${b.length}]) { - indices[${r}] += ${m}; - return _A(indices); - } - `;return Object.assign(Object.assign({},s),{cacheHint:`${e.cacheKey}:${i}`,output:{dims:b,type:t.type,textureType:p.TextureType.unpacked},shaderSource:_})},a=o=>{if(!o||o.length!==1)throw new Error("Split requires one input.");if(o[0].type!=="int8"&&o[0].type!=="uint8"&&o[0].type!=="int16"&&o[0].type!=="uint16"&&o[0].type!=="int32"&&o[0].type!=="uint32"&&o[0].type!=="float32"&&o[0].type!=="float64"&&o[0].type!=="bool")throw new Error("Invalid input 
type.")}},3933:(y,n,u)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.parseSqueezeAttributes=n.squeezeV13=n.squeeze=void 0;const d=u(2517);n.squeeze=(s,h,f)=>{l(h);const a=d.ShapeUtil.squeezeShape(h[0].dims,f);return[s.reshapeUnpacked(h[0],a)]},n.squeezeV13=(s,h)=>(p(h),(0,n.squeeze)(s,[h[0]],Array.from(h[1].integerData))),n.parseSqueezeAttributes=s=>s.attributes.getInts("axes");const l=s=>{if(!s||s.length!==1)throw new Error("Squeeze requires 1 input.");if(s[0].type==="string")throw new Error("invalid input tensor types.")},p=s=>{if(!s||s.length!==2)throw new Error("Squeeze requires 2 inputs.");if(s[1].type!=="int32")throw new Error("Invalid input type.")}},6558:(y,n,u)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.sum=void 0;const d=u(5060),l=u(2039);n.sum=(h,f)=>{s(f);const a={name:"Sum",inputNames:f.map((o,t)=>`X${t}`),inputTypes:new Array(f.length).fill(l.TextureType.unpacked)};return[h.run(Object.assign(Object.assign({},a),{get:()=>p(h,f,a)}),f)]};const p=(h,f,a)=>{const o=(0,d.getGlsl)(h.session.backend.glContext.version),t=f[0].dims.slice(),e=` - void main() { - vec4 result = ${f.map((r,i)=>`${o.texture2D}(X${i},TexCoords)`).join(" + ")}; - ${o.output} = result; - } - `;return Object.assign(Object.assign({},a),{output:{dims:t,type:f[0].type,textureType:l.TextureType.unpacked},hasMain:!0,shaderSource:e})},s=h=>{if(!h||h.length===0)throw new Error("Sum requires inputs.");const f=h[0].dims.length;for(let a=1;a{Object.defineProperty(n,"__esModule",{value:!0}),n.tile=void 0;const d=u(782),l=u(2039);n.tile=(h,f)=>{s(f);const a={name:"Tile",inputNames:["A"],inputTypes:[l.TextureType.unpacked]};return[h.run(Object.assign(Object.assign({},a),{get:()=>p(h,f,a)}),f)]};const p=(h,f,a)=>{const o=f[0].dims.slice(),t=new Array(o.length),e=[];for(let c=0;c{if(!h||h.length!==2)throw new Error("Tile requires 2 input.");if(h[1].dims.length!==1)throw new Error("The second input shape must 1 dimension.");if(h[1].dims[0]!==h[0].dims.length)throw new Error("Invalid input shape.");if(d.NUMBER_TYPES.indexOf(h[0].type)===-1)throw new Error("Invalid input type.");if(h[1].type!=="int32"&&h[1].type!=="int16")throw new Error("Invalid repeat type.")}},3738:(y,n,u)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.parseTransposeAttributes=n.transpose=void 0;const d=u(246),l=u(2517),p=u(2039),s={name:"Transpose",inputNames:["A"],inputTypes:[p.TextureType.unpacked]};n.transpose=(e,r,i)=>(t(r),[e.run(Object.assign(Object.assign({},s),{cacheHint:i.cacheKey,get:()=>h(e,r[0],i.perm)}),r)]),n.parseTransposeAttributes=e=>(0,d.createAttributeWithCacheKey)({perm:e.attributes.getInts("perm",[])});const h=(e,r,i)=>{const c=r.dims;i=f(c,i);const g=a(c,i),m=c.length,b=` - ${o("perm",i,m)} - float process(int indices[${m}]) { - int a[${m}]; - perm(a, indices); - return _A(a); - }`;return Object.assign(Object.assign({},s),{output:{dims:g,type:r.type,textureType:p.TextureType.unpacked},shaderSource:b})},f=(e,r)=>(r&&r.length!==e.length&&(r=[...e.keys()].reverse()),r),a=(e,r)=>(r=f(e,r),l.ShapeUtil.sortBasedOnPerm(e,r)),o=(e,r,i)=>{const c=[];c.push(`void ${e}(out int a[${i}], int src[${i}]) {`);for(let g=0;g{if(!e||e.length!==1)throw new Error("Transpose requires 1 input.");if(e[0].type!=="float32"&&e[0].type!=="float64")throw new Error("input should be float tensor")}},8710:(y,n,u)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.encodeAsUint8=void 0;const d=u(5060),l=u(2039);n.encodeAsUint8=(p,s)=>{const h=s.shape,f=(0,d.getGlsl)(p.session.backend.glContext.version),a=` - const float FLOAT_MAX = 
1.70141184e38; - const float FLOAT_MIN = 1.17549435e-38; - - bool isNaN(float val) { - return (val < 1.0 || 0.0 < val || val == 0.0) ? false : true; - } - - highp vec4 encodeAsUint8(highp float v) { - if (isNaN(v)) { - return vec4(255, 255, 255, 255); - } - - highp float av = abs(v); - - if(av < FLOAT_MIN) { - return vec4(0.0, 0.0, 0.0, 0.0); - } else if(v > FLOAT_MAX) { - return vec4(0.0, 0.0, 128.0, 127.0) / 255.0; - } else if(v < -FLOAT_MAX) { - return vec4(0.0, 0.0, 128.0, 255.0) / 255.0; - } - - highp vec4 c = vec4(0,0,0,0); - - highp float e = floor(log2(av)); - highp float m = exp2(fract(log2(av))) - 1.0; - - c[2] = floor(128.0 * m); - m -= c[2] / 128.0; - c[1] = floor(32768.0 * m); - m -= c[1] / 32768.0; - c[0] = floor(8388608.0 * m); - - highp float ebias = e + 127.0; - c[3] = floor(ebias / 2.0); - ebias -= c[3] * 2.0; - c[2] += floor(ebias) * 128.0; - - c[3] += 128.0 * step(0.0, -v); - - return c / 255.0; - } - - void main() { - float value = ${f.texture2D}(X,TexCoords).r; - ${f.output} = encodeAsUint8(value); - }`,o={name:"Uint8Encode",inputTypes:[l.TextureType.unpacked],inputNames:["X"],output:{dims:h,type:s.tensor.type,textureType:l.TextureType.downloadUint8AsFloat},shaderSource:a,hasMain:!0};return p.executeProgram(o,[s.tensor])}},4909:(y,n,u)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.tanh=n.tan=n.sqrt=n.sin=n.sigmoid=n.relu=n.not=n.neg=n.log=n.parseLeakyReluAttributes=n.leakyRelu=n.identity=n.floor=n.exp=n.parseEluAttributes=n.elu=n.cos=n.ceil=n.clipV11=n.parseClipAttributes=n.clip=n.atan=n.asin=n.acos=n.abs=n.glslTanh=n.glslTan=n.glslSqrt=n.glslSigmoid=n.glslRelu=n.glslSin=n.glslNot=n.glslNeg=n.glslLog=n.glslLeakyRelu=n.glslIdentity=n.glslClip=n.glslFloor=n.glslExp=n.glslElu=n.glslCos=n.glslCeil=n.glslAtan=n.glslAsin=n.glslAcos=n.glslAbs=void 0;const d=u(246),l=u(2517),p=u(8520),s=u(5060),h=u(2039);function f(){return F("abs")}function a(){return F("acos")}function o(){return F("asin")}function t(){return F("atan")}function e(){return F("ceil")}function r(){return F("cos")}function i($){const z="elu";return{body:` - const float alpha = float(${$}); - - float ${z}_(float a) { - return a >= 0.0 ? a: (exp(a) - 1.0) * alpha; - } - vec4 ${z}_(vec4 v) { - return vec4(${z}_(v.x), ${z}_(v.y), ${z}_(v.z), ${z}_(v.w)); - } - `,name:z,type:p.FunctionType.ValueBased}}function c(){return F("exp")}function g(){return F("floor")}function m($,z){const J="clip";return{body:` - const float min = float(${$}); - const float max = float(${z}); - - float ${J}_(float a) { - return clamp(a, min, max); - } - vec4 ${J}_(vec4 v) { - return clamp(v, min, max); - } - `,name:J,type:p.FunctionType.ValueBased}}function b(){const $="indentity";return{body:` - float ${$}_(float a) { - return a; - } - vec4 ${$}_(vec4 v) { - return v; - } - `,name:$,type:p.FunctionType.ValueBased}}function _($){const z="leakyRelu";return{body:` - const float alpha = float(${$}); - - float ${z}_(float a) { - return a < 0.0 ? a * alpha : a; - } - vec4 ${z}_(vec4 v) { - return vec4(${z}_(v.x), ${z}_(v.y), ${z}_(v.z), ${z}_(v.w)); - } - `,name:z,type:p.FunctionType.ValueBased}}function w(){return F("log")}function v(){const $="neg";return{body:` - float ${$}_(float a) { - return -a; - } - vec4 ${$}_(vec4 v) { - return -v; - } - `,name:$,type:p.FunctionType.ValueBased}}function S(){const $="not";return{body:` - float ${$}_(float a) { - return float( ! 
bool(a) ); - } - bool ${$}_(bool a) { - return !a; - } - vec4 ${$}_(vec4 v) { - return vec4(!bool(v.x), !bool(v.y), !bool(v.z), !bool(v.w)); - } - bvec4 ${$}_(bvec4 v) { - return bvec4(!v.x, !v.y, !v.z, !v.w); - } - `,name:$,type:p.FunctionType.ValueBased}}function O(){return F("sin")}function E(){const $="relu";return{body:` - float ${$}_(float a) { - return max( a, 0.0 ); - } - vec4 ${$}_(vec4 v) { - return max( v, 0.0 ); - } - `,name:$,type:p.FunctionType.ValueBased}}function T(){const $="sigmoid";return{body:` - float ${$}_(float a) { - return 1.0 / (1.0 + exp(-a)); - } - vec4 ${$}_(vec4 v) { - return 1.0 / (1.0 + exp(-v)); - } - `,name:$,type:p.FunctionType.ValueBased}}function I(){return F("sqrt")}function C(){return F("tan")}function B(){const $="tanh";return{body:` - float ${$}_(float a) { - a = clamp(a, -10., 10.); - a = exp(2.*a); - return (a - 1.) / (a + 1.); - } - vec4 ${$}_(vec4 v) { - v = clamp(v, -10., 10.); - v = exp(2.*v); - return (v - 1.) / (v + 1.); - } - `,name:$,type:p.FunctionType.ValueBased}}function F($){return{body:` - float ${$}_(float a) { - return ${$}(a); - } - vec4 ${$}_(vec4 v) { - return ${$}(v); - } - `,name:$,type:p.FunctionType.ValueBased}}n.glslAbs=f,n.glslAcos=a,n.glslAsin=o,n.glslAtan=t,n.glslCeil=e,n.glslCos=r,n.glslElu=i,n.glslExp=c,n.glslFloor=g,n.glslClip=m,n.glslIdentity=b,n.glslLeakyRelu=_,n.glslLog=w,n.glslNeg=v,n.glslNot=S,n.glslSin=O,n.glslRelu=E,n.glslSigmoid=T,n.glslSqrt=I,n.glslTan=C,n.glslTanh=B;const N=($,z,J,X)=>{const te=$.session.pack?h.TextureType.packed:h.TextureType.unpacked,ne={name:J.name,inputTypes:[te],inputNames:["A"],cacheHint:X};return Object.assign(Object.assign({},ne),{get:()=>((me,Ie,Oe,ce)=>{const Te=me.session.pack?h.TextureType.packed:h.TextureType.unpacked,_e=(0,s.getGlsl)(me.session.backend.glContext.version);return Object.assign(Object.assign({},Ie),{output:{dims:Oe.dims,type:Oe.type,textureType:Te},shaderSource:` - ${ce.body} - void main() { - vec4 v = ${_e.texture2D}(A, TexCoords); - v = ${ce.name}_(v); - ${_e.output} = v; - } - `,hasMain:!0})})($,ne,z,J)})};n.abs=($,z)=>[$.run(N($,z[0],f()),z)],n.acos=($,z)=>[$.run(N($,z[0],a()),z)],n.asin=($,z)=>[$.run(N($,z[0],o()),z)],n.atan=($,z)=>[$.run(N($,z[0],t()),z)],n.clip=($,z,J)=>[$.run(N($,z[0],m(J.min,J.max),J.cacheKey),z)],n.parseClipAttributes=$=>(0,d.createAttributeWithCacheKey)({min:$.attributes.getFloat("min",l.MIN_CLIP),max:$.attributes.getFloat("max",l.MAX_CLIP)}),n.clipV11=($,z)=>{const J=H($,z);return(0,n.clip)($,[z[0]],J)};const H=($,z)=>{if(z.length>=3&&(!$.session.isInitializer(z[1].dataId)||!$.session.isInitializer(z[2].dataId)))throw new Error("dynamic clip attributes are not allowed");const 
J=z.length>=3?z[1].numberData[0]:l.MIN_CLIP,X=z.length>=3?z[2].numberData[0]:l.MAX_CLIP;return(0,d.createAttributeWithCacheKey)({min:J,max:X})};n.ceil=($,z)=>[$.run(N($,z[0],e()),z)],n.cos=($,z)=>[$.run(N($,z[0],r()),z)],n.elu=($,z,J)=>[$.run(N($,z[0],i(J.alpha),J.cacheKey),z)],n.parseEluAttributes=$=>(0,d.createAttributeWithCacheKey)({alpha:$.attributes.getFloat("alpha",1)}),n.exp=($,z)=>[$.run(N($,z[0],c()),z)],n.floor=($,z)=>[$.run(N($,z[0],g()),z)],n.identity=($,z)=>[$.run(N($,z[0],b()),z)],n.leakyRelu=($,z,J)=>[$.run(N($,z[0],_(J.alpha),J.cacheKey),z)],n.parseLeakyReluAttributes=$=>(0,d.createAttributeWithCacheKey)({alpha:$.attributes.getFloat("alpha",.01)}),n.log=($,z)=>[$.run(N($,z[0],w()),z)],n.neg=($,z)=>[$.run(N($,z[0],v()),z)],n.not=($,z)=>[$.run(N($,z[0],S()),z)],n.relu=($,z)=>[$.run(N($,z[0],E()),z)],n.sigmoid=($,z)=>[$.run(N($,z[0],T()),z)],n.sin=($,z)=>[$.run(N($,z[0],O()),z)],n.sqrt=($,z)=>[$.run(N($,z[0],I()),z)],n.tan=($,z)=>[$.run(N($,z[0],C()),z)],n.tanh=($,z)=>[$.run(N($,z[0],B()),z)]},5611:(y,n,u)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.createUnpackProgramInfoLoader=n.createUnpackProgramInfo=void 0;const d=u(5060),l=u(2039),p=u(9390),s=u(2827),h={name:"unpack",inputNames:["A"],inputTypes:[l.TextureType.packed]};n.createUnpackProgramInfo=(f,a)=>{const o=a.dims.length,t=(0,s.getChannels)("rc",o),e=t.slice(-2),r=(0,p.getCoordsDataType)(o),i=(0,s.unpackFromChannel)(),c=a.dims.length===0?"":function(b,_){if(b===1)return"rc";let w="";for(let v=0;vObject.assign(Object.assign({},h),{get:()=>(0,n.createUnpackProgramInfo)(f,a)})},8428:(y,n,u)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.parseUnsqueezeAttributes=n.unsqueezeV13=n.unsqueeze=void 0;const d=u(2517);n.unsqueeze=(s,h,f)=>{l(h);const a=d.ShapeUtil.unsqueezeShape(h[0].dims,f);return[s.reshapeUnpacked(h[0],a)]},n.unsqueezeV13=(s,h)=>(p(h),(0,n.unsqueeze)(s,[h[0]],Array.from(h[1].integerData))),n.parseUnsqueezeAttributes=s=>s.attributes.getInts("axes");const l=s=>{if(!s||s.length!==1)throw new Error("Unsqueeze requires 1 input.");if(s[0].type==="string")throw new Error("invalid input tensor types.")},p=s=>{if(!s||s.length!==2)throw new Error("Unsqueeze requires 2 inputs.");if(s[1].type!=="int32")throw new Error("Invalid input type.")}},9793:(y,n,u)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.scalesValidation=n.validateInputs=n.parseUpsampleAttributes=n.parseUpsampleAttributesV9=n.parseUpsampleAttributesV7=n.upsample=void 0;const d=u(246),l=u(5060),p=u(2039),s={name:"Upsample",inputNames:["X"],inputTypes:[p.TextureType.unpacked]};n.upsample=(f,a,o)=>((0,n.validateInputs)(a,o),[f.run(Object.assign(Object.assign({},s),{cacheHint:o.cacheKey,get:()=>h(f,a,o)}),a)]),n.parseUpsampleAttributesV7=f=>(0,n.parseUpsampleAttributes)(f,7),n.parseUpsampleAttributesV9=f=>(0,n.parseUpsampleAttributes)(f,9),n.parseUpsampleAttributes=(f,a)=>{const o=a>=10,t=f.attributes.getString("mode","nearest");if(t!=="nearest"&&t!=="linear"&&(a<11||t!=="cubic"))throw new Error(`unrecognized mode: ${t}`);let e=[];a<9&&(e=f.attributes.getFloats("scales"),(0,n.scalesValidation)(e,t,o));const r=f.attributes.getFloat("extrapolation_value",0),i=a>10?f.attributes.getString("coordinate_transformation_mode","half_pixel"):"asymmetric";if(["asymmetric","pytorch_half_pixel","tf_half_pixel_for_nn","align_corners","tf_crop_and_resize","half_pixel"].indexOf(i)===-1)throw new Error(`coordinate_transform_mode '${i}' is not supported`);const 
c=i==="tf_crop_and_resize",g=c,m=t==="nearest"&&a>=11?f.attributes.getString("nearest_mode","round_prefer_floor"):"";if(["round_prefer_floor","round_prefer_ceil","floor","ceil",""].indexOf(m)===-1)throw new Error(`nearest_mode '${m}' is not supported`);const b=f.attributes.getFloat("cubic_coeff_a",-.75),_=f.attributes.getInt("exclude_outside",0)!==0;if(_&&t!=="cubic")throw new Error("exclude_outside can be set to 1 only when mode is CUBIC.");const w=a<11||t==="nearest"&&i==="asymmetric"&&m==="floor";let v=0,S=0,O=0;return a>10?f.inputs.length>2?(v=1,S=2,O=3):(S=1,O=2):a===9&&(S=1),(0,d.createAttributeWithCacheKey)({opset:a,isResize:o,mode:t,scales:e,extrapolationValue:r,coordinateTransformMode:i,useExtrapolation:g,needRoiInput:c,nearestMode:m,cubicCoefficientA:b,excludeOutside:_,useNearest2xOptimization:w,roiInputIdx:v,scalesInputIdx:S,sizesInputIdx:O})};const h=(f,a,o)=>{const t=(0,l.getGlsl)(f.session.backend.glContext.version),[e,r]=f.calculateTextureWidthAndHeight(a[0].dims,p.TextureType.unpacked),i=a[0].dims.map((O,E)=>Math.floor(O*o.scales[E])),[c,g]=f.calculateTextureWidthAndHeight(i,p.TextureType.unpacked),m=i.length,b=new Array(m),_=new Array(m);let w=` - int output_pitches[${m}]; - int input_pitches[${m}]; - `;for(let O=m-1;O>=0;O--)b[O]=O===m-1?1:b[O+1]*i[O+1],_[O]=O===m-1?1:_[O+1]*a[0].dims[O+1],w+=` - output_pitches[${O}] = ${b[O]}; - input_pitches[${O}] = ${_[O]}; - `;const v=` - float getInputFloat(int index) { - vec2 coords = offsetToCoords(index, ${e}, ${r}); - float value = getColorAsFloat(${t.texture2D}(X, coords)); - return value; - } - `,S=o.mode==="nearest"?` - ${v} - float process(int indices[${m}]) { - int input_index = 0; - int output_index = coordsToOffset(TexCoords, ${c}, ${g}); - - ${w} - - int d, m; - for (int dim = 0; dim < ${m}; ++dim) { - d = output_index / output_pitches[dim]; - m = output_index - d * output_pitches[dim]; - output_index = m; - - if (scales[dim] != 1 && d > 0) { - int d2 = d / scales[dim]; - m = d - d2 * scales[dim]; - d = d2; - } - input_index += input_pitches[dim] * d; - } - - return getInputFloat(input_index); - }`:m===4?` - ${v} - float process(int indices[4]) { - int input_index = 0; - int output_index = coordsToOffset(TexCoords, ${c}, ${g}); - - ${w} - - int m; - int index_of_dim0, index_of_dim1, index_of_dim2, index_of_dim3; - index_of_dim0 = output_index / output_pitches[0]; - m = output_index - index_of_dim0 * output_pitches[0]; - index_of_dim1 = m / output_pitches[1]; - m = m - index_of_dim1 * output_pitches[1]; - index_of_dim2 = m / output_pitches[2]; - m = m - index_of_dim2 * output_pitches[2]; - index_of_dim3 = m; - - int index_of_input_dim2, index_of_input_dim3, x_offset, y_offset; - index_of_input_dim2 = index_of_dim2 / scales[2]; - y_offset = index_of_dim2 - index_of_input_dim2 * scales[2]; - index_of_input_dim3 = index_of_dim3 / scales[3]; - x_offset = index_of_dim3 - index_of_input_dim3 * scales[3]; - - input_index = index_of_dim0 * input_pitches[0] + - index_of_dim1 * input_pitches[1] + - index_of_input_dim2 * input_pitches[2] + - index_of_input_dim3; - - float x00 = getInputFloat(input_index); - float x10, x01, x11; - - bool end_of_dim2 = false; - if (index_of_input_dim2 == (${a[0].dims[2]} - 1)) { - // It's the end in dimension 2 - x01 = x00; - end_of_dim2 = true; - } else { - x01 = getInputFloat(input_index + input_pitches[2]); - } - - if (index_of_input_dim3 == (input_pitches[2] - 1)) { - // It's the end in dimension 3 - x10 = x00; - x11 = x01; - } - else { - x10 = getInputFloat(input_index + 1); - x11 = end_of_dim2 ? 
x10 : getInputFloat(input_index + input_pitches[2] + 1); - } - - float y0 = x00 + float(y_offset) * (x01 - x00) / float(scales[2]); - float y1 = x10 + float(y_offset) * (x11 - x10) / float(scales[2]); - return y0 + float(x_offset) * (y1 - y0) / float(scales[3]); - }`:` - ${v} - float process(int indices[2]) { - int input_index = 0; - int output_index = coordsToOffset(TexCoords, ${c}, ${g}); - - ${w} - - int m; - int index_of_dim0, index_of_dim1; - index_of_dim0 = output_index / output_pitches[0]; - m = output_index - index_of_dim0 * output_pitches[0]; - index_of_dim1 = m; - - int index_of_input_dim0, index_of_input_dim1, x_offset, y_offset; - index_of_input_dim0 = index_of_dim0 / scales[0]; - y_offset = index_of_dim0 - index_of_input_dim0 * scales[0]; - index_of_input_dim1 = index_of_dim1 / scales[1]; - x_offset = index_of_dim1 - index_of_input_dim1 * scales[1]; - - input_index = index_of_input_dim0 * input_pitches[0] + index_of_input_dim1; - - float x00 = getInputFloat(input_index); - float x10, x01, x11; - - bool end_of_dim0 = false; - if (index_of_input_dim0 == (${a[0].dims[0]} - 1)) { - // It's the end in dimension 0 - x01 = x00; - end_of_dim0 = true; - } else { - x01 = getInputFloat(input_index + input_pitches[0]); - } - - if (index_of_input_dim1 == (input_pitches[0] - 1)) { - // It's the end in dimension 1 - x10 = x00; - x11 = x01; - } - else { - x10 = getInputFloat(input_index + 1); - x11 = end_of_dim0 ? x10 : getInputFloat(input_index + input_pitches[0] + 1); - } - - float y0 = x00 + float(y_offset) * (x01 - x00) / float(scales[0]); - float y1 = x10 + float(y_offset) * (x11 - x10) / float(scales[0]); - return y0 + float(x_offset) * (y1 - y0) / float(scales[1]); - }`;return Object.assign(Object.assign({},s),{output:{dims:i,type:a[0].type,textureType:p.TextureType.unpacked},shaderSource:S,variables:[{name:"scales",type:"int",arrayLength:o.scales.length,data:o.scales.map(O=>Math.ceil(O))}]})};n.validateInputs=(f,a)=>{if(!f||a.opset<9&&f.length!==1||a.opset>=9&&a.opset<11&&f.length!==2||a.opset>=11&&f.length<2)throw new Error("invalid inputs.");if(a.scales.length>0&&f[0].dims.length!==a.scales.length)throw new Error("Invalid input shape.");if(f[0].type==="string")throw new Error("Invalid input tensor types.")},n.scalesValidation=(f,a,o)=>{if(o){for(const t of f)if(t<=0)throw new Error("Scale value should be greater than 0.")}else for(const t of f)if(t<1)throw new Error("Scale value should be greater than or equal to 1.");if(!(a!=="linear"&&a!=="cubic"||f.length===2||f.length===4&&f[0]===1&&f[1]===1))throw new Error(`'Linear' mode and 'Cubic' mode only support 2-D inputs ('Bilinear', 'Bicubic') or 4-D inputs with the corresponding outermost 2 scale values being 1 in the ${o?"Resize":"Upsample"} opeartor.`)}},1958:(y,n,u)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.ProgramManager=void 0;const d=u(1670),l=u(6231),p=u(8879),s=u(5060);n.ProgramManager=class{constructor(h,f,a){this.profiler=h,this.glContext=f,this.textureLayoutStrategy=a,this.repo=new Map,this.attributesBound=!1}getArtifact(h){return this.repo.get(h)}setArtifact(h,f){this.repo.set(h,f)}run(h,f,a){var o;this.profiler.event("op",`ProgramManager.run ${(o=h.programInfo.name)!==null&&o!==void 0?o:"unknown kernel"}`,()=>{var t;const e=this.glContext.gl,r=h.program;e.useProgram(r);try{this.bindOutput(a),this.attributesBound||this.bindAttributes(h.attribLocations),this.bindUniforms(h.uniformLocations,(t=h.programInfo.variables)!==null&&t!==void 0?t:[],f)}catch(i){throw 
l.Logger.error("ProgramManager",h.programInfo.shaderSource),i}this.profiler.event("backend","GlContext.draw()",()=>{this.glContext.draw()})},this.glContext)}dispose(){this.vertexShader&&this.glContext.deleteShader(this.vertexShader),this.repo.forEach(h=>this.glContext.deleteProgram(h.program))}build(h,f,a){return this.profiler.event("backend","ProgramManager.build",()=>{const o=new p.GlslPreprocessor(this.glContext,h,f,a),t=o.preprocess(),e=this.compile(t);return{programInfo:h,program:e,uniformLocations:this.getUniformLocations(e,o.context.programInfo.inputNames,o.context.programInfo.variables),attribLocations:this.getAttribLocations(e)}})}compile(h){if(!this.vertexShader){l.Logger.verbose("ProrgramManager","Compiling and caching Vertex shader for the first time");const o=(0,s.getVertexShaderSource)(this.glContext.version);this.vertexShader=this.glContext.compileShader(o,this.glContext.gl.VERTEX_SHADER)}d.env.debug&&l.Logger.verbose("ProrgramManager",`FragShader: -${h} -`);const f=this.glContext.compileShader(h,this.glContext.gl.FRAGMENT_SHADER),a=this.glContext.createProgram(this.vertexShader,f);return this.glContext.deleteShader(f),a}bindOutput(h){const f=h.width,a=h.height;l.Logger.verbose("ProrgramManager",`Binding output texture to Framebuffer: w/h=${f}/${a}, shape=${h.shape}, type=${h.tensor.type}`),this.glContext.attachFramebuffer(h.texture,f,a)}bindAttributes(h){const f=h.position,a=h.textureCoord;this.glContext.setVertexAttributes(f,a),this.attributesBound=!0}bindUniforms(h,f,a){var o;const t=this.glContext.gl;let e=0;for(const{name:r,type:i,location:c,arrayLength:g}of h){const m=(o=f.find(b=>b.name===r))===null||o===void 0?void 0:o.data;if(i!=="sampler2D"&&!m)throw new Error(`variable '${r}' does not have data defined in program info`);switch(i){case"sampler2D":this.bindTexture(a[e],c,e),e++;break;case"float":g?t.uniform1fv(c,m):t.uniform1f(c,m);break;case"int":g?t.uniform1iv(c,m):t.uniform1i(c,m);break;default:throw new Error(`Uniform not implemented: ${i}`)}}}bindTexture(h,f,a){this.glContext.bindTextureToUniform(h.texture,a,f)}getAttribLocations(h){return{position:this.getAttribLocation(h,"position"),textureCoord:this.getAttribLocation(h,"textureCoord")}}getUniformLocations(h,f,a){const o=[];if(f)for(const t of f)o.push({name:t,type:"sampler2D",location:this.getUniformLocation(h,t)});if(a)for(const t of a)o.push(Object.assign(Object.assign({},t),{location:this.getUniformLocation(h,t.name)}));return o}getUniformLocation(h,f){const a=this.glContext.gl.getUniformLocation(h,f);if(a===null)throw new Error(`Uniform ${f} not found.`);return a}getAttribLocation(h,f){return this.glContext.gl.getAttribLocation(h,f)}}},6416:(y,n,u)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.WebGLSessionHandler=void 0;const d=u(6231),l=u(1047),p=u(8316),s=u(1640),h=u(1958),f=u(7859),a=u(5702);n.WebGLSessionHandler=class{constructor(o,t){this.backend=o,this.context=t,this.layoutStrategy=new f.PreferLogicalStrategy(o.glContext.maxTextureSize),this.programManager=new h.ProgramManager(this.context.profiler,o.glContext,this.layoutStrategy),this.textureManager=new a.TextureManager(o.glContext,this.layoutStrategy,this.context.profiler,{reuseTextures:o.textureCacheMode==="full"}),this.packedTextureDataCache=new Map,this.unpackedTextureDataCache=new Map,this.pack=o.pack,this.pack2unpackMap=new Map,this.unpack2packMap=new Map}createInferenceHandler(){return new p.WebGLInferenceHandler(this)}onGraphInitialized(o){const 
t=o.getValues().filter(e=>e.from===-1&&e.tensor).map(e=>e.tensor.dataId);this.initializers=new Set(t)}isInitializer(o){return!!this.initializers&&this.initializers.has(o)}addInitializer(o){this.initializers.add(o)}getTextureData(o,t){return t?this.packedTextureDataCache.get(o):this.unpackedTextureDataCache.get(o)}setTextureData(o,t,e=!1){d.Logger.verbose("WebGLSessionHandler","Storing Texture data in cache"),e?this.packedTextureDataCache.set(o,t):this.unpackedTextureDataCache.set(o,t)}dispose(){this.programManager.dispose(),this.textureManager.clearActiveTextures(),this.packedTextureDataCache.forEach(o=>this.textureManager.releaseTexture(o,!0)),this.packedTextureDataCache=new Map,this.unpackedTextureDataCache.forEach(o=>this.textureManager.releaseTexture(o,!0)),this.unpackedTextureDataCache=new Map}resolve(o,t,e){const r=(0,l.resolveOperator)(o,t,s.WEBGL_OP_RESOLVE_RULES);return{impl:r.opImpl,context:r.opInit?r.opInit(o,e):o}}}},7769:(y,n,u)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.Uint8DataEncoder=n.RGBAFloatDataEncoder=n.RedFloat32DataEncoder=void 0;const d=u(6231);n.RedFloat32DataEncoder=class{constructor(l,p=1){if(p===1)this.internalFormat=l.R32F,this.format=l.RED,this.textureType=l.FLOAT,this.channelSize=p;else{if(p!==4)throw new Error(`Invalid number of channels: ${p}`);this.internalFormat=l.RGBA32F,this.format=l.RGBA,this.textureType=l.FLOAT,this.channelSize=p}}encode(l,p){let s,h;return l.constructor!==Float32Array&&(d.Logger.warning("Encoder","data was not of type Float32; creating new Float32Array"),h=new Float32Array(l)),p*this.channelSize>l.length?(d.Logger.warning("Encoder","Source data too small. Allocating larger array"),h=l,s=this.allocate(p*this.channelSize),h.forEach((f,a)=>s[a]=f)):(h=l,s=h),s}allocate(l){return new Float32Array(4*l)}decode(l,p){return this.channelSize===1?l.filter((s,h)=>h%4==0).subarray(0,p):l.subarray(0,p)}},n.RGBAFloatDataEncoder=class{constructor(l,p=1,s){if(p!==1&&p!==4)throw new Error(`Invalid number of channels: ${p}`);this.internalFormat=l.RGBA,this.format=l.RGBA,this.channelSize=p,this.textureType=s||l.FLOAT}encode(l,p){let s=l;return this.channelSize===1&&(d.Logger.verbose("Encoder","Exploding into a larger array"),s=this.allocate(p),l.forEach((h,f)=>s[4*f]=h)),s}allocate(l){return new Float32Array(4*l)}decode(l,p){return this.channelSize===1?l.filter((s,h)=>h%4==0).subarray(0,p):l.subarray(0,p)}},n.Uint8DataEncoder=class{constructor(l,p=1){if(this.channelSize=4,p===1)this.internalFormat=l.ALPHA,this.format=l.ALPHA,this.textureType=l.UNSIGNED_BYTE,this.channelSize=p;else{if(p!==4)throw new Error(`Invalid number of channels: ${p}`);this.internalFormat=l.RGBA,this.format=l.RGBA,this.textureType=l.UNSIGNED_BYTE,this.channelSize=p}}encode(l,p){return new Uint8Array(l.buffer,l.byteOffset,l.byteLength)}allocate(l){return new Uint8Array(l*this.channelSize)}decode(l,p){if(l instanceof Uint8Array)return l.subarray(0,p);throw new Error(`Invalid array type: ${l.constructor}`)}}},7859:(y,n,u)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.getBatchDim=n.sizeToSquarishShape=n.getRowsCols=n.sizeFromShape=n.isInt=n.parseAxisParam=n.squeezeShape=n.PreferLogicalStrategy=n.AlwaysKeepOriginalSizeStrategy=void 0;const d=u(6231),l=u(2517);function p(o,t){const e=[],r=[],i=t!=null&&Array.isArray(t)&&t.length===0,c=t==null||i?null:s(t,o).sort();let g=0;for(let m=0;mm)&&o[m]===1&&(e.push(o[m]),r.push(m)),c[g]<=m&&g++}o[m]!==1&&(e.push(o[m]),r.push(m))}return{newShape:e,keptDims:r}}function s(o,t){const e=t.length;return 
o=o==null?t.map((r,i)=>i):[].concat(o),(0,l.assert)(o.every(r=>r>=-e&&r`All values in axis param must be in range [-${e}, ${e}) but got axis ${o}`),(0,l.assert)(o.every(h),()=>`All values in axis param must be integers but got axis ${o}`),o.map(r=>r<0?e+r:r)}function h(o){return o%1==0}function f(o){if(o.length===0)return 1;let t=o[0];for(let e=1;e=o.length?1:o.slice(t.breakAxis).reduce((m,b)=>m*b),g=t.breakAxis<=0?1:o.slice(0,t.breakAxis).reduce((m,b)=>m*b);if(!(c>e||g>e))return[c,g];d.Logger.verbose("TextureLayout",`Given width/height preferences were unattainable: shape:${o}, breakAxis:${t.breakAxis}`)}const r=o.reduce((c,g)=>c*g);let i=Math.floor(Math.sqrt(r));for(;i=e||r%i!=0)throw new Error(`The given dimensions are outside this GPU's boundaries: ${o}`);return[i,r/i]}},n.PreferLogicalStrategy=class{constructor(o){this.maxTextureSize=o}computeTextureWH(o,t){const e=this.computeTexture(o,t);return t&&t.isPacked&&(e[0]/=2,e[1]/=2),t&&t.reverseWH?[e[1],e[0]]:e}computeTexture(o,t){const e=t&&t.isPacked;if(o.length===0)return e?[2,2]:[1,1];let r=this.maxTextureSize;if(t&&t.breakAxis!==void 0){const g=t.breakAxis>=o.length?1:o.slice(t.breakAxis).reduce((b,_)=>b*_),m=t.breakAxis<=0?1:o.slice(0,t.breakAxis).reduce((b,_)=>b*_);if(!(g>r||m>r))return[g,m];d.Logger.verbose("TextureLayout",`Given width/height preferences were unattainable: shape:${o}, breakAxis:${t.breakAxis}`)}let i=o.slice(0);e&&(r*=2,i=i.map((g,m)=>m>=i.length-2?i[m]%2==0?i[m]:i[m]+1:i[m]),i.length===1&&(i=[2,i[0]])),i.length!==2&&(i=p(i).newShape);const c=f(i);return i.length<=1&&c<=r?[1,c]:i.length===2&&i[0]<=r&&i[1]<=r?i:i.length===3&&i[0]*i[1]<=r&&i[2]<=r?[i[0]*i[1],i[2]]:i.length===3&&i[0]<=r&&i[1]*i[2]<=r?[i[0],i[1]*i[2]]:i.length===4&&i[0]*i[1]*i[2]<=r&&i[3]<=r?[i[0]*i[1]*i[2],i[3]]:i.length===4&&i[0]<=r&&i[1]*i[2]*i[3]<=r?[i[0],i[1]*i[2]*i[3]]:e?a(c/4).map(g=>2*g):a(c)}},n.squeezeShape=p,n.parseAxisParam=s,n.isInt=h,n.sizeFromShape=f,n.getRowsCols=function(o){if(o.length===0)throw Error("Cannot get rows and columns of an empty shape array.");return[o.length>1?o[o.length-2]:1,o[o.length-1]]},n.sizeToSquarishShape=a,n.getBatchDim=function(o,t=2){return f(o.slice(0,o.length-t))}},4057:(y,n,u)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.createTextureLayoutFromShape=n.calculateTextureWidthAndHeight=n.createTextureLayoutFromTextureType=void 0;const d=u(2517),l=u(2039);n.createTextureLayoutFromTextureType=(p,s,h)=>{const f=h===l.TextureType.unpacked||h===l.TextureType.unpackedReversed?1:4,a=h===l.TextureType.packed,o=h===l.TextureType.unpackedReversed||h===l.TextureType.packed,t=h===l.TextureType.packedLastDimension?s.length-1:void 0,e=h===l.TextureType.packedLastDimension?s.map((r,i)=>i===s.length-1?4*r:r):void 0;return(0,n.createTextureLayoutFromShape)(p,s,f,e,{isPacked:a,reverseWH:o,breakAxis:t})},n.calculateTextureWidthAndHeight=(p,s,h)=>{const f=(0,n.createTextureLayoutFromTextureType)(p,s,h);return[f.width,f.height]},n.createTextureLayoutFromShape=(p,s,h=1,f,a)=>{const o=!(!a||!a.isPacked),[t,e]=p.computeTextureWH(o&&f||s,a),r=s.length;let i=s.slice(0);if(r===0&&(i=[1]),h===1)f=s;else if(o){if(h!==4)throw new Error("a packed texture must be 4-channel");f=s,r>0&&(i[r-1]=Math.ceil(i[r-1]/2)),r>1&&(i[r-2]=Math.ceil(i[r-2]/2))}else if(!f)throw new Error("Unpacked shape is needed when using channels > 
1");return{width:t,height:e,channels:h,isPacked:o,shape:i,strides:d.ShapeUtil.computeStrides(i),unpackedShape:f,reversedWH:a&&a.reverseWH}}},5702:(y,n,u)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.TextureManager=void 0;const d=u(6231);n.TextureManager=class{constructor(l,p,s,h){this.glContext=l,this.layoutStrategy=p,this.profiler=s,this.config=h,this.pendingRead=new Map,h.reuseTextures&&(this.inUseTextures=new Map,this.idleTextures=new Map,this.textureLookup=new Map)}createTextureFromLayout(l,p,s,h){const f=this.toEncoderType(l),a=this.glContext.getEncoder(f,p.channels||1,h);if(p.isPacked&&h===1)throw new Error("not implemented");const o=p.width,t=p.height;let e,r;if(this.config.reuseTextures){e=`${o}x${t}_${a.format}_${a.internalFormat}_${a.textureType}`,r=this.inUseTextures.get(e),r||(r=[],this.inUseTextures.set(e,r));const c=this.idleTextures.get(e);if(c&&c.length>0){const g=c.pop();return r.push(g),h===1&&this.glContext.updateTexture(g,o,t,a,this.toTextureData(l,s)),g}}d.Logger.verbose("TextureManager",`Creating new texture of size ${p.width}x${p.height}`);const i=this.glContext.allocateTexture(o,t,a,this.toTextureData(l,s));return this.config.reuseTextures&&(r.push(i),this.textureLookup.set(i,e)),i}readTexture(l,p,s){return s||(s=1),this.profiler.event("backend","TextureManager.readTexture",()=>{const h=l.shape.reduce((a,o)=>a*o)*s,f=this.glContext.readTexture(l.texture,l.width,l.height,h,this.toEncoderType(p),s);return this.toTensorData(p,f)})}async readTextureAsync(l,p,s){const h=l.tensor.dataId;if(s||(s=1),this.pendingRead.has(h)){const f=this.pendingRead.get(h);return new Promise(a=>f==null?void 0:f.push(a))}return this.profiler.event("backend","TextureManager.readTextureAsync",async()=>{this.pendingRead.set(h,[]);const f=l.shape.reduce((e,r)=>e*r)*s;await this.glContext.createAndWaitForFence();const a=this.glContext.readTexture(l.texture,l.width,l.height,f,this.toEncoderType(p),s),o=this.toTensorData(p,a),t=this.pendingRead.get(h);return this.pendingRead.delete(h),t==null||t.forEach(e=>e(o)),o})}readUint8TextureAsFloat(l){return this.profiler.event("backend","TextureManager.readUint8TextureAsFloat",()=>{const p=l.shape.reduce((h,f)=>h*f),s=this.glContext.readTexture(l.texture,l.width,l.height,4*p,"byte",4);return new Float32Array(s.buffer,s.byteOffset,p)})}releaseTexture(l,p){let s;if(this.config.reuseTextures&&(s=this.textureLookup.get(l.texture),s)){p&&this.textureLookup.delete(s);const h=this.inUseTextures.get(s);if(h){const f=h.indexOf(l.texture);if(f!==-1){h.splice(f,1);let a=this.idleTextures.get(s);a||(a=[],this.idleTextures.set(s,a)),a.push(l.texture)}}}s&&!p||(d.Logger.verbose("TextureManager",`Deleting texture of size ${l.width}x${l.height}`),this.glContext.deleteTexture(l.texture))}toTensorData(l,p){switch(l){case"int16":return p instanceof Int16Array?p:Int16Array.from(p);case"int32":return p instanceof Int32Array?p:Int32Array.from(p);case"int8":return p instanceof Int8Array?p:Int8Array.from(p);case"uint16":return p instanceof Uint16Array?p:Uint16Array.from(p);case"uint32":return p instanceof Uint32Array?p:Uint32Array.from(p);case"uint8":case"bool":return p instanceof Uint8Array?p:Uint8Array.from(p);case"float32":return p instanceof Float32Array?p:Float32Array.from(p);case"float64":return p instanceof Float64Array?p:Float64Array.from(p);default:throw new Error(`TensorData type ${l} is not supported`)}}toTextureData(l,p){if(p)return p instanceof Float32Array?p:new 
Float32Array(p)}toEncoderType(l){return"float"}clearActiveTextures(){this.glContext.clearActiveTextures()}}},2039:(y,n)=>{var u;Object.defineProperty(n,"__esModule",{value:!0}),n.TextureType=void 0,(u=n.TextureType||(n.TextureType={}))[u.unpacked=0]="unpacked",u[u.unpackedReversed=1]="unpackedReversed",u[u.packed=2]="packed",u[u.downloadUint8AsFloat=3]="downloadUint8AsFloat",u[u.packedLastDimension=4]="packedLastDimension"},9390:(y,n,u)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.getGlChannels=n.getCoordsDataType=n.getSqueezedParams=n.squeezeInputShape=n.generateShaderFuncNameFromInputSamplerNameAtOutCoords=n.generateShaderFuncNameFromInputSamplerName=n.repeatedTry=n.getPackedShape=void 0;const d=u(2517);n.getPackedShape=function(l){const p=l.length;return l.slice(0,p-1).concat(l[p-1]/4)},n.repeatedTry=async function(l,p=h=>0,s){return new Promise((h,f)=>{let a=0;const o=()=>{if(l())return void h();a++;const t=p(a);s!=null&&a>=s?f():setTimeout(o,t)};o()})},n.generateShaderFuncNameFromInputSamplerName=function(l){return(0,d.assert)(l!==void 0&&l.length!==0,()=>"empty string found for sampler name"),"get"+l.charAt(0).toUpperCase()+l.slice(1)},n.generateShaderFuncNameFromInputSamplerNameAtOutCoords=function(l){return(0,d.assert)(l!==void 0&&l.length!==0,()=>"empty string found for sampler name"),"get"+l.charAt(0).toUpperCase()+l.slice(1)+"AtOutCoords"},n.squeezeInputShape=function(l,p){let s=JSON.parse(JSON.stringify(l));return s=p,s},n.getSqueezedParams=function(l,p){return p.map(s=>l[s]).join(", ")},n.getCoordsDataType=function(l){if(l<=1)return"int";if(l===2)return"ivec2";if(l===3)return"ivec3";if(l===4)return"ivec4";if(l===5)return"ivec5";if(l===6)return"ivec6";throw Error(`GPU for rank ${l} is not yet supported`)},n.getGlChannels=function(l=6){return["x","y","z","w","u","v"].slice(0,l)}},7305:(y,n,u)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.createNewWebGLContext=n.createWebGLContext=void 0;const d=u(6231),l=u(1713),p={};function s(h){const f=function(){if(typeof document>"u"){if(typeof OffscreenCanvas>"u")throw new TypeError("failed to create canvas: OffscreenCanvas is not supported");return new OffscreenCanvas(1,1)}const t=document.createElement("canvas");return t.width=1,t.height=1,t}();let a;const o={alpha:!1,depth:!1,antialias:!1,stencil:!1,preserveDrawingBuffer:!1,premultipliedAlpha:!1,failIfMajorPerformanceCaveat:!1};if((!h||h==="webgl2")&&(a=f.getContext("webgl2",o),a))try{return new l.WebGLContext(a,2)}catch(t){d.Logger.warning("GlContextFactory",`failed to create WebGLContext using contextId 'webgl2'. Error: ${t}`)}if((!h||h==="webgl")&&(a=f.getContext("webgl",o)||f.getContext("experimental-webgl",o),a))try{return new l.WebGLContext(a,1)}catch(t){d.Logger.warning("GlContextFactory",`failed to create WebGLContext using contextId 'webgl' or 'experimental-webgl'. 
Error: ${t}`)}throw new Error("WebGL is not supported")}n.createWebGLContext=function h(f){let a;f&&f!=="webgl2"||!("webgl2"in p)?f&&f!=="webgl"||!("webgl"in p)||(a=p.webgl):a=p.webgl2,a=a||s(f),f=f||a.version===1?"webgl":"webgl2";const o=a.gl;return p[f]=a,o.isContextLost()?(delete p[f],h(f)):(o.disable(o.DEPTH_TEST),o.disable(o.STENCIL_TEST),o.disable(o.BLEND),o.disable(o.DITHER),o.disable(o.POLYGON_OFFSET_FILL),o.disable(o.SAMPLE_COVERAGE),o.enable(o.SCISSOR_TEST),o.enable(o.CULL_FACE),o.cullFace(o.BACK),a)},n.createNewWebGLContext=s},1713:function(y,n,u){var d=this&&this.__createBinding||(Object.create?function(o,t,e,r){r===void 0&&(r=e);var i=Object.getOwnPropertyDescriptor(t,e);i&&!("get"in i?!t.__esModule:i.writable||i.configurable)||(i={enumerable:!0,get:function(){return t[e]}}),Object.defineProperty(o,r,i)}:function(o,t,e,r){r===void 0&&(r=e),o[r]=t[e]}),l=this&&this.__setModuleDefault||(Object.create?function(o,t){Object.defineProperty(o,"default",{enumerable:!0,value:t})}:function(o,t){o.default=t}),p=this&&this.__importStar||function(o){if(o&&o.__esModule)return o;var t={};if(o!=null)for(var e in o)e!=="default"&&Object.prototype.hasOwnProperty.call(o,e)&&d(t,o,e);return l(t,o),t};Object.defineProperty(n,"__esModule",{value:!0}),n.WebGLContext=n.linearSearchLastTrue=void 0;const s=u(1670),h=p(u(7769)),f=u(9390);function a(o){let t=0;for(;tthis.isTimerResultAvailable(o)),this.getTimerResult(o)}async createAndWaitForFence(){const o=this.createFence(this.gl);return this.pollFence(o)}createFence(o){let t;const e=o,r=e.fenceSync(e.SYNC_GPU_COMMANDS_COMPLETE,0);return o.flush(),t=r===null?()=>!0:()=>{const i=e.clientWaitSync(r,0,0);return i===e.ALREADY_SIGNALED||i===e.CONDITION_SATISFIED},{query:r,isFencePassed:t}}async pollFence(o){return new Promise(t=>{this.addItemToPoll(()=>o.isFencePassed(),()=>t())})}pollItems(){const o=a(this.itemsToPoll.map(t=>t.isDoneFn));for(let t=0;t<=o;++t){const{resolveFn:e}=this.itemsToPoll[t];e()}this.itemsToPoll=this.itemsToPoll.slice(o+1)}async addItemToPoll(o,t){this.itemsToPoll.push({isDoneFn:o,resolveFn:t}),this.itemsToPoll.length>1||await(0,f.repeatedTry)(()=>(this.pollItems(),this.itemsToPoll.length===0))}}},1036:(y,n,u)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.ExecutionPlan=void 0;const d=u(6231);class l{constructor(s,h){this.op=s,this.node=h}}n.ExecutionPlan=class{constructor(p,s,h){this.graph=p,this.profiler=h,this.initialize(s)}initialize(p){this.profiler.event("session","ExecutionPlan.initialize",()=>{const s=this.graph.getNodes();if(s.length!==p.length)throw new Error("The size of nodes and OPs do not match.");this._ops=p.map((h,f)=>new l(h,s[f])),this.reset(),this._starter=[],this._ops.forEach((h,f)=>{let a=!0;for(const o of h.node.inputs)if(!this._values[o]&&this.graph.getInputIndices().indexOf(o)===-1){a=!1;break}a&&this._starter.push(f)})})}reset(){this._values=this.graph.getValues().map(p=>p.tensor)}async execute(p,s){return this.profiler.event("session","ExecutionPlan.execute",async()=>{this.reset();const h=p.createInferenceHandler(),f=this.graph.getInputIndices();if(s.length!==f.length)throw new Error(`number of input tensors don't match the number of inputs to the model: actual: ${s.length} expected: ${f.length}`);s.forEach((i,c)=>{const g=f[c];this._values[g]=i});const a=this._starter.slice(0),o=this.graph.getValues(),t=this.graph.getNodes();let e=0;for(;ethis._values[w]);if(g.indexOf(void 0)!==-1)throw new Error(`unresolved input detected: op: ${c.node}`);const m=g;d.Logger.verbose("ExecPlan",`Runing 
op:${c.node.name} (${m.map((w,v)=>`'${c.node.inputs[v]}': ${w.type}[${w.dims.join(",")}]`).join(", ")})`);const b=await this.profiler.event("node",c.node.name,async()=>c.op.impl(h,m,c.op.context));if(b.length!==c.node.outputs.length)throw new Error("the size of output does not match model definition.");b.forEach((w,v)=>{const S=c.node.outputs[v];if(this._values[S])throw new Error(`output [${S}] already has value: op:${c.node.name}`);this._values[S]=w});const _=new Set;b.forEach((w,v)=>{const S=c.node.outputs[v];for(const O of o[S].to){const E=t[O];let T=!0;for(const I of E.inputs)if(!this._values[I]){T=!1;break}T&&_.add(O)}}),a.push(..._)}const r=[];for(let i=0;i{Object.defineProperty(n,"__esModule",{value:!0}),n.Graph=void 0;const d=u(1446),l=u(7778),p=u(9395),s=u(9162),h=u(2517);var f=p.onnxruntime.experimental.fbs;n.Graph={from:(e,r)=>new t(e,r)};class a{constructor(r){this._from=void 0,this._to=[],this.tensor=void 0,this.type=void 0,r&&(this.type=h.ProtoUtil.tensorValueTypeFromProto(r.type.tensorType))}get from(){return this._from}get to(){return this._to}}class o{constructor(r,i){r instanceof d.onnx.NodeProto?(this.name=r.name,this.opType=r.opType,this.attributes=new l.Attribute(r.attribute)):r instanceof f.Node&&(this.name=i??r.name(),this.opType=r.opType(),this.attributes=new l.Attribute(h.ProtoUtil.tensorAttributesFromORTFormat(r))),this.inputs=[],this.outputs=[],this.executeNode=!0}}class t{constructor(r,i){if(!r)throw new TypeError("graph is empty");this.buildGraph(r),this.transformGraph(i),this.checkIsAcyclic()}getInputIndices(){return this._allInputIndices}getInputNames(){return this._allInputNames}getOutputIndices(){return this._allOutputIndices}getOutputNames(){return this._allOutputNames}getValues(){return this._allData}getNodes(){return this._nodes}buildGraph(r){if(r instanceof d.onnx.GraphProto)this.buildGraphFromOnnxFormat(r);else{if(!(r instanceof f.Graph))throw new TypeError("Graph type is not supported.");this.buildGraphFromOrtFormat(r)}}buildGraphFromOnnxFormat(r){const i=new Map;this._allData=[],this._allInputIndices=[],this._allInputNames=[],this._allOutputIndices=[],this._allOutputNames=[],this._nodes=[];const c=new Map;if(!r.input)throw new Error("missing information in graph: input");const g=[];for(const m of r.input){if(i.has(m.name))throw new Error(`duplicated input name: ${m.name}`);const b=this._allData.push(new a(m))-1;i.set(m.name,b),g.push(m.name)}if(!r.initializer)throw new Error("missing information in graph: initializer");for(const m of r.initializer){let b=i.get(m.name);if(b===void 0){const _=new a;_.type={shape:{dims:h.ProtoUtil.tensorDimsFromProto(m.dims)},tensorType:h.ProtoUtil.tensorDataTypeFromProto(m.dataType)},b=this._allData.push(_)-1,i.set(m.name,b)}this._allData[b]._from=-1,this._allData[b].tensor=s.Tensor.fromProto(m)}for(let m=0;m{this._allData[g]._to.forEach(m=>{r.add(m)})});const i=Array.from(r),c=new Array(this._nodes.length).fill("white");for(;i.length>0;){const g=i.pop();c[g]==="gray"?c[g]="black":(i.push(g),c[g]="gray",this._nodes[g].outputs.forEach(m=>{const b=this._allData[m];if(b.tensor!==void 0)throw new Error("node outputs should not be initialized");if(b._from!==g)throw new Error("from property of the Value object doesn't match index of Node being processed");b._to.forEach(_=>{if(c[_]==="gray")throw new Error("model graph is 
cyclic");c[_]==="white"&&i.push(_)})}))}}transformGraph(r){this.removeAllIdentityNodes(),this.removeAllDropoutNodes(),this.fuseConvActivationNodes(),r&&r.transformGraph(this),this.finalizeGraph()}finalizeGraph(){let r=0;for(let i=0;i0&&(this._nodes[i].inputs.forEach(c=>{const g=this._allData[c]._to.indexOf(i+r);g!==-1&&(this._allData[c]._to[g]=i)}),this._nodes[i].outputs.forEach(c=>{this._allData[c]._from&&this._allData[c]._from===i+r&&(this._allData[c]._from=i)})):(r++,this._nodes[i].outputs.forEach(c=>{this._allData[c]._from=-2}),this._nodes.splice(i,1),i--);r=0;for(let i=0;i0){let c=-1;this._allData[i].from!==void 0&&this._allData[i].from!==-1?(c=this._nodes[this._allData[i].from].outputs.indexOf(i+r),c!==-1&&(this._nodes[this._allData[i].from].outputs[c]=i)):(c=this._allInputIndices.indexOf(i+r),c!==-1&&(this._allInputIndices[c]=i)),this._allData[i].to.forEach(g=>{c=this._nodes[g].inputs.indexOf(i+r),c!==-1&&(this._nodes[g].inputs[c]=i)}),this._allData[i].to.length===0&&(c=this._allOutputIndices.indexOf(i+r),c!==-1&&(this._allOutputIndices[c]=i))}}else r++,this._allData.splice(i,1),i--}deleteNode(r){const i=this._nodes[r];if(i.outputs.length>1){for(let w=1;w0)throw new Error("Node deletion with more than one output connected to other nodes is not supported. ")}i.executeNode=!1;const c=i.inputs[0],g=i.outputs[0],m=this._allData[g].to,b=this._allData[c].to.indexOf(r);if(b===-1)throw new Error("The Value object doesn't have the current Node in it's 'to' property ");this._allData[c].to.splice(b,1),this._allData[g]._to=[];const _=this._allOutputIndices.indexOf(g);if(_!==-1&&(this._allOutputIndices[_]=c),m&&m.length>0)for(const w of m){const v=this._nodes[w].inputs.indexOf(g);if(v===-1)throw new Error("The Node object doesn't have the output Value in it's 'inputs' property ");this._nodes[w].inputs[v]=c,this._allData[c].to.push(w)}}removeAllDropoutNodes(){let r=0;for(const i of this._nodes){if(i.opType==="Dropout"){if(i.inputs.length!==1)throw new Error("Dropout nodes should only contain one input. 
");if(i.outputs.length!==1&&i.outputs.length!==2)throw new Error("Dropout nodes should contain either 1 or 2 output(s)");if(i.outputs.length===2&&this._allData[i.outputs[1]]._to.length!==0)throw new Error("Dropout nodes's second output should not be referenced by other nodes");this.deleteNode(r)}r++}}removeAllIdentityNodes(){let r=0;for(const i of this._nodes)i.opType==="Identity"&&this.deleteNode(r),r++}isActivation(r){switch(r.opType){case"Relu":case"Sigmoid":case"Clip":return!0;default:return!1}}fuseConvActivationNodes(){for(const r of this._nodes)if(r.opType==="Conv"){const i=this._allData[r.outputs[0]]._to;if(i.length===1&&this.isActivation(this._nodes[i[0]])){const c=this._nodes[i[0]];if(c.opType==="Clip")if(c.inputs.length===1)try{r.attributes.set("activation_params","floats",[c.attributes.getFloat("min"),c.attributes.getFloat("max")])}catch{r.attributes.set("activation_params","floats",[h.MIN_CLIP,h.MAX_CLIP])}else{if(!(c.inputs.length>=3&&this._allData[c.inputs[1]].tensor!==void 0&&this._allData[c.inputs[2]].tensor!==void 0))continue;r.attributes.set("activation_params","floats",[this._allData[c.inputs[1]].tensor.floatData[0],this._allData[c.inputs[2]].tensor.floatData[0]])}r.attributes.set("activation","string",c.opType),this.deleteNode(i[0])}}}}},6231:(y,n)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.now=n.Profiler=n.Logger=void 0;const u={verbose:1e3,info:2e3,warning:4e3,error:5e3,fatal:6e3},d={none:new class{log(o,t,e){}},console:new class{log(o,t,e){console.log(`${this.color(o)} ${e?"\x1B[35m"+e+"\x1B[0m ":""}${t}`)}color(o){switch(o){case"verbose":return"\x1B[34;40mv\x1B[0m";case"info":return"\x1B[32mi\x1B[0m";case"warning":return"\x1B[30;43mw\x1B[0m";case"error":return"\x1B[31;40me\x1B[0m";case"fatal":return"\x1B[101mf\x1B[0m";default:throw new Error(`unsupported severity: ${o}`)}}}},l={provider:"console",minimalSeverity:"warning",logDateTime:!0,logSourceLocation:!1};let p={"":l};function s(o,t,e,r){if(t===void 0)return i=o,{verbose:s.verbose.bind(null,i),info:s.info.bind(null,i),warning:s.warning.bind(null,i),error:s.error.bind(null,i),fatal:s.fatal.bind(null,i)};if(e===void 0)h(o,t);else if(typeof e=="number"&&r===void 0)h(o,t);else if(typeof e=="string"&&r===void 0)h(o,e,0,t);else{if(typeof e!="string"||typeof r!="number")throw new TypeError("input is valid");h(o,e,0,t)}var i}function h(o,t,e,r){const i=p[r||""]||p[""];u[o]{g.then(async _=>{i&&await i.end(),m(_)},async _=>{i&&await i.end(),b(_)})});if(!c&&i){const m=i.end();if(m&&typeof m.then=="function")return new Promise((b,_)=>{m.then(()=>{b(g)},w=>{_(w)})})}return g}begin(o,t,e){if(!this._started)throw new Error("profiler is not started yet");if(e===void 0){const r=(0,n.now)();return this.flush(r),new f(o,t,r,i=>this.endSync(i))}{const r=e.beginTimer();return new f(o,t,0,async i=>this.end(i),r,e)}}async end(o){const t=await o.checkTimer();this._timingEvents.length=this._flushBatchSize||o-this._flushTime>=this._flushIntervalInMilliseconds){for(const t=this._flushPointer;this._flushPointerperformance.now():Date.now},2644:(y,n,u)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.Model=void 0;const d=u(5686),l=u(1446),p=u(7070),s=u(9395),h=u(2517);var f=s.onnxruntime.experimental.fbs;n.Model=class{constructor(){}load(a,o,t){if(!t)try{return void this.loadFromOnnxFormat(a,o)}catch(e){if(t!==void 0)throw e}this.loadFromOrtFormat(a,o)}loadFromOnnxFormat(a,o){const t=l.onnx.ModelProto.decode(a);if(h.LongUtil.longToNumber(t.irVersion)<3)throw new Error("only support ONNX model with 
IR_VERSION>=3");this._opsets=t.opsetImport.map(e=>({domain:e.domain,version:h.LongUtil.longToNumber(e.version)})),this._graph=p.Graph.from(t.graph,o)}loadFromOrtFormat(a,o){const t=new d.flatbuffers.ByteBuffer(a),e=f.InferenceSession.getRootAsInferenceSession(t).model();if(h.LongUtil.longToNumber(e.irVersion())<3)throw new Error("only support ONNX model with IR_VERSION>=3");this._opsets=[];for(let r=0;r{Object.defineProperty(n,"__esModule",{value:!0}),n.FLOAT_TYPES=n.INT_TYPES=n.NUMBER_TYPES=void 0,n.NUMBER_TYPES=["float32","float64","int32","int16","int8","uint16","uint32","uint8"],n.INT_TYPES=["int32","int16","int8","uint16","uint32","uint8"],n.FLOAT_TYPES=["float32","float64"]},1047:(y,n)=>{function u(d,l){if(l.endsWith("+")){const p=Number.parseInt(l.substring(0,l.length-1),10);return!isNaN(p)&&p<=d}if(l.split("-").length===2){const p=l.split("-"),s=Number.parseInt(p[0],10),h=Number.parseInt(p[1],10);return!isNaN(s)&&!isNaN(h)&&s<=d&&d<=h}return Number.parseInt(l,10)===d}Object.defineProperty(n,"__esModule",{value:!0}),n.resolveOperator=void 0,n.resolveOperator=function(d,l,p){for(const s of p){const h=s[0],f=s[1],a=s[2],o=s[3],t=s[4];if(d.opType===h){for(const e of l)if((e.domain===f||e.domain==="ai.onnx"&&f==="")&&u(e.version,a))return{opImpl:o,opInit:t}}}throw new TypeError(`cannot resolve operator '${d.opType}' with opsets: ${l.map(s=>`${s.domain||"ai.onnx"} v${s.version}`).join(", ")}`)}},9395:(y,n,u)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.onnxruntime=void 0;const d=u(5686);var l,p;l=n.onnxruntime||(n.onnxruntime={}),function(s){(function(h){h[h.UNDEFINED=0]="UNDEFINED",h[h.FLOAT=1]="FLOAT",h[h.INT=2]="INT",h[h.STRING=3]="STRING",h[h.TENSOR=4]="TENSOR",h[h.GRAPH=5]="GRAPH",h[h.FLOATS=6]="FLOATS",h[h.INTS=7]="INTS",h[h.STRINGS=8]="STRINGS",h[h.TENSORS=9]="TENSORS",h[h.GRAPHS=10]="GRAPHS",h[h.SPARSE_TENSOR=11]="SPARSE_TENSOR",h[h.SPARSE_TENSORS=12]="SPARSE_TENSORS"})(s.AttributeType||(s.AttributeType={}))}((p=l.experimental||(l.experimental={})).fbs||(p.fbs={})),function(s){(function(h){(function(f){(function(a){a[a.UNKNOWN=0]="UNKNOWN",a[a.VALUE=1]="VALUE",a[a.PARAM=2]="PARAM"})(f.DimensionValueType||(f.DimensionValueType={}))})(h.fbs||(h.fbs={}))})(s.experimental||(s.experimental={}))}(n.onnxruntime||(n.onnxruntime={})),function(s){(function(h){(function(f){(function(a){a[a.UNDEFINED=0]="UNDEFINED",a[a.FLOAT=1]="FLOAT",a[a.UINT8=2]="UINT8",a[a.INT8=3]="INT8",a[a.UINT16=4]="UINT16",a[a.INT16=5]="INT16",a[a.INT32=6]="INT32",a[a.INT64=7]="INT64",a[a.STRING=8]="STRING",a[a.BOOL=9]="BOOL",a[a.FLOAT16=10]="FLOAT16",a[a.DOUBLE=11]="DOUBLE",a[a.UINT32=12]="UINT32",a[a.UINT64=13]="UINT64",a[a.COMPLEX64=14]="COMPLEX64",a[a.COMPLEX128=15]="COMPLEX128",a[a.BFLOAT16=16]="BFLOAT16"})(f.TensorDataType||(f.TensorDataType={}))})(h.fbs||(h.fbs={}))})(s.experimental||(s.experimental={}))}(n.onnxruntime||(n.onnxruntime={})),function(s){(function(h){(function(f){(function(a){a[a.Primitive=0]="Primitive",a[a.Fused=1]="Fused"})(f.NodeType||(f.NodeType={}))})(h.fbs||(h.fbs={}))})(s.experimental||(s.experimental={}))}(n.onnxruntime||(n.onnxruntime={})),function(s){(function(h){(function(f){(function(a){a[a.NONE=0]="NONE",a[a.tensor_type=1]="tensor_type",a[a.sequence_type=2]="sequence_type",a[a.map_type=3]="map_type"})(f.TypeInfoValue||(f.TypeInfoValue={}))})(h.fbs||(h.fbs={}))})(s.experimental||(s.experimental={}))}(n.onnxruntime||(n.onnxruntime={})),function(s){(function(h){(function(f){class a{constructor(){this.bb=null,this.bb_pos=0}__init(t,e){return this.bb_pos=t,this.bb=e,this}static 
getRootAsShape(t,e){return(e||new a).__init(t.readInt32(t.position())+t.position(),t)}static getSizePrefixedRootAsShape(t,e){return t.setPosition(t.position()+d.flatbuffers.SIZE_PREFIX_LENGTH),(e||new a).__init(t.readInt32(t.position())+t.position(),t)}dim(t,e){let r=this.bb.__offset(this.bb_pos,4);return r?(e||new s.experimental.fbs.Dimension).__init(this.bb.__indirect(this.bb.__vector(this.bb_pos+r)+4*t),this.bb):null}dimLength(){let t=this.bb.__offset(this.bb_pos,4);return t?this.bb.__vector_len(this.bb_pos+t):0}static startShape(t){t.startObject(1)}static addDim(t,e){t.addFieldOffset(0,e,0)}static createDimVector(t,e){t.startVector(4,e.length,4);for(let r=e.length-1;r>=0;r--)t.addOffset(e[r]);return t.endVector()}static startDimVector(t,e){t.startVector(4,e,4)}static endShape(t){return t.endObject()}static createShape(t,e){return a.startShape(t),a.addDim(t,e),a.endShape(t)}}f.Shape=a})(h.fbs||(h.fbs={}))})(s.experimental||(s.experimental={}))}(n.onnxruntime||(n.onnxruntime={})),function(s){(function(h){(function(f){class a{constructor(){this.bb=null,this.bb_pos=0}__init(t,e){return this.bb_pos=t,this.bb=e,this}static getRootAsDimension(t,e){return(e||new a).__init(t.readInt32(t.position())+t.position(),t)}static getSizePrefixedRootAsDimension(t,e){return t.setPosition(t.position()+d.flatbuffers.SIZE_PREFIX_LENGTH),(e||new a).__init(t.readInt32(t.position())+t.position(),t)}value(t){let e=this.bb.__offset(this.bb_pos,4);return e?(t||new s.experimental.fbs.DimensionValue).__init(this.bb.__indirect(this.bb_pos+e),this.bb):null}denotation(t){let e=this.bb.__offset(this.bb_pos,6);return e?this.bb.__string(this.bb_pos+e,t):null}static startDimension(t){t.startObject(2)}static addValue(t,e){t.addFieldOffset(0,e,0)}static addDenotation(t,e){t.addFieldOffset(1,e,0)}static endDimension(t){return t.endObject()}static createDimension(t,e,r){return a.startDimension(t),a.addValue(t,e),a.addDenotation(t,r),a.endDimension(t)}}f.Dimension=a})(h.fbs||(h.fbs={}))})(s.experimental||(s.experimental={}))}(n.onnxruntime||(n.onnxruntime={})),function(s){(function(h){(function(f){class a{constructor(){this.bb=null,this.bb_pos=0}__init(t,e){return this.bb_pos=t,this.bb=e,this}static getRootAsDimensionValue(t,e){return(e||new a).__init(t.readInt32(t.position())+t.position(),t)}static getSizePrefixedRootAsDimensionValue(t,e){return t.setPosition(t.position()+d.flatbuffers.SIZE_PREFIX_LENGTH),(e||new a).__init(t.readInt32(t.position())+t.position(),t)}dimType(){let t=this.bb.__offset(this.bb_pos,4);return t?this.bb.readInt8(this.bb_pos+t):s.experimental.fbs.DimensionValueType.UNKNOWN}dimValue(){let t=this.bb.__offset(this.bb_pos,6);return t?this.bb.readInt64(this.bb_pos+t):this.bb.createLong(0,0)}dimParam(t){let e=this.bb.__offset(this.bb_pos,8);return e?this.bb.__string(this.bb_pos+e,t):null}static startDimensionValue(t){t.startObject(3)}static addDimType(t,e){t.addFieldInt8(0,e,s.experimental.fbs.DimensionValueType.UNKNOWN)}static addDimValue(t,e){t.addFieldInt64(1,e,t.createLong(0,0))}static addDimParam(t,e){t.addFieldOffset(2,e,0)}static endDimensionValue(t){return t.endObject()}static createDimensionValue(t,e,r,i){return a.startDimensionValue(t),a.addDimType(t,e),a.addDimValue(t,r),a.addDimParam(t,i),a.endDimensionValue(t)}}f.DimensionValue=a})(h.fbs||(h.fbs={}))})(s.experimental||(s.experimental={}))}(n.onnxruntime||(n.onnxruntime={})),function(s){(function(h){(function(f){class a{constructor(){this.bb=null,this.bb_pos=0}__init(t,e){return this.bb_pos=t,this.bb=e,this}static 
getRootAsTensorTypeAndShape(t,e){return(e||new a).__init(t.readInt32(t.position())+t.position(),t)}static getSizePrefixedRootAsTensorTypeAndShape(t,e){return t.setPosition(t.position()+d.flatbuffers.SIZE_PREFIX_LENGTH),(e||new a).__init(t.readInt32(t.position())+t.position(),t)}elemType(){let t=this.bb.__offset(this.bb_pos,4);return t?this.bb.readInt32(this.bb_pos+t):s.experimental.fbs.TensorDataType.UNDEFINED}shape(t){let e=this.bb.__offset(this.bb_pos,6);return e?(t||new s.experimental.fbs.Shape).__init(this.bb.__indirect(this.bb_pos+e),this.bb):null}static startTensorTypeAndShape(t){t.startObject(2)}static addElemType(t,e){t.addFieldInt32(0,e,s.experimental.fbs.TensorDataType.UNDEFINED)}static addShape(t,e){t.addFieldOffset(1,e,0)}static endTensorTypeAndShape(t){return t.endObject()}static createTensorTypeAndShape(t,e,r){return a.startTensorTypeAndShape(t),a.addElemType(t,e),a.addShape(t,r),a.endTensorTypeAndShape(t)}}f.TensorTypeAndShape=a})(h.fbs||(h.fbs={}))})(s.experimental||(s.experimental={}))}(n.onnxruntime||(n.onnxruntime={})),function(s){(function(h){(function(f){class a{constructor(){this.bb=null,this.bb_pos=0}__init(t,e){return this.bb_pos=t,this.bb=e,this}static getRootAsMapType(t,e){return(e||new a).__init(t.readInt32(t.position())+t.position(),t)}static getSizePrefixedRootAsMapType(t,e){return t.setPosition(t.position()+d.flatbuffers.SIZE_PREFIX_LENGTH),(e||new a).__init(t.readInt32(t.position())+t.position(),t)}keyType(){let t=this.bb.__offset(this.bb_pos,4);return t?this.bb.readInt32(this.bb_pos+t):s.experimental.fbs.TensorDataType.UNDEFINED}valueType(t){let e=this.bb.__offset(this.bb_pos,6);return e?(t||new s.experimental.fbs.TypeInfo).__init(this.bb.__indirect(this.bb_pos+e),this.bb):null}static startMapType(t){t.startObject(2)}static addKeyType(t,e){t.addFieldInt32(0,e,s.experimental.fbs.TensorDataType.UNDEFINED)}static addValueType(t,e){t.addFieldOffset(1,e,0)}static endMapType(t){return t.endObject()}static createMapType(t,e,r){return a.startMapType(t),a.addKeyType(t,e),a.addValueType(t,r),a.endMapType(t)}}f.MapType=a})(h.fbs||(h.fbs={}))})(s.experimental||(s.experimental={}))}(n.onnxruntime||(n.onnxruntime={})),function(s){(function(h){(function(f){class a{constructor(){this.bb=null,this.bb_pos=0}__init(t,e){return this.bb_pos=t,this.bb=e,this}static getRootAsSequenceType(t,e){return(e||new a).__init(t.readInt32(t.position())+t.position(),t)}static getSizePrefixedRootAsSequenceType(t,e){return t.setPosition(t.position()+d.flatbuffers.SIZE_PREFIX_LENGTH),(e||new a).__init(t.readInt32(t.position())+t.position(),t)}elemType(t){let e=this.bb.__offset(this.bb_pos,4);return e?(t||new s.experimental.fbs.TypeInfo).__init(this.bb.__indirect(this.bb_pos+e),this.bb):null}static startSequenceType(t){t.startObject(1)}static addElemType(t,e){t.addFieldOffset(0,e,0)}static endSequenceType(t){return t.endObject()}static createSequenceType(t,e){return a.startSequenceType(t),a.addElemType(t,e),a.endSequenceType(t)}}f.SequenceType=a})(h.fbs||(h.fbs={}))})(s.experimental||(s.experimental={}))}(n.onnxruntime||(n.onnxruntime={})),function(s){(function(h){(h.fbs||(h.fbs={})).EdgeEnd=class{constructor(){this.bb=null,this.bb_pos=0}__init(f,a){return this.bb_pos=f,this.bb=a,this}nodeIndex(){return this.bb.readUint32(this.bb_pos)}srcArgIndex(){return this.bb.readInt32(this.bb_pos+4)}dstArgIndex(){return this.bb.readInt32(this.bb_pos+8)}static createEdgeEnd(f,a,o,t){return 
f.prep(4,12),f.writeInt32(t),f.writeInt32(o),f.writeInt32(a),f.offset()}}})(s.experimental||(s.experimental={}))}(n.onnxruntime||(n.onnxruntime={})),function(s){(function(h){(function(f){class a{constructor(){this.bb=null,this.bb_pos=0}__init(t,e){return this.bb_pos=t,this.bb=e,this}static getRootAsNodeEdge(t,e){return(e||new a).__init(t.readInt32(t.position())+t.position(),t)}static getSizePrefixedRootAsNodeEdge(t,e){return t.setPosition(t.position()+d.flatbuffers.SIZE_PREFIX_LENGTH),(e||new a).__init(t.readInt32(t.position())+t.position(),t)}nodeIndex(){let t=this.bb.__offset(this.bb_pos,4);return t?this.bb.readUint32(this.bb_pos+t):0}inputEdges(t,e){let r=this.bb.__offset(this.bb_pos,6);return r?(e||new s.experimental.fbs.EdgeEnd).__init(this.bb.__vector(this.bb_pos+r)+12*t,this.bb):null}inputEdgesLength(){let t=this.bb.__offset(this.bb_pos,6);return t?this.bb.__vector_len(this.bb_pos+t):0}outputEdges(t,e){let r=this.bb.__offset(this.bb_pos,8);return r?(e||new s.experimental.fbs.EdgeEnd).__init(this.bb.__vector(this.bb_pos+r)+12*t,this.bb):null}outputEdgesLength(){let t=this.bb.__offset(this.bb_pos,8);return t?this.bb.__vector_len(this.bb_pos+t):0}static startNodeEdge(t){t.startObject(3)}static addNodeIndex(t,e){t.addFieldInt32(0,e,0)}static addInputEdges(t,e){t.addFieldOffset(1,e,0)}static startInputEdgesVector(t,e){t.startVector(12,e,4)}static addOutputEdges(t,e){t.addFieldOffset(2,e,0)}static startOutputEdgesVector(t,e){t.startVector(12,e,4)}static endNodeEdge(t){return t.endObject()}static createNodeEdge(t,e,r,i){return a.startNodeEdge(t),a.addNodeIndex(t,e),a.addInputEdges(t,r),a.addOutputEdges(t,i),a.endNodeEdge(t)}}f.NodeEdge=a})(h.fbs||(h.fbs={}))})(s.experimental||(s.experimental={}))}(n.onnxruntime||(n.onnxruntime={})),function(s){(function(h){(function(f){class a{constructor(){this.bb=null,this.bb_pos=0}__init(t,e){return this.bb_pos=t,this.bb=e,this}static getRootAsNode(t,e){return(e||new a).__init(t.readInt32(t.position())+t.position(),t)}static getSizePrefixedRootAsNode(t,e){return t.setPosition(t.position()+d.flatbuffers.SIZE_PREFIX_LENGTH),(e||new a).__init(t.readInt32(t.position())+t.position(),t)}name(t){let e=this.bb.__offset(this.bb_pos,4);return e?this.bb.__string(this.bb_pos+e,t):null}docString(t){let e=this.bb.__offset(this.bb_pos,6);return e?this.bb.__string(this.bb_pos+e,t):null}domain(t){let e=this.bb.__offset(this.bb_pos,8);return e?this.bb.__string(this.bb_pos+e,t):null}sinceVersion(){let t=this.bb.__offset(this.bb_pos,10);return t?this.bb.readInt32(this.bb_pos+t):0}index(){let t=this.bb.__offset(this.bb_pos,12);return t?this.bb.readUint32(this.bb_pos+t):0}opType(t){let e=this.bb.__offset(this.bb_pos,14);return e?this.bb.__string(this.bb_pos+e,t):null}type(){let t=this.bb.__offset(this.bb_pos,16);return t?this.bb.readInt32(this.bb_pos+t):s.experimental.fbs.NodeType.Primitive}executionProviderType(t){let e=this.bb.__offset(this.bb_pos,18);return e?this.bb.__string(this.bb_pos+e,t):null}inputs(t,e){let r=this.bb.__offset(this.bb_pos,20);return r?this.bb.__string(this.bb.__vector(this.bb_pos+r)+4*t,e):null}inputsLength(){let t=this.bb.__offset(this.bb_pos,20);return t?this.bb.__vector_len(this.bb_pos+t):0}outputs(t,e){let r=this.bb.__offset(this.bb_pos,22);return r?this.bb.__string(this.bb.__vector(this.bb_pos+r)+4*t,e):null}outputsLength(){let t=this.bb.__offset(this.bb_pos,22);return t?this.bb.__vector_len(this.bb_pos+t):0}attributes(t,e){let r=this.bb.__offset(this.bb_pos,24);return r?(e||new 
s.experimental.fbs.Attribute).__init(this.bb.__indirect(this.bb.__vector(this.bb_pos+r)+4*t),this.bb):null}attributesLength(){let t=this.bb.__offset(this.bb_pos,24);return t?this.bb.__vector_len(this.bb_pos+t):0}inputArgCounts(t){let e=this.bb.__offset(this.bb_pos,26);return e?this.bb.readInt32(this.bb.__vector(this.bb_pos+e)+4*t):0}inputArgCountsLength(){let t=this.bb.__offset(this.bb_pos,26);return t?this.bb.__vector_len(this.bb_pos+t):0}inputArgCountsArray(){let t=this.bb.__offset(this.bb_pos,26);return t?new Int32Array(this.bb.bytes().buffer,this.bb.bytes().byteOffset+this.bb.__vector(this.bb_pos+t),this.bb.__vector_len(this.bb_pos+t)):null}implicitInputs(t,e){let r=this.bb.__offset(this.bb_pos,28);return r?this.bb.__string(this.bb.__vector(this.bb_pos+r)+4*t,e):null}implicitInputsLength(){let t=this.bb.__offset(this.bb_pos,28);return t?this.bb.__vector_len(this.bb_pos+t):0}static startNode(t){t.startObject(13)}static addName(t,e){t.addFieldOffset(0,e,0)}static addDocString(t,e){t.addFieldOffset(1,e,0)}static addDomain(t,e){t.addFieldOffset(2,e,0)}static addSinceVersion(t,e){t.addFieldInt32(3,e,0)}static addIndex(t,e){t.addFieldInt32(4,e,0)}static addOpType(t,e){t.addFieldOffset(5,e,0)}static addType(t,e){t.addFieldInt32(6,e,s.experimental.fbs.NodeType.Primitive)}static addExecutionProviderType(t,e){t.addFieldOffset(7,e,0)}static addInputs(t,e){t.addFieldOffset(8,e,0)}static createInputsVector(t,e){t.startVector(4,e.length,4);for(let r=e.length-1;r>=0;r--)t.addOffset(e[r]);return t.endVector()}static startInputsVector(t,e){t.startVector(4,e,4)}static addOutputs(t,e){t.addFieldOffset(9,e,0)}static createOutputsVector(t,e){t.startVector(4,e.length,4);for(let r=e.length-1;r>=0;r--)t.addOffset(e[r]);return t.endVector()}static startOutputsVector(t,e){t.startVector(4,e,4)}static addAttributes(t,e){t.addFieldOffset(10,e,0)}static createAttributesVector(t,e){t.startVector(4,e.length,4);for(let r=e.length-1;r>=0;r--)t.addOffset(e[r]);return t.endVector()}static startAttributesVector(t,e){t.startVector(4,e,4)}static addInputArgCounts(t,e){t.addFieldOffset(11,e,0)}static createInputArgCountsVector(t,e){t.startVector(4,e.length,4);for(let r=e.length-1;r>=0;r--)t.addInt32(e[r]);return t.endVector()}static startInputArgCountsVector(t,e){t.startVector(4,e,4)}static addImplicitInputs(t,e){t.addFieldOffset(12,e,0)}static createImplicitInputsVector(t,e){t.startVector(4,e.length,4);for(let r=e.length-1;r>=0;r--)t.addOffset(e[r]);return t.endVector()}static startImplicitInputsVector(t,e){t.startVector(4,e,4)}static endNode(t){return t.endObject()}static createNode(t,e,r,i,c,g,m,b,_,w,v,S,O,E){return a.startNode(t),a.addName(t,e),a.addDocString(t,r),a.addDomain(t,i),a.addSinceVersion(t,c),a.addIndex(t,g),a.addOpType(t,m),a.addType(t,b),a.addExecutionProviderType(t,_),a.addInputs(t,w),a.addOutputs(t,v),a.addAttributes(t,S),a.addInputArgCounts(t,O),a.addImplicitInputs(t,E),a.endNode(t)}}f.Node=a})(h.fbs||(h.fbs={}))})(s.experimental||(s.experimental={}))}(n.onnxruntime||(n.onnxruntime={})),function(s){(function(h){(function(f){class a{constructor(){this.bb=null,this.bb_pos=0}__init(t,e){return this.bb_pos=t,this.bb=e,this}static getRootAsValueInfo(t,e){return(e||new a).__init(t.readInt32(t.position())+t.position(),t)}static getSizePrefixedRootAsValueInfo(t,e){return t.setPosition(t.position()+d.flatbuffers.SIZE_PREFIX_LENGTH),(e||new a).__init(t.readInt32(t.position())+t.position(),t)}name(t){let e=this.bb.__offset(this.bb_pos,4);return e?this.bb.__string(this.bb_pos+e,t):null}docString(t){let 
e=this.bb.__offset(this.bb_pos,6);return e?this.bb.__string(this.bb_pos+e,t):null}type(t){let e=this.bb.__offset(this.bb_pos,8);return e?(t||new s.experimental.fbs.TypeInfo).__init(this.bb.__indirect(this.bb_pos+e),this.bb):null}static startValueInfo(t){t.startObject(3)}static addName(t,e){t.addFieldOffset(0,e,0)}static addDocString(t,e){t.addFieldOffset(1,e,0)}static addType(t,e){t.addFieldOffset(2,e,0)}static endValueInfo(t){return t.endObject()}static createValueInfo(t,e,r,i){return a.startValueInfo(t),a.addName(t,e),a.addDocString(t,r),a.addType(t,i),a.endValueInfo(t)}}f.ValueInfo=a})(h.fbs||(h.fbs={}))})(s.experimental||(s.experimental={}))}(n.onnxruntime||(n.onnxruntime={})),function(s){(function(h){(function(f){class a{constructor(){this.bb=null,this.bb_pos=0}__init(t,e){return this.bb_pos=t,this.bb=e,this}static getRootAsTypeInfo(t,e){return(e||new a).__init(t.readInt32(t.position())+t.position(),t)}static getSizePrefixedRootAsTypeInfo(t,e){return t.setPosition(t.position()+d.flatbuffers.SIZE_PREFIX_LENGTH),(e||new a).__init(t.readInt32(t.position())+t.position(),t)}denotation(t){let e=this.bb.__offset(this.bb_pos,4);return e?this.bb.__string(this.bb_pos+e,t):null}valueType(){let t=this.bb.__offset(this.bb_pos,6);return t?this.bb.readUint8(this.bb_pos+t):s.experimental.fbs.TypeInfoValue.NONE}value(t){let e=this.bb.__offset(this.bb_pos,8);return e?this.bb.__union(t,this.bb_pos+e):null}static startTypeInfo(t){t.startObject(3)}static addDenotation(t,e){t.addFieldOffset(0,e,0)}static addValueType(t,e){t.addFieldInt8(1,e,s.experimental.fbs.TypeInfoValue.NONE)}static addValue(t,e){t.addFieldOffset(2,e,0)}static endTypeInfo(t){return t.endObject()}static createTypeInfo(t,e,r,i){return a.startTypeInfo(t),a.addDenotation(t,e),a.addValueType(t,r),a.addValue(t,i),a.endTypeInfo(t)}}f.TypeInfo=a})(h.fbs||(h.fbs={}))})(s.experimental||(s.experimental={}))}(n.onnxruntime||(n.onnxruntime={})),function(s){(function(h){(function(f){class a{constructor(){this.bb=null,this.bb_pos=0}__init(t,e){return this.bb_pos=t,this.bb=e,this}static getRootAsOperatorSetId(t,e){return(e||new a).__init(t.readInt32(t.position())+t.position(),t)}static getSizePrefixedRootAsOperatorSetId(t,e){return t.setPosition(t.position()+d.flatbuffers.SIZE_PREFIX_LENGTH),(e||new a).__init(t.readInt32(t.position())+t.position(),t)}domain(t){let e=this.bb.__offset(this.bb_pos,4);return e?this.bb.__string(this.bb_pos+e,t):null}version(){let t=this.bb.__offset(this.bb_pos,6);return t?this.bb.readInt64(this.bb_pos+t):this.bb.createLong(0,0)}static startOperatorSetId(t){t.startObject(2)}static addDomain(t,e){t.addFieldOffset(0,e,0)}static addVersion(t,e){t.addFieldInt64(1,e,t.createLong(0,0))}static endOperatorSetId(t){return t.endObject()}static createOperatorSetId(t,e,r){return a.startOperatorSetId(t),a.addDomain(t,e),a.addVersion(t,r),a.endOperatorSetId(t)}}f.OperatorSetId=a})(h.fbs||(h.fbs={}))})(s.experimental||(s.experimental={}))}(n.onnxruntime||(n.onnxruntime={})),function(s){(function(h){(function(f){class a{constructor(){this.bb=null,this.bb_pos=0}__init(t,e){return this.bb_pos=t,this.bb=e,this}static getRootAsTensor(t,e){return(e||new a).__init(t.readInt32(t.position())+t.position(),t)}static getSizePrefixedRootAsTensor(t,e){return t.setPosition(t.position()+d.flatbuffers.SIZE_PREFIX_LENGTH),(e||new a).__init(t.readInt32(t.position())+t.position(),t)}name(t){let e=this.bb.__offset(this.bb_pos,4);return e?this.bb.__string(this.bb_pos+e,t):null}docString(t){let e=this.bb.__offset(this.bb_pos,6);return 
e?this.bb.__string(this.bb_pos+e,t):null}dims(t){let e=this.bb.__offset(this.bb_pos,8);return e?this.bb.readInt64(this.bb.__vector(this.bb_pos+e)+8*t):this.bb.createLong(0,0)}dimsLength(){let t=this.bb.__offset(this.bb_pos,8);return t?this.bb.__vector_len(this.bb_pos+t):0}dataType(){let t=this.bb.__offset(this.bb_pos,10);return t?this.bb.readInt32(this.bb_pos+t):s.experimental.fbs.TensorDataType.UNDEFINED}rawData(t){let e=this.bb.__offset(this.bb_pos,12);return e?this.bb.readUint8(this.bb.__vector(this.bb_pos+e)+t):0}rawDataLength(){let t=this.bb.__offset(this.bb_pos,12);return t?this.bb.__vector_len(this.bb_pos+t):0}rawDataArray(){let t=this.bb.__offset(this.bb_pos,12);return t?new Uint8Array(this.bb.bytes().buffer,this.bb.bytes().byteOffset+this.bb.__vector(this.bb_pos+t),this.bb.__vector_len(this.bb_pos+t)):null}stringData(t,e){let r=this.bb.__offset(this.bb_pos,14);return r?this.bb.__string(this.bb.__vector(this.bb_pos+r)+4*t,e):null}stringDataLength(){let t=this.bb.__offset(this.bb_pos,14);return t?this.bb.__vector_len(this.bb_pos+t):0}static startTensor(t){t.startObject(6)}static addName(t,e){t.addFieldOffset(0,e,0)}static addDocString(t,e){t.addFieldOffset(1,e,0)}static addDims(t,e){t.addFieldOffset(2,e,0)}static createDimsVector(t,e){t.startVector(8,e.length,8);for(let r=e.length-1;r>=0;r--)t.addInt64(e[r]);return t.endVector()}static startDimsVector(t,e){t.startVector(8,e,8)}static addDataType(t,e){t.addFieldInt32(3,e,s.experimental.fbs.TensorDataType.UNDEFINED)}static addRawData(t,e){t.addFieldOffset(4,e,0)}static createRawDataVector(t,e){t.startVector(1,e.length,1);for(let r=e.length-1;r>=0;r--)t.addInt8(e[r]);return t.endVector()}static startRawDataVector(t,e){t.startVector(1,e,1)}static addStringData(t,e){t.addFieldOffset(5,e,0)}static createStringDataVector(t,e){t.startVector(4,e.length,4);for(let r=e.length-1;r>=0;r--)t.addOffset(e[r]);return t.endVector()}static startStringDataVector(t,e){t.startVector(4,e,4)}static endTensor(t){return t.endObject()}static createTensor(t,e,r,i,c,g,m){return a.startTensor(t),a.addName(t,e),a.addDocString(t,r),a.addDims(t,i),a.addDataType(t,c),a.addRawData(t,g),a.addStringData(t,m),a.endTensor(t)}}f.Tensor=a})(h.fbs||(h.fbs={}))})(s.experimental||(s.experimental={}))}(n.onnxruntime||(n.onnxruntime={})),function(s){(function(h){(function(f){class a{constructor(){this.bb=null,this.bb_pos=0}__init(t,e){return this.bb_pos=t,this.bb=e,this}static getRootAsSparseTensor(t,e){return(e||new a).__init(t.readInt32(t.position())+t.position(),t)}static getSizePrefixedRootAsSparseTensor(t,e){return t.setPosition(t.position()+d.flatbuffers.SIZE_PREFIX_LENGTH),(e||new a).__init(t.readInt32(t.position())+t.position(),t)}values(t){let e=this.bb.__offset(this.bb_pos,4);return e?(t||new s.experimental.fbs.Tensor).__init(this.bb.__indirect(this.bb_pos+e),this.bb):null}indices(t){let e=this.bb.__offset(this.bb_pos,6);return e?(t||new s.experimental.fbs.Tensor).__init(this.bb.__indirect(this.bb_pos+e),this.bb):null}dims(t){let e=this.bb.__offset(this.bb_pos,8);return e?this.bb.readInt64(this.bb.__vector(this.bb_pos+e)+8*t):this.bb.createLong(0,0)}dimsLength(){let t=this.bb.__offset(this.bb_pos,8);return t?this.bb.__vector_len(this.bb_pos+t):0}static startSparseTensor(t){t.startObject(3)}static addValues(t,e){t.addFieldOffset(0,e,0)}static addIndices(t,e){t.addFieldOffset(1,e,0)}static addDims(t,e){t.addFieldOffset(2,e,0)}static createDimsVector(t,e){t.startVector(8,e.length,8);for(let r=e.length-1;r>=0;r--)t.addInt64(e[r]);return t.endVector()}static 
startDimsVector(t,e){t.startVector(8,e,8)}static endSparseTensor(t){return t.endObject()}static createSparseTensor(t,e,r,i){return a.startSparseTensor(t),a.addValues(t,e),a.addIndices(t,r),a.addDims(t,i),a.endSparseTensor(t)}}f.SparseTensor=a})(h.fbs||(h.fbs={}))})(s.experimental||(s.experimental={}))}(n.onnxruntime||(n.onnxruntime={})),function(s){(function(h){(function(f){class a{constructor(){this.bb=null,this.bb_pos=0}__init(t,e){return this.bb_pos=t,this.bb=e,this}static getRootAsAttribute(t,e){return(e||new a).__init(t.readInt32(t.position())+t.position(),t)}static getSizePrefixedRootAsAttribute(t,e){return t.setPosition(t.position()+d.flatbuffers.SIZE_PREFIX_LENGTH),(e||new a).__init(t.readInt32(t.position())+t.position(),t)}name(t){let e=this.bb.__offset(this.bb_pos,4);return e?this.bb.__string(this.bb_pos+e,t):null}docString(t){let e=this.bb.__offset(this.bb_pos,6);return e?this.bb.__string(this.bb_pos+e,t):null}type(){let t=this.bb.__offset(this.bb_pos,8);return t?this.bb.readInt32(this.bb_pos+t):s.experimental.fbs.AttributeType.UNDEFINED}f(){let t=this.bb.__offset(this.bb_pos,10);return t?this.bb.readFloat32(this.bb_pos+t):0}i(){let t=this.bb.__offset(this.bb_pos,12);return t?this.bb.readInt64(this.bb_pos+t):this.bb.createLong(0,0)}s(t){let e=this.bb.__offset(this.bb_pos,14);return e?this.bb.__string(this.bb_pos+e,t):null}t(t){let e=this.bb.__offset(this.bb_pos,16);return e?(t||new s.experimental.fbs.Tensor).__init(this.bb.__indirect(this.bb_pos+e),this.bb):null}g(t){let e=this.bb.__offset(this.bb_pos,18);return e?(t||new s.experimental.fbs.Graph).__init(this.bb.__indirect(this.bb_pos+e),this.bb):null}floats(t){let e=this.bb.__offset(this.bb_pos,20);return e?this.bb.readFloat32(this.bb.__vector(this.bb_pos+e)+4*t):0}floatsLength(){let t=this.bb.__offset(this.bb_pos,20);return t?this.bb.__vector_len(this.bb_pos+t):0}floatsArray(){let t=this.bb.__offset(this.bb_pos,20);return t?new Float32Array(this.bb.bytes().buffer,this.bb.bytes().byteOffset+this.bb.__vector(this.bb_pos+t),this.bb.__vector_len(this.bb_pos+t)):null}ints(t){let e=this.bb.__offset(this.bb_pos,22);return e?this.bb.readInt64(this.bb.__vector(this.bb_pos+e)+8*t):this.bb.createLong(0,0)}intsLength(){let t=this.bb.__offset(this.bb_pos,22);return t?this.bb.__vector_len(this.bb_pos+t):0}strings(t,e){let r=this.bb.__offset(this.bb_pos,24);return r?this.bb.__string(this.bb.__vector(this.bb_pos+r)+4*t,e):null}stringsLength(){let t=this.bb.__offset(this.bb_pos,24);return t?this.bb.__vector_len(this.bb_pos+t):0}tensors(t,e){let r=this.bb.__offset(this.bb_pos,26);return r?(e||new s.experimental.fbs.Tensor).__init(this.bb.__indirect(this.bb.__vector(this.bb_pos+r)+4*t),this.bb):null}tensorsLength(){let t=this.bb.__offset(this.bb_pos,26);return t?this.bb.__vector_len(this.bb_pos+t):0}graphs(t,e){let r=this.bb.__offset(this.bb_pos,28);return r?(e||new s.experimental.fbs.Graph).__init(this.bb.__indirect(this.bb.__vector(this.bb_pos+r)+4*t),this.bb):null}graphsLength(){let t=this.bb.__offset(this.bb_pos,28);return t?this.bb.__vector_len(this.bb_pos+t):0}static startAttribute(t){t.startObject(13)}static addName(t,e){t.addFieldOffset(0,e,0)}static addDocString(t,e){t.addFieldOffset(1,e,0)}static addType(t,e){t.addFieldInt32(2,e,s.experimental.fbs.AttributeType.UNDEFINED)}static addF(t,e){t.addFieldFloat32(3,e,0)}static addI(t,e){t.addFieldInt64(4,e,t.createLong(0,0))}static addS(t,e){t.addFieldOffset(5,e,0)}static addT(t,e){t.addFieldOffset(6,e,0)}static addG(t,e){t.addFieldOffset(7,e,0)}static 
addFloats(t,e){t.addFieldOffset(8,e,0)}static createFloatsVector(t,e){t.startVector(4,e.length,4);for(let r=e.length-1;r>=0;r--)t.addFloat32(e[r]);return t.endVector()}static startFloatsVector(t,e){t.startVector(4,e,4)}static addInts(t,e){t.addFieldOffset(9,e,0)}static createIntsVector(t,e){t.startVector(8,e.length,8);for(let r=e.length-1;r>=0;r--)t.addInt64(e[r]);return t.endVector()}static startIntsVector(t,e){t.startVector(8,e,8)}static addStrings(t,e){t.addFieldOffset(10,e,0)}static createStringsVector(t,e){t.startVector(4,e.length,4);for(let r=e.length-1;r>=0;r--)t.addOffset(e[r]);return t.endVector()}static startStringsVector(t,e){t.startVector(4,e,4)}static addTensors(t,e){t.addFieldOffset(11,e,0)}static createTensorsVector(t,e){t.startVector(4,e.length,4);for(let r=e.length-1;r>=0;r--)t.addOffset(e[r]);return t.endVector()}static startTensorsVector(t,e){t.startVector(4,e,4)}static addGraphs(t,e){t.addFieldOffset(12,e,0)}static createGraphsVector(t,e){t.startVector(4,e.length,4);for(let r=e.length-1;r>=0;r--)t.addOffset(e[r]);return t.endVector()}static startGraphsVector(t,e){t.startVector(4,e,4)}static endAttribute(t){return t.endObject()}static createAttribute(t,e,r,i,c,g,m,b,_,w,v,S,O,E){return a.startAttribute(t),a.addName(t,e),a.addDocString(t,r),a.addType(t,i),a.addF(t,c),a.addI(t,g),a.addS(t,m),a.addT(t,b),a.addG(t,_),a.addFloats(t,w),a.addInts(t,v),a.addStrings(t,S),a.addTensors(t,O),a.addGraphs(t,E),a.endAttribute(t)}}f.Attribute=a})(h.fbs||(h.fbs={}))})(s.experimental||(s.experimental={}))}(n.onnxruntime||(n.onnxruntime={})),function(s){(function(h){(function(f){class a{constructor(){this.bb=null,this.bb_pos=0}__init(t,e){return this.bb_pos=t,this.bb=e,this}static getRootAsGraph(t,e){return(e||new a).__init(t.readInt32(t.position())+t.position(),t)}static getSizePrefixedRootAsGraph(t,e){return t.setPosition(t.position()+d.flatbuffers.SIZE_PREFIX_LENGTH),(e||new a).__init(t.readInt32(t.position())+t.position(),t)}initializers(t,e){let r=this.bb.__offset(this.bb_pos,4);return r?(e||new s.experimental.fbs.Tensor).__init(this.bb.__indirect(this.bb.__vector(this.bb_pos+r)+4*t),this.bb):null}initializersLength(){let t=this.bb.__offset(this.bb_pos,4);return t?this.bb.__vector_len(this.bb_pos+t):0}nodeArgs(t,e){let r=this.bb.__offset(this.bb_pos,6);return r?(e||new s.experimental.fbs.ValueInfo).__init(this.bb.__indirect(this.bb.__vector(this.bb_pos+r)+4*t),this.bb):null}nodeArgsLength(){let t=this.bb.__offset(this.bb_pos,6);return t?this.bb.__vector_len(this.bb_pos+t):0}nodes(t,e){let r=this.bb.__offset(this.bb_pos,8);return r?(e||new s.experimental.fbs.Node).__init(this.bb.__indirect(this.bb.__vector(this.bb_pos+r)+4*t),this.bb):null}nodesLength(){let t=this.bb.__offset(this.bb_pos,8);return t?this.bb.__vector_len(this.bb_pos+t):0}maxNodeIndex(){let t=this.bb.__offset(this.bb_pos,10);return t?this.bb.readUint32(this.bb_pos+t):0}nodeEdges(t,e){let r=this.bb.__offset(this.bb_pos,12);return r?(e||new s.experimental.fbs.NodeEdge).__init(this.bb.__indirect(this.bb.__vector(this.bb_pos+r)+4*t),this.bb):null}nodeEdgesLength(){let t=this.bb.__offset(this.bb_pos,12);return t?this.bb.__vector_len(this.bb_pos+t):0}inputs(t,e){let r=this.bb.__offset(this.bb_pos,14);return r?this.bb.__string(this.bb.__vector(this.bb_pos+r)+4*t,e):null}inputsLength(){let t=this.bb.__offset(this.bb_pos,14);return t?this.bb.__vector_len(this.bb_pos+t):0}outputs(t,e){let r=this.bb.__offset(this.bb_pos,16);return r?this.bb.__string(this.bb.__vector(this.bb_pos+r)+4*t,e):null}outputsLength(){let 
t=this.bb.__offset(this.bb_pos,16);return t?this.bb.__vector_len(this.bb_pos+t):0}sparseInitializers(t,e){let r=this.bb.__offset(this.bb_pos,18);return r?(e||new s.experimental.fbs.SparseTensor).__init(this.bb.__indirect(this.bb.__vector(this.bb_pos+r)+4*t),this.bb):null}sparseInitializersLength(){let t=this.bb.__offset(this.bb_pos,18);return t?this.bb.__vector_len(this.bb_pos+t):0}static startGraph(t){t.startObject(8)}static addInitializers(t,e){t.addFieldOffset(0,e,0)}static createInitializersVector(t,e){t.startVector(4,e.length,4);for(let r=e.length-1;r>=0;r--)t.addOffset(e[r]);return t.endVector()}static startInitializersVector(t,e){t.startVector(4,e,4)}static addNodeArgs(t,e){t.addFieldOffset(1,e,0)}static createNodeArgsVector(t,e){t.startVector(4,e.length,4);for(let r=e.length-1;r>=0;r--)t.addOffset(e[r]);return t.endVector()}static startNodeArgsVector(t,e){t.startVector(4,e,4)}static addNodes(t,e){t.addFieldOffset(2,e,0)}static createNodesVector(t,e){t.startVector(4,e.length,4);for(let r=e.length-1;r>=0;r--)t.addOffset(e[r]);return t.endVector()}static startNodesVector(t,e){t.startVector(4,e,4)}static addMaxNodeIndex(t,e){t.addFieldInt32(3,e,0)}static addNodeEdges(t,e){t.addFieldOffset(4,e,0)}static createNodeEdgesVector(t,e){t.startVector(4,e.length,4);for(let r=e.length-1;r>=0;r--)t.addOffset(e[r]);return t.endVector()}static startNodeEdgesVector(t,e){t.startVector(4,e,4)}static addInputs(t,e){t.addFieldOffset(5,e,0)}static createInputsVector(t,e){t.startVector(4,e.length,4);for(let r=e.length-1;r>=0;r--)t.addOffset(e[r]);return t.endVector()}static startInputsVector(t,e){t.startVector(4,e,4)}static addOutputs(t,e){t.addFieldOffset(6,e,0)}static createOutputsVector(t,e){t.startVector(4,e.length,4);for(let r=e.length-1;r>=0;r--)t.addOffset(e[r]);return t.endVector()}static startOutputsVector(t,e){t.startVector(4,e,4)}static addSparseInitializers(t,e){t.addFieldOffset(7,e,0)}static createSparseInitializersVector(t,e){t.startVector(4,e.length,4);for(let r=e.length-1;r>=0;r--)t.addOffset(e[r]);return t.endVector()}static startSparseInitializersVector(t,e){t.startVector(4,e,4)}static endGraph(t){return t.endObject()}static createGraph(t,e,r,i,c,g,m,b,_){return a.startGraph(t),a.addInitializers(t,e),a.addNodeArgs(t,r),a.addNodes(t,i),a.addMaxNodeIndex(t,c),a.addNodeEdges(t,g),a.addInputs(t,m),a.addOutputs(t,b),a.addSparseInitializers(t,_),a.endGraph(t)}}f.Graph=a})(h.fbs||(h.fbs={}))})(s.experimental||(s.experimental={}))}(n.onnxruntime||(n.onnxruntime={})),function(s){(function(h){(function(f){class a{constructor(){this.bb=null,this.bb_pos=0}__init(t,e){return this.bb_pos=t,this.bb=e,this}static getRootAsModel(t,e){return(e||new a).__init(t.readInt32(t.position())+t.position(),t)}static getSizePrefixedRootAsModel(t,e){return t.setPosition(t.position()+d.flatbuffers.SIZE_PREFIX_LENGTH),(e||new a).__init(t.readInt32(t.position())+t.position(),t)}irVersion(){let t=this.bb.__offset(this.bb_pos,4);return t?this.bb.readInt64(this.bb_pos+t):this.bb.createLong(0,0)}opsetImport(t,e){let r=this.bb.__offset(this.bb_pos,6);return r?(e||new s.experimental.fbs.OperatorSetId).__init(this.bb.__indirect(this.bb.__vector(this.bb_pos+r)+4*t),this.bb):null}opsetImportLength(){let t=this.bb.__offset(this.bb_pos,6);return t?this.bb.__vector_len(this.bb_pos+t):0}producerName(t){let e=this.bb.__offset(this.bb_pos,8);return e?this.bb.__string(this.bb_pos+e,t):null}producerVersion(t){let e=this.bb.__offset(this.bb_pos,10);return e?this.bb.__string(this.bb_pos+e,t):null}domain(t){let 
e=this.bb.__offset(this.bb_pos,12);return e?this.bb.__string(this.bb_pos+e,t):null}modelVersion(){let t=this.bb.__offset(this.bb_pos,14);return t?this.bb.readInt64(this.bb_pos+t):this.bb.createLong(0,0)}docString(t){let e=this.bb.__offset(this.bb_pos,16);return e?this.bb.__string(this.bb_pos+e,t):null}graph(t){let e=this.bb.__offset(this.bb_pos,18);return e?(t||new s.experimental.fbs.Graph).__init(this.bb.__indirect(this.bb_pos+e),this.bb):null}graphDocString(t){let e=this.bb.__offset(this.bb_pos,20);return e?this.bb.__string(this.bb_pos+e,t):null}static startModel(t){t.startObject(9)}static addIrVersion(t,e){t.addFieldInt64(0,e,t.createLong(0,0))}static addOpsetImport(t,e){t.addFieldOffset(1,e,0)}static createOpsetImportVector(t,e){t.startVector(4,e.length,4);for(let r=e.length-1;r>=0;r--)t.addOffset(e[r]);return t.endVector()}static startOpsetImportVector(t,e){t.startVector(4,e,4)}static addProducerName(t,e){t.addFieldOffset(2,e,0)}static addProducerVersion(t,e){t.addFieldOffset(3,e,0)}static addDomain(t,e){t.addFieldOffset(4,e,0)}static addModelVersion(t,e){t.addFieldInt64(5,e,t.createLong(0,0))}static addDocString(t,e){t.addFieldOffset(6,e,0)}static addGraph(t,e){t.addFieldOffset(7,e,0)}static addGraphDocString(t,e){t.addFieldOffset(8,e,0)}static endModel(t){return t.endObject()}static createModel(t,e,r,i,c,g,m,b,_,w){return a.startModel(t),a.addIrVersion(t,e),a.addOpsetImport(t,r),a.addProducerName(t,i),a.addProducerVersion(t,c),a.addDomain(t,g),a.addModelVersion(t,m),a.addDocString(t,b),a.addGraph(t,_),a.addGraphDocString(t,w),a.endModel(t)}}f.Model=a})(h.fbs||(h.fbs={}))})(s.experimental||(s.experimental={}))}(n.onnxruntime||(n.onnxruntime={})),function(s){(function(h){(function(f){class a{constructor(){this.bb=null,this.bb_pos=0}__init(t,e){return this.bb_pos=t,this.bb=e,this}static getRootAsKernelCreateInfos(t,e){return(e||new a).__init(t.readInt32(t.position())+t.position(),t)}static getSizePrefixedRootAsKernelCreateInfos(t,e){return t.setPosition(t.position()+d.flatbuffers.SIZE_PREFIX_LENGTH),(e||new a).__init(t.readInt32(t.position())+t.position(),t)}nodeIndices(t){let e=this.bb.__offset(this.bb_pos,4);return e?this.bb.readUint32(this.bb.__vector(this.bb_pos+e)+4*t):0}nodeIndicesLength(){let t=this.bb.__offset(this.bb_pos,4);return t?this.bb.__vector_len(this.bb_pos+t):0}nodeIndicesArray(){let t=this.bb.__offset(this.bb_pos,4);return t?new Uint32Array(this.bb.bytes().buffer,this.bb.bytes().byteOffset+this.bb.__vector(this.bb_pos+t),this.bb.__vector_len(this.bb_pos+t)):null}kernelDefHashes(t){let e=this.bb.__offset(this.bb_pos,6);return e?this.bb.readUint64(this.bb.__vector(this.bb_pos+e)+8*t):this.bb.createLong(0,0)}kernelDefHashesLength(){let t=this.bb.__offset(this.bb_pos,6);return t?this.bb.__vector_len(this.bb_pos+t):0}static startKernelCreateInfos(t){t.startObject(2)}static addNodeIndices(t,e){t.addFieldOffset(0,e,0)}static createNodeIndicesVector(t,e){t.startVector(4,e.length,4);for(let r=e.length-1;r>=0;r--)t.addInt32(e[r]);return t.endVector()}static startNodeIndicesVector(t,e){t.startVector(4,e,4)}static addKernelDefHashes(t,e){t.addFieldOffset(1,e,0)}static createKernelDefHashesVector(t,e){t.startVector(8,e.length,8);for(let r=e.length-1;r>=0;r--)t.addInt64(e[r]);return t.endVector()}static startKernelDefHashesVector(t,e){t.startVector(8,e,8)}static endKernelCreateInfos(t){return t.endObject()}static createKernelCreateInfos(t,e,r){return 
a.startKernelCreateInfos(t),a.addNodeIndices(t,e),a.addKernelDefHashes(t,r),a.endKernelCreateInfos(t)}}f.KernelCreateInfos=a})(h.fbs||(h.fbs={}))})(s.experimental||(s.experimental={}))}(n.onnxruntime||(n.onnxruntime={})),function(s){(function(h){(function(f){class a{constructor(){this.bb=null,this.bb_pos=0}__init(t,e){return this.bb_pos=t,this.bb=e,this}static getRootAsSubGraphSessionState(t,e){return(e||new a).__init(t.readInt32(t.position())+t.position(),t)}static getSizePrefixedRootAsSubGraphSessionState(t,e){return t.setPosition(t.position()+d.flatbuffers.SIZE_PREFIX_LENGTH),(e||new a).__init(t.readInt32(t.position())+t.position(),t)}graphId(t){let e=this.bb.__offset(this.bb_pos,4);return e?this.bb.__string(this.bb_pos+e,t):null}sessionState(t){let e=this.bb.__offset(this.bb_pos,6);return e?(t||new s.experimental.fbs.SessionState).__init(this.bb.__indirect(this.bb_pos+e),this.bb):null}static startSubGraphSessionState(t){t.startObject(2)}static addGraphId(t,e){t.addFieldOffset(0,e,0)}static addSessionState(t,e){t.addFieldOffset(1,e,0)}static endSubGraphSessionState(t){let e=t.endObject();return t.requiredField(e,4),e}static createSubGraphSessionState(t,e,r){return a.startSubGraphSessionState(t),a.addGraphId(t,e),a.addSessionState(t,r),a.endSubGraphSessionState(t)}}f.SubGraphSessionState=a})(h.fbs||(h.fbs={}))})(s.experimental||(s.experimental={}))}(n.onnxruntime||(n.onnxruntime={})),function(s){(function(h){(function(f){class a{constructor(){this.bb=null,this.bb_pos=0}__init(t,e){return this.bb_pos=t,this.bb=e,this}static getRootAsSessionState(t,e){return(e||new a).__init(t.readInt32(t.position())+t.position(),t)}static getSizePrefixedRootAsSessionState(t,e){return t.setPosition(t.position()+d.flatbuffers.SIZE_PREFIX_LENGTH),(e||new a).__init(t.readInt32(t.position())+t.position(),t)}kernels(t){let e=this.bb.__offset(this.bb_pos,4);return e?(t||new s.experimental.fbs.KernelCreateInfos).__init(this.bb.__indirect(this.bb_pos+e),this.bb):null}subGraphSessionStates(t,e){let r=this.bb.__offset(this.bb_pos,6);return r?(e||new s.experimental.fbs.SubGraphSessionState).__init(this.bb.__indirect(this.bb.__vector(this.bb_pos+r)+4*t),this.bb):null}subGraphSessionStatesLength(){let t=this.bb.__offset(this.bb_pos,6);return t?this.bb.__vector_len(this.bb_pos+t):0}static startSessionState(t){t.startObject(2)}static addKernels(t,e){t.addFieldOffset(0,e,0)}static addSubGraphSessionStates(t,e){t.addFieldOffset(1,e,0)}static createSubGraphSessionStatesVector(t,e){t.startVector(4,e.length,4);for(let r=e.length-1;r>=0;r--)t.addOffset(e[r]);return t.endVector()}static startSubGraphSessionStatesVector(t,e){t.startVector(4,e,4)}static endSessionState(t){return t.endObject()}static createSessionState(t,e,r){return a.startSessionState(t),a.addKernels(t,e),a.addSubGraphSessionStates(t,r),a.endSessionState(t)}}f.SessionState=a})(h.fbs||(h.fbs={}))})(s.experimental||(s.experimental={}))}(n.onnxruntime||(n.onnxruntime={})),function(s){(function(h){(function(f){class a{constructor(){this.bb=null,this.bb_pos=0}__init(t,e){return this.bb_pos=t,this.bb=e,this}static getRootAsInferenceSession(t,e){return(e||new a).__init(t.readInt32(t.position())+t.position(),t)}static getSizePrefixedRootAsInferenceSession(t,e){return t.setPosition(t.position()+d.flatbuffers.SIZE_PREFIX_LENGTH),(e||new a).__init(t.readInt32(t.position())+t.position(),t)}static bufferHasIdentifier(t){return t.__has_identifier("ORTM")}ortVersion(t){let e=this.bb.__offset(this.bb_pos,4);return e?this.bb.__string(this.bb_pos+e,t):null}model(t){let 
e=this.bb.__offset(this.bb_pos,6);return e?(t||new s.experimental.fbs.Model).__init(this.bb.__indirect(this.bb_pos+e),this.bb):null}sessionState(t){let e=this.bb.__offset(this.bb_pos,8);return e?(t||new s.experimental.fbs.SessionState).__init(this.bb.__indirect(this.bb_pos+e),this.bb):null}static startInferenceSession(t){t.startObject(3)}static addOrtVersion(t,e){t.addFieldOffset(0,e,0)}static addModel(t,e){t.addFieldOffset(1,e,0)}static addSessionState(t,e){t.addFieldOffset(2,e,0)}static endInferenceSession(t){return t.endObject()}static finishInferenceSessionBuffer(t,e){t.finish(e,"ORTM")}static finishSizePrefixedInferenceSessionBuffer(t,e){t.finish(e,"ORTM",!0)}static createInferenceSession(t,e,r,i){return a.startInferenceSession(t),a.addOrtVersion(t,e),a.addModel(t,r),a.addSessionState(t,i),a.endInferenceSession(t)}}f.InferenceSession=a})(h.fbs||(h.fbs={}))})(s.experimental||(s.experimental={}))}(n.onnxruntime||(n.onnxruntime={}))},7448:(y,n,u)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.OnnxjsSessionHandler=void 0;const d=u(1670),l=u(9162);n.OnnxjsSessionHandler=class{constructor(p){this.session=p,this.inputNames=this.session.inputNames,this.outputNames=this.session.outputNames}async dispose(){}async run(p,s,h){const f=new Map;for(const t in p)if(Object.hasOwnProperty.call(p,t)){const e=p[t];f.set(t,new l.Tensor(e.dims,e.type,void 0,void 0,e.data))}const a=await this.session.run(f),o={};return a.forEach((t,e)=>{o[e]=new d.Tensor(t.type,t.data,t.dims)}),o}startProfiling(){this.session.startProfiling()}endProfiling(){this.session.endProfiling()}}},6919:(y,n,u)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.Session=void 0;const d=u(7067),l=u(1296),p=u(7091),s=u(1036),h=u(6231),f=u(2644);n.Session=class{constructor(a={}){this._initialized=!1,this.backendHint=a.backendHint,this.profiler=h.Profiler.create(a.profiler),this.context={profiler:this.profiler,graphInputTypes:[],graphInputDims:[]}}get inputNames(){return this._model.graph.getInputNames()}get outputNames(){return this._model.graph.getOutputNames()}startProfiling(){this.profiler.start()}endProfiling(){this.profiler.stop()}async loadModel(a,o,t){await this.profiler.event("session","Session.loadModel",async()=>{const e=await(0,p.resolveBackend)(this.backendHint);if(this.sessionHandler=e.createSessionHandler(this.context),this._model=new f.Model,typeof a=="string"){const r=a.endsWith(".ort");if(typeof fetch>"u"){const i=await(0,l.promisify)(d.readFile)(a);this.initialize(i,r)}else{const i=await fetch(a),c=await i.arrayBuffer();this.initialize(new Uint8Array(c),r)}}else if(ArrayBuffer.isView(a))this.initialize(a);else{const r=new Uint8Array(a,o||0,t||a.byteLength);this.initialize(r)}})}initialize(a,o){if(this._initialized)throw new Error("already initialized");this.profiler.event("session","Session.initialize",()=>{const t=this.sessionHandler.transformGraph?this.sessionHandler:void 0;this._model.load(a,t,o),this.sessionHandler.onGraphInitialized&&this.sessionHandler.onGraphInitialized(this._model.graph),this.initializeOps(this._model.graph),this._executionPlan=new s.ExecutionPlan(this._model.graph,this._ops,this.profiler)}),this._initialized=!0}async run(a){if(!this._initialized)throw new Error("session not initialized yet");return this.profiler.event("session","Session.run",async()=>{const o=this.normalizeAndValidateInputs(a),t=await this._executionPlan.execute(this.sessionHandler,o);return this.createOutput(t)})}normalizeAndValidateInputs(a){const 
o=this._model.graph.getInputNames();if(Array.isArray(a)){if(a.length!==o.length)throw new Error(`incorrect input array length: expected ${o.length} but got ${a.length}`)}else{if(a.size!==o.length)throw new Error(`incorrect input map size: expected ${o.length} but got ${a.size}`);const t=new Array(a.size);let e=0;for(let r=0;rtypeof E=="string")))throw new TypeError("cache should be a string array");O&&(this.cache=new Array(S))}else{if(w!==void 0){const E=e(m);if(!(w instanceof E))throw new TypeError(`cache should be type ${E.name}`)}if(O){const E=new ArrayBuffer(S*function(T){switch(T){case"bool":case"int8":case"uint8":return 1;case"int16":case"uint16":return 2;case"int32":case"uint32":case"float32":return 4;case"float64":return 8;default:throw new Error(`cannot calculate sizeof() on type ${T}`)}}(m));this.cache=function(T,I){return new(e(I))(T)}(E,m)}}}static fromProto(g){if(!g)throw new Error("cannot construct Value from an empty tensor");const m=f.ProtoUtil.tensorDataTypeFromProto(g.dataType),b=f.ProtoUtil.tensorDimsFromProto(g.dims),_=new o(b,m);if(m==="string")g.stringData.forEach((w,v)=>{_.data[v]=(0,f.decodeUtf8String)(w)});else if(g.rawData&&typeof g.rawData.byteLength=="number"&&g.rawData.byteLength>0){const w=_.data,v=new DataView(g.rawData.buffer,g.rawData.byteOffset,g.rawData.byteLength),S=t(g.dataType),O=g.rawData.byteLength/S;if(g.rawData.byteLength%S!=0)throw new Error("invalid buffer length");if(w.length!==O)throw new Error("buffer length mismatch");for(let E=0;E0){const w=_.data,v=new DataView(g.rawDataArray().buffer,g.rawDataArray().byteOffset,g.rawDataLength()),S=t(g.dataType()),O=g.rawDataLength()/S;if(g.rawDataLength()%S!=0)throw new Error("invalid buffer length");if(w.length!==O)throw new Error("buffer length mismatch");for(let E=0;E1&&I>1)return;O[S-E]=Math.max(T,I)}return O}static index(m,b){const _=new Array(b.length);return a.fillIndex(m,b,_),_}static fillIndex(m,b,_){const w=m.length-b.length;for(let v=0;v=0;J--)T[J]=B%S[J],B=Math.floor(B/S[J]);H||(a.fillIndex(T,m.dims,I),F=m.get(I)),$||(a.fillIndex(T,b.dims,C),N=b.get(C)),E.set(T,_(F,N))}}return E}}static isValidBroadcast(m,b){const _=m.length,w=b.length;if(_>w)return!1;for(let v=1;v<=_;v++)if(m[_-v]!==1&&m[_-v]!==b[w-v])return!1;return!0}static getBroadcastDims(m,b){const _=m.length,w=[];for(let v=0;v<_;v++){const S=_-1-v,O=m[S]||1;(b[b.length-1-v]||1)>1&&O===1&&w.unshift(S)}return w}}n.BroadcastUtil=a,n.arrayCopyHelper=function(g,m,b,_,w){if(_<0||_>=m.length)throw new Error("sourceIndex out of bounds");if(b<0||b>=g.length)throw new Error("targetIndex out of bounds");if(_+w>m.length)throw new Error("source indices to be copied are outside bounds");if(b+w>g.length)throw new Error("target array is too small to hold result");for(let v=0;vp.default.isLong(b)?b.toNumber():b)}static tensorValueTypeFromProto(m){return{tensorType:o.tensorDataTypeFromProto(m.elemType),shape:{dims:o.tensorDimsFromProto(m.shape.dim.map(b=>b.dimValue))}}}static tensorDimsFromORTFormat(m){const b=[];for(let _=0;_m.length)throw new Error(`invalid dimension of ${b} for sizeFromDimension as Tensor has ${m.length} dimensions.`);return e.getSizeFromDimensionRange(m,b,m.length)}static sizeToDimension(m,b){if(b<0||b>m.length)throw new Error(`invalid dimension of ${b} for sizeToDimension as Tensor has ${m.length} dimensions.`);return e.getSizeFromDimensionRange(m,0,b)}static getSizeFromDimensionRange(m,b,_){let w=1;for(let v=b;v<_;v++){if(m[v]<=0)throw new Error("cannot get valid size from specified dimension range. 
Most likely the range contains 0 or negative values in them.");w*=m[v]}return w}static computeStrides(m){const b=m.length;if(b===0)return[];if(b===1)return[1];const _=new Array(b);_[b-1]=1,_[b-2]=m[b-1];for(let w=b-3;w>=0;--w)_[w]=_[w+1]*m[w+1];return _}static transpose(m){return m.slice().reverse()}static indicesToOffset(m,b,_){_===void 0&&(_=m.length);let w=0;for(let v=0;v<_;++v)w+=b[v]*m[v];return w}static offsetToIndices(m,b){const _=b.length;if(_===0)return[];if(_===1)return[m*b[0]];const w=new Array(b.length);for(let v=0;v=b)throw new Error("unsupported axis for this operation.");return m<0?m+b:m}static normalizeAxes(m,b){return m.map(_=>this.normalizeAxis(_,b))}static incrementIndex(m,b,_){if(b.length===0||m.length===0)throw new Error("Index incrementing unsupported for scalar Tensor");if(_===void 0)_=b.length;else if(_<=0||_>b.length)throw new Error("Incorrect axis to increment on");for(let w=_-1;w>=0&&(m[w]++,!(m[w]=m.length)throw new Error("the dimension with value zero exceeds the dimension size of the input tensor");w[E]=m[E]}else w[E]=b[E];S*=w[E]}}const O=e.size(m);if(v!==-1){if(O%S!=0)throw new Error(`the input tensor cannot be reshaped to the requested shape. Input shape: [${m}] Output shape: [${b}]`);w[v]=O/S}else if(S!==O)throw new Error("reshapedDims and originalDims don't have matching sizes");return w}static sortBasedOnPerm(m,b){return b?b.map(_=>m[_]):m.slice().reverse()}static padShape(m,b){const _=m.length;return m.map((w,v)=>w+b[v]+b[v+_])}static areEqual(m,b){return m.length===b.length&&m.every((_,w)=>_===b[w])}static validateDimsAndCalcSize(m){if(m.length>6)throw new TypeError("Only rank 0 to 6 is supported for tensor shape.");let b=1;for(const _ of m){if(!Number.isInteger(_))throw new TypeError(`Invalid shape: ${_} is not an integer`);if(_<0||_>2147483647)throw new TypeError(`Invalid shape: length ${_} is not allowed`);b*=_}return b}static flattenShape(m,b){b<0&&(b+=m.length);const _=m.reduce((v,S)=>v*S,1),w=m.slice(b).reduce((v,S)=>v*S,1);return[_/w,w]}static squeezeShape(m,b){const _=new Array;b=e.normalizeAxes(b,m.length);for(let w=0;w=0;if(v&&m[w]!==1)throw new Error("squeeze an axis of size different than 1");(b.length===0&&m[w]>1||b.length>0&&!v)&&_.push(m[w])}return _}static unsqueezeShape(m,b){const _=new Array(m.length+b.length);_.fill(0);for(let v=0;v=_.length)throw new Error("'axes' has an out of range axis");if(_[S]!==0)throw new Error("'axes' has a duplicate axis");_[S]=1}let w=0;for(let v=0;v<_.length;v++)_[v]===0&&(_[v]=m[w++]);if(w!==m.length)throw new Error("the unsqueezed dimension could not be established");return _}}n.ShapeUtil=e,n.MathUtil=class{static sqr(g,m,b,_,w){if(_<0||_>=m.length)throw new Error("sourceIndex out of bounds");if(b<0||b>=g.length)throw new Error("targetIndex out of bounds");if(_+w>m.length)throw new Error("source indices to be copied are outside bounds");if(b+w>g.length)throw new Error("target array is too small to hold result");for(let v=0;v=m.length)throw new Error("sourceIndex out of bounds");if(b<0||b>=g.length)throw new Error("targetIndex out of bounds");if(_+w>m.length)throw new Error("source indices to be copied are outside bounds");if(b+w>g.length)throw new Error("target array is too small to hold result");for(let S=0;S=m.length)throw new Error("sourceIndex out of bounds");if(b<0||b>=g.length)throw new Error("targetIndex out of bounds");if(_+w>m.length)throw new Error("source indices to be copied are outside bounds");if(b+w>g.length)throw new Error("target array is too small to hold result");for(let 
S=0;S=m.length)throw new Error("sourceIndex out of bounds");if(b<0||b>=g.length)throw new Error("targetIndex out of bounds");if(_+w>m.length)throw new Error("source indices to be copied are outside bounds");if(b+w>g.length)throw new Error("target array is too small to hold result");for(let v=0;vb.push(N));const O=i.calcReduceShape(S,b,!0),E=e.size(O),T=new h.Tensor(O,m.type),I=e.computeStrides(O),C=e.computeStrides(S),B=new Array(S.length);for(let F=0;F=b.length)return S(m[v]);const T=b[w],I=T>=_.length?1:e.size(_.slice(T+1));for(let C=0;C<_[T];C++)E=C===0?i.calcReduceByAxis(m,b,_,w+1,v,S,O):O(E,i.calcReduceByAxis(m,b,_,w+1,v,S,O)),v+=I;return E}static calcReduceShape(m,b,_){const w=m.slice();for(let v=0;vv!==0)}}n.ReduceUtil=i;class c{static adjustPoolAttributes(m,b,_,w,v,S){if(!m&&_.length!==b.length-2)throw new Error("length of specified kernel shapes should be 2 less than length of input dimensions");if(m)for(let O=0;O=_.length?_.push(b[O+2]):_[O]=b[O+2];for(let O=0;O<_.length;O++)if(O=_[O]||S[O+_.length]>=_[O])throw new Error("pads should be smaller than kernel")}}static adjustPadsBasedOnAutoPad(m,b,_,w,v,S){if(S){if(v.length!==2*(m.length-2))throw new Error("length of pads should be twice the length of data dimensions");if(b.length!==m.length-2)throw new Error("length of strides should be the length of data dimensions");if(w.length!==m.length-2)throw new Error("length of kernel shapes should be the length of data dimensions");for(let O=0;O{Object.defineProperty(n,"__esModule",{value:!0}),n.iterateExtraOptions=void 0,n.iterateExtraOptions=(u,d,l,p)=>{if(typeof u=="object"&&u!==null){if(l.has(u))throw new Error("Circular reference in options");l.add(u)}Object.entries(u).forEach(([s,h])=>{const f=d?d+s:s;if(typeof h=="object")(0,n.iterateExtraOptions)(h,f+".",l,p);else if(typeof h=="string"||typeof h=="number")p(f,h.toString());else{if(typeof h!="boolean")throw new Error("Can't handle extra config type: "+typeof h);p(f,h?"1":"0")}})}},2157:function(y,n,u){var d,l=this&&this.__createBinding||(Object.create?function(I,C,B,F){F===void 0&&(F=B);var N=Object.getOwnPropertyDescriptor(C,B);N&&!("get"in N?!C.__esModule:N.writable||N.configurable)||(N={enumerable:!0,get:function(){return C[B]}}),Object.defineProperty(I,F,N)}:function(I,C,B,F){F===void 0&&(F=B),I[F]=C[B]}),p=this&&this.__setModuleDefault||(Object.create?function(I,C){Object.defineProperty(I,"default",{enumerable:!0,value:C})}:function(I,C){I.default=C}),s=this&&this.__importStar||function(I){if(I&&I.__esModule)return I;var C={};if(I!=null)for(var B in I)B!=="default"&&Object.prototype.hasOwnProperty.call(I,B)&&l(C,I,B);return p(C,I),C};Object.defineProperty(n,"__esModule",{value:!0}),n.endProfiling=n.run=n.releaseSession=n.createSession=n.createSessionFinalize=n.createSessionAllocate=n.initOrt=n.initWasm=void 0;const h=u(1670),f=s(u(349)),a=u(6361),o=()=>!!h.env.wasm.proxy&&typeof document<"u";let t,e,r,i=!1,c=!1,g=!1;const m=[],b=[],_=[],w=[],v=[],S=[],O=()=>{if(i||!c||g||!t)throw new Error("worker not 
ready")},E=I=>{switch(I.data.type){case"init-wasm":i=!1,I.data.err?(g=!0,e[1](I.data.err)):(c=!0,e[0]());break;case"init-ort":I.data.err?r[1](I.data.err):r[0]();break;case"create_allocate":I.data.err?m.shift()[1](I.data.err):m.shift()[0](I.data.out);break;case"create_finalize":I.data.err?b.shift()[1](I.data.err):b.shift()[0](I.data.out);break;case"create":I.data.err?_.shift()[1](I.data.err):_.shift()[0](I.data.out);break;case"release":I.data.err?w.shift()[1](I.data.err):w.shift()[0]();break;case"run":I.data.err?v.shift()[1](I.data.err):v.shift()[0](I.data.out);break;case"end-profiling":I.data.err?S.shift()[1](I.data.err):S.shift()[0]()}},T=typeof document<"u"?(d=document==null?void 0:document.currentScript)===null||d===void 0?void 0:d.src:void 0;n.initWasm=async()=>{if(o()){if(c)return;if(i)throw new Error("multiple calls to 'initWasm()' detected.");if(g)throw new Error("previous call to 'initWasm()' failed.");return i=!0,h.env.wasm.wasmPaths===void 0&&T&&T.indexOf("blob:")!==0&&(h.env.wasm.wasmPaths=T.substr(0,+T.lastIndexOf("/")+1)),new Promise((I,C)=>{t==null||t.terminate(),t=u(9710).Z(),t.onmessage=E,e=[I,C];const B={type:"init-wasm",in:h.env.wasm};t.postMessage(B)})}return(0,a.initializeWebAssembly)(h.env.wasm)},n.initOrt=async(I,C)=>{if(o())return O(),new Promise((B,F)=>{r=[B,F];const N={type:"init-ort",in:{numThreads:I,loggingLevel:C}};t.postMessage(N)});f.initOrt(I,C)},n.createSessionAllocate=async I=>o()?(O(),new Promise((C,B)=>{m.push([C,B]);const F={type:"create_allocate",in:{model:I}};t.postMessage(F,[I.buffer])})):f.createSessionAllocate(I),n.createSessionFinalize=async(I,C)=>o()?(O(),new Promise((B,F)=>{b.push([B,F]);const N={type:"create_finalize",in:{modeldata:I,options:C}};t.postMessage(N)})):f.createSessionFinalize(I,C),n.createSession=async(I,C)=>o()?(O(),new Promise((B,F)=>{_.push([B,F]);const N={type:"create",in:{model:I,options:C}};t.postMessage(N,[I.buffer])})):f.createSession(I,C),n.releaseSession=async I=>{if(o())return O(),new Promise((C,B)=>{w.push([C,B]);const F={type:"release",in:I};t.postMessage(F)});f.releaseSession(I)},n.run=async(I,C,B,F,N)=>o()?(O(),new Promise((H,$)=>{v.push([H,$]);const z={type:"run",in:{sessionId:I,inputIndices:C,inputs:B,outputIndices:F,options:N}};t.postMessage(z,f.extractTransferableBuffers(B))})):f.run(I,C,B,F,N),n.endProfiling=async I=>{if(o())return O(),new Promise((C,B)=>{S.push([C,B]);const F={type:"end-profiling",in:I};t.postMessage(F)});f.endProfiling(I)}},586:(y,n,u)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.setRunOptions=void 0;const d=u(7967),l=u(4983),p=u(6361);n.setRunOptions=s=>{const h=(0,p.getInstance)();let f=0;const a=[],o=s||{};try{if((s==null?void 0:s.logSeverityLevel)===void 0)o.logSeverityLevel=2;else if(typeof s.logSeverityLevel!="number"||!Number.isInteger(s.logSeverityLevel)||s.logSeverityLevel<0||s.logSeverityLevel>4)throw new Error(`log serverity level is not valid: ${s.logSeverityLevel}`);if((s==null?void 0:s.logVerbosityLevel)===void 0)o.logVerbosityLevel=0;else if(typeof s.logVerbosityLevel!="number"||!Number.isInteger(s.logVerbosityLevel))throw new Error(`log verbosity level is not valid: ${s.logVerbosityLevel}`);(s==null?void 0:s.terminate)===void 0&&(o.terminate=!1);let t=0;if((s==null?void 0:s.tag)!==void 0&&(t=(0,l.allocWasmString)(s.tag,a)),f=h._OrtCreateRunOptions(o.logSeverityLevel,o.logVerbosityLevel,!!o.terminate,t),f===0)throw new Error("Can't create run options");return(s==null?void 0:s.extra)!==void 0&&(0,d.iterateExtraOptions)(s.extra,"",new WeakSet,(e,r)=>{const 
i=(0,l.allocWasmString)(e,a),c=(0,l.allocWasmString)(r,a);if(h._OrtAddRunConfigEntry(f,i,c)!==0)throw new Error(`Can't set a run config entry: ${e} - ${r}`)}),[f,a]}catch(t){throw f!==0&&h._OrtReleaseRunOptions(f),a.forEach(h._free),t}}},2306:(y,n,u)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.OnnxruntimeWebAssemblySessionHandler=void 0;const d=u(2806),l=u(1670),p=u(2850),s=u(2157);let h;n.OnnxruntimeWebAssemblySessionHandler=class{async createSessionAllocate(f){const a=await fetch(f),o=await a.arrayBuffer();return(0,s.createSessionAllocate)(new Uint8Array(o))}async loadModel(f,a){if(h||(await(0,s.initOrt)(l.env.wasm.numThreads,(o=>{switch(o){case"verbose":return 0;case"info":return 1;case"warning":return 2;case"error":return 3;case"fatal":return 4;default:throw new Error(`unsupported logging level: ${o}`)}})(l.env.logLevel)),h=!0),typeof f=="string")if(typeof fetch>"u"){const o=await(0,p.promisify)(d.readFile)(f);[this.sessionId,this.inputNames,this.outputNames]=await(0,s.createSession)(o,a)}else{const o=await this.createSessionAllocate(f);[this.sessionId,this.inputNames,this.outputNames]=await(0,s.createSessionFinalize)(o,a)}else[this.sessionId,this.inputNames,this.outputNames]=await(0,s.createSession)(f,a)}async dispose(){return(0,s.releaseSession)(this.sessionId)}async run(f,a,o){const t=[],e=[];Object.entries(f).forEach(g=>{const m=g[0],b=g[1],_=this.inputNames.indexOf(m);if(_===-1)throw new Error(`invalid input '${m}'`);t.push(b),e.push(_)});const r=[];Object.entries(a).forEach(g=>{const m=g[0],b=this.outputNames.indexOf(m);if(b===-1)throw new Error(`invalid output '${m}'`);r.push(b)});const i=await(0,s.run)(this.sessionId,e,t.map(g=>[g.type,g.dims,g.data]),r,o),c={};for(let g=0;g{Object.defineProperty(n,"__esModule",{value:!0}),n.setSessionOptions=void 0;const d=u(7967),l=u(4983),p=u(6361);n.setSessionOptions=s=>{const h=(0,p.getInstance)();let f=0;const a=[],o=s||{};(t=>{t.extra||(t.extra={}),t.extra.session||(t.extra.session={});const e=t.extra.session;e.use_ort_model_bytes_directly||(e.use_ort_model_bytes_directly="1")})(o);try{(s==null?void 0:s.graphOptimizationLevel)===void 0&&(o.graphOptimizationLevel="all");const t=(i=>{switch(i){case"disabled":return 0;case"basic":return 1;case"extended":return 2;case"all":return 99;default:throw new Error(`unsupported graph optimization level: ${i}`)}})(o.graphOptimizationLevel);(s==null?void 0:s.enableCpuMemArena)===void 0&&(o.enableCpuMemArena=!0),(s==null?void 0:s.enableMemPattern)===void 0&&(o.enableMemPattern=!0),(s==null?void 0:s.executionMode)===void 0&&(o.executionMode="sequential");const e=(i=>{switch(i){case"sequential":return 0;case"parallel":return 1;default:throw new Error(`unsupported execution mode: ${i}`)}})(o.executionMode);let r=0;if((s==null?void 0:s.logId)!==void 0&&(r=(0,l.allocWasmString)(s.logId,a)),(s==null?void 0:s.logSeverityLevel)===void 0)o.logSeverityLevel=2;else if(typeof s.logSeverityLevel!="number"||!Number.isInteger(s.logSeverityLevel)||s.logSeverityLevel<0||s.logSeverityLevel>4)throw new Error(`log serverity level is not valid: ${s.logSeverityLevel}`);if((s==null?void 0:s.logVerbosityLevel)===void 0)o.logVerbosityLevel=0;else if(typeof s.logVerbosityLevel!="number"||!Number.isInteger(s.logVerbosityLevel))throw new Error(`log verbosity level is not valid: ${s.logVerbosityLevel}`);if((s==null?void 0:s.enableProfiling)===void 0&&(o.enableProfiling=!1),f=h._OrtCreateSessionOptions(t,!!o.enableCpuMemArena,!!o.enableMemPattern,e,!!o.enableProfiling,0,r,o.logSeverityLevel,o.logVerbosityLevel),f===0)throw 
new Error("Can't create session options");return s!=null&&s.executionProviders&&((i,c,g)=>{for(const m of c){let b=typeof m=="string"?m:m.name;switch(b){case"xnnpack":b="XNNPACK";break;case"wasm":case"cpu":continue;default:throw new Error(`not supported EP: ${b}`)}const _=(0,l.allocWasmString)(b,g);if((0,p.getInstance)()._OrtAppendExecutionProvider(i,_)!==0)throw new Error(`Can't append execution provider: ${b}`)}})(f,s.executionProviders,a),(s==null?void 0:s.extra)!==void 0&&(0,d.iterateExtraOptions)(s.extra,"",new WeakSet,(i,c)=>{const g=(0,l.allocWasmString)(i,a),m=(0,l.allocWasmString)(c,a);if(h._OrtAddSessionConfigEntry(f,g,m)!==0)throw new Error(`Can't set a session config entry: ${i} - ${c}`)}),[f,a]}catch(t){throw f!==0&&h._OrtReleaseSessionOptions(f),a.forEach(h._free),t}}},4983:(y,n,u)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.allocWasmString=void 0;const d=u(6361);n.allocWasmString=(l,p)=>{const s=(0,d.getInstance)(),h=s.lengthBytesUTF8(l)+1,f=s._malloc(h);return s.stringToUTF8(l,f,h),p.push(f),f}},349:(y,n,u)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.extractTransferableBuffers=n.endProfiling=n.run=n.releaseSession=n.createSession=n.createSessionFinalize=n.createSessionAllocate=n.initOrt=void 0;const d=u(586),l=u(4919),p=u(4983),s=u(6361);n.initOrt=(t,e)=>{const r=(0,s.getInstance)()._OrtInit(t,e);if(r!==0)throw new Error(`Can't initialize onnxruntime. error code = ${r}`)};const h=new Map;n.createSessionAllocate=t=>{const e=(0,s.getInstance)(),r=e._malloc(t.byteLength);return e.HEAPU8.set(t,r),[r,t.byteLength]},n.createSessionFinalize=(t,e)=>{const r=(0,s.getInstance)();let i=0,c=0,g=[];try{if([c,g]=(0,l.setSessionOptions)(e),i=r._OrtCreateSession(t[0],t[1],c),i===0)throw new Error("Can't create a session")}finally{r._free(t[0]),r._OrtReleaseSessionOptions(c),g.forEach(r._free)}const m=r._OrtGetInputCount(i),b=r._OrtGetOutputCount(i),_=[],w=[],v=[],S=[];for(let O=0;O{const r=(0,n.createSessionAllocate)(t);return(0,n.createSessionFinalize)(r,e)},n.releaseSession=t=>{const e=(0,s.getInstance)(),r=h.get(t);if(!r)throw new Error("invalid session id");const i=r[0],c=r[1],g=r[2];c.forEach(e._OrtFree),g.forEach(e._OrtFree),e._OrtReleaseSession(i),h.delete(t)};const f=t=>{switch(t){case"int8":return 3;case"uint8":return 2;case"bool":return 9;case"int16":return 5;case"uint16":return 4;case"int32":return 6;case"uint32":return 12;case"float32":return 1;case"float64":return 11;case"string":return 8;case"int64":return 7;case"uint64":return 13;default:throw new Error(`unsupported data type: ${t}`)}},a=t=>{switch(t){case 3:return"int8";case 2:return"uint8";case 9:return"bool";case 5:return"int16";case 4:return"uint16";case 6:return"int32";case 12:return"uint32";case 1:return"float32";case 11:return"float64";case 8:return"string";case 7:return"int64";case 13:return"uint64";default:throw new Error(`unsupported data type: ${t}`)}},o=t=>{switch(t){case"float32":return Float32Array;case"uint8":case"bool":return Uint8Array;case"int8":return Int8Array;case"uint16":return Uint16Array;case"int16":return Int16Array;case"int32":return Int32Array;case"float64":return Float64Array;case"uint32":return Uint32Array;case"int64":return BigInt64Array;case"uint64":return BigUint64Array;default:throw new Error(`unsupported type: ${t}`)}};n.run=(t,e,r,i,c)=>{const g=(0,s.getInstance)(),m=h.get(t);if(!m)throw new Error("invalid session id");const b=m[0],_=m[1],w=m[2],v=e.length,S=i.length;let O=0,E=[];const T=[],I=[];try{[O,E]=(0,d.setRunOptions)(c);for(let $=0;$g.HEAP32[Oe++]=Te);const 
ce=g._OrtCreateTensor(f(z),te,ne,Ie,J.length);if(ce===0)throw new Error("Can't create a tensor");T.push(ce)}finally{g.stackRestore(me)}}const C=g.stackSave(),B=g.stackAlloc(4*v),F=g.stackAlloc(4*v),N=g.stackAlloc(4*S),H=g.stackAlloc(4*S);try{let $=B/4,z=F/4,J=N/4,X=H/4;for(let me=0;meve*je);if(Te=a(We),Te==="string"){const ve=[];let je=_e/4;for(let ze=0;ze{const e=(0,s.getInstance)(),r=h.get(t);if(!r)throw new Error("invalid session id");const i=r[0],c=e._OrtEndProfiling(i);if(c===0)throw new Error("Can't get an profile file name");e._OrtFree(c)},n.extractTransferableBuffers=t=>{const e=[];for(const r of t){const i=r[2];!Array.isArray(i)&&i.buffer&&e.push(i.buffer)}return e}},6361:function(y,n,u){var d=this&&this.__createBinding||(Object.create?function(c,g,m,b){b===void 0&&(b=m);var _=Object.getOwnPropertyDescriptor(g,m);_&&!("get"in _?!g.__esModule:_.writable||_.configurable)||(_={enumerable:!0,get:function(){return g[m]}}),Object.defineProperty(c,b,_)}:function(c,g,m,b){b===void 0&&(b=m),c[b]=g[m]}),l=this&&this.__setModuleDefault||(Object.create?function(c,g){Object.defineProperty(c,"default",{enumerable:!0,value:g})}:function(c,g){c.default=g}),p=this&&this.__importStar||function(c){if(c&&c.__esModule)return c;var g={};if(c!=null)for(var m in c)m!=="default"&&Object.prototype.hasOwnProperty.call(c,m)&&d(g,c,m);return l(g,c),g},s=this&&this.__importDefault||function(c){return c&&c.__esModule?c:{default:c}};Object.defineProperty(n,"__esModule",{value:!0}),n.dispose=n.getInstance=n.initializeWebAssembly=void 0;const h=p(u(6449)),f=s(u(932)),a=u(3474);let o,t=!1,e=!1,r=!1;const i=(c,g)=>g?c?"ort-wasm-simd-threaded.wasm":"ort-wasm-threaded.wasm":c?"ort-wasm-simd.wasm":"ort-wasm.wasm";n.initializeWebAssembly=async c=>{if(t)return Promise.resolve();if(e)throw new Error("multiple calls to 'initializeWebAssembly()' detected.");if(r)throw new Error("previous call to 'initializeWebAssembly()' failed.");e=!0;const g=c.initTimeout,m=c.numThreads,b=c.simd,_=m>1&&(()=>{try{return typeof SharedArrayBuffer<"u"&&(typeof MessageChannel<"u"&&new MessageChannel().port1.postMessage(new SharedArrayBuffer(1)),WebAssembly.validate(new Uint8Array([0,97,115,109,1,0,0,0,1,4,1,96,0,0,3,2,1,0,5,4,1,3,1,1,10,11,1,9,0,65,0,254,16,2,0,26,11])))}catch{return!1}})(),w=b&&(()=>{try{return WebAssembly.validate(new Uint8Array([0,97,115,109,1,0,0,0,1,4,1,96,0,0,3,2,1,0,10,30,1,28,0,65,0,253,15,253,12,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,253,186,1,26,11]))}catch{return!1}})(),v=typeof c.wasmPaths=="string"?c.wasmPaths:void 0,S=i(!1,_),O=i(w,_),E=typeof c.wasmPaths=="object"?c.wasmPaths[O]:void 0;let T=!1;const I=[];if(g>0&&I.push(new Promise(C=>{setTimeout(()=>{T=!0,C()},g)})),I.push(new Promise((C,B)=>{const F=_?a:f.default,N={locateFile:(H,$)=>_&&H.endsWith(".worker.js")&&typeof Blob<"u"?URL.createObjectURL(new Blob([u(4154)],{type:"text/javascript"})):H===S?E??(v??$)+O:$+H};if(_)if(typeof Blob>"u")N.mainScriptUrlOrBlob=h.join("/","ort-wasm-threaded.js");else{const H=`var ortWasmThreaded=(function(){var _scriptDir;return ${F.toString()}})();`;N.mainScriptUrlOrBlob=new Blob([H],{type:"text/javascript"})}F(N).then(H=>{e=!1,t=!0,o=H,C()},H=>{e=!1,r=!0,B(H)})})),await Promise.race(I),T)throw new Error(`WebAssembly backend initializing failed due to timeout: ${g}ms`)},n.getInstance=()=>{if(t&&o)return o;throw new Error("WebAssembly is not initialized yet.")},n.dispose=()=>{var c;!t||e||r||(e=!0,(c=o.PThread)===null||c===void 0||c.terminateAllThreads(),o=void 0,e=!1,t=!1,r=!0)}},9710:(y,n,u)=>{u.d(n,{Z:()=>p});var 
d=u(477),l=u.n(d);function p(){return l()('/*!\n* ONNX Runtime Web v1.14.0\n* Copyright (c) Microsoft Corporation. All rights reserved.\n* Licensed under the MIT License.\n*/\n(()=>{var t={474:(t,e,n)=>{var _scriptDir,r=(_scriptDir=(_scriptDir="undefined"!=typeof document&&document.currentScript?document.currentScript.src:void 0)||"/index.js",function(t){function e(){return j.buffer!=D&&N(j.buffer),P}function r(){return j.buffer!=D&&N(j.buffer),U}function a(){return j.buffer!=D&&N(j.buffer),F}function i(){return j.buffer!=D&&N(j.buffer),I}function o(){return j.buffer!=D&&N(j.buffer),W}var u,c,s;t=t||{},u||(u=void 0!==t?t:{}),u.ready=new Promise((function(t,e){c=t,s=e}));var l,f,p,h,d,y,b=Object.assign({},u),m="./this.program",g=(t,e)=>{throw e},v="object"==typeof window,w="function"==typeof importScripts,_="object"==typeof process&&"object"==typeof process.versions&&"string"==typeof process.versions.node,O=u.ENVIRONMENT_IS_PTHREAD||!1,A="";function S(t){return u.locateFile?u.locateFile(t,A):A+t}if(_){let e;A=w?n(908).dirname(A)+"/":"//",y=()=>{d||(h=n(384),d=n(908))},l=function(t,e){return y(),t=d.normalize(t),h.readFileSync(t,e?void 0:"utf8")},p=t=>((t=l(t,!0)).buffer||(t=new Uint8Array(t)),t),f=(t,e,n)=>{y(),t=d.normalize(t),h.readFile(t,(function(t,r){t?n(t):e(r.buffer)}))},1{if(Q())throw process.exitCode=t,e;e instanceof ct||x("exiting due to exception: "+e),process.exit(t)},u.inspect=function(){return"[Emscripten Module object]"};try{e=n(925)}catch(t){throw console.error(\'The "worker_threads" module is not supported in this node.js build - perhaps a newer version is needed?\'),t}n.g.Worker=e.Worker}else(v||w)&&(w?A=self.location.href:"undefined"!=typeof document&&document.currentScript&&(A=document.currentScript.src),_scriptDir&&(A=_scriptDir),A=0!==A.indexOf("blob:")?A.substr(0,A.replace(/[?#].*/,"").lastIndexOf("/")+1):"",_||(l=t=>{var e=new XMLHttpRequest;return e.open("GET",t,!1),e.send(null),e.responseText},w&&(p=t=>{var e=new XMLHttpRequest;return e.open("GET",t,!1),e.responseType="arraybuffer",e.send(null),new Uint8Array(e.response)}),f=(t,e,n)=>{var r=new XMLHttpRequest;r.open("GET",t,!0),r.responseType="arraybuffer",r.onload=()=>{200==r.status||0==r.status&&r.response?e(r.response):n()},r.onerror=n,r.send(null)}));_&&"undefined"==typeof performance&&(n.g.performance=n(953).performance);var T=console.log.bind(console),E=console.warn.bind(console);_&&(y(),T=t=>h.writeSync(1,t+"\\n"),E=t=>h.writeSync(2,t+"\\n"));var M,C=u.print||T,x=u.printErr||E;Object.assign(u,b),b=null,u.thisProgram&&(m=u.thisProgram),u.quit&&(g=u.quit),u.wasmBinary&&(M=u.wasmBinary);var R=u.noExitRuntime||!1;"object"!=typeof WebAssembly&&at("no native wasm support detected");var j,k,D,P,U,F,I,W,H=!1,L="undefined"!=typeof TextDecoder?new TextDecoder("utf8"):void 0;function z(t,e,n){var r=(e>>>=0)+n;for(n=e;t[n]&&!(n>=r);)++n;if(16(a=224==(240&a)?(15&a)<<12|i<<6|o:(7&a)<<18|i<<12|o<<6|63&t[e++])?r+=String.fromCharCode(a):(a-=65536,r+=String.fromCharCode(55296|a>>10,56320|1023&a))}}else r+=String.fromCharCode(a)}return r}function Y(t,e){return(t>>>=0)?z(r(),t,e):""}function B(t,e,n,r){if(!(0>>=0;r=n+r-1;for(var i=0;i=o&&(o=65536+((1023&o)<<10)|1023&t.charCodeAt(++i)),127>=o){if(n>=r)break;e[n++>>>0]=o}else{if(2047>=o){if(n+1>=r)break;e[n++>>>0]=192|o>>6}else{if(65535>=o){if(n+2>=r)break;e[n++>>>0]=224|o>>12}else{if(n+3>=r)break;e[n++>>>0]=240|o>>18,e[n++>>>0]=128|o>>12&63}e[n++>>>0]=128|o>>6&63}e[n++>>>0]=128|63&o}}return e[n>>>0]=0,n-a}function G(t){for(var 
e=0,n=0;n=r?e++:2047>=r?e+=2:55296<=r&&57343>=r?(e+=4,++n):e+=3}return e}function N(t){D=t,u.HEAP8=P=new Int8Array(t),u.HEAP16=new Int16Array(t),u.HEAP32=F=new Int32Array(t),u.HEAPU8=U=new Uint8Array(t),u.HEAPU16=new Uint16Array(t),u.HEAPU32=I=new Uint32Array(t),u.HEAPF32=new Float32Array(t),u.HEAPF64=W=new Float64Array(t)}O&&(D=u.buffer);var V=u.INITIAL_MEMORY||16777216;if(O)j=u.wasmMemory,D=u.buffer;else if(u.wasmMemory)j=u.wasmMemory;else if(!((j=new WebAssembly.Memory({initial:V/65536,maximum:65536,shared:!0})).buffer instanceof SharedArrayBuffer))throw x("requested a shared WebAssembly.Memory but the returned buffer is not a SharedArrayBuffer, indicating that while the browser has SharedArrayBuffer it does not have WebAssembly threads support - you may need to set a flag"),_&&console.log("(on node you may need: --experimental-wasm-threads --experimental-wasm-bulk-memory and also use a recent version)"),Error("bad memory");j&&(D=j.buffer),V=D.byteLength,N(D);var $,q=[],X=[],J=[],Z=[];function Q(){return R||!1}function K(){var t=u.preRun.shift();q.unshift(t)}var tt,et=0,nt=null,rt=null;function at(t){throw O?postMessage({cmd:"onAbort",arg:t}):u.onAbort&&u.onAbort(t),x(t="Aborted("+t+")"),H=!0,t=new WebAssembly.RuntimeError(t+". Build with -sASSERTIONS for more info."),s(t),t}function it(){return tt.startsWith("data:application/octet-stream;base64,")}function ot(){var t=tt;try{if(t==tt&&M)return new Uint8Array(M);if(p)return p(t);throw"both async and sync fetching of the wasm failed"}catch(t){at(t)}}tt="ort-wasm-threaded.wasm",it()||(tt=S(tt));var ut={};function ct(t){this.name="ExitStatus",this.message="Program terminated with exit("+t+")",this.status=t}function st(t){(t=ht.Vb[t])||at(),ht.mc(t)}function lt(t){var e=ht.Cc();if(!e)return 6;ht.ac.push(e),ht.Vb[t.Ub]=e,e.Ub=t.Ub;var n={cmd:"run",start_routine:t.Ic,arg:t.zc,pthread_ptr:t.Ub};return e.$b=()=>{n.time=performance.now(),e.postMessage(n,t.Nc)},e.loaded&&(e.$b(),delete e.$b),0}function ft(t){if(O)return $t(1,1,t);Q()||(ht.oc(),u.onExit&&u.onExit(t),H=!0),g(t,new ct(t))}function pt(t,e){if(!e&&O)throw bt(t),"unwind";Q()||O||(me(),dt(J),be(0),re[1].length&&ae(1,10),re[2].length&&ae(2,10),ht.oc()),ft(t)}var ht={Yb:[],ac:[],qc:[],Vb:{},fc:function(){O&&ht.Ec()},Pc:function(){},Ec:function(){ht.receiveObjectTransfer=ht.Gc,ht.threadInitTLS=ht.pc,ht.setExitStatus=ht.nc,R=!1},nc:function(){},oc:function(){for(var t of Object.values(ht.Vb))ht.mc(t);for(t of ht.Yb)t.terminate();ht.Yb=[]},mc:function(t){var e=t.Ub;delete ht.Vb[e],ht.Yb.push(t),ht.ac.splice(ht.ac.indexOf(t),1),t.Ub=0,Oe(e)},Gc:function(){},pc:function(){ht.qc.forEach((t=>t()))},Fc:function(t,e){t.onmessage=n=>{var r=(n=n.data).cmd;if(t.Ub&&(ht.Bc=t.Ub),n.targetThread&&n.targetThread!=he()){var a=ht.Vb[n.Qc];a?a.postMessage(n,n.transferList):x(\'Internal error! 
Worker sent a message "\'+r+\'" to target pthread \'+n.targetThread+", but that thread no longer exists!")}else"processProxyingQueue"===r?zt(n.queue):"spawnThread"===r?lt(n):"cleanupThread"===r?st(n.thread):"killThread"===r?(n=n.thread,r=ht.Vb[n],delete ht.Vb[n],r.terminate(),Oe(n),ht.ac.splice(ht.ac.indexOf(r),1),r.Ub=0):"cancelThread"===r?ht.Vb[n.thread].postMessage({cmd:"cancel"}):"loaded"===r?(t.loaded=!0,e&&e(t),t.$b&&(t.$b(),delete t.$b)):"print"===r?C("Thread "+n.threadId+": "+n.text):"printErr"===r?x("Thread "+n.threadId+": "+n.text):"alert"===r?alert("Thread "+n.threadId+": "+n.text):"setimmediate"===n.target?t.postMessage(n):"onAbort"===r?u.onAbort&&u.onAbort(n.arg):r&&x("worker sent an unknown command "+r);ht.Bc=void 0},t.onerror=t=>{throw x("worker sent an error! "+t.filename+":"+t.lineno+": "+t.message),t},_&&(t.on("message",(function(e){t.onmessage({data:e})})),t.on("error",(function(e){t.onerror(e)})),t.on("detachedExit",(function(){}))),t.postMessage({cmd:"load",urlOrBlob:u.mainScriptUrlOrBlob||_scriptDir,wasmMemory:j,wasmModule:k})},yc:function(){var t=S("ort-wasm-threaded.worker.js");ht.Yb.push(new Worker(t))},Cc:function(){return 0==ht.Yb.length&&(ht.yc(),ht.Fc(ht.Yb[0])),ht.Yb.pop()}};function dt(t){for(;0>2>>>0];t=a()[t+48>>2>>>0],Te(e,e-t),Me(e)};var mt=[];function gt(t){var e=mt[t];return e||(t>=mt.length&&(mt.length=t+1),mt[t]=e=$.get(t)),e}u.invokeEntryPoint=function(t,e){t=gt(t)(e),Q()?ht.nc(t):Ae(t)};var vt,wt,_t=[],Ot=0,At=0;function St(t){this.Zb=t,this.Sb=t-24,this.xc=function(t){i()[this.Sb+4>>2>>>0]=t},this.bc=function(){return i()[this.Sb+4>>2>>>0]},this.wc=function(t){i()[this.Sb+8>>2>>>0]=t},this.Dc=function(){return i()[this.Sb+8>>2>>>0]},this.rc=function(){a()[this.Sb>>2>>>0]=0},this.hc=function(t){t=t?1:0,e()[this.Sb+12>>0>>>0]=t},this.uc=function(){return 0!=e()[this.Sb+12>>0>>>0]},this.ic=function(t){t=t?1:0,e()[this.Sb+13>>0>>>0]=t},this.kc=function(){return 0!=e()[this.Sb+13>>0>>>0]},this.fc=function(t,e){this.cc(0),this.xc(t),this.wc(e),this.rc(),this.hc(!1),this.ic(!1)},this.sc=function(){Atomics.add(a(),this.Sb>>2,1)},this.Hc=function(){return 1===Atomics.sub(a(),this.Sb>>2,1)},this.cc=function(t){i()[this.Sb+16>>2>>>0]=t},this.tc=function(){return i()[this.Sb+16>>2>>>0]},this.vc=function(){if(Re(this.bc()))return i()[this.Zb>>2>>>0];var t=this.tc();return 0!==t?t:this.Zb}}function Tt(t){return ye(new St(t).Sb)}function Et(t,e,n,r){return O?$t(3,1,t,e,n,r):Mt(t,e,n,r)}function Mt(t,e,n,r){if("undefined"==typeof SharedArrayBuffer)return x("Current environment does not support SharedArrayBuffer, pthreads are not available!"),6;var a=[];return O&&0===a.length?Et(t,e,n,r):(t={Ic:n,Ub:t,zc:r,Nc:a},O?(t.Oc="spawnThread",postMessage(t,a),0):lt(t))}function Ct(t,e,n){return O?$t(4,1,t,e,n):0}function xt(t,e){if(O)return $t(5,1,t,e)}function Rt(t,e){if(O)return $t(6,1,t,e)}function jt(t,e,n){if(O)return $t(7,1,t,e,n)}function kt(t,e,n){return O?$t(8,1,t,e,n):0}function Dt(t,e){if(O)return $t(9,1,t,e)}function Pt(t,e,n){if(O)return $t(10,1,t,e,n)}function Ut(t,e,n,r){if(O)return $t(11,1,t,e,n,r)}function Ft(t,e,n,r){if(O)return $t(12,1,t,e,n,r)}function It(t,e,n,r){if(O)return $t(13,1,t,e,n,r)}function Wt(t){if(O)return $t(14,1,t)}function Ht(t,e){if(O)return $t(15,1,t,e)}function Lt(t,e,n){if(O)return $t(16,1,t,e,n)}function zt(t){Atomics.store(a(),t>>2,1),he()&&_e(t),Atomics.compareExchange(a(),t>>2,1,0)}function Yt(t){return i()[t>>>2]+4294967296*a()[t+4>>>2]}function Bt(t,e,n,r,a,i){return O?$t(17,1,t,e,n,r,a,i):-52}function 
Gt(t,e,n,r,a,i){if(O)return $t(18,1,t,e,n,r,a,i)}function Nt(t){var n=G(t)+1,r=de(n);return r&&B(t,e(),r,n),r}function Vt(t,e,n){function r(t){return(t=t.toTimeString().match(/\\(([A-Za-z ]+)\\)$/))?t[1]:"GMT"}if(O)return $t(19,1,t,e,n);var o=(new Date).getFullYear(),u=new Date(o,0,1),c=new Date(o,6,1);o=u.getTimezoneOffset();var s=c.getTimezoneOffset(),l=Math.max(o,s);a()[t>>2>>>0]=60*l,a()[e>>2>>>0]=Number(o!=s),t=r(u),e=r(c),t=Nt(t),e=Nt(e),s>2>>>0]=t,i()[n+4>>2>>>0]=e):(i()[n>>2>>>0]=e,i()[n+4>>2>>>0]=t)}function $t(t,e){var n=arguments.length-2,r=arguments;return yt((()=>{for(var a=Ce(8*n),i=a>>3,u=0;u>>0]=c}return we(t,n,a,e)}))}u.executeNotifiedProxyingQueue=zt,wt=_?()=>{var t=process.hrtime();return 1e3*t[0]+t[1]/1e6}:O?()=>performance.now()-u.__performance_now_clock_drift:()=>performance.now();var qt,Xt=[],Jt={};function Zt(){if(!qt){var t,e={USER:"web_user",LOGNAME:"web_user",PATH:"/",PWD:"/",HOME:"/home/web_user",LANG:("object"==typeof navigator&&navigator.languages&&navigator.languages[0]||"C").replace("-","_")+".UTF-8",_:m||"./this.program"};for(t in Jt)void 0===Jt[t]?delete e[t]:e[t]=Jt[t];var n=[];for(t in e)n.push(t+"="+e[t]);qt=n}return qt}function Qt(t,n){if(O)return $t(20,1,t,n);var r=0;return Zt().forEach((function(a,o){var u=n+r;for(o=i()[t+4*o>>2>>>0]=u,u=0;u>0>>>0]=a.charCodeAt(u);e()[o>>0>>>0]=0,r+=a.length+1})),0}function Kt(t,e){if(O)return $t(21,1,t,e);var n=Zt();i()[t>>2>>>0]=n.length;var r=0;return n.forEach((function(t){r+=t.length+1})),i()[e>>2>>>0]=r,0}function te(t){return O?$t(22,1,t):52}function ee(t,e,n,r){return O?$t(23,1,t,e,n,r):52}function ne(t,e,n,r,a){return O?$t(24,1,t,e,n,r,a):70}var re=[null,[],[]];function ae(t,e){var n=re[t];0===e||10===e?((1===t?C:x)(z(n,0)),n.length=0):n.push(e)}function ie(t,e,n,a){if(O)return $t(25,1,t,e,n,a);for(var o=0,u=0;u>2>>>0],s=i()[e+4>>2>>>0];e+=8;for(var l=0;l>>0]);o+=s}return i()[a>>2>>>0]=o,0}var oe=0;function ue(t){return 0==t%4&&(0!=t%100||0==t%400)}var ce=[31,29,31,30,31,30,31,31,30,31,30,31],se=[31,28,31,30,31,30,31,31,30,31,30,31];function le(t,n,r,i){function o(t,e,n){for(t="number"==typeof t?t.toString():t||"";t.lengtht?-1:0r-t.getDate())){t.setDate(t.getDate()+e);break}e-=r-t.getDate()+1,t.setDate(1),11>n?t.setMonth(n+1):(t.setMonth(0),t.setFullYear(t.getFullYear()+1))}return n=new Date(t.getFullYear()+1,0,4),e=s(new Date(t.getFullYear(),0,4)),n=s(n),0>=c(e,t)?0>=c(n,t)?t.getFullYear()+1:t.getFullYear():t.getFullYear()-1}var f=a()[i+40>>2>>>0];for(var p in i={Lc:a()[i>>2>>>0],Kc:a()[i+4>>2>>>0],dc:a()[i+8>>2>>>0],jc:a()[i+12>>2>>>0],ec:a()[i+16>>2>>>0],Xb:a()[i+20>>2>>>0],Tb:a()[i+24>>2>>>0],Wb:a()[i+28>>2>>>0],Rc:a()[i+32>>2>>>0],Jc:a()[i+36>>2>>>0],Mc:f?Y(f):""},r=Y(r),f={"%c":"%a %b %d %H:%M:%S %Y","%D":"%m/%d/%y","%F":"%Y-%m-%d","%h":"%b","%r":"%I:%M:%S %p","%R":"%H:%M","%T":"%H:%M:%S","%x":"%m/%d/%y","%X":"%H:%M:%S","%Ec":"%c","%EC":"%C","%Ex":"%m/%d/%y","%EX":"%H:%M:%S","%Ey":"%y","%EY":"%Y","%Od":"%d","%Oe":"%e","%OH":"%H","%OI":"%I","%Om":"%m","%OM":"%M","%OS":"%S","%Ou":"%u","%OU":"%U","%OV":"%V","%Ow":"%w","%OW":"%W","%Oy":"%y"})r=r.replace(new RegExp(p,"g"),f[p]);var h="Sunday Monday Tuesday Wednesday Thursday Friday Saturday".split(" "),d="January February March April May June July August September October November December".split(" ");for(p in f={"%a":function(t){return h[t.Tb].substring(0,3)},"%A":function(t){return h[t.Tb]},"%b":function(t){return d[t.ec].substring(0,3)},"%B":function(t){return d[t.ec]},"%C":function(t){return u((t.Xb+1900)/100|0,2)},"%d":function(t){return 
u(t.jc,2)},"%e":function(t){return o(t.jc,2," ")},"%g":function(t){return l(t).toString().substring(2)},"%G":function(t){return l(t)},"%H":function(t){return u(t.dc,2)},"%I":function(t){return 0==(t=t.dc)?t=12:12t.dc?"AM":"PM"},"%S":function(t){return u(t.Lc,2)},"%t":function(){return"\\t"},"%u":function(t){return t.Tb||7},"%U":function(t){return u(Math.floor((t.Wb+7-t.Tb)/7),2)},"%V":function(t){var e=Math.floor((t.Wb+7-(t.Tb+6)%7)/7);if(2>=(t.Tb+371-t.Wb-2)%7&&e++,e)53==e&&(4==(n=(t.Tb+371-t.Wb)%7)||3==n&&ue(t.Xb)||(e=1));else{e=52;var n=(t.Tb+7-t.Wb-1)%7;(4==n||5==n&&ue(t.Xb%400-1))&&e++}return u(e,2)},"%w":function(t){return t.Tb},"%W":function(t){return u(Math.floor((t.Wb+7-(t.Tb+6)%7)/7),2)},"%y":function(t){return(t.Xb+1900).toString().substring(2)},"%Y":function(t){return t.Xb+1900},"%z":function(t){var e=0<=(t=t.Jc);return t=Math.abs(t)/60,(e?"+":"-")+String("0000"+(t/60*100+t%60)).slice(-4)},"%Z":function(t){return t.Mc},"%%":function(){return"%"}},r=r.replace(/%%/g,"\\0\\0"),f)r.includes(p)&&(r=r.replace(new RegExp(p,"g"),f[p](i)));return p=function(t){var e=Array(G(t)+1);return B(t,e,0,e.length),e}(r=r.replace(/\\0\\0/g,"%")),p.length>n?0:(function(t,n){e().set(t,n>>>0)}(p,t),p.length-1)}ht.fc();var fe=[null,ft,bt,Et,Ct,xt,Rt,jt,kt,Dt,Pt,Ut,Ft,It,Wt,Ht,Lt,Bt,Gt,Vt,Qt,Kt,te,ee,ne,ie],pe={b:function(t){return de(t+24)+24},n:function(t){return(t=new St(t)).uc()||(t.hc(!0),Ot--),t.ic(!1),_t.push(t),t.sc(),t.vc()},ma:function(t){throw x("Unexpected exception thrown, this is not properly supported - aborting"),H=!0,t},x:function(){Se(0);var t=_t.pop();if(t.Hc()&&!t.kc()){var e=t.Dc();e&>(e)(t.Zb),Tt(t.Zb)}At=0},e:function(){var t=At;if(!t)return oe=0;var e=new St(t);e.cc(t);var n=e.bc();if(!n)return oe=0,t;for(var r=Array.prototype.slice.call(arguments),a=0;azt(r)));else if(O)postMessage({targetThread:t,cmd:"processProxyingQueue",queue:r});else{if(!(t=ht.Vb[t]))return;t.postMessage({cmd:"processProxyingQueue",queue:r})}return 1},Ea:function(){return-1},Pa:function(t,e){t=new Date(1e3*Yt(t)),a()[e>>2>>>0]=t.getUTCSeconds(),a()[e+4>>2>>>0]=t.getUTCMinutes(),a()[e+8>>2>>>0]=t.getUTCHours(),a()[e+12>>2>>>0]=t.getUTCDate(),a()[e+16>>2>>>0]=t.getUTCMonth(),a()[e+20>>2>>>0]=t.getUTCFullYear()-1900,a()[e+24>>2>>>0]=t.getUTCDay(),t=(t.getTime()-Date.UTC(t.getUTCFullYear(),0,1,0,0,0,0))/864e5|0,a()[e+28>>2>>>0]=t},Qa:function(t,e){t=new Date(1e3*Yt(t)),a()[e>>2>>>0]=t.getSeconds(),a()[e+4>>2>>>0]=t.getMinutes(),a()[e+8>>2>>>0]=t.getHours(),a()[e+12>>2>>>0]=t.getDate(),a()[e+16>>2>>>0]=t.getMonth(),a()[e+20>>2>>>0]=t.getFullYear()-1900,a()[e+24>>2>>>0]=t.getDay();var n=new Date(t.getFullYear(),0,1),r=(t.getTime()-n.getTime())/864e5|0;a()[e+28>>2>>>0]=r,a()[e+36>>2>>>0]=-60*t.getTimezoneOffset(),r=new Date(t.getFullYear(),6,1).getTimezoneOffset(),t=0|(r!=(n=n.getTimezoneOffset())&&t.getTimezoneOffset()==Math.min(n,r)),a()[e+32>>2>>>0]=t},Ra:function(t){var e=new Date(a()[t+20>>2>>>0]+1900,a()[t+16>>2>>>0],a()[t+12>>2>>>0],a()[t+8>>2>>>0],a()[t+4>>2>>>0],a()[t>>2>>>0],0),n=a()[t+32>>2>>>0],r=e.getTimezoneOffset(),i=new Date(e.getFullYear(),0,1),o=new Date(e.getFullYear(),6,1).getTimezoneOffset(),u=i.getTimezoneOffset(),c=Math.min(u,o);return 0>n?a()[t+32>>2>>>0]=Number(o!=u&&c==r):0>2>>>0]=e.getDay(),n=(e.getTime()-i.getTime())/864e5|0,a()[t+28>>2>>>0]=n,a()[t>>2>>>0]=e.getSeconds(),a()[t+4>>2>>>0]=e.getMinutes(),a()[t+8>>2>>>0]=e.getHours(),a()[t+12>>2>>>0]=e.getDate(),a()[t+16>>2>>>0]=e.getMonth(),e.getTime()/1e3|0},Aa:Bt,Ba:Gt,Sa:function 
t(e,n,r){t.Ac||(t.Ac=!0,Vt(e,n,r))},y:function(){at("")},U:function(){if(!_&&!w){var t="Blocking on the main thread is very dangerous, see https://emscripten.org/docs/porting/pthreads.html#blocking-on-the-main-browser-thread";vt||(vt={}),vt[t]||(vt[t]=1,_&&(t="warning: "+t),x(t))}},ra:function(){return 4294901760},B:wt,Ia:function(t,e,n){r().copyWithin(t>>>0,e>>>0,e+n>>>0)},F:function(){return _?n(993).cpus().length:navigator.hardwareConcurrency},Da:function(t,e,n){Xt.length=e,n>>=3;for(var r=0;r>>0];return(0>t?ut[-t-1]:fe[t]).apply(null,Xt)},qa:function(t){var e=r().length;if((t>>>=0)<=e||4294901760=n;n*=2){var a=e*(1+.2/n);a=Math.min(a,t+100663296);var i=Math;a=Math.max(t,a),i=i.min.call(i,4294901760,a+(65536-a%65536)%65536);t:{try{j.grow(i-D.byteLength+65535>>>16),N(j.buffer);var o=1;break t}catch(t){}o=void 0}if(o)return!0}return!1},Na:function(){throw"unwind"},Ga:Qt,Ha:Kt,J:pt,I:te,S:ee,ga:ne,R:ie,d:function(){return oe},na:function t(r,a){t.lc||(t.lc=function(){if("object"==typeof crypto&&"function"==typeof crypto.getRandomValues){var t=new Uint8Array(1);return()=>(crypto.getRandomValues(t),t[0])}if(_)try{var e=n(Object(function(){var t=new Error("Cannot find module \'crypto\'");throw t.code="MODULE_NOT_FOUND",t}()));return()=>e.randomBytes(1)[0]}catch(t){}return()=>at("randomDevice")}());for(var i=0;i>0>>>0]=t.lc();return 0},ia:function(t,e,n){var r=Ee();try{return gt(t)(e,n)}catch(t){if(Me(r),t!==t+0)throw t;Se(1,0)}},ja:function(t,e,n){var r=Ee();try{return gt(t)(e,n)}catch(t){if(Me(r),t!==t+0)throw t;Se(1,0)}},K:function(t){var e=Ee();try{return gt(t)()}catch(t){if(Me(e),t!==t+0)throw t;Se(1,0)}},f:function(t,e){var n=Ee();try{return gt(t)(e)}catch(t){if(Me(n),t!==t+0)throw t;Se(1,0)}},P:function(t,e,n){var r=Ee();try{return gt(t)(e,n)}catch(t){if(Me(r),t!==t+0)throw t;Se(1,0)}},Q:function(t,e,n){var r=Ee();try{return gt(t)(e,n)}catch(t){if(Me(r),t!==t+0)throw t;Se(1,0)}},k:function(t,e,n){var r=Ee();try{return gt(t)(e,n)}catch(t){if(Me(r),t!==t+0)throw t;Se(1,0)}},p:function(t,e,n,r){var a=Ee();try{return gt(t)(e,n,r)}catch(t){if(Me(a),t!==t+0)throw t;Se(1,0)}},q:function(t,e,n,r,a){var i=Ee();try{return gt(t)(e,n,r,a)}catch(t){if(Me(i),t!==t+0)throw t;Se(1,0)}},N:function(t,e,n,r,a,i){var o=Ee();try{return gt(t)(e,n,r,a,i)}catch(t){if(Me(o),t!==t+0)throw t;Se(1,0)}},s:function(t,e,n,r,a,i){var o=Ee();try{return gt(t)(e,n,r,a,i)}catch(t){if(Me(o),t!==t+0)throw t;Se(1,0)}},w:function(t,e,n,r,a,i,o){var u=Ee();try{return gt(t)(e,n,r,a,i,o)}catch(t){if(Me(u),t!==t+0)throw t;Se(1,0)}},L:function(t,e,n,r,a,i,o,u){var c=Ee();try{return gt(t)(e,n,r,a,i,o,u)}catch(t){if(Me(c),t!==t+0)throw t;Se(1,0)}},E:function(t,e,n,r,a,i,o,u,c,s,l,f){var p=Ee();try{return gt(t)(e,n,r,a,i,o,u,c,s,l,f)}catch(t){if(Me(p),t!==t+0)throw t;Se(1,0)}},aa:function(t,e,n,r,a,i,o,u){var c=Ee();try{return He(t,e,n,r,a,i,o,u)}catch(t){if(Me(c),t!==t+0)throw t;Se(1,0)}},_:function(t,e,n,r,a,i,o){var u=Ee();try{return ke(t,e,n,r,a,i,o)}catch(t){if(Me(u),t!==t+0)throw t;Se(1,0)}},Z:function(t,e,n,r,a){var i=Ee();try{return Le(t,e,n,r,a)}catch(t){if(Me(i),t!==t+0)throw t;Se(1,0)}},ca:function(t,e,n,r){var a=Ee();try{return Ie(t,e,n,r)}catch(t){if(Me(a),t!==t+0)throw t;Se(1,0)}},$:function(t){var e=Ee();try{return je(t)}catch(t){if(Me(e),t!==t+0)throw t;Se(1,0)}},ba:function(t,e){var n=Ee();try{return We(t,e)}catch(t){if(Me(n),t!==t+0)throw t;Se(1,0)}},Y:function(t,e,n){var r=Ee();try{return De(t,e,n)}catch(t){if(Me(r),t!==t+0)throw t;Se(1,0)}},g:function(t){var e=Ee();try{gt(t)()}catch(t){if(Me(e),t!==t+0)throw 
t;Se(1,0)}},r:function(t,e){var n=Ee();try{gt(t)(e)}catch(t){if(Me(n),t!==t+0)throw t;Se(1,0)}},i:function(t,e,n){var r=Ee();try{gt(t)(e,n)}catch(t){if(Me(r),t!==t+0)throw t;Se(1,0)}},ha:function(t,e,n,r){var a=Ee();try{gt(t)(e,n,r)}catch(t){if(Me(a),t!==t+0)throw t;Se(1,0)}},m:function(t,e,n,r){var a=Ee();try{gt(t)(e,n,r)}catch(t){if(Me(a),t!==t+0)throw t;Se(1,0)}},v:function(t,e,n,r,a){var i=Ee();try{gt(t)(e,n,r,a)}catch(t){if(Me(i),t!==t+0)throw t;Se(1,0)}},u:function(t,e,n,r,a,i){var o=Ee();try{gt(t)(e,n,r,a,i)}catch(t){if(Me(o),t!==t+0)throw t;Se(1,0)}},O:function(t,e,n,r,a,i,o){var u=Ee();try{gt(t)(e,n,r,a,i,o)}catch(t){if(Me(u),t!==t+0)throw t;Se(1,0)}},A:function(t,e,n,r,a,i,o,u){var c=Ee();try{gt(t)(e,n,r,a,i,o,u)}catch(t){if(Me(c),t!==t+0)throw t;Se(1,0)}},ka:function(t,e,n,r,a,i,o,u,c){var s=Ee();try{gt(t)(e,n,r,a,i,o,u,c)}catch(t){if(Me(s),t!==t+0)throw t;Se(1,0)}},C:function(t,e,n,r,a,i,o,u,c,s,l){var f=Ee();try{gt(t)(e,n,r,a,i,o,u,c,s,l)}catch(t){if(Me(f),t!==t+0)throw t;Se(1,0)}},D:function(t,e,n,r,a,i,o,u,c,s,l,f,p,h,d,y){var b=Ee();try{gt(t)(e,n,r,a,i,o,u,c,s,l,f,p,h,d,y)}catch(t){if(Me(b),t!==t+0)throw t;Se(1,0)}},fa:function(t,e,n,r,a,i,o,u){var c=Ee();try{Pe(t,e,n,r,a,i,o,u)}catch(t){if(Me(c),t!==t+0)throw t;Se(1,0)}},da:function(t,e,n,r,a,i,o,u,c,s,l,f){var p=Ee();try{Fe(t,e,n,r,a,i,o,u,c,s,l,f)}catch(t){if(Me(p),t!==t+0)throw t;Se(1,0)}},ea:function(t,e,n,r,a,i){var o=Ee();try{Ue(t,e,n,r,a,i)}catch(t){if(Me(o),t!==t+0)throw t;Se(1,0)}},o:function(t){return t},a:j||u.wasmMemory,G:function(t){oe=t},la:le,z:function(t,e,n,r){return le(t,e,n,r)}};!function(){function t(t,e){u.asm=t.exports,ht.qc.push(u.asm.sb),$=u.asm.ub,X.unshift(u.asm.Va),k=e,O||(et--,u.monitorRunDependencies&&u.monitorRunDependencies(et),0==et&&(null!==nt&&(clearInterval(nt),nt=null),rt&&(t=rt,rt=null,t())))}function e(e){t(e.instance,e.module)}function n(t){return function(){if(!M&&(v||w)){if("function"==typeof fetch&&!tt.startsWith("file://"))return fetch(tt,{credentials:"same-origin"}).then((function(t){if(!t.ok)throw"failed to load wasm binary file at \'"+tt+"\'";return t.arrayBuffer()})).catch((function(){return ot()}));if(f)return new Promise((function(t,e){f(tt,(function(e){t(new Uint8Array(e))}),e)}))}return Promise.resolve().then((function(){return ot()}))}().then((function(t){return WebAssembly.instantiate(t,r)})).then((function(t){return t})).then(t,(function(t){x("failed to asynchronously prepare wasm: "+t),at(t)}))}var r={a:pe};if(O||(et++,u.monitorRunDependencies&&u.monitorRunDependencies(et)),u.instantiateWasm)try{return u.instantiateWasm(r,t)}catch(t){return x("Module.instantiateWasm callback failed with error: "+t),!1}(M||"function"!=typeof WebAssembly.instantiateStreaming||it()||tt.startsWith("file://")||_||"function"!=typeof fetch?n(e):fetch(tt,{credentials:"same-origin"}).then((function(t){return WebAssembly.instantiateStreaming(t,r).then(e,(function(t){return x("wasm streaming compile failed: "+t),x("falling back to ArrayBuffer 
instantiation"),n(e)}))}))).catch(s)}(),u.___wasm_call_ctors=function(){return(u.___wasm_call_ctors=u.asm.Va).apply(null,arguments)},u._OrtInit=function(){return(u._OrtInit=u.asm.Wa).apply(null,arguments)},u._OrtCreateSessionOptions=function(){return(u._OrtCreateSessionOptions=u.asm.Xa).apply(null,arguments)},u._OrtAppendExecutionProvider=function(){return(u._OrtAppendExecutionProvider=u.asm.Ya).apply(null,arguments)},u._OrtAddSessionConfigEntry=function(){return(u._OrtAddSessionConfigEntry=u.asm.Za).apply(null,arguments)},u._OrtReleaseSessionOptions=function(){return(u._OrtReleaseSessionOptions=u.asm._a).apply(null,arguments)},u._OrtCreateSession=function(){return(u._OrtCreateSession=u.asm.$a).apply(null,arguments)},u._OrtReleaseSession=function(){return(u._OrtReleaseSession=u.asm.ab).apply(null,arguments)},u._OrtGetInputCount=function(){return(u._OrtGetInputCount=u.asm.bb).apply(null,arguments)},u._OrtGetOutputCount=function(){return(u._OrtGetOutputCount=u.asm.cb).apply(null,arguments)},u._OrtGetInputName=function(){return(u._OrtGetInputName=u.asm.db).apply(null,arguments)},u._OrtGetOutputName=function(){return(u._OrtGetOutputName=u.asm.eb).apply(null,arguments)},u._OrtFree=function(){return(u._OrtFree=u.asm.fb).apply(null,arguments)},u._OrtCreateTensor=function(){return(u._OrtCreateTensor=u.asm.gb).apply(null,arguments)},u._OrtGetTensorData=function(){return(u._OrtGetTensorData=u.asm.hb).apply(null,arguments)},u._OrtReleaseTensor=function(){return(u._OrtReleaseTensor=u.asm.ib).apply(null,arguments)},u._OrtCreateRunOptions=function(){return(u._OrtCreateRunOptions=u.asm.jb).apply(null,arguments)},u._OrtAddRunConfigEntry=function(){return(u._OrtAddRunConfigEntry=u.asm.kb).apply(null,arguments)},u._OrtReleaseRunOptions=function(){return(u._OrtReleaseRunOptions=u.asm.lb).apply(null,arguments)},u._OrtRun=function(){return(u._OrtRun=u.asm.mb).apply(null,arguments)},u._OrtEndProfiling=function(){return(u._OrtEndProfiling=u.asm.nb).apply(null,arguments)};var he=u._pthread_self=function(){return(he=u._pthread_self=u.asm.ob).apply(null,arguments)},de=u._malloc=function(){return(de=u._malloc=u.asm.pb).apply(null,arguments)},ye=u._free=function(){return(ye=u._free=u.asm.qb).apply(null,arguments)},be=u._fflush=function(){return(be=u._fflush=u.asm.rb).apply(null,arguments)};u.__emscripten_tls_init=function(){return(u.__emscripten_tls_init=u.asm.sb).apply(null,arguments)};var me=u.___funcs_on_exit=function(){return(me=u.___funcs_on_exit=u.asm.tb).apply(null,arguments)},ge=u.__emscripten_thread_init=function(){return(ge=u.__emscripten_thread_init=u.asm.vb).apply(null,arguments)};u.__emscripten_thread_crashed=function(){return(u.__emscripten_thread_crashed=u.asm.wb).apply(null,arguments)};var 
ve,we=u._emscripten_run_in_main_runtime_thread_js=function(){return(we=u._emscripten_run_in_main_runtime_thread_js=u.asm.xb).apply(null,arguments)},_e=u.__emscripten_proxy_execute_task_queue=function(){return(_e=u.__emscripten_proxy_execute_task_queue=u.asm.yb).apply(null,arguments)},Oe=u.__emscripten_thread_free_data=function(){return(Oe=u.__emscripten_thread_free_data=u.asm.zb).apply(null,arguments)},Ae=u.__emscripten_thread_exit=function(){return(Ae=u.__emscripten_thread_exit=u.asm.Ab).apply(null,arguments)},Se=u._setThrew=function(){return(Se=u._setThrew=u.asm.Bb).apply(null,arguments)},Te=u._emscripten_stack_set_limits=function(){return(Te=u._emscripten_stack_set_limits=u.asm.Cb).apply(null,arguments)},Ee=u.stackSave=function(){return(Ee=u.stackSave=u.asm.Db).apply(null,arguments)},Me=u.stackRestore=function(){return(Me=u.stackRestore=u.asm.Eb).apply(null,arguments)},Ce=u.stackAlloc=function(){return(Ce=u.stackAlloc=u.asm.Fb).apply(null,arguments)},xe=u.___cxa_can_catch=function(){return(xe=u.___cxa_can_catch=u.asm.Gb).apply(null,arguments)},Re=u.___cxa_is_pointer_type=function(){return(Re=u.___cxa_is_pointer_type=u.asm.Hb).apply(null,arguments)},je=u.dynCall_j=function(){return(je=u.dynCall_j=u.asm.Ib).apply(null,arguments)},ke=u.dynCall_iiiiij=function(){return(ke=u.dynCall_iiiiij=u.asm.Jb).apply(null,arguments)},De=u.dynCall_jii=function(){return(De=u.dynCall_jii=u.asm.Kb).apply(null,arguments)},Pe=u.dynCall_viiiiij=function(){return(Pe=u.dynCall_viiiiij=u.asm.Lb).apply(null,arguments)},Ue=u.dynCall_vjji=function(){return(Ue=u.dynCall_vjji=u.asm.Mb).apply(null,arguments)},Fe=u.dynCall_viiijjjii=function(){return(Fe=u.dynCall_viiijjjii=u.asm.Nb).apply(null,arguments)},Ie=u.dynCall_iij=function(){return(Ie=u.dynCall_iij=u.asm.Ob).apply(null,arguments)},We=u.dynCall_ji=function(){return(We=u.dynCall_ji=u.asm.Pb).apply(null,arguments)},He=u.dynCall_iiiiiij=function(){return(He=u.dynCall_iiiiiij=u.asm.Qb).apply(null,arguments)},Le=u.dynCall_iiij=function(){return(Le=u.dynCall_iiij=u.asm.Rb).apply(null,arguments)};function ze(){function t(){if(!ve&&(ve=!0,u.calledRun=!0,!H)&&(O||dt(X),c(u),u.onRuntimeInitialized&&u.onRuntimeInitialized(),!O)){if(u.postRun)for("function"==typeof u.postRun&&(u.postRun=[u.postRun]);u.postRun.length;){var t=u.postRun.shift();Z.unshift(t)}dt(Z)}}if(!(0{var _scriptDir,r=(_scriptDir=(_scriptDir="undefined"!=typeof document&&document.currentScript?document.currentScript.src:void 0)||"/index.js",function(t){var e,r,a;t=t||{},e||(e=void 0!==t?t:{}),e.ready=new Promise((function(t,e){r=t,a=e}));var i,o,u,c,s,l,f=Object.assign({},e),p="./this.program",h=(t,e)=>{throw e},d="object"==typeof window,y="function"==typeof importScripts,b="object"==typeof process&&"object"==typeof process.versions&&"string"==typeof process.versions.node,m="";b?(m=y?n(908).dirname(m)+"/":"//",l=()=>{s||(c=n(384),s=n(908))},i=function(t,e){return l(),t=s.normalize(t),c.readFileSync(t,e?void 0:"utf8")},u=t=>((t=i(t,!0)).buffer||(t=new Uint8Array(t)),t),o=(t,e,n)=>{l(),t=s.normalize(t),c.readFile(t,(function(t,r){t?n(t):e(r.buffer)}))},1{if(_||0{var e=new XMLHttpRequest;return e.open("GET",t,!1),e.send(null),e.responseText},y&&(u=t=>{var e=new XMLHttpRequest;return e.open("GET",t,!1),e.responseType="arraybuffer",e.send(null),new Uint8Array(e.response)}),o=(t,e,n)=>{var r=new XMLHttpRequest;r.open("GET",t,!0),r.responseType="arraybuffer",r.onload=()=>{200==r.status||0==r.status&&r.response?e(r.response):n()},r.onerror=n,r.send(null)});var 
g,v=e.print||console.log.bind(console),w=e.printErr||console.warn.bind(console);Object.assign(e,f),f=null,e.thisProgram&&(p=e.thisProgram),e.quit&&(h=e.quit),e.wasmBinary&&(g=e.wasmBinary);var _=e.noExitRuntime||!1;"object"!=typeof WebAssembly&&V("no native wasm support detected");var O,A,S,T,E,M,C=!1,x="undefined"!=typeof TextDecoder?new TextDecoder("utf8"):void 0;function R(t,e,n){var r=(e>>>=0)+n;for(n=e;t[n]&&!(n>=r);)++n;if(16(a=224==(240&a)?(15&a)<<12|i<<6|o:(7&a)<<18|i<<12|o<<6|63&t[e++])?r+=String.fromCharCode(a):(a-=65536,r+=String.fromCharCode(55296|a>>10,56320|1023&a))}}else r+=String.fromCharCode(a)}return r}function j(t,e){return(t>>>=0)?R(T,t,e):""}function k(t,e,n,r){if(!(0>>=0;r=n+r-1;for(var i=0;i=o&&(o=65536+((1023&o)<<10)|1023&t.charCodeAt(++i)),127>=o){if(n>=r)break;e[n++>>>0]=o}else{if(2047>=o){if(n+1>=r)break;e[n++>>>0]=192|o>>6}else{if(65535>=o){if(n+2>=r)break;e[n++>>>0]=224|o>>12}else{if(n+3>=r)break;e[n++>>>0]=240|o>>18,e[n++>>>0]=128|o>>12&63}e[n++>>>0]=128|o>>6&63}e[n++>>>0]=128|63&o}}return e[n>>>0]=0,n-a}function D(t){for(var e=0,n=0;n=r?e++:2047>=r?e+=2:55296<=r&&57343>=r?(e+=4,++n):e+=3}return e}function P(){var t=O.buffer;A=t,e.HEAP8=S=new Int8Array(t),e.HEAP16=new Int16Array(t),e.HEAP32=E=new Int32Array(t),e.HEAPU8=T=new Uint8Array(t),e.HEAPU16=new Uint16Array(t),e.HEAPU32=M=new Uint32Array(t),e.HEAPF32=new Float32Array(t),e.HEAPF64=new Float64Array(t)}var U,F=[],I=[],W=[],H=[],L=0;function z(){var t=e.preRun.shift();F.unshift(t)}var Y,B=0,G=null,N=null;function V(t){throw e.onAbort&&e.onAbort(t),w(t="Aborted("+t+")"),C=!0,t=new WebAssembly.RuntimeError(t+". Build with -sASSERTIONS for more info."),a(t),t}function $(){return Y.startsWith("data:application/octet-stream;base64,")}if(Y="ort-wasm.wasm",!$()){var q=Y;Y=e.locateFile?e.locateFile(q,m):m+q}function X(){var t=Y;try{if(t==Y&&g)return new Uint8Array(g);if(u)return u(t);throw"both async and sync fetching of the wasm failed"}catch(t){V(t)}}function J(t){this.name="ExitStatus",this.message="Program terminated with exit("+t+")",this.status=t}function Z(t){for(;0>2>>>0]=t},this.Eb=function(){return M[this.zb+4>>2>>>0]},this.Sb=function(t){M[this.zb+8>>2>>>0]=t},this.Wb=function(){return M[this.zb+8>>2>>>0]},this.Tb=function(){E[this.zb>>2>>>0]=0},this.Ib=function(t){S[this.zb+12>>0>>>0]=t?1:0},this.Pb=function(){return 0!=S[this.zb+12>>0>>>0]},this.Jb=function(t){S[this.zb+13>>0>>>0]=t?1:0},this.Lb=function(){return 0!=S[this.zb+13>>0>>>0]},this.Rb=function(t,e){this.Fb(0),this.Ub(t),this.Sb(e),this.Tb(),this.Ib(!1),this.Jb(!1)},this.Nb=function(){E[this.zb>>2>>>0]+=1},this.Xb=function(){var t=E[this.zb>>2>>>0];return E[this.zb>>2>>>0]=t-1,1===t},this.Fb=function(t){M[this.zb+16>>2>>>0]=t},this.Ob=function(){return M[this.zb+16>>2>>>0]},this.Qb=function(){if(Mt(this.Eb()))return M[this.Db>>2>>>0];var t=this.Ob();return 0!==t?t:this.Db}}function nt(t){return vt(new et(t).zb)}var rt=[];function at(t){var e=rt[t];return e||(t>=rt.length&&(rt.length=t+1),rt[t]=e=U.get(t)),e}function it(t){var e=D(t)+1,n=gt(e);return n&&k(t,S,n,e),n}var ot={};function ut(){if(!ct){var t,e={USER:"web_user",LOGNAME:"web_user",PATH:"/",PWD:"/",HOME:"/home/web_user",LANG:("object"==typeof navigator&&navigator.languages&&navigator.languages[0]||"C").replace("-","_")+".UTF-8",_:p||"./this.program"};for(t in ot)void 0===ot[t]?delete e[t]:e[t]=ot[t];var n=[];for(t in e)n.push(t+"="+e[t]);ct=n}return ct}var ct,st=[null,[],[]];function lt(t,e){var n=st[t];0===e||10===e?((1===t?v:w)(R(n,0)),n.length=0):n.push(e)}var ft=0;function 
pt(t){return 0==t%4&&(0!=t%100||0==t%400)}var ht=[31,29,31,30,31,30,31,31,30,31,30,31],dt=[31,28,31,30,31,30,31,31,30,31,30,31];function yt(t,e,n,r){function a(t,e,n){for(t="number"==typeof t?t.toString():t||"";t.lengtht?-1:0r-t.getDate())){t.setDate(t.getDate()+e);break}e-=r-t.getDate()+1,t.setDate(1),11>n?t.setMonth(n+1):(t.setMonth(0),t.setFullYear(t.getFullYear()+1))}return n=new Date(t.getFullYear()+1,0,4),e=u(new Date(t.getFullYear(),0,4)),n=u(n),0>=o(e,t)?0>=o(n,t)?t.getFullYear()+1:t.getFullYear():t.getFullYear()-1}var s=E[r+40>>2>>>0];for(var l in r={$b:E[r>>2>>>0],Zb:E[r+4>>2>>>0],Gb:E[r+8>>2>>>0],Kb:E[r+12>>2>>>0],Hb:E[r+16>>2>>>0],Cb:E[r+20>>2>>>0],Ab:E[r+24>>2>>>0],Bb:E[r+28>>2>>>0],bc:E[r+32>>2>>>0],Yb:E[r+36>>2>>>0],ac:s?j(s):""},n=j(n),s={"%c":"%a %b %d %H:%M:%S %Y","%D":"%m/%d/%y","%F":"%Y-%m-%d","%h":"%b","%r":"%I:%M:%S %p","%R":"%H:%M","%T":"%H:%M:%S","%x":"%m/%d/%y","%X":"%H:%M:%S","%Ec":"%c","%EC":"%C","%Ex":"%m/%d/%y","%EX":"%H:%M:%S","%Ey":"%y","%EY":"%Y","%Od":"%d","%Oe":"%e","%OH":"%H","%OI":"%I","%Om":"%m","%OM":"%M","%OS":"%S","%Ou":"%u","%OU":"%U","%OV":"%V","%Ow":"%w","%OW":"%W","%Oy":"%y"})n=n.replace(new RegExp(l,"g"),s[l]);var f="Sunday Monday Tuesday Wednesday Thursday Friday Saturday".split(" "),p="January February March April May June July August September October November December".split(" ");for(l in s={"%a":function(t){return f[t.Ab].substring(0,3)},"%A":function(t){return f[t.Ab]},"%b":function(t){return p[t.Hb].substring(0,3)},"%B":function(t){return p[t.Hb]},"%C":function(t){return i((t.Cb+1900)/100|0,2)},"%d":function(t){return i(t.Kb,2)},"%e":function(t){return a(t.Kb,2," ")},"%g":function(t){return c(t).toString().substring(2)},"%G":function(t){return c(t)},"%H":function(t){return i(t.Gb,2)},"%I":function(t){return 0==(t=t.Gb)?t=12:12t.Gb?"AM":"PM"},"%S":function(t){return i(t.$b,2)},"%t":function(){return"\\t"},"%u":function(t){return t.Ab||7},"%U":function(t){return i(Math.floor((t.Bb+7-t.Ab)/7),2)},"%V":function(t){var e=Math.floor((t.Bb+7-(t.Ab+6)%7)/7);if(2>=(t.Ab+371-t.Bb-2)%7&&e++,e)53==e&&(4==(n=(t.Ab+371-t.Bb)%7)||3==n&&pt(t.Cb)||(e=1));else{e=52;var n=(t.Ab+7-t.Bb-1)%7;(4==n||5==n&&pt(t.Cb%400-1))&&e++}return i(e,2)},"%w":function(t){return t.Ab},"%W":function(t){return i(Math.floor((t.Bb+7-(t.Ab+6)%7)/7),2)},"%y":function(t){return(t.Cb+1900).toString().substring(2)},"%Y":function(t){return t.Cb+1900},"%z":function(t){var e=0<=(t=t.Yb);return t=Math.abs(t)/60,(e?"+":"-")+String("0000"+(t/60*100+t%60)).slice(-4)},"%Z":function(t){return t.ac},"%%":function(){return"%"}},n=n.replace(/%%/g,"\\0\\0"),s)n.includes(l)&&(n=n.replace(new RegExp(l,"g"),s[l](r)));return l=function(t){var e=Array(D(t)+1);return k(t,e,0,e.length),e}(n=n.replace(/\\0\\0/g,"%")),l.length>e?0:(S.set(l,t>>>0),l.length-1)}var bt={a:function(t){return gt(t+24)+24},m:function(t){return(t=new et(t)).Pb()||(t.Ib(!0),K--),t.Jb(!1),Q.push(t),t.Nb(),t.Qb()},ia:function(t){throw w("Unexpected exception thrown, this is not properly supported - aborting"),C=!0,t},w:function(){Ot(0);var t=Q.pop();if(t.Xb()&&!t.Lb()){var e=t.Wb();e&&at(e)(t.Db),nt(t.Db)}tt=0},d:function(){var t=tt;if(!t)return ft=0;var e=new et(t);e.Fb(t);var n=e.Eb();if(!n)return ft=0,t;for(var 
r=Array.prototype.slice.call(arguments),a=0;a>>2]+4294967296*E[t+4>>>2])),E[e>>2>>>0]=t.getUTCSeconds(),E[e+4>>2>>>0]=t.getUTCMinutes(),E[e+8>>2>>>0]=t.getUTCHours(),E[e+12>>2>>>0]=t.getUTCDate(),E[e+16>>2>>>0]=t.getUTCMonth(),E[e+20>>2>>>0]=t.getUTCFullYear()-1900,E[e+24>>2>>>0]=t.getUTCDay(),E[e+28>>2>>>0]=(t.getTime()-Date.UTC(t.getUTCFullYear(),0,1,0,0,0,0))/864e5|0},Ea:function(t,e){t=new Date(1e3*(M[t>>>2]+4294967296*E[t+4>>>2])),E[e>>2>>>0]=t.getSeconds(),E[e+4>>2>>>0]=t.getMinutes(),E[e+8>>2>>>0]=t.getHours(),E[e+12>>2>>>0]=t.getDate(),E[e+16>>2>>>0]=t.getMonth(),E[e+20>>2>>>0]=t.getFullYear()-1900,E[e+24>>2>>>0]=t.getDay();var n=new Date(t.getFullYear(),0,1);E[e+28>>2>>>0]=(t.getTime()-n.getTime())/864e5|0,E[e+36>>2>>>0]=-60*t.getTimezoneOffset();var r=new Date(t.getFullYear(),6,1).getTimezoneOffset();n=n.getTimezoneOffset(),E[e+32>>2>>>0]=0|(r!=n&&t.getTimezoneOffset()==Math.min(n,r))},Fa:function(t){var e=new Date(E[t+20>>2>>>0]+1900,E[t+16>>2>>>0],E[t+12>>2>>>0],E[t+8>>2>>>0],E[t+4>>2>>>0],E[t>>2>>>0],0),n=E[t+32>>2>>>0],r=e.getTimezoneOffset(),a=new Date(e.getFullYear(),0,1),i=new Date(e.getFullYear(),6,1).getTimezoneOffset(),o=a.getTimezoneOffset(),u=Math.min(o,i);return 0>n?E[t+32>>2>>>0]=Number(i!=o&&u==r):0>2>>>0]=e.getDay(),E[t+28>>2>>>0]=(e.getTime()-a.getTime())/864e5|0,E[t>>2>>>0]=e.getSeconds(),E[t+4>>2>>>0]=e.getMinutes(),E[t+8>>2>>>0]=e.getHours(),E[t+12>>2>>>0]=e.getDate(),E[t+16>>2>>>0]=e.getMonth(),e.getTime()/1e3|0},sa:function(){return-52},ta:function(){},Ga:function t(e,n,r){t.Vb||(t.Vb=!0,function(t,e,n){function r(t){return(t=t.toTimeString().match(/\\(([A-Za-z ]+)\\)$/))?t[1]:"GMT"}var a=(new Date).getFullYear(),i=new Date(a,0,1),o=new Date(a,6,1);a=i.getTimezoneOffset();var u=o.getTimezoneOffset();E[t>>2>>>0]=60*Math.max(a,u),E[e>>2>>>0]=Number(a!=u),t=r(i),e=r(o),t=it(t),e=it(e),u>2>>>0]=t,M[n+4>>2>>>0]=e):(M[n>>2>>>0]=e,M[n+4>>2>>>0]=t)}(e,n,r))},B:function(){V("")},ma:function(){return 4294901760},I:b?()=>{var t=process.hrtime();return 1e3*t[0]+t[1]/1e6}:()=>performance.now(),xa:function(t,e,n){T.copyWithin(t>>>0,e>>>0,e+n>>>0)},G:function(t){var e=T.length;if(4294901760<(t>>>=0))return!1;for(var n=1;4>=n;n*=2){var r=e*(1+.2/n);r=Math.min(r,t+100663296);var a=Math;r=Math.max(t,r),a=a.min.call(a,4294901760,r+(65536-r%65536)%65536);t:{try{O.grow(a-A.byteLength+65535>>>16),P();var i=1;break t}catch(t){}i=void 0}if(i)return!0}return!1},va:function(t,e){var n=0;return ut().forEach((function(r,a){var i=e+n;for(a=M[t+4*a>>2>>>0]=i,i=0;i>0>>>0]=r.charCodeAt(i);S[a>>0>>>0]=0,n+=r.length+1})),0},wa:function(t,e){var n=ut();M[t>>2>>>0]=n.length;var r=0;return n.forEach((function(t){r+=t.length+1})),M[e>>2>>>0]=r,0},ba:function(t){_||0>2>>>0],u=M[e+4>>2>>>0];e+=8;for(var c=0;c>>0]);a+=u}return M[r>>2>>>0]=a,0},c:function(){return ft},ja:function t(e,r){t.Mb||(t.Mb=function(){if("object"==typeof crypto&&"function"==typeof crypto.getRandomValues){var t=new Uint8Array(1);return()=>(crypto.getRandomValues(t),t[0])}if(b)try{var e=n(Object(function(){var t=new Error("Cannot find module \'crypto\'");throw t.code="MODULE_NOT_FOUND",t}()));return()=>e.randomBytes(1)[0]}catch(t){}return()=>V("randomDevice")}());for(var a=0;a>0>>>0]=t.Mb();return 0},ea:function(t,e,n){var r=At();try{return at(t)(e,n)}catch(t){if(St(r),t!==t+0)throw t;Ot(1,0)}},fa:function(t,e,n){var r=At();try{return at(t)(e,n)}catch(t){if(St(r),t!==t+0)throw t;Ot(1,0)}},J:function(t){var e=At();try{return at(t)()}catch(t){if(St(e),t!==t+0)throw t;Ot(1,0)}},e:function(t,e){var n=At();try{return 
at(t)(e)}catch(t){if(St(n),t!==t+0)throw t;Ot(1,0)}},N:function(t,e,n){var r=At();try{return at(t)(e,n)}catch(t){if(St(r),t!==t+0)throw t;Ot(1,0)}},O:function(t,e,n){var r=At();try{return at(t)(e,n)}catch(t){if(St(r),t!==t+0)throw t;Ot(1,0)}},j:function(t,e,n){var r=At();try{return at(t)(e,n)}catch(t){if(St(r),t!==t+0)throw t;Ot(1,0)}},o:function(t,e,n,r){var a=At();try{return at(t)(e,n,r)}catch(t){if(St(a),t!==t+0)throw t;Ot(1,0)}},p:function(t,e,n,r,a){var i=At();try{return at(t)(e,n,r,a)}catch(t){if(St(i),t!==t+0)throw t;Ot(1,0)}},M:function(t,e,n,r,a,i){var o=At();try{return at(t)(e,n,r,a,i)}catch(t){if(St(o),t!==t+0)throw t;Ot(1,0)}},r:function(t,e,n,r,a,i){var o=At();try{return at(t)(e,n,r,a,i)}catch(t){if(St(o),t!==t+0)throw t;Ot(1,0)}},v:function(t,e,n,r,a,i,o){var u=At();try{return at(t)(e,n,r,a,i,o)}catch(t){if(St(u),t!==t+0)throw t;Ot(1,0)}},K:function(t,e,n,r,a,i,o,u){var c=At();try{return at(t)(e,n,r,a,i,o,u)}catch(t){if(St(c),t!==t+0)throw t;Ot(1,0)}},D:function(t,e,n,r,a,i,o,u,c,s,l,f){var p=At();try{return at(t)(e,n,r,a,i,o,u,c,s,l,f)}catch(t){if(St(p),t!==t+0)throw t;Ot(1,0)}},X:function(t,e,n,r,a,i,o,u){var c=At();try{return Ft(t,e,n,r,a,i,o,u)}catch(t){if(St(c),t!==t+0)throw t;Ot(1,0)}},V:function(t,e,n,r,a,i,o){var u=At();try{return xt(t,e,n,r,a,i,o)}catch(t){if(St(u),t!==t+0)throw t;Ot(1,0)}},U:function(t,e,n,r,a){var i=At();try{return It(t,e,n,r,a)}catch(t){if(St(i),t!==t+0)throw t;Ot(1,0)}},Z:function(t,e,n,r){var a=At();try{return Pt(t,e,n,r)}catch(t){if(St(a),t!==t+0)throw t;Ot(1,0)}},W:function(t){var e=At();try{return Ct(t)}catch(t){if(St(e),t!==t+0)throw t;Ot(1,0)}},Y:function(t,e){var n=At();try{return Ut(t,e)}catch(t){if(St(n),t!==t+0)throw t;Ot(1,0)}},T:function(t,e,n){var r=At();try{return Rt(t,e,n)}catch(t){if(St(r),t!==t+0)throw t;Ot(1,0)}},f:function(t){var e=At();try{at(t)()}catch(t){if(St(e),t!==t+0)throw t;Ot(1,0)}},q:function(t,e){var n=At();try{at(t)(e)}catch(t){if(St(n),t!==t+0)throw t;Ot(1,0)}},h:function(t,e,n){var r=At();try{at(t)(e,n)}catch(t){if(St(r),t!==t+0)throw t;Ot(1,0)}},da:function(t,e,n,r){var a=At();try{at(t)(e,n,r)}catch(t){if(St(a),t!==t+0)throw t;Ot(1,0)}},l:function(t,e,n,r){var a=At();try{at(t)(e,n,r)}catch(t){if(St(a),t!==t+0)throw t;Ot(1,0)}},t:function(t,e,n,r,a){var i=At();try{at(t)(e,n,r,a)}catch(t){if(St(i),t!==t+0)throw t;Ot(1,0)}},u:function(t,e,n,r,a,i){var o=At();try{at(t)(e,n,r,a,i)}catch(t){if(St(o),t!==t+0)throw t;Ot(1,0)}},x:function(t,e,n,r,a,i,o){var u=At();try{at(t)(e,n,r,a,i,o)}catch(t){if(St(u),t!==t+0)throw t;Ot(1,0)}},z:function(t,e,n,r,a,i,o,u){var c=At();try{at(t)(e,n,r,a,i,o,u)}catch(t){if(St(c),t!==t+0)throw t;Ot(1,0)}},ga:function(t,e,n,r,a,i,o,u,c){var s=At();try{at(t)(e,n,r,a,i,o,u,c)}catch(t){if(St(s),t!==t+0)throw t;Ot(1,0)}},A:function(t,e,n,r,a,i,o,u,c,s,l){var f=At();try{at(t)(e,n,r,a,i,o,u,c,s,l)}catch(t){if(St(f),t!==t+0)throw t;Ot(1,0)}},C:function(t,e,n,r,a,i,o,u,c,s,l,f,p,h,d,y){var b=At();try{at(t)(e,n,r,a,i,o,u,c,s,l,f,p,h,d,y)}catch(t){if(St(b),t!==t+0)throw t;Ot(1,0)}},aa:function(t,e,n,r,a,i,o,u){var c=At();try{jt(t,e,n,r,a,i,o,u)}catch(t){if(St(c),t!==t+0)throw t;Ot(1,0)}},_:function(t,e,n,r,a,i,o,u,c,s,l,f){var p=At();try{Dt(t,e,n,r,a,i,o,u,c,s,l,f)}catch(t){if(St(p),t!==t+0)throw t;Ot(1,0)}},$:function(t,e,n,r,a,i){var o=At();try{kt(t,e,n,r,a,i)}catch(t){if(St(o),t!==t+0)throw t;Ot(1,0)}},n:function(t){return t},F:function(t){ft=t},ha:yt,y:function(t,e,n,r){return yt(t,e,n,r)}};!function(){function 
t(t){e.asm=t.exports,O=e.asm.Ka,P(),U=e.asm.ib,I.unshift(e.asm.La),B--,e.monitorRunDependencies&&e.monitorRunDependencies(B),0==B&&(null!==G&&(clearInterval(G),G=null),N&&(t=N,N=null,t()))}function n(e){t(e.instance)}function r(t){return function(){if(!g&&(d||y)){if("function"==typeof fetch&&!Y.startsWith("file://"))return fetch(Y,{credentials:"same-origin"}).then((function(t){if(!t.ok)throw"failed to load wasm binary file at \'"+Y+"\'";return t.arrayBuffer()})).catch((function(){return X()}));if(o)return new Promise((function(t,e){o(Y,(function(e){t(new Uint8Array(e))}),e)}))}return Promise.resolve().then((function(){return X()}))}().then((function(t){return WebAssembly.instantiate(t,i)})).then((function(t){return t})).then(t,(function(t){w("failed to asynchronously prepare wasm: "+t),V(t)}))}var i={a:bt};if(B++,e.monitorRunDependencies&&e.monitorRunDependencies(B),e.instantiateWasm)try{return e.instantiateWasm(i,t)}catch(t){return w("Module.instantiateWasm callback failed with error: "+t),!1}(g||"function"!=typeof WebAssembly.instantiateStreaming||$()||Y.startsWith("file://")||b||"function"!=typeof fetch?r(n):fetch(Y,{credentials:"same-origin"}).then((function(t){return WebAssembly.instantiateStreaming(t,i).then(n,(function(t){return w("wasm streaming compile failed: "+t),w("falling back to ArrayBuffer instantiation"),r(n)}))}))).catch(a)}(),e.___wasm_call_ctors=function(){return(e.___wasm_call_ctors=e.asm.La).apply(null,arguments)},e._OrtInit=function(){return(e._OrtInit=e.asm.Ma).apply(null,arguments)},e._OrtCreateSessionOptions=function(){return(e._OrtCreateSessionOptions=e.asm.Na).apply(null,arguments)},e._OrtAppendExecutionProvider=function(){return(e._OrtAppendExecutionProvider=e.asm.Oa).apply(null,arguments)},e._OrtAddSessionConfigEntry=function(){return(e._OrtAddSessionConfigEntry=e.asm.Pa).apply(null,arguments)},e._OrtReleaseSessionOptions=function(){return(e._OrtReleaseSessionOptions=e.asm.Qa).apply(null,arguments)},e._OrtCreateSession=function(){return(e._OrtCreateSession=e.asm.Ra).apply(null,arguments)},e._OrtReleaseSession=function(){return(e._OrtReleaseSession=e.asm.Sa).apply(null,arguments)},e._OrtGetInputCount=function(){return(e._OrtGetInputCount=e.asm.Ta).apply(null,arguments)},e._OrtGetOutputCount=function(){return(e._OrtGetOutputCount=e.asm.Ua).apply(null,arguments)},e._OrtGetInputName=function(){return(e._OrtGetInputName=e.asm.Va).apply(null,arguments)},e._OrtGetOutputName=function(){return(e._OrtGetOutputName=e.asm.Wa).apply(null,arguments)},e._OrtFree=function(){return(e._OrtFree=e.asm.Xa).apply(null,arguments)},e._OrtCreateTensor=function(){return(e._OrtCreateTensor=e.asm.Ya).apply(null,arguments)},e._OrtGetTensorData=function(){return(e._OrtGetTensorData=e.asm.Za).apply(null,arguments)},e._OrtReleaseTensor=function(){return(e._OrtReleaseTensor=e.asm._a).apply(null,arguments)},e._OrtCreateRunOptions=function(){return(e._OrtCreateRunOptions=e.asm.$a).apply(null,arguments)},e._OrtAddRunConfigEntry=function(){return(e._OrtAddRunConfigEntry=e.asm.ab).apply(null,arguments)},e._OrtReleaseRunOptions=function(){return(e._OrtReleaseRunOptions=e.asm.bb).apply(null,arguments)},e._OrtRun=function(){return(e._OrtRun=e.asm.cb).apply(null,arguments)},e._OrtEndProfiling=function(){return(e._OrtEndProfiling=e.asm.db).apply(null,arguments)};var 
mt,gt=e._malloc=function(){return(gt=e._malloc=e.asm.eb).apply(null,arguments)},vt=e._free=function(){return(vt=e._free=e.asm.fb).apply(null,arguments)},wt=e._fflush=function(){return(wt=e._fflush=e.asm.gb).apply(null,arguments)},_t=e.___funcs_on_exit=function(){return(_t=e.___funcs_on_exit=e.asm.hb).apply(null,arguments)},Ot=e._setThrew=function(){return(Ot=e._setThrew=e.asm.jb).apply(null,arguments)},At=e.stackSave=function(){return(At=e.stackSave=e.asm.kb).apply(null,arguments)},St=e.stackRestore=function(){return(St=e.stackRestore=e.asm.lb).apply(null,arguments)},Tt=e.stackAlloc=function(){return(Tt=e.stackAlloc=e.asm.mb).apply(null,arguments)},Et=e.___cxa_can_catch=function(){return(Et=e.___cxa_can_catch=e.asm.nb).apply(null,arguments)},Mt=e.___cxa_is_pointer_type=function(){return(Mt=e.___cxa_is_pointer_type=e.asm.ob).apply(null,arguments)},Ct=e.dynCall_j=function(){return(Ct=e.dynCall_j=e.asm.pb).apply(null,arguments)},xt=e.dynCall_iiiiij=function(){return(xt=e.dynCall_iiiiij=e.asm.qb).apply(null,arguments)},Rt=e.dynCall_jii=function(){return(Rt=e.dynCall_jii=e.asm.rb).apply(null,arguments)},jt=e.dynCall_viiiiij=function(){return(jt=e.dynCall_viiiiij=e.asm.sb).apply(null,arguments)},kt=e.dynCall_vjji=function(){return(kt=e.dynCall_vjji=e.asm.tb).apply(null,arguments)},Dt=e.dynCall_viiijjjii=function(){return(Dt=e.dynCall_viiijjjii=e.asm.ub).apply(null,arguments)},Pt=e.dynCall_iij=function(){return(Pt=e.dynCall_iij=e.asm.vb).apply(null,arguments)},Ut=e.dynCall_ji=function(){return(Ut=e.dynCall_ji=e.asm.wb).apply(null,arguments)},Ft=e.dynCall_iiiiiij=function(){return(Ft=e.dynCall_iiiiiij=e.asm.xb).apply(null,arguments)},It=e.dynCall_iiij=function(){return(It=e.dynCall_iiij=e.asm.yb).apply(null,arguments)};function Wt(){function t(){if(!mt&&(mt=!0,e.calledRun=!0,!C)){if(Z(I),r(e),e.onRuntimeInitialized&&e.onRuntimeInitialized(),e.postRun)for("function"==typeof e.postRun&&(e.postRun=[e.postRun]);e.postRun.length;){var t=e.postRun.shift();H.unshift(t)}Z(H)}}if(!(0{"use strict";Object.defineProperty(e,"__esModule",{value:!0}),e.iterateExtraOptions=void 0,e.iterateExtraOptions=(t,n,r,a)=>{if("object"==typeof t&&null!==t){if(r.has(t))throw new Error("Circular reference in options");r.add(t)}Object.entries(t).forEach((([t,i])=>{const o=n?n+t:t;if("object"==typeof i)(0,e.iterateExtraOptions)(i,o+".",r,a);else if("string"==typeof i||"number"==typeof i)a(o,i.toString());else{if("boolean"!=typeof i)throw new Error("Can\'t handle extra config type: "+typeof i);a(o,i?"1":"0")}}))}},586:(t,e,n)=>{"use strict";Object.defineProperty(e,"__esModule",{value:!0}),e.setRunOptions=void 0;const r=n(967),a=n(983),i=n(361);e.setRunOptions=t=>{const e=(0,i.getInstance)();let n=0;const o=[],u=t||{};try{if(void 0===(null==t?void 0:t.logSeverityLevel))u.logSeverityLevel=2;else if("number"!=typeof t.logSeverityLevel||!Number.isInteger(t.logSeverityLevel)||t.logSeverityLevel<0||t.logSeverityLevel>4)throw new Error(`log serverity level is not valid: ${t.logSeverityLevel}`);if(void 0===(null==t?void 0:t.logVerbosityLevel))u.logVerbosityLevel=0;else if("number"!=typeof t.logVerbosityLevel||!Number.isInteger(t.logVerbosityLevel))throw new Error(`log verbosity level is not valid: ${t.logVerbosityLevel}`);void 0===(null==t?void 0:t.terminate)&&(u.terminate=!1);let i=0;if(void 0!==(null==t?void 0:t.tag)&&(i=(0,a.allocWasmString)(t.tag,o)),n=e._OrtCreateRunOptions(u.logSeverityLevel,u.logVerbosityLevel,!!u.terminate,i),0===n)throw new Error("Can\'t create run options");return void 0!==(null==t?void 
0:t.extra)&&(0,r.iterateExtraOptions)(t.extra,"",new WeakSet,((t,r)=>{const i=(0,a.allocWasmString)(t,o),u=(0,a.allocWasmString)(r,o);if(0!==e._OrtAddRunConfigEntry(n,i,u))throw new Error(`Can\'t set a run config entry: ${t} - ${r}`)})),[n,o]}catch(t){throw 0!==n&&e._OrtReleaseRunOptions(n),o.forEach(e._free),t}}},919:(t,e,n)=>{"use strict";Object.defineProperty(e,"__esModule",{value:!0}),e.setSessionOptions=void 0;const r=n(967),a=n(983),i=n(361);e.setSessionOptions=t=>{const e=(0,i.getInstance)();let n=0;const o=[],u=t||{};(t=>{t.extra||(t.extra={}),t.extra.session||(t.extra.session={});const e=t.extra.session;e.use_ort_model_bytes_directly||(e.use_ort_model_bytes_directly="1")})(u);try{void 0===(null==t?void 0:t.graphOptimizationLevel)&&(u.graphOptimizationLevel="all");const c=(t=>{switch(t){case"disabled":return 0;case"basic":return 1;case"extended":return 2;case"all":return 99;default:throw new Error(`unsupported graph optimization level: ${t}`)}})(u.graphOptimizationLevel);void 0===(null==t?void 0:t.enableCpuMemArena)&&(u.enableCpuMemArena=!0),void 0===(null==t?void 0:t.enableMemPattern)&&(u.enableMemPattern=!0),void 0===(null==t?void 0:t.executionMode)&&(u.executionMode="sequential");const s=(t=>{switch(t){case"sequential":return 0;case"parallel":return 1;default:throw new Error(`unsupported execution mode: ${t}`)}})(u.executionMode);let l=0;if(void 0!==(null==t?void 0:t.logId)&&(l=(0,a.allocWasmString)(t.logId,o)),void 0===(null==t?void 0:t.logSeverityLevel))u.logSeverityLevel=2;else if("number"!=typeof t.logSeverityLevel||!Number.isInteger(t.logSeverityLevel)||t.logSeverityLevel<0||t.logSeverityLevel>4)throw new Error(`log serverity level is not valid: ${t.logSeverityLevel}`);if(void 0===(null==t?void 0:t.logVerbosityLevel))u.logVerbosityLevel=0;else if("number"!=typeof t.logVerbosityLevel||!Number.isInteger(t.logVerbosityLevel))throw new Error(`log verbosity level is not valid: ${t.logVerbosityLevel}`);if(void 0===(null==t?void 0:t.enableProfiling)&&(u.enableProfiling=!1),n=e._OrtCreateSessionOptions(c,!!u.enableCpuMemArena,!!u.enableMemPattern,s,!!u.enableProfiling,0,l,u.logSeverityLevel,u.logVerbosityLevel),0===n)throw new Error("Can\'t create session options");return(null==t?void 0:t.executionProviders)&&((t,e,n)=>{for(const r of e){let e="string"==typeof r?r:r.name;switch(e){case"xnnpack":e="XNNPACK";break;case"wasm":case"cpu":continue;default:throw new Error(`not supported EP: ${e}`)}const o=(0,a.allocWasmString)(e,n);if(0!==(0,i.getInstance)()._OrtAppendExecutionProvider(t,o))throw new Error(`Can\'t append execution provider: ${e}`)}})(n,t.executionProviders,o),void 0!==(null==t?void 0:t.extra)&&(0,r.iterateExtraOptions)(t.extra,"",new WeakSet,((t,r)=>{const i=(0,a.allocWasmString)(t,o),u=(0,a.allocWasmString)(r,o);if(0!==e._OrtAddSessionConfigEntry(n,i,u))throw new Error(`Can\'t set a session config entry: ${t} - ${r}`)})),[n,o]}catch(t){throw 0!==n&&e._OrtReleaseSessionOptions(n),o.forEach(e._free),t}}},983:(t,e,n)=>{"use strict";Object.defineProperty(e,"__esModule",{value:!0}),e.allocWasmString=void 0;const r=n(361);e.allocWasmString=(t,e)=>{const n=(0,r.getInstance)(),a=n.lengthBytesUTF8(t)+1,i=n._malloc(a);return n.stringToUTF8(t,i,a),e.push(i),i}},349:(t,e,n)=>{"use strict";Object.defineProperty(e,"__esModule",{value:!0}),e.extractTransferableBuffers=e.endProfiling=e.run=e.releaseSession=e.createSession=e.createSessionFinalize=e.createSessionAllocate=e.initOrt=void 0;const r=n(586),a=n(919),i=n(983),o=n(361);e.initOrt=(t,e)=>{const 
n=(0,o.getInstance)()._OrtInit(t,e);if(0!==n)throw new Error(`Can\'t initialize onnxruntime. error code = ${n}`)};const u=new Map;e.createSessionAllocate=t=>{const e=(0,o.getInstance)(),n=e._malloc(t.byteLength);return e.HEAPU8.set(t,n),[n,t.byteLength]},e.createSessionFinalize=(t,e)=>{const n=(0,o.getInstance)();let r=0,i=0,c=[];try{if([i,c]=(0,a.setSessionOptions)(e),r=n._OrtCreateSession(t[0],t[1],i),0===r)throw new Error("Can\'t create a session")}finally{n._free(t[0]),n._OrtReleaseSessionOptions(i),c.forEach(n._free)}const s=n._OrtGetInputCount(r),l=n._OrtGetOutputCount(r),f=[],p=[],h=[],d=[];for(let t=0;t{const r=(0,e.createSessionAllocate)(t);return(0,e.createSessionFinalize)(r,n)},e.releaseSession=t=>{const e=(0,o.getInstance)(),n=u.get(t);if(!n)throw new Error("invalid session id");const r=n[0],a=n[1],i=n[2];a.forEach(e._OrtFree),i.forEach(e._OrtFree),e._OrtReleaseSession(r),u.delete(t)};const c=t=>{switch(t){case"int8":return 3;case"uint8":return 2;case"bool":return 9;case"int16":return 5;case"uint16":return 4;case"int32":return 6;case"uint32":return 12;case"float32":return 1;case"float64":return 11;case"string":return 8;case"int64":return 7;case"uint64":return 13;default:throw new Error(`unsupported data type: ${t}`)}},s=t=>{switch(t){case 3:return"int8";case 2:return"uint8";case 9:return"bool";case 5:return"int16";case 4:return"uint16";case 6:return"int32";case 12:return"uint32";case 1:return"float32";case 11:return"float64";case 8:return"string";case 7:return"int64";case 13:return"uint64";default:throw new Error(`unsupported data type: ${t}`)}},l=t=>{switch(t){case"float32":return Float32Array;case"uint8":case"bool":return Uint8Array;case"int8":return Int8Array;case"uint16":return Uint16Array;case"int16":return Int16Array;case"int32":return Int32Array;case"float64":return Float64Array;case"uint32":return Uint32Array;case"int64":return BigInt64Array;case"uint64":return BigUint64Array;default:throw new Error(`unsupported type: ${t}`)}};e.run=(t,e,n,a,f)=>{const p=(0,o.getInstance)(),h=u.get(t);if(!h)throw new Error("invalid session id");const d=h[0],y=h[1],b=h[2],m=e.length,g=a.length;let v=0,w=[];const _=[],O=[];try{[v,w]=(0,r.setRunOptions)(f);for(let t=0;tp.HEAP32[t++]=e));const n=p._OrtCreateTensor(c(e),o,u,l,r.length);if(0===n)throw new Error("Can\'t create a tensor");_.push(n)}finally{p.stackRestore(s)}}const t=p.stackSave(),o=p.stackAlloc(4*m),u=p.stackAlloc(4*m),h=p.stackAlloc(4*g),A=p.stackAlloc(4*g);try{let n=o/4,r=u/4,i=h/4,c=A/4;for(let t=0;tt*e));if(a=s(o),"string"===a){const t=[];let e=i/4;for(let n=0;n{const e=(0,o.getInstance)(),n=u.get(t);if(!n)throw new Error("invalid session id");const r=n[0],a=e._OrtEndProfiling(r);if(0===a)throw new Error("Can\'t get an profile file name");e._OrtFree(a)},e.extractTransferableBuffers=t=>{const e=[];for(const n of t){const t=n[2];!Array.isArray(t)&&t.buffer&&e.push(t.buffer)}return e}},361:function(t,e,n){"use strict";var r=this&&this.__createBinding||(Object.create?function(t,e,n,r){void 0===r&&(r=n);var a=Object.getOwnPropertyDescriptor(e,n);a&&!("get"in a?!e.__esModule:a.writable||a.configurable)||(a={enumerable:!0,get:function(){return e[n]}}),Object.defineProperty(t,r,a)}:function(t,e,n,r){void 0===r&&(r=n),t[r]=e[n]}),a=this&&this.__setModuleDefault||(Object.create?function(t,e){Object.defineProperty(t,"default",{enumerable:!0,value:e})}:function(t,e){t.default=e}),i=this&&this.__importStar||function(t){if(t&&t.__esModule)return t;var e={};if(null!=t)for(var n in 
t)"default"!==n&&Object.prototype.hasOwnProperty.call(t,n)&&r(e,t,n);return a(e,t),e},o=this&&this.__importDefault||function(t){return t&&t.__esModule?t:{default:t}};Object.defineProperty(e,"__esModule",{value:!0}),e.dispose=e.getInstance=e.initializeWebAssembly=void 0;const u=i(n(449)),c=o(n(932)),s=n(474);let l,f=!1,p=!1,h=!1;const d=(t,e)=>e?t?"ort-wasm-simd-threaded.wasm":"ort-wasm-threaded.wasm":t?"ort-wasm-simd.wasm":"ort-wasm.wasm";e.initializeWebAssembly=async t=>{if(f)return Promise.resolve();if(p)throw new Error("multiple calls to \'initializeWebAssembly()\' detected.");if(h)throw new Error("previous call to \'initializeWebAssembly()\' failed.");p=!0;const e=t.initTimeout,r=t.numThreads,a=t.simd,i=r>1&&(()=>{try{return"undefined"!=typeof SharedArrayBuffer&&("undefined"!=typeof MessageChannel&&(new MessageChannel).port1.postMessage(new SharedArrayBuffer(1)),WebAssembly.validate(new Uint8Array([0,97,115,109,1,0,0,0,1,4,1,96,0,0,3,2,1,0,5,4,1,3,1,1,10,11,1,9,0,65,0,254,16,2,0,26,11])))}catch(t){return!1}})(),o=a&&(()=>{try{return WebAssembly.validate(new Uint8Array([0,97,115,109,1,0,0,0,1,4,1,96,0,0,3,2,1,0,10,30,1,28,0,65,0,253,15,253,12,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,253,186,1,26,11]))}catch(t){return!1}})(),y="string"==typeof t.wasmPaths?t.wasmPaths:void 0,b=d(!1,i),m=d(o,i),g="object"==typeof t.wasmPaths?t.wasmPaths[m]:void 0;let v=!1;const w=[];if(e>0&&w.push(new Promise((t=>{setTimeout((()=>{v=!0,t()}),e)}))),w.push(new Promise(((t,e)=>{const r=i?s:c.default,a={locateFile:(t,e)=>i&&t.endsWith(".worker.js")&&"undefined"!=typeof Blob?URL.createObjectURL(new Blob([n(154)],{type:"text/javascript"})):t===b?null!=g?g:(null!=y?y:e)+m:e+t};if(i)if("undefined"==typeof Blob)a.mainScriptUrlOrBlob=u.join("/","ort-wasm-threaded.js");else{const t=`var ortWasmThreaded=(function(){var _scriptDir;return ${r.toString()}})();`;a.mainScriptUrlOrBlob=new Blob([t],{type:"text/javascript"})}r(a).then((e=>{p=!1,f=!0,l=e,t()}),(t=>{p=!1,h=!0,e(t)}))}))),await Promise.race(w),v)throw new Error(`WebAssembly backend initializing failed due to timeout: ${e}ms`)},e.getInstance=()=>{if(f&&l)return l;throw new Error("WebAssembly is not initialized yet.")},e.dispose=()=>{var t;!f||p||h||(p=!0,null===(t=l.PThread)||void 0===t||t.terminateAllThreads(),l=void 0,p=!1,f=!1,h=!0)}},154:t=>{"use strict";t.exports=\'"use strict";var e={},t="object"==typeof process&&"object"==typeof process.versions&&"string"==typeof process.versions.node;if(t){var r=require("worker_threads"),a=r.parentPort;a.on("message",(e=>onmessage({data:e})));var o=require("fs");Object.assign(global,{self:global,require:require,Module:e,location:{href:__filename},Worker:r.Worker,importScripts:function(e){(0,eval)(o.readFileSync(e,"utf8"))},postMessage:function(e){a.postMessage(e)},performance:global.performance||{now:function(){return Date.now()}}})}var s=!1,n=[],i=function(){var e=Array.prototype.slice.call(arguments).join(" ");t?o.writeSync(2,e+"\\\\n"):console.error(e)};self.alert=function(){var t=Array.prototype.slice.call(arguments).join(" ");postMessage({cmd:"alert",text:t,threadId:e._pthread_self()})},e.instantiateWasm=(t,r)=>{var a=new WebAssembly.Instance(e.wasmModule,t);return r(a),e.wasmModule=null,a.exports},self.onunhandledrejection=e=>{throw e.reason??e},self.onmessage=t=>{try{if("load"===t.data.cmd){if(e.wasmModule=t.data.wasmModule,e.wasmMemory=t.data.wasmMemory,e.buffer=e.wasmMemory.buffer,e.ENVIRONMENT_IS_PTHREAD=!0,"string"==typeof t.data.urlOrBlob)importScripts(t.data.urlOrBlob);else{var 
r=URL.createObjectURL(t.data.urlOrBlob);importScripts(r),URL.revokeObjectURL(r)}ortWasmThreaded(e).then((function(t){e=t}))}else if("run"===t.data.cmd){e.__performance_now_clock_drift=performance.now()-t.data.time,e.__emscripten_thread_init(t.data.pthread_ptr,0,0,1),e.establishStackSpace(),e.PThread.receiveObjectTransfer(t.data),e.PThread.threadInitTLS(),s||(n.forEach((t=>{e.executeNotifiedProxyingQueue(t)})),n=[],s=!0);try{e.invokeEntryPoint(t.data.start_routine,t.data.arg)}catch(t){if("unwind"!=t){if(!(t instanceof e.ExitStatus))throw t;e.keepRuntimeAlive()||e.__emscripten_thread_exit(t.status)}}}else"cancel"===t.data.cmd?e._pthread_self()&&e.__emscripten_thread_exit(-1):"setimmediate"===t.data.target||("processProxyingQueue"===t.data.cmd?s?e.executeNotifiedProxyingQueue(t.data.queue):n.push(t.data.queue):(i("worker.js received unknown command "+t.data.cmd),i(t.data)))}catch(t){throw i("worker.js onmessage() captured an uncaught exception: "+t),t&&t.stack&&i(t.stack),e.__emscripten_thread_crashed&&e.__emscripten_thread_crashed(),t}};\\n\'},384:()=>{},993:()=>{},908:()=>{},953:()=>{},925:()=>{},449:()=>{}},e={};function n(r){var a=e[r];if(void 0!==a)return a.exports;var i=e[r]={exports:{}};return t[r].call(i.exports,i,i.exports,n),i.exports}n.g=function(){if("object"==typeof globalThis)return globalThis;try{return this||new Function("return this")()}catch(t){if("object"==typeof window)return window}}(),(()=>{"use strict";const t=n(349),e=n(361);self.onmessage=n=>{switch(n.data.type){case"init-wasm":(0,e.initializeWebAssembly)(n.data.in).then((()=>postMessage({type:"init-wasm"})),(t=>postMessage({type:"init-wasm",err:t})));break;case"init-ort":try{const{numThreads:e,loggingLevel:r}=n.data.in;(0,t.initOrt)(e,r),postMessage({type:"init-ort"})}catch(t){postMessage({type:"init-ort",err:t})}break;case"create_allocate":try{const{model:e}=n.data.in,r=(0,t.createSessionAllocate)(e);postMessage({type:"create_allocate",out:r})}catch(t){postMessage({type:"create_allocate",err:t})}break;case"create_finalize":try{const{modeldata:e,options:r}=n.data.in,a=(0,t.createSessionFinalize)(e,r);postMessage({type:"create_finalize",out:a})}catch(t){postMessage({type:"create_finalize",err:t})}break;case"create":try{const{model:e,options:r}=n.data.in,a=(0,t.createSession)(e,r);postMessage({type:"create",out:a})}catch(t){postMessage({type:"create",err:t})}break;case"release":try{const e=n.data.in;(0,t.releaseSession)(e),postMessage({type:"release"})}catch(t){postMessage({type:"release",err:t})}break;case"run":try{const{sessionId:e,inputIndices:r,inputs:a,outputIndices:i,options:o}=n.data.in,u=(0,t.run)(e,r,a,i,o);postMessage({type:"run",out:u},(0,t.extractTransferableBuffers)(u))}catch(t){postMessage({type:"run",err:t})}break;case"end-profiling":try{const e=n.data.in;(0,t.endProfiling)(e),postMessage({type:"end-profiling"})}catch(t){postMessage({type:"end-profiling",err:t})}}}})()})();\n',"Worker",void 0,void 0)}},477:y=>{y.exports=function(n,u,d,l){var p=self||window;try{try{var s;try{s=new p.Blob([n])}catch{(s=new(p.BlobBuilder||p.WebKitBlobBuilder||p.MozBlobBuilder||p.MSBlobBuilder)).append(n),s=s.getBlob()}var h=p.URL||p.webkitURL,f=h.createObjectURL(s),a=new p[u](f,d);return h.revokeObjectURL(f),a}catch{return new p[u]("data:application/javascript,".concat(encodeURIComponent(n)),d)}}catch{if(!l)throw Error("Inline worker is not supported");return new p[u](l,d)}}},4154:y=>{y.exports=`"use strict";var e={},t="object"==typeof process&&"object"==typeof process.versions&&"string"==typeof 
process.versions.node;if(t){var r=require("worker_threads"),a=r.parentPort;a.on("message",(e=>onmessage({data:e})));var o=require("fs");Object.assign(global,{self:global,require:require,Module:e,location:{href:__filename},Worker:r.Worker,importScripts:function(e){(0,eval)(o.readFileSync(e,"utf8"))},postMessage:function(e){a.postMessage(e)},performance:global.performance||{now:function(){return Date.now()}}})}var s=!1,n=[],i=function(){var e=Array.prototype.slice.call(arguments).join(" ");t?o.writeSync(2,e+"\\n"):console.error(e)};self.alert=function(){var t=Array.prototype.slice.call(arguments).join(" ");postMessage({cmd:"alert",text:t,threadId:e._pthread_self()})},e.instantiateWasm=(t,r)=>{var a=new WebAssembly.Instance(e.wasmModule,t);return r(a),e.wasmModule=null,a.exports},self.onunhandledrejection=e=>{throw e.reason??e},self.onmessage=t=>{try{if("load"===t.data.cmd){if(e.wasmModule=t.data.wasmModule,e.wasmMemory=t.data.wasmMemory,e.buffer=e.wasmMemory.buffer,e.ENVIRONMENT_IS_PTHREAD=!0,"string"==typeof t.data.urlOrBlob)importScripts(t.data.urlOrBlob);else{var r=URL.createObjectURL(t.data.urlOrBlob);importScripts(r),URL.revokeObjectURL(r)}ortWasmThreaded(e).then((function(t){e=t}))}else if("run"===t.data.cmd){e.__performance_now_clock_drift=performance.now()-t.data.time,e.__emscripten_thread_init(t.data.pthread_ptr,0,0,1),e.establishStackSpace(),e.PThread.receiveObjectTransfer(t.data),e.PThread.threadInitTLS(),s||(n.forEach((t=>{e.executeNotifiedProxyingQueue(t)})),n=[],s=!0);try{e.invokeEntryPoint(t.data.start_routine,t.data.arg)}catch(t){if("unwind"!=t){if(!(t instanceof e.ExitStatus))throw t;e.keepRuntimeAlive()||e.__emscripten_thread_exit(t.status)}}}else"cancel"===t.data.cmd?e._pthread_self()&&e.__emscripten_thread_exit(-1):"setimmediate"===t.data.target||("processProxyingQueue"===t.data.cmd?s?e.executeNotifiedProxyingQueue(t.data.queue):n.push(t.data.queue):(i("worker.js received unknown command "+t.data.cmd),i(t.data)))}catch(t){throw i("worker.js onmessage() captured an uncaught exception: "+t),t&&t.stack&&i(t.stack),e.__emscripten_thread_crashed&&e.__emscripten_thread_crashed(),t}}; -`},1670:y=>{y.exports=__WEBPACK_EXTERNAL_MODULE__1670__},7067:()=>{},1296:()=>{},1384:()=>{},3993:()=>{},908:()=>{},6953:()=>{},9925:()=>{},2806:()=>{},6449:()=>{},2850:()=>{},5381:()=>{},5686:(y,n,u)=>{u.r(n),u.d(n,{flatbuffers:()=>d});var d={};d.Offset,d.Table,d.SIZEOF_SHORT=2,d.SIZEOF_INT=4,d.FILE_IDENTIFIER_LENGTH=4,d.SIZE_PREFIX_LENGTH=4,d.Encoding={UTF8_BYTES:1,UTF16_STRING:2},d.int32=new Int32Array(2),d.float32=new Float32Array(d.int32.buffer),d.float64=new Float64Array(d.int32.buffer),d.isLittleEndian=new Uint16Array(new Uint8Array([1,0]).buffer)[0]===1,d.Long=function(l,p){this.low=0|l,this.high=0|p},d.Long.create=function(l,p){return l==0&&p==0?d.Long.ZERO:new d.Long(l,p)},d.Long.prototype.toFloat64=function(){return(this.low>>>0)+4294967296*this.high},d.Long.prototype.equals=function(l){return this.low==l.low&&this.high==l.high},d.Long.ZERO=new d.Long(0,0),d.Builder=function(l){if(l)p=l;else var 
p=1024;this.bb=d.ByteBuffer.allocate(p),this.space=p,this.minalign=1,this.vtable=null,this.vtable_in_use=0,this.isNested=!1,this.object_start=0,this.vtables=[],this.vector_num_elems=0,this.force_defaults=!1},d.Builder.prototype.clear=function(){this.bb.clear(),this.space=this.bb.capacity(),this.minalign=1,this.vtable=null,this.vtable_in_use=0,this.isNested=!1,this.object_start=0,this.vtables=[],this.vector_num_elems=0,this.force_defaults=!1},d.Builder.prototype.forceDefaults=function(l){this.force_defaults=l},d.Builder.prototype.dataBuffer=function(){return this.bb},d.Builder.prototype.asUint8Array=function(){return this.bb.bytes().subarray(this.bb.position(),this.bb.position()+this.offset())},d.Builder.prototype.prep=function(l,p){l>this.minalign&&(this.minalign=l);for(var s=1+~(this.bb.capacity()-this.space+p)&l-1;this.space=0&&this.vtable[p]==0;p--);for(var s=p+1;p>=0;p--)this.addInt16(this.vtable[p]!=0?l-this.vtable[p]:0);this.addInt16(l-this.object_start);var h=(s+2)*d.SIZEOF_SHORT;this.addInt16(h);var f=0,a=this.space;e:for(p=0;p=0;a--)this.writeInt8(f.charCodeAt(a))}this.prep(this.minalign,d.SIZEOF_INT+h),this.addOffset(l),h&&this.addInt32(this.bb.capacity()-this.space),this.bb.setPosition(this.space)},d.Builder.prototype.finishSizePrefixed=function(l,p){this.finish(l,p,!0)},d.Builder.prototype.requiredField=function(l,p){var s=this.bb.capacity()-l,h=s-this.bb.readInt32(s);if(this.bb.readInt16(h+p)==0)throw new Error("FlatBuffers: field "+p+" must be set")},d.Builder.prototype.startVector=function(l,p,s){this.notNested(),this.vector_num_elems=p,this.prep(d.SIZEOF_INT,l*p),this.prep(s,l*p)},d.Builder.prototype.endVector=function(){return this.writeInt32(this.vector_num_elems),this.offset()},d.Builder.prototype.createString=function(l){if(l instanceof Uint8Array)var p=l;else{p=[];for(var s=0;s=56320?f:(f<<10)+l.charCodeAt(s++)+-56613888)<128?p.push(h):(h<2048?p.push(h>>6&31|192):(h<65536?p.push(h>>12&15|224):p.push(h>>18&7|240,h>>12&63|128),p.push(h>>6&63|128)),p.push(63&h|128))}}this.addInt8(0),this.startVector(1,p.length,1),this.bb.setPosition(this.space-=p.length),s=0;for(var a=this.space,o=this.bb.bytes();s>24},d.ByteBuffer.prototype.readUint8=function(l){return this.bytes_[l]},d.ByteBuffer.prototype.readInt16=function(l){return this.readUint16(l)<<16>>16},d.ByteBuffer.prototype.readUint16=function(l){return this.bytes_[l]|this.bytes_[l+1]<<8},d.ByteBuffer.prototype.readInt32=function(l){return this.bytes_[l]|this.bytes_[l+1]<<8|this.bytes_[l+2]<<16|this.bytes_[l+3]<<24},d.ByteBuffer.prototype.readUint32=function(l){return this.readInt32(l)>>>0},d.ByteBuffer.prototype.readInt64=function(l){return new d.Long(this.readInt32(l),this.readInt32(l+4))},d.ByteBuffer.prototype.readUint64=function(l){return new d.Long(this.readUint32(l),this.readUint32(l+4))},d.ByteBuffer.prototype.readFloat32=function(l){return d.int32[0]=this.readInt32(l),d.float32[0]},d.ByteBuffer.prototype.readFloat64=function(l){return 
d.int32[d.isLittleEndian?0:1]=this.readInt32(l),d.int32[d.isLittleEndian?1:0]=this.readInt32(l+4),d.float64[0]},d.ByteBuffer.prototype.writeInt8=function(l,p){this.bytes_[l]=p},d.ByteBuffer.prototype.writeUint8=function(l,p){this.bytes_[l]=p},d.ByteBuffer.prototype.writeInt16=function(l,p){this.bytes_[l]=p,this.bytes_[l+1]=p>>8},d.ByteBuffer.prototype.writeUint16=function(l,p){this.bytes_[l]=p,this.bytes_[l+1]=p>>8},d.ByteBuffer.prototype.writeInt32=function(l,p){this.bytes_[l]=p,this.bytes_[l+1]=p>>8,this.bytes_[l+2]=p>>16,this.bytes_[l+3]=p>>24},d.ByteBuffer.prototype.writeUint32=function(l,p){this.bytes_[l]=p,this.bytes_[l+1]=p>>8,this.bytes_[l+2]=p>>16,this.bytes_[l+3]=p>>24},d.ByteBuffer.prototype.writeInt64=function(l,p){this.writeInt32(l,p.low),this.writeInt32(l+4,p.high)},d.ByteBuffer.prototype.writeUint64=function(l,p){this.writeUint32(l,p.low),this.writeUint32(l+4,p.high)},d.ByteBuffer.prototype.writeFloat32=function(l,p){d.float32[0]=p,this.writeInt32(l,d.int32[0])},d.ByteBuffer.prototype.writeFloat64=function(l,p){d.float64[0]=p,this.writeInt32(l,d.int32[d.isLittleEndian?0:1]),this.writeInt32(l+4,d.int32[d.isLittleEndian?1:0])},d.ByteBuffer.prototype.getBufferIdentifier=function(){if(this.bytes_.length>10),56320+(1023&a)))}return h},d.ByteBuffer.prototype.__indirect=function(l){return l+this.readInt32(l)},d.ByteBuffer.prototype.__vector=function(l){return l+this.readInt32(l)+d.SIZEOF_INT},d.ByteBuffer.prototype.__vector_len=function(l){return this.readInt32(l+this.readInt32(l))},d.ByteBuffer.prototype.__has_identifier=function(l){if(l.length!=d.FILE_IDENTIFIER_LENGTH)throw new Error("FlatBuffers: file identifier must be length "+d.FILE_IDENTIFIER_LENGTH);for(var p=0;p{var n=y&&y.__esModule?()=>y.default:()=>y;return __webpack_require__.d(n,{a:n}),n},__webpack_require__.d=(y,n)=>{for(var u in n)__webpack_require__.o(n,u)&&!__webpack_require__.o(y,u)&&Object.defineProperty(y,u,{enumerable:!0,get:n[u]})},__webpack_require__.g=function(){if(typeof globalThis=="object")return globalThis;try{return this||new Function("return this")()}catch{if(typeof window=="object")return window}}(),__webpack_require__.o=(y,n)=>Object.prototype.hasOwnProperty.call(y,n),__webpack_require__.r=y=>{typeof Symbol<"u"&&Symbol.toStringTag&&Object.defineProperty(y,Symbol.toStringTag,{value:"Module"}),Object.defineProperty(y,"__esModule",{value:!0})};var __webpack_exports__=__webpack_require__(6018);return __webpack_exports__})())})(ortWeb_min$1);var ortWeb_minExports=ortWeb_min$1.exports,ortWeb_min=getDefaultExportFromCjs(ortWeb_minExports),ONNX_WEB=_mergeNamespaces({__proto__:null,default:ortWeb_min},[ortWeb_minExports]);let ONNX;const executionProviders=["wasm"];typeof process<"u"&&((tt=process==null?void 0:process.release)==null?void 0:tt.name)==="node"?(ONNX=sharp??ONNX_NODE,executionProviders.unshift("cpu")):(ONNX=ortWeb_min??ONNX_WEB,typeof navigator<"u"&&/iP(hone|od|ad).+16_4.+AppleWebKit/.test(navigator.userAgent)&&(ONNX.env.wasm.simd=!1));const{env:onnx_env}=ONNX,VERSION="2.5.0",WEB_CACHE_AVAILABLE=typeof self<"u"&&"caches"in 
self,FS_AVAILABLE=!isEmpty(sharp),PATH_AVAILABLE=!isEmpty(sharp),RUNNING_LOCALLY=FS_AVAILABLE&&PATH_AVAILABLE,__dirname=RUNNING_LOCALLY?sharp.dirname(sharp.dirname(sharp.fileURLToPath(self.location.href))):"./",DEFAULT_CACHE_DIR=RUNNING_LOCALLY?sharp.join(__dirname,"/.cache/"):null,DEFAULT_LOCAL_MODEL_PATH="/models/",localModelPath=RUNNING_LOCALLY?sharp.join(__dirname,DEFAULT_LOCAL_MODEL_PATH):DEFAULT_LOCAL_MODEL_PATH;onnx_env.wasm.wasmPaths=RUNNING_LOCALLY?sharp.join(__dirname,"/dist/"):`https://cdn.jsdelivr.net/npm/@xenova/transformers@${VERSION}/dist/`;const env={backends:{onnx:onnx_env,tfjs:{}},__dirname,version:VERSION,allowRemoteModels:!0,remoteHost:"https://huggingface.co/",remotePathTemplate:"{model}/resolve/{revision}/",allowLocalModels:!0,localModelPath,useFS:FS_AVAILABLE,useBrowserCache:WEB_CACHE_AVAILABLE,useFSCache:FS_AVAILABLE,cacheDir:DEFAULT_CACHE_DIR,useCustomCache:!1,customCache:null};function isEmpty(y){return Object.keys(y).length===0}globalThis.ReadableStream||(globalThis.ReadableStream=sharp.ReadableStream);class FileResponse{constructor(n){jt(this,"_CONTENT_TYPE_MAP",{txt:"text/plain",html:"text/html",css:"text/css",js:"text/javascript",json:"application/json",png:"image/png",jpg:"image/jpeg",jpeg:"image/jpeg",gif:"image/gif"});if(this.filePath=n,this.headers=new Headers,this.exists=sharp.existsSync(n),this.exists){this.status=200,this.statusText="OK";let u=sharp.statSync(n);this.headers.set("content-length",u.size.toString()),this.updateContentType();let d=this;this.body=new ReadableStream({start(l){d.arrayBuffer().then(p=>{l.enqueue(new Uint8Array(p)),l.close()})}})}else this.status=404,this.statusText="Not Found",this.body=null}updateContentType(){const n=this.filePath.toString().split(".").pop().toLowerCase();this.headers.set("content-type",this._CONTENT_TYPE_MAP[n]??"application/octet-stream")}clone(){let n=new FileResponse(this.filePath);return n.exists=this.exists,n.status=this.status,n.statusText=this.statusText,n.headers=new Headers(this.headers),n}async arrayBuffer(){return(await sharp.promises.readFile(this.filePath)).buffer}async blob(){const n=await sharp.promises.readFile(this.filePath);return new Blob([n],{type:this.headers.get("content-type")})}async text(){return await sharp.promises.readFile(this.filePath,"utf8")}async json(){return JSON.parse(await this.text())}}function isValidHttpUrl(y,n=null){let u;try{u=new URL(y)}catch{return!1}return n&&!n.includes(u.hostname)?!1:u.protocol==="http:"||u.protocol==="https:"}async function getFile(y){var n,u,d;if(env.useFS&&!isValidHttpUrl(y))return new FileResponse(y);if(typeof process<"u"&&((n=process==null?void 0:process.release)==null?void 0:n.name)==="node"){const l=!!((u=process.env)!=null&&u.TESTING_REMOTELY),p=env.version,s=new Headers;if(s.set("User-Agent",`transformers.js/${p}; is_ci/${l};`),isValidHttpUrl(y,["huggingface.co","hf.co"])){const f=(d=process.env)==null?void 0:d.HF_ACCESS_TOKEN;f&&s.set("Authorization",`Bearer ${f}`)}return fetch(y,{headers:s})}else return fetch(y)}const ERROR_MAPPING={400:"Bad request error occurred while trying to load file",401:"Unauthorized access to file",403:"Forbidden access to file",404:"Could not locate file",408:"Request timeout error occurred while trying to load file",500:"Internal server error error occurred while trying to load file",502:"Bad gateway error occurred while trying to load file",503:"Service unavailable error occurred while trying to load file",504:"Gateway timeout error occurred while trying to load file"};function 
handleError(y,n,u){if(!u)return null;const d=ERROR_MAPPING[y]??`Error (${y}) occurred while trying to load file`;throw Error(`${d}: "${n}".`)}class FileCache{constructor(n){this.path=n}async match(n){let u=sharp.join(this.path,n),d=new FileResponse(u);if(d.exists)return d}async put(n,u){const d=Buffer.from(await u.arrayBuffer());let l=sharp.join(this.path,n);try{await sharp.promises.mkdir(sharp.dirname(l),{recursive:!0}),await sharp.promises.writeFile(l,d)}catch(p){console.warn("An error occurred while writing the file to cache:",p)}}}async function tryCache(y,...n){for(let u of n)try{let d=await y.match(u);if(d)return d}catch{continue}}async function getModelFile(y,n,u=!0,d={}){if(!env.allowLocalModels&&d.local_files_only)throw Error("Invalid configuration detected: local models are disabled (`env.allowLocalModels=false`) but you have requested to only use local models (`local_files_only=true`).");dispatchCallback(d.progress_callback,{status:"initiate",name:y,file:n});let l;if(!l&&env.useBrowserCache){if(typeof caches>"u")throw Error("Browser cache is not available in this environment.");try{l=await caches.open("transformers-cache")}catch(c){console.warn("An error occurred while opening the browser cache:",c)}}if(!l&&env.useFSCache&&(l=new FileCache(d.cache_dir??env.cacheDir)),!l&&env.useCustomCache)throw Error("`env.useCustomCache=true`, but `env.customCache` is not defined.");const p=d.revision??"main";let s=pathJoin(y,n),h=pathJoin(env.localModelPath,s),f=pathJoin(env.remoteHost,env.remotePathTemplate.replaceAll("{model}",y).replaceAll("{revision}",p),n),a=p==="main"?s:pathJoin(y,p,n),o,t=l instanceof FileCache?a:f,e,r;if(l&&(r=await tryCache(l,h,t)),r===void 0){if(env.allowLocalModels)if(isValidHttpUrl(s)){if(d.local_files_only)throw new Error(`\`local_files_only=true\`, but attempted to load a remote file from: ${s}.`)}else try{r=await getFile(h),o=h}catch(g){console.warn(`Unable to load from local path "${h}": "${g}"`)}if(r===void 0||r.status===404){if(d.local_files_only||!env.allowRemoteModels){if(u)throw Error(`\`local_files_only=true\` or \`env.allowRemoteModels=false\` and file was not found locally at "${h}".`);return null}if(r=await getFile(f),r.status!==200)return handleError(r.status,f,u);o=t}l&&r instanceof Response&&r.status===200&&(e=r.clone())}dispatchCallback(d.progress_callback,{status:"download",name:y,file:n});const i=await readResponse(r,c=>{dispatchCallback(d.progress_callback,{status:"progress",...c,name:y,file:n})});return e&&o&&await l.match(o)===void 0&&await l.put(o,e).catch(c=>{console.warn(`Unable to add response to browser cache: ${c}.`)}),dispatchCallback(d.progress_callback,{status:"done",name:y,file:n}),i}async function getModelJSON(y,n,u=!0,d={}){let l=await getModelFile(y,n,u,d);if(l===null)return{};let s=new TextDecoder("utf-8").decode(l);return JSON.parse(s)}async function readResponse(y,n){const u=y.headers.get("Content-Length");u===null&&console.warn("Unable to determine content-length from response headers. 
Will expand buffer when needed.");let d=parseInt(u??"0"),l=new Uint8Array(d),p=0;const s=y.body.getReader();async function h(){const{done:f,value:a}=await s.read();if(f)return;let o=p+a.length;if(o>d){d=o;let e=new Uint8Array(d);e.set(l),l=e}l.set(a,p),p=o;const t=p/d*100;return n({progress:t,loaded:p,total:d}),h()}return await h(),l}function pathJoin(...y){return y=y.map((n,u)=>(u&&(n=n.replace(new RegExp("^/"),"")),u!==y.length-1&&(n=n.replace(new RegExp("/$"),"")),n)),y.join("/")}function transpose_data(y,n,u){const d=new Array(u.length),l=new Array(u.length);for(let h=u.length-1,f=1;h>=0;--h)l[h]=f,d[h]=n[u[h]],f*=d[h];const p=u.map((h,f)=>l[u.indexOf(f)]),s=new y.constructor(y.length);for(let h=0;h=0;--a)f+=o%n[a]*p[a],o=Math.floor(o/n[a]);s[f]=y[h]}return[s,d]}function softmax(y){const n=max(y)[0],u=y.map(p=>Math.exp(p-n)),d=u.reduce((p,s)=>p+s,0);return u.map(p=>p/d)}function log_softmax(y){return softmax(y).map(d=>Math.log(d))}function getTopItems(y,n=0){return y=Array.from(y).map((u,d)=>[d,u]).sort((u,d)=>d[1]-u[1]),n>0&&(y=y.slice(0,n)),y}function min(y){if(y.length===0)throw Error("Array must not be empty");let n=y[0],u=0;for(let d=1;dn&&(n=y[d],u=d);return[n,u]}function medianFilter(y,n){if(n%2===0||n<=0)throw new Error("Window size must be a positive odd number");const u=new y.constructor(y.length),d=new y.constructor(n),l=Math.floor(n/2);for(let p=0;p=y.length&&(f=2*(y.length-1)-f),d[s++]=y[f]}d.sort(),u[p]=d[l]}return u}function round(y,n){const u=Math.pow(10,n);return Math.round(y*u)/u}const ONNXTensor$1=ONNX.Tensor;class Tensor extends ONNXTensor$1{constructor(...n){return n[0]instanceof ONNX.Tensor?super(n[0].type,n[0].data,n[0].dims):super(...n),new Proxy(this,{get:(u,d)=>{if(typeof d=="string"){let l=Number(d);if(Number.isInteger(l))return u._getitem(l)}return u[d]},set:(u,d,l)=>u[d]=l})}*[Symbol.iterator](){const[n,...u]=this.dims;if(u.length>0){const d=u.reduce((l,p)=>l*p);for(let l=0;l0){const l=d.reduce((p,s)=>p*s);return this._subarray(n,l,d)}else return new Tensor(this.type,[this.data[n]],d)}indexOf(n){for(let u=0;ua[1])throw new Error(`Invalid slice: ${a}`);let o=[Math.max(a[0],0),Math.min(a[1],this.dims[f])];d.push(o),u.push(o[1]-o[0])}else throw new Error(`Invalid slice: ${a}`)}let l=d.map(([f,a])=>a-f),p=l.reduce((f,a)=>f*a),s=new this.data.constructor(p);const h=this.stride();for(let f=0;f=0;--o){const e=l[o];a+=(t%e+d[o][0])*h[o],t=Math.floor(t/e)}s[f]=this.data[a]}return new Tensor(this.type,s,u)}transpose(...n){return transpose(this,n)}sum(n=null,u=!1){return this.norm(1,n,u)}norm(n="fro",u=null,d=!1){if(n==="fro")n=2;else if(typeof n=="string")throw Error(`Unsupported norm: ${n}`);if(u===null){let s=this.data.reduce((h,f)=>h+f**n,0)**(1/n);return new Tensor(this.type,[s],[])}u=safeIndex(u,this.dims.length);const l=this.dims.slice();l[u]=1;const p=new this.data.constructor(this.data.length/this.dims[u]);for(let s=0;s=0;--f){const t=this.dims[f];if(f!==u){const e=a%t;h+=e*o,o*=l[f]}a=Math.floor(a/t)}p[h]+=this.data[s]**n}if(n!==1)for(let s=0;s=0;--s){const a=this.dims[s];if(s!==u){const o=h%a;p+=o*f,f*=this.dims[s]}h=Math.floor(h/a)}this.data[l]/=d.data[p]}return this}normalize(n=2,u=1){return this.clone().normalize_(n,u)}stride(){return dimsToStride(this.dims)}squeeze(n=null){return new Tensor(this.type,this.data,calc_squeeze_dims(this.dims,n))}squeeze_(n=null){return this.dims=calc_squeeze_dims(this.dims,n),this}unsqueeze(n=null){return new Tensor(this.type,this.data,calc_unsqueeze_dims(this.dims,n))}unsqueeze_(n=null){return 
this.dims=calc_unsqueeze_dims(this.dims,n),this}flatten_(n=0,u=-1){u=(u+this.dims.length)%this.dims.length;let d=this.dims.slice(0,n),l=this.dims.slice(n,u+1),p=this.dims.slice(u+1);return this.dims=[...d,l.reduce((s,h)=>s*h,1),...p],this}flatten(n=0,u=-1){return this.clone().flatten_(n,u)}view(...n){let u=-1;for(let d=0;ds!==u?l*p:l,1);n[u]=this.data.length/d}return new Tensor(this.type,this.data,n)}neg_(){for(let n=0;np*s);if(u!==d)throw Error(`cannot reshape array of size ${u} into shape (${n})`);let l=y;for(let p=n.length-1;p>=0;p--)l=l.reduce((s,h)=>{let f=s[s.length-1];return f.lengthu!==1):typeof n=="number"?y[n]===1&&y.splice(n,1):Array.isArray(n)&&(y=y.filter((u,d)=>u!==1||!n.includes(d))),y}function calc_unsqueeze_dims(y,n){return n=safeIndex(n,y.length+1),y=y.slice(),y.splice(n,0,1),y}function safeIndex(y,n,u=null){if(y<-n||y>=n)throw new Error(`IndexError: index ${y} is out of bounds for dimension${u===null?"":" "+u} with size ${n}`);return y<0&&(y=(y%n+n)%n),y}function cat(y,n=0){n=safeIndex(n,y[0].dims.length);const u=y[0].dims.slice();u[n]=y.reduce((s,h)=>s+h.dims[n],0);const d=u.reduce((s,h)=>s*h,1),l=new y[0].data.constructor(d),p=y[0].type;if(n===0){let s=0;for(let h of y)l.set(h.data,s),s+=h.data.length}else{let s=0;for(let h=0;h=0;--t){const i=f.dims[t];let c=e%i;t===n&&(c+=s),o+=c*r,r*=u[t],e=Math.floor(e/i)}l[o]=f.data[a]}s+=f.dims[n]}}return new Tensor(p,l,u)}function stack(y,n=0){return cat(y.map(u=>u.unsqueeze(n)),n)}function std_mean(y,n=null,u=1,d=!1){if(n===null){const a=y.data.reduce((r,i)=>r+i,0)/y.data.length,o=Math.sqrt(y.data.reduce((r,i)=>r+(i-a)**2,0)/(y.data.length-u)),t=new Tensor(y.type,[a],[]);return[new Tensor(y.type,[o],[]),t]}n=safeIndex(n,y.dims.length);const l=mean(y,n,d),p=y.dims.slice();p[n]=1;const s=new y.data.constructor(y.data.length/y.dims[n]);for(let f=0;f=0;--o){const r=y.dims[o];if(o!==n){const i=t%r;a+=i*e,e*=p[o]}t=Math.floor(t/r)}s[a]+=(y.data[f]-l.data[a])**2}for(let f=0;fs+h,0);return new Tensor(y.type,[p/y.data.length],[])}n=safeIndex(n,y.dims.length);const d=y.dims.slice();d[n]=1;const l=new y.data.constructor(y.data.length/y.dims[n]);for(let p=0;p=0;--h){const o=y.dims[h];if(h!==n){const t=f%o;s+=t*a,a*=d[h]}f=Math.floor(f/o)}l[s]+=y.data[p]}if(y.dims[n]!==1)for(let p=0;p0||h>0;)switch(f.push(s-1),a.push(h-1),p[s][h].item()){case 0:--s,--h;break;case 1:--s;break;case 2:--h;break;default:throw new Error(`Internal error in dynamic time warping. Unexpected trace[${s}, ${h}]. 
Please file a bug report.`)}return f.reverse(),a.reverse(),[f,a]}function dimsToStride(y){const n=new Array(y.length);for(let u=y.length-1,d=1;u>=0;--u)n[u]=d,d*=y[u];return n}async function loadTokenizer(y,n){return await Promise.all([getModelJSON(y,"tokenizer.json",!0,n),getModelJSON(y,"tokenizer_config.json",!0,n)])}function createPattern(y,n=!0){return y.Regex?new RegExp(n?y.Regex:`(${y.Regex})`,"gu"):y.String?y.String:(console.warn("Unknown pattern type:",y),null)}function clean_up_tokenization(y){return y.replace(/ \./g,".").replace(/ \?/g,"?").replace(/ \!/g,"!").replace(/ ,/g,",").replace(/ \' /g,"'").replace(/ n\'t/g,"n't").replace(/ \'m/g,"'m").replace(/ \'s/g,"'s").replace(/ \'ve/g,"'ve").replace(/ \'re/g,"'re")}function fuse(y,n){let u=[],d=0;for(;dthis.tokens_to_ids.get(d)??this.unk_token_id);return this.fuse_unk&&(u=fuse(u,this.unk_token_id)),u}convert_ids_to_tokens(n){return n.map(u=>this.vocab[u]??this.unk_token)}}class WordPieceTokenizer extends TokenizerModel{constructor(n){super(n),this.tokens_to_ids=n.vocab,this.unk_token_id=this.tokens_to_ids.get(n.unk_token),this.unk_token=n.unk_token,this.vocab=new Array(this.tokens_to_ids.size);for(const[u,d]of this.tokens_to_ids)this.vocab[d]=u}encode(n){let u=[];for(let d of n){let l=[...d],p=!1,s=0,h=[];for(;s0&&(o=this.config.continuing_subword_prefix+o),this.tokens_to_ids.has(o)){a=o;break}--f}if(a===null){p=!0;break}h.push(a),s=f}p?u.push(this.unk_token):u.push(...h)}return u}}class Unigram extends TokenizerModel{constructor(n,u){super(n),this.vocab=new Array(n.vocab.size),this.scores=new Array(n.vocab.size);let d=0;n.vocab.forEach((l,p)=>{this.vocab[d]=p,this.scores[d]=l,++d}),this.unk_token_id=n.unk_id,this.unk_token=this.vocab[n.unk_id],this.tokens_to_ids=new Map(this.vocab.map((l,p)=>[l,p])),this.bosToken=" ",this.bosTokenId=this.tokens_to_ids.get(this.bosToken),this.eosToken=u.eos_token,this.eosTokenId=this.tokens_to_ids.get(this.eosToken),this.unkToken=this.vocab[this.unk_token_id],this.minScore=min(this.scores)[0],this.unkScore=this.minScore-10,this.scores[this.unk_token_id]=this.unkScore,this.trie=new CharTrie,this.trie.extend(this.vocab),this.fuse_unk=!0}populateNodes(n){const u=n.sentence,d=u.length;let l=0;for(;l{const y=[...Array.from({length:"~".charCodeAt(0)-"!".charCodeAt(0)+1},(l,p)=>p+"!".charCodeAt(0)),...Array.from({length:"¬".charCodeAt(0)-"¡".charCodeAt(0)+1},(l,p)=>p+"¡".charCodeAt(0)),...Array.from({length:"ÿ".charCodeAt(0)-"®".charCodeAt(0)+1},(l,p)=>p+"®".charCodeAt(0))];let n=y.slice(),u=0;for(let l=0;l<256;++l)y.includes(l)||(y.push(l),n.push(256+u),u+=1);let d=n.map(l=>String.fromCharCode(l));return Object.fromEntries(y.map((l,p)=>[l,d[p]]))})(),UNICODE_TO_BYTES=reverseDictionary(BYTES_TO_UNICODE);class BPE extends TokenizerModel{constructor(n){super(n),this.BPE_SPLIT_TOKEN=" ",this.tokens_to_ids=n.vocab,this.unk_token_id=this.tokens_to_ids.get(n.unk_token),this.unk_token=n.unk_token,this.vocab=new Array(this.tokens_to_ids.size);for(const[u,d]of this.tokens_to_ids)this.vocab[d]=u;this.bpe_ranks=new Map(n.merges.map((u,d)=>[u,d])),this.merges=n.merges.map(u=>u.split(this.BPE_SPLIT_TOKEN)),this.end_of_word_suffix=n.end_of_word_suffix,this.byte_fallback=this.config.byte_fallback??!1,this.byte_fallback&&(this.text_encoder=new TextEncoder),this.cache=Object.create(null),this.fuse_unk??(this.fuse_unk=this.config.fuse_unk)}get_pairs(n){let u=new Set,d=n[0];for(let l=1;l`<0x${s.toString(16).toUpperCase().padStart(2,"0")}>`)):u.push(this.unk_token)}return u}}class Normalizer extends 
Callable{constructor(n){super(),this.config=n}static fromConfig(n){if(n===null)return null;switch(n.type){case"BertNormalizer":return new BertNormalizer(n);case"Precompiled":return new Precompiled(n);case"Sequence":return new NormalizerSequence(n);case"Replace":return new Replace(n);case"NFC":return new NFC(n);case"NFKD":return new NFKD(n);case"StripAccents":return new StripAccents(n);case"Lowercase":return new Lowercase(n);case"Prepend":return new Prepend(n);default:throw new Error(`Unknown Normalizer type: ${n.type}`)}}normalize(n){throw Error("normalize should be implemented in subclass.")}_call(n){return this.normalize(n)}}class Replace extends Normalizer{normalize(n){let u=createPattern(this.config.pattern);return u===null||(n=n.replaceAll(u,this.config.content)),n}}class NFC extends Normalizer{normalize(n){return n=n.normalize("NFC"),n}}class NFKD extends Normalizer{normalize(n){return n=n.normalize("NFKD"),n}}class StripAccents extends Normalizer{normalize(n){return n=n.replace(/[\u0300-\u036f]/g,""),n}}class Lowercase extends Normalizer{normalize(n){return n=n.toLowerCase(),n}}class Prepend extends Normalizer{normalize(n){return n=this.config.prepend+n,n}}class NormalizerSequence extends Normalizer{constructor(n){super(n),this.normalizers=n.normalizers.map(u=>Normalizer.fromConfig(u))}normalize(n){return this.normalizers.reduce((u,d)=>d.normalize(u),n)}}class BertNormalizer extends Normalizer{_tokenize_chinese_chars(n){let u=[];for(let d=0;d=19968&&n<=40959||n>=13312&&n<=19903||n>=131072&&n<=173791||n>=173824&&n<=177983||n>=177984&&n<=178207||n>=178208&&n<=183983||n>=63744&&n<=64255||n>=194560&&n<=195103}stripAccents(n){return n.normalize("NFD").replace(/[\u0300-\u036f]/g,"")}normalize(n){return this.config.handle_chinese_chars&&(n=this._tokenize_chinese_chars(n)),this.config.lowercase?(n=n.toLowerCase(),this.config.strip_accents!==!1&&(n=this.stripAccents(n))):this.config.strip_accents&&(n=this.stripAccents(n)),n}}class PreTokenizer extends Callable{static fromConfig(n){if(n===null)return null;switch(n.type){case"BertPreTokenizer":return new BertPreTokenizer(n);case"Sequence":return new PreTokenizerSequence(n);case"WhitespaceSplit":return new WhitespaceSplit(n);case"Metaspace":return new MetaspacePreTokenizer(n);case"ByteLevel":return new ByteLevelPreTokenizer(n);case"Split":return new SplitPreTokenizer(n);case"Punctuation":return new PunctuationPreTokenizer(n);case"Digits":return new DigitsPreTokenizer(n);default:throw new Error(`Unknown PreTokenizer type: ${n.type}`)}}pre_tokenize_text(n){throw Error("pre_tokenize_text should be implemented in subclass.")}pre_tokenize(n){let u=[];return Array.isArray(n)?u=n.map(d=>this.pre_tokenize_text(d)):u=this.pre_tokenize_text(n),u.flat()}_call(n){return this.pre_tokenize(n)}}class BertPreTokenizer extends PreTokenizer{constructor(n){super(),this.pattern=new RegExp(`[^\\s${PUNCTUATION_REGEX}]+|[${PUNCTUATION_REGEX}]`,"gu")}pre_tokenize_text(n){return n.trim().match(this.pattern)||[]}}class ByteLevelPreTokenizer extends PreTokenizer{constructor(n){super(),this.config=n,this.add_prefix_space=this.config.add_prefix_space,this.trim_offsets=this.config.trim_offsets,this.use_regex=this.config.use_regex??!0,this.pattern=/'s|'t|'re|'ve|'m|'ll|'d| ?\p{L}+| ?\p{N}+| ?[^\s\p{L}\p{N}]+|\s+(?!\S)|\s+/gu,this.byte_encoder=BYTES_TO_UNICODE,this.text_encoder=new TextEncoder}pre_tokenize_text(n){return(this.use_regex?n.match(this.pattern)||[]:[n]).map(d=>(this.add_prefix_space&&!d.startsWith(" ")&&(d=" 
"+d),d=Array.from(this.text_encoder.encode(d),l=>this.byte_encoder[l]).join(""),d))}}class SplitPreTokenizer extends PreTokenizer{constructor(n){super(),this.config=n,this.pattern=createPattern(this.config.pattern,this.config.invert)}pre_tokenize_text(n){return this.pattern===null?[]:this.config.invert?n.match(this.pattern)||[]:n.split(this.pattern).filter(u=>u)}}class PunctuationPreTokenizer extends PreTokenizer{constructor(n){super(),this.config=n,this.pattern=new RegExp(`[^${PUNCTUATION_REGEX}]+|[${PUNCTUATION_REGEX}]+`,"gu")}pre_tokenize_text(n){return n.match(this.pattern)||[]}}class DigitsPreTokenizer extends PreTokenizer{constructor(n){super(),this.config=n;const u=`[^\\d]+|\\d${this.config.individual_digits?"":"+"}`;this.pattern=new RegExp(u,"gu")}pre_tokenize_text(n){return n.match(this.pattern)||[]}}class PostProcessor extends Callable{constructor(n){super(),this.config=n}static fromConfig(n){if(n===null)return null;switch(n.type){case"TemplateProcessing":return new TemplateProcessing(n);case"ByteLevel":return new ByteLevelPostProcessor(n);case"RobertaProcessing":return new RobertaProcessing(n);default:throw new Error(`Unknown PostProcessor type: ${n.type}`)}}post_process(n,...u){throw Error("post_process should be implemented in subclass.")}_call(n,...u){return this.post_process(n,...u)}}class RobertaProcessing extends PostProcessor{constructor(n){super(n),this.cls=n.cls[0],this.sep=n.sep[0]}post_process(n,u=null){return n=mergeArrays([this.cls],n,[this.sep]),u!==null&&(n=mergeArrays(n,[this.sep],u,[this.sep])),n}}class TemplateProcessing extends PostProcessor{constructor(n){super(n),this.single=n.single,this.pair=n.pair}post_process(n,u=null){let d=u===null?this.single:this.pair,l=[];for(let p of d)"SpecialToken"in p?l.push(p.SpecialToken.id):"Sequence"in p&&(p.Sequence.id==="A"?l=mergeArrays(l,n):p.Sequence.id==="B"&&(l=mergeArrays(l,u)));return l}}class ByteLevelPostProcessor extends PostProcessor{post_process(n){return n}}class Decoder extends Callable{constructor(n){super(),this.config=n,this.added_tokens=[],this.end_of_word_suffix=null,this.trim_offsets=n.trim_offsets}static fromConfig(n){switch(n.type){case"WordPiece":return new WordPieceDecoder(n);case"Metaspace":return new MetaspaceDecoder(n);case"ByteLevel":return new ByteLevelDecoder(n);case"Replace":return new ReplaceDecoder(n);case"ByteFallback":return new ByteFallback(n);case"Fuse":return new FuseDecoder(n);case"Strip":return new StripDecoder(n);case"Sequence":return new DecoderSequence(n);default:throw new Error(`Unknown Decoder type: ${n.type}`)}}_call(n){return this.decode(n)}decode(n){return this.decode_chain(n).join("")}decode_chain(n){throw Error("`decode_chain` should be implemented in subclass.")}}class ReplaceDecoder extends Decoder{constructor(n){super(n)}decode_chain(n){let u=createPattern(this.config.pattern);return u===null?n:n.map(d=>d.replaceAll(u,this.config.content))}}class ByteFallback extends Decoder{constructor(n){super(n),this.text_decoder=new TextDecoder}decode_chain(n){let u=[],d=[];for(let l of n){let p=null;if(l.length===6&&l.startsWith("<0x")&&l.endsWith(">")){let s=parseInt(l.slice(3,5),16);isNaN(s)||(p=s)}if(p!==null)d.push(p);else{if(d.length>0){let s=this.text_decoder.decode(Uint8Array.from(d));u.push(s),d=[]}u.push(l)}}if(d.length>0){let l=this.text_decoder.decode(Uint8Array.from(d));u.push(l),d=[]}return u}}class FuseDecoder extends Decoder{constructor(n){super(n)}decode_chain(n){return[n.join("")]}}class StripDecoder extends 
Decoder{constructor(n){super(n),this.content=this.config.content,this.start=this.config.start,this.stop=this.config.stop}decode_chain(n){return n.map(u=>{let d=0;for(let p=0;p(d!==0&&(u.startsWith(this.config.prefix)?u=u.replace(this.config.prefix,""):u=" "+u),this.cleanup&&(u=clean_up_tokenization(u)),u))}}class ByteLevelDecoder extends Decoder{constructor(n){super(n),this.byte_decoder=UNICODE_TO_BYTES,this.text_decoder=new TextDecoder("utf-8",{fatal:!1,ignoreBOM:!0}),this.end_of_word_suffix=null}convert_tokens_to_string(n){let u=n.join(""),d=new Uint8Array([...u].map(p=>this.byte_decoder[p]));return this.text_decoder.decode(d)}decode_chain(n){let u=[],d=[];for(let l of n)this.added_tokens.includes(l)?(d.length>0&&(u.push(this.convert_tokens_to_string(d)),d=[]),u.push(l)):d.push(l);return d.length>0&&u.push(this.convert_tokens_to_string(d)),u}}class DecoderSequence extends Decoder{constructor(n){super(n),this.decoders=n.decoders.map(u=>Decoder.fromConfig(u))}decode_chain(n){return this.decoders.reduce((u,d)=>d.decode_chain(u),n)}}class MetaspacePreTokenizer extends PreTokenizer{constructor(n){super(),this.addPrefixSpace=n.add_prefix_space,this.replacement=n.replacement,this.strRep=n.str_rep||this.replacement}pre_tokenize(n){typeof n=="string"&&(n=n.trimStart().split(/\s+/));const u=[];for(let d of n){let l=d.replaceAll(" ",this.strRep);this.addPrefixSpace&&!l.startsWith(this.replacement)&&(l=this.strRep+l),u.push(l)}return u}}class MetaspaceDecoder extends Decoder{constructor(n){super(n),this.addPrefixSpace=n.add_prefix_space,this.replacement=n.replacement}decode_chain(n){let u=[];for(let d=0;dPreTokenizer.fromConfig(u))}pre_tokenize_text(n){return typeof n=="string"&&(n=[n]),this.tokenizers.reduce((u,d)=>d.pre_tokenize(u),n)}}class WhitespaceSplit extends PreTokenizer{constructor(n){super()}pre_tokenize_text(n){return whitespace_split(n)}}class PreTrainedTokenizer extends Callable{constructor(n,u){super(),this.normalizer=Normalizer.fromConfig(n.normalizer),this.pre_tokenizer=PreTokenizer.fromConfig(n.pre_tokenizer),n.model.vocab&&(Array.isArray(n.model.vocab)||(n.model.vocab=Object.entries(n.model.vocab)),n.model.vocab=new Map(n.model.vocab)),this.model=TokenizerModel.fromConfig(n.model,u),this.post_processor=PostProcessor.fromConfig(n.post_processor),this.decoder=Decoder.fromConfig(n.decoder),this.decoder.end_of_word_suffix=this.model.end_of_word_suffix,this.special_tokens=[],this.all_special_ids=[],this.added_tokens=[];for(let d of n.added_tokens){let l=d.id,p=d.content;this.added_tokens.push(p),this.model.tokens_to_ids.set(p,l),this.model.vocab[l]=p,d.special&&(this.special_tokens.push(p),this.all_special_ids.push(l))}this.decoder.added_tokens=this.added_tokens,this.added_tokens_regex=new RegExp("("+this.added_tokens.map(escapeRegExp).join("|")+")"),this.mask_token=this.getToken(u,"mask_token"),this.mask_token_id=this.model.tokens_to_ids.get(this.mask_token),this.pad_token=this.getToken(u,"pad_token","eos_token"),this.pad_token_id=this.model.tokens_to_ids.get(this.pad_token),this.sep_token=this.getToken(u,"sep_token"),this.sep_token_id=this.model.tokens_to_ids.get(this.sep_token),this.model_max_length=u.model_max_length,this.remove_space=u.remove_space,this.clean_up_tokenization_spaces=u.clean_up_tokenization_spaces??!0,this.padding_side="right"}getToken(n,...u){for(let d of u){let l=n[d];if(l)if(typeof l=="object"){if(l.__type==="AddedToken")return l.content;throw Error(`Unknown token: ${l}`)}else return l}return null}static async 
from_pretrained(n,{progress_callback:u=null,config:d=null,cache_dir:l=null,local_files_only:p=!1,revision:s="main"}={}){let h=await loadTokenizer(n,{progress_callback:u,config:d,cache_dir:l,local_files_only:p,revision:s});return new this(...h)}prepare_model_inputs(n){return n}_call(n,{text_pair:u=null,padding:d=!1,truncation:l=null,max_length:p=null,return_tensor:s=!0}={}){let h;if(Array.isArray(n)){if(n.length===0)throw Error("text array must be non-empty");if(u!==null){if(Array.isArray(u)){if(n.length!==u.length)throw Error("text and text_pair must have the same length")}else throw Error("text_pair must also be an array");h=n.map((t,e)=>this.encode(t,u[e]))}else h=n.map(t=>this.encode(t))}else{if(n===null)throw Error("text may not be null");if(Array.isArray(u))throw Error("When specifying `text_pair`, since `text` is a string, `text_pair` must also be a string (i.e., not an array).");h=[this.encode(n,u)]}let f=max(h.map(t=>t.length))[0];p===null&&(p=f),p=Math.min(p,this.model_max_length);let a=[];if(d||l)for(let t=0;tp)l&&(h[t]=h[t].slice(0,p)),a.push(new Array(h[t].length).fill(1));else if(d){let e=p-h[t].length;this.padding_side==="right"?(a.push(new Array(h[t].length).fill(1).concat(new Array(e).fill(0))),h[t].push(...new Array(e).fill(this.pad_token_id))):(a.push(new Array(e).fill(0).concat(new Array(h[t].length).fill(1))),h[t].unshift(...new Array(e).fill(this.pad_token_id)))}else a.push(new Array(h[t].length).fill(1));else a=h.map(t=>new Array(t.length).fill(1));if(s){if(!(d&&l)&&h.some(e=>e.length!==h[0].length))throw Error("Unable to create tensor, you should probably activate truncation and/or padding with 'padding=true' and 'truncation=true' to have batched tensors with the same length.");let t=[h.length,h[0].length];h=new Tensor("int64",BigInt64Array.from(h.flat().map(BigInt)),t),a=new Tensor("int64",BigInt64Array.from(a.flat().map(BigInt)),t)}else Array.isArray(n)||(h=h[0],a=a[0]);let o={input_ids:h,attention_mask:a};return o=this.prepare_model_inputs(o),o}_encode_text(n){return n===null?null:n.split(this.added_tokens_regex).filter(l=>l).map(l=>{if(this.added_tokens.includes(l))return l;{this.remove_space===!0&&(l=l.trim().split(/\s+/).join(" ")),this.normalizer!==null&&(l=this.normalizer(l));let p=this.pre_tokenizer!==null?this.pre_tokenizer(l):[l];return this.model(p)}}).flat()}encode(n,u=null){let d=this._encode_text(n),l=this._encode_text(u),p=this.post_processor!==null?this.post_processor(d,l):mergeArrays(d??[],l??[]);return this.model.convert_tokens_to_ids(p)}batch_decode(n,u={}){return n.map(d=>this.decode(d,u))}decode(n,u={}){if(!Array.isArray(n)||n.length===0||!isIntegralNumber(n[0]))throw Error("token_ids must be a non-empty array of integers.");return this.decode_single(n,u)}decode_single(n,{skip_special_tokens:u=!1,clean_up_tokenization_spaces:d=null}){let l=this.model.convert_ids_to_tokens(n);u&&(l=l.filter(s=>!this.special_tokens.includes(s)));let p=this.decoder(l);return this.decoder.end_of_word_suffix&&(p=p.replaceAll(this.decoder.end_of_word_suffix," "),u&&(p=p.trim())),(d??this.clean_up_tokenization_spaces)&&(p=clean_up_tokenization(p)),p}}function add_token_types(y){if(y.input_ids instanceof Tensor)y.token_type_ids=new Tensor("int64",new BigInt64Array(y.input_ids.data.length),y.input_ids.dims);else if(Array.isArray(y.input_ids))Array.isArray(y.input_ids[0])?y.token_type_ids=y.input_ids.map(n=>new Array(n.length).fill(0)):y.token_type_ids=new Array(y.input_ids.length).fill(0);else throw new Error("Input ids must be a Tensor or an Array");return y}class 
BertTokenizer extends PreTrainedTokenizer{prepare_model_inputs(n){return add_token_types(n)}}class AlbertTokenizer extends PreTrainedTokenizer{prepare_model_inputs(n){return add_token_types(n)}}class MobileBertTokenizer extends PreTrainedTokenizer{prepare_model_inputs(n){return add_token_types(n)}}class SqueezeBertTokenizer extends PreTrainedTokenizer{prepare_model_inputs(n){return add_token_types(n)}}class DistilBertTokenizer extends PreTrainedTokenizer{}class T5Tokenizer extends PreTrainedTokenizer{}class GPT2Tokenizer extends PreTrainedTokenizer{}class BartTokenizer extends PreTrainedTokenizer{}class RobertaTokenizer extends PreTrainedTokenizer{}class BloomTokenizer extends PreTrainedTokenizer{}class LlamaTokenizer extends PreTrainedTokenizer{}class XLMRobertaTokenizer extends PreTrainedTokenizer{}class MPNetTokenizer extends PreTrainedTokenizer{}class FalconTokenizer extends PreTrainedTokenizer{prepare_model_inputs(n){return add_token_types(n)}}class GPTNeoXTokenizer extends PreTrainedTokenizer{}class NllbTokenizer extends PreTrainedTokenizer{constructor(n,u){super(n,u),this.languageRegex=/^[a-z]{3}_[A-Z][a-z]{3}$/,this.language_codes=this.special_tokens.filter(d=>this.languageRegex.test(d))}_build_translation_inputs(n,u,d){if(!this.language_codes.includes(d.tgt_lang))throw new Error(`Target language code "${d.tgt_lang}" is not valid. Must be one of: {${this.language_codes.join(", ")}}`);if(d.src_lang!==void 0){if(!this.language_codes.includes(d.src_lang))throw new Error(`Source language code "${d.src_lang}" is not valid. Must be one of: {${this.language_codes.join(", ")}}`);for(let l of this.post_processor.config.single)if("SpecialToken"in l&&this.languageRegex.test(l.SpecialToken.id)){l.SpecialToken.id=d.src_lang;break}}return d.forced_bos_token_id=this.model.convert_tokens_to_ids([d.tgt_lang])[0],this._call(n,u)}}const WHISPER_LANGUAGES=[["en","english"],["zh","chinese"],["de","german"],["es","spanish"],["ru","russian"],["ko","korean"],["fr","french"],["ja","japanese"],["pt","portuguese"],["tr","turkish"],["pl","polish"],["ca","catalan"],["nl","dutch"],["ar","arabic"],["sv","swedish"],["it","italian"],["id","indonesian"],["hi","hindi"],["fi","finnish"],["vi","vietnamese"],["he","hebrew"],["uk","ukrainian"],["el","greek"],["ms","malay"],["cs","czech"],["ro","romanian"],["da","danish"],["hu","hungarian"],["ta","tamil"],["no","norwegian"],["th","thai"],["ur","urdu"],["hr","croatian"],["bg","bulgarian"],["lt","lithuanian"],["la","latin"],["mi","maori"],["ml","malayalam"],["cy","welsh"],["sk","slovak"],["te","telugu"],["fa","persian"],["lv","latvian"],["bn","bengali"],["sr","serbian"],["az","azerbaijani"],["sl","slovenian"],["kn","kannada"],["et","estonian"],["mk","macedonian"],["br","breton"],["eu","basque"],["is","icelandic"],["hy","armenian"],["ne","nepali"],["mn","mongolian"],["bs","bosnian"],["kk","kazakh"],["sq","albanian"],["sw","swahili"],["gl","galician"],["mr","marathi"],["pa","punjabi"],["si","sinhala"],["km","khmer"],["sn","shona"],["yo","yoruba"],["so","somali"],["af","afrikaans"],["oc","occitan"],["ka","georgian"],["be","belarusian"],["tg","tajik"],["sd","sindhi"],["gu","gujarati"],["am","amharic"],["yi","yiddish"],["lo","lao"],["uz","uzbek"],["fo","faroese"],["ht","haitian 
creole"],["ps","pashto"],["tk","turkmen"],["nn","nynorsk"],["mt","maltese"],["sa","sanskrit"],["lb","luxembourgish"],["my","myanmar"],["bo","tibetan"],["tl","tagalog"],["mg","malagasy"],["as","assamese"],["tt","tatar"],["haw","hawaiian"],["ln","lingala"],["ha","hausa"],["ba","bashkir"],["jw","javanese"],["su","sundanese"]],WHISPER_LANGUAGE_MAPPING=new Map(WHISPER_LANGUAGES),WHISPER_TO_LANGUAGE_CODE_MAPPING=new Map([...WHISPER_LANGUAGES.map(([y,n])=>[n,y]),["burmese","my"],["valencian","ca"],["flemish","nl"],["haitian","ht"],["letzeburgesch","lb"],["pushto","ps"],["panjabi","pa"],["moldavian","ro"],["moldovan","ro"],["sinhalese","si"],["castilian","es"]]);class WhisperTokenizer extends PreTrainedTokenizer{_decode_asr(n,{return_timestamps:u=!1,return_language:d=!1,time_precision:l=null,force_full_sequences:p=!0}={}){if(l===null)throw Error("Must specify time_precision");let s=null;const h=u==="word";function f(){return{language:s,timestamp:[null,null],text:""}}const a=[];let o=f(),t=0;const e=this.model.convert_tokens_to_ids(["<|notimestamps|>"])[0]+1;let r=[],i=[],c=!1,g=null;const m=new Set(this.all_special_ids);for(let w of n){const v=w.tokens,S=h?w.token_timestamps:null;let O=null,E=e;if("stride"in w){const[C,B,F]=w.stride;if(t-=B,g=C-F,B&&(E=B/l+e),F)for(let N=v.length-1;N>=0;--N){const H=v[N];if(H>=e){if(O!==null&&(H-e)*l=e){const F=(B-e)*l+t,N=round(F,2);if(O!==null&&B>=O)c=!0;else if(c||r.length>0&&B0?(r.push(T),h&&i.push(I)):r.every(C=>C.length===0)&&(o=f(),r=[],T=[],i=[],I=[])}if(r.length>0){if(p&&u)throw new Error("Whisper did not predict an ending timestamp, which can happen if audio is cut off in the middle of a word. Also make sure WhisperTimeStampLogitsProcessor was used during generation.");const[w,v]=this.findLongestCommonSequence(r,i),S=this.decode(w);o.text=S,h&&(o.words=this.collateWordTimestamps(w,v,s)),a.push(o)}let b=Object.create(null);const _=a.map(w=>w.text).join("");if(u||d){for(let w=0;w0;let h=s?[]:null,f=s?u[0]:null;for(let a=1;aN===C[H]).length,F=B/w+v;B>1&&F>t&&(t=F,e=[S,O,T,I])}const[i,c,g,m]=e,b=Math.floor((c+i)/2),_=Math.floor((m+g)/2);p.push(...d.slice(0,b)),d=o.slice(_),l=d.length,s&&(h.push(...f.slice(0,b)),f=u[a].slice(_))}return p.push(...d),s?(h.push(...f),[p,h]):[p,[]]}collateWordTimestamps(n,u,d){let[l,p,s]=this.combineTokensIntoWords(n,d),h=[];for(let f=0;f=l){let h=(s-l)*d;h=round(h,2),p.push(`<|${h}|>`),p.push([])}else p[p.length-1].push(s);return p=p.map(s=>typeof s=="string"?s:super.decode(s,u)),p.join("")}splitTokensOnUnicode(n){const u=this.decode(n,{decode_with_timestamps:!0}),d="�";let l=[],p=[],s=[],h=[],f=[],a=0;for(let o=0;o=this.model.tokens_to_ids.get("<|endoftext|>"),i=o.startsWith(" "),c=o.trim(),g=f.test(c);if(r||i||g||p.length===0)p.push(o),s.push(t),h.push(e);else{const m=p.length-1;p[m]+=o,s[m].push(...t),h[m].push(...e)}}return[p,s,h]}mergePunctuations(n,u,d,l,p){let s=structuredClone(n),h=structuredClone(u),f=structuredClone(d),a=s.length-2,o=s.length-1;for(;a>=0;)s[a].startsWith(" ")&&l.includes(s[a].trim())?(s[o]=s[a]+s[o],h[o]=mergeArrays(h[a],h[o]),f[o]=mergeArrays(f[a],f[o]),s[a]="",h[a]=[],f[a]=[]):o=a,--a;for(a=0,o=1;ot),h.filter(t=>t.length>0),f.filter(t=>t.length>0)]}get_decoder_prompt_ids({language:n=null,task:u=null,no_timestamps:d=!0}={}){let l=[];if(n){n=n.toLowerCase();let p=WHISPER_TO_LANGUAGE_CODE_MAPPING.get(n);if(p===void 0)if(WHISPER_LANGUAGE_MAPPING.has(n))p=n;else{const f=n.length===2?WHISPER_LANGUAGE_MAPPING.keys():WHISPER_LANGUAGE_MAPPING.values();throw new Error(`Language "${n}" is not supported. 
Must be one of: ${JSON.stringify(f)}`)}let s=this.model.tokens_to_ids.get(`<|${p}|>`);if(s===void 0)throw new Error(`Unable to find language "${p}" in model vocabulary. Please report this issue at https://github.com/xenova/transformers.js/issues/new/choose.`);l.push(s)}else l.push(null);if(u){if(u=u.toLowerCase(),u!=="transcribe"&&u!=="translate")throw new Error(`Task "${u}" is not supported. Must be one of: ["transcribe", "translate"]`);let p=this.model.tokens_to_ids.get(`<|${u}|>`);if(p===void 0)throw new Error(`Unable to find task "${u}" in model vocabulary. Please report this issue at https://github.com/xenova/transformers.js/issues/new/choose.`);l.push(p)}else l.push(null);if(d){let p=this.model.tokens_to_ids.get("<|notimestamps|>");if(p===void 0)throw new Error('Unable to find "<|notimestamps|>" in model vocabulary. Please report this issue at https://github.com/xenova/transformers.js/issues/new/choose.');l.push(p)}return l.map((p,s)=>[s+1,p]).filter(p=>p[1]!==null)}}class CodeGenTokenizer extends PreTrainedTokenizer{}class CLIPTokenizer extends PreTrainedTokenizer{}class MarianTokenizer extends PreTrainedTokenizer{constructor(n,u){super(n,u),this.languageRegex=/^(>>\w+<<)\s*/g,this.supported_language_codes=this.model.vocab.filter(d=>this.languageRegex.test(d)),console.warn('WARNING: `MarianTokenizer` is not yet supported by Hugging Face\'s "fast" tokenizers library. Therefore, you may experience slightly inaccurate results.')}_encode_text(n){if(n===null)return null;let[u,...d]=n.trim().split(this.languageRegex);if(d.length===0)return super._encode_text(u);if(d.length===2){let[l,p]=d;return this.supported_language_codes.includes(l)||console.warn(`Unsupported language code "${l}" detected, which may lead to unexpected behavior. Should be one of: ${JSON.stringify(this.supported_language_codes)}`),mergeArrays([l],super._encode_text(p))}}}class CharTrie{constructor(){this.root=CharTrieNode.default()}extend(n){for(let u of n)this.push(u)}push(n){let u=this.root;for(let d of n){let l=u.children.get(d);l===void 0&&(l=CharTrieNode.default(),u.children.set(d,l)),u=l}u.isLeaf=!0}*commonPrefixSearch(n){let u=this.root,d="";for(let l=0;lf)&&(a=o.clone(),f=t)}if(a!==null)h.prev=a,h.backtraceScore=f;else return[]}++u}const d=[],p=this.beginNodes[n][0].prev;if(p===null)return[];let s=p.clone();for(;s.prev!==null;)d.push(s.clone()),s=s.clone().prev.clone();return d.reverse(),d}piece(n){return this.sentence.slice(n.pos,n.pos+n.length)}tokens(){return this.viterbi().map(u=>this.piece(u))}tokenIds(){return this.viterbi().map(u=>u.tokenId)}}class TokenLatticeNode{constructor(n,u,d,l,p){this.tokenId=n,this.nodeId=u,this.pos=d,this.length=l,this.score=p,this.prev=null,this.backtraceScore=0}clone(){const n=new TokenLatticeNode(this.tokenId,this.nodeId,this.pos,this.length,this.score);return n.prev=this.prev,n.backtraceScore=this.backtraceScore,n}}class AutoTokenizer{static async from_pretrained(n,{quantized:u=!0,progress_callback:d=null,config:l=null,cache_dir:p=null,local_files_only:s=!1,revision:h="main"}={}){let[f,a]=await loadTokenizer(n,{quantized:u,progress_callback:d,config:l,cache_dir:p,local_files_only:s,revision:h}),o=a.tokenizer_class.replace(/Fast$/,""),t=this.TOKENIZER_CLASS_MAPPING[o];return t||(console.warn(`Unknown tokenizer class "${o}", attempting to construct from base class.`),t=PreTrainedTokenizer),new 
t(f,a)}}jt(AutoTokenizer,"TOKENIZER_CLASS_MAPPING",{T5Tokenizer,DistilBertTokenizer,BertTokenizer,MobileBertTokenizer,SqueezeBertTokenizer,AlbertTokenizer,GPT2Tokenizer,BartTokenizer,RobertaTokenizer,WhisperTokenizer,CodeGenTokenizer,CLIPTokenizer,MarianTokenizer,BloomTokenizer,NllbTokenizer,LlamaTokenizer,XLMRobertaTokenizer,MPNetTokenizer,FalconTokenizer,GPTNeoXTokenizer,PreTrainedTokenizer});async function loadConfig(y,n){return await getModelJSON(y,"config.json",!0,n)}class PretrainedConfig{constructor(n){this.model_type=null,this.is_encoder_decoder=!1,Object.assign(this,n)}static async from_pretrained(n,{progress_callback:u=null,config:d=null,cache_dir:l=null,local_files_only:p=!1,revision:s="main"}={}){let h=d??await loadConfig(n,{progress_callback:u,config:d,cache_dir:l,local_files_only:p,revision:s});return new this(h)}}class AutoConfig{static async from_pretrained(...n){return PretrainedConfig.from_pretrained(...n)}}class LogitsProcessorList extends Callable{constructor(){super(),this.processors=[]}push(n){this.processors.push(n)}extend(n){this.processors.push(...n)}_call(n,u){for(let d of u)this.processors.forEach(l=>l(n,d))}[Symbol.iterator](){return this.processors.values()}}class LogitsProcessor extends Callable{_call(n,u){throw Error("`_call` should be implemented in a subclass")}}class ForceTokensLogitsProcessor extends LogitsProcessor{constructor(n){super(),this.force_token_map=Object.fromEntries(n??[])}_call(n,u){let d=this.force_token_map[n.length];return exists(d)&&(u.data.fill(-1/0),u.data[d]=0),u}}class ForcedBOSTokenLogitsProcessor extends LogitsProcessor{constructor(n){super(),this.bos_token_id=n}_call(n,u){return n.length===1&&(u.data.fill(-1/0),u.data[this.bos_token_id]=0),u}}class ForcedEOSTokenLogitsProcessor extends LogitsProcessor{constructor(n,u){super(),this.max_length=n,this.forced_eos_token_id=u}_call(n,u){}}class SuppressTokensAtBeginLogitsProcessor extends LogitsProcessor{constructor(n,u){super(),this.begin_suppress_tokens=n,this.begin_index=u}_call(n,u){if(n.length===this.begin_index)for(let d of this.begin_suppress_tokens)u.data[d]=-1/0;return u}}class WhisperTimeStampLogitsProcessor extends LogitsProcessor{constructor(n){super(),this.eos_token_id=n.eos_token_id,this.no_timestamps_token_id=n.no_timestamps_token_id,this.timestamp_begin=this.no_timestamps_token_id+1,this.begin_index=(n.forced_decoder_ids||[]).length+2,n.forced_decoder_ids.slice(-1)[0][1]===this.no_timestamps_token_id&&(this.begin_index-=1),this.max_initial_timestamp_index=n.max_initial_timestamp_index}_call(n,u){if(u.data[this.no_timestamps_token_id]=-1/0,n.length===this.begin_index-1)return u.data.fill(-1/0),u.data[this.timestamp_begin]=0,u;const d=n.slice(this.begin_index),l=d.length>=1&&d[d.length-1]>=this.timestamp_begin,p=d.length<2||d[d.length-2]>=this.timestamp_begin;if(l&&(p?u.data.subarray(this.timestamp_begin).fill(-1/0):u.data.subarray(0,this.eos_token_id).fill(-1/0)),n.length===this.begin_index&&this.max_initial_timestamp_index!==null){const a=this.timestamp_begin+this.max_initial_timestamp_index;u.data.subarray(a+1).fill(-1/0)}const s=log_softmax(u.data),h=Math.log(s.subarray(this.timestamp_begin).map(Math.exp).reduce((a,o)=>a+o)),f=max(s.subarray(0,this.timestamp_begin))[0];return h>f&&u.data.subarray(0,this.timestamp_begin).fill(-1/0),u}}class NoRepeatNGramLogitsProcessor extends LogitsProcessor{constructor(n){super(),this.no_repeat_ngram_size=n}getNgrams(n){const u=n.length,d=[];for(let p=0;p0&&(l=l.map(p=>p/this.generation_config.temperature)),l}randomSelect(n){let 
u=n.reduce((l,p)=>l+p,0),d=Math.random()*u;for(let l=0;l1)return new BeamSearchSampler(n);if(n.num_return_sequences>1)throw Error(`num_return_sequences has to be 1 when doing greedy search, but is ${n.num_return_sequences}.`);return new GreedySampler(n)}}class GreedySampler extends Sampler{sample(n,u=-1){let d=this.getLogits(n,u);return[[max(d)[1],0]]}}class MultinomialSampler extends Sampler{sample(n,u=-1){let d=n.dims.at(-1);this.generation_config.top_k>0&&(d=Math.min(this.generation_config.top_k,d));const l=this.getLogits(n,u),p=getTopItems(l,d),s=softmax(p.map(h=>h[1]));return Array.from({length:this.generation_config.num_beams},()=>{const h=this.randomSelect(s);return[p[h][0],Math.log(s[h])]})}}class BeamSearchSampler extends Sampler{sample(n,u=-1){let d=n.dims.at(-1);this.generation_config.top_k>0&&(d=Math.min(this.generation_config.top_k,d));const l=this.getLogits(n,u),p=getTopItems(l,d),s=softmax(p.map(h=>h[1]));return Array.from({length:this.generation_config.num_beams},(h,f)=>[p[f][0],Math.log(s[f])])}}const{InferenceSession,Tensor:ONNXTensor}=ONNX;class ModelType{}class EncoderOnlyModelType extends ModelType{}class EncoderDecoderModelType extends ModelType{}class Seq2SeqModelType extends EncoderDecoderModelType{}class DecoderOnlyModelType extends ModelType{}const MODEL_TYPE_MAPPING=new Map([["CLIPTextModelWithProjection",EncoderOnlyModelType],["CLIPVisionModelWithProjection",EncoderOnlyModelType]]);async function forward(y,n){return MODEL_TYPE_MAPPING.get(y.constructor.name)===DecoderOnlyModelType?await decoderForward(y,n):await encoderForward(y,n)}async function constructSession(y,n,u){let d=`onnx/${n}${u.quantized?"_quantized":""}.onnx`,l=await getModelFile(y,d,!0,u);try{return await InferenceSession.create(l,{executionProviders})}catch(p){if(executionProviders.length===1&&executionProviders[0]==="wasm")throw p;return console.warn(p),console.warn("Something went wrong during model construction (most likely a missing operation). Using `wasm` as a fallback. "),await InferenceSession.create(l,{executionProviders:["wasm"]})}}async function validateInputs(y,n){const u={},d=[];for(let s of y.inputNames)n[s]===void 0?d.push(s):u[s]=n[s];if(d.length>0)throw new Error(`An error occurred during model execution: "Missing the following inputs: ${d.join(", ")}.`);const l=Object.keys(n).length,p=y.inputNames.length;if(l>p){let s=Object.keys(n).filter(h=>!y.inputNames.includes(h));console.warn(`WARNING: Too many inputs were provided (${l} > ${p}). 
The following inputs will be ignored: "${s.join(", ")}".`)}return u}async function sessionRun(y,n){const u=await validateInputs(y,n);try{let d=await y.run(u);return d=replaceTensors(d),d}catch(d){throw console.error(`An error occurred during model execution: "${d}".`),console.error("Inputs given to model:",u),d}}function replaceTensors(y){for(let n in y)y[n]instanceof ONNXTensor?y[n]=new Tensor(y[n]):typeof y[n]=="object"&&replaceTensors(y[n]);return y}function toI64Tensor(y){if(y instanceof Tensor)return y;if(y.length===0)throw Error("items must be non-empty");if(Array.isArray(y[0])){if(y.some(n=>n.length!==y[0].length))throw Error("Unable to create tensor, you should probably activate truncation and/or padding with 'padding=True' and/or 'truncation=True' to have batched tensors with the same length.");return new Tensor("int64",BigInt64Array.from(y.flat().map(n=>BigInt(n))),[y.length,y[0].length])}else return new Tensor("int64",BigInt64Array.from(y.map(n=>BigInt(n))),[1,y.length])}function prepareAttentionMask(y,n){let u=y.config.pad_token_id??null,d=y.config.eos_token_id??null;isIntegralNumber(d)&&(d=[d]);let l=n.indexOf(u)!==-1,p=d===null||!d.includes(u);if(l&&p){let s=BigInt64Array.from(n.data.map(h=>h!=u));return new Tensor("int64",s,n.dims)}else return new Tensor("int64",new BigInt64Array(n.data.length).fill(1n),n.dims)}function boolTensor(y){return new Tensor("bool",[y],[1])}async function seq2seqForward(y,n,{add_decoder_pkv:u=!0}={}){let{encoder_outputs:d,past_key_values:l}=n;d||(d=(await encoderForward(y,n)).last_hidden_state);let p={input_ids:n.decoder_input_ids,encoder_hidden_states:d,use_cache_branch:boolTensor(!!l)};y.decoder_merged_session.inputNames.includes("encoder_attention_mask")&&(p.encoder_attention_mask=n.attention_mask),y.addPastKeyValues(p,l,u);const s=await sessionRun(y.decoder_merged_session,p);let h=s.logits;l=y.getPastKeyValues(s,l);const f=y.getAttentions(s);return new Seq2SeqLMOutput({logits:h,past_key_values:l,encoder_outputs:d,...f})}function seq2seqStartBeams(y,n,u,d=!0){let l=[],p=0,s=y.config.decoder_start_token_id;Array.isArray(s)||(s=[s]);for(let h of n){h.dims=[1,...h.dims];let f={inputs:h,encoder_outputs:null,prev_model_outputs:null,output_token_ids:s,done:!1,score:0,id:p++};d&&(f.attention_mask=prepareAttentionMask(y,h)),l.push(f)}return l}async function seq2seqRunBeam(y,n,{input_name:u="input_ids"}={}){var p;let d={[u]:n.inputs,decoder_input_ids:toI64Tensor(n.output_token_ids.slice(-1)),encoder_outputs:n.encoder_outputs,past_key_values:(p=n.prev_model_outputs)==null?void 0:p.past_key_values};n.attention_mask&&(d.attention_mask=n.attention_mask);let l=await y.forward(d);return n.prev_model_outputs=l,n.encoder_outputs=l.encoder_outputs,l}async function encoderForward(y,n){let u={};for(let d of y.session.inputNames)u[d]=n[d];return await sessionRun(y.session,u)}async function decoderForward(y,n){let{input_ids:u,past_key_values:d,attention_mask:l}=n,p={input_ids:u,attention_mask:l??prepareAttentionMask(y,u),use_cache_branch:boolTensor(d!==null)};y.addPastKeyValues(p,d);let s=await sessionRun(y.session,p),h=s.logits;return d=y.getPastKeyValues(s,d),{logits:h,past_key_values:d}}function decoderStartBeams(y,n,u,d){let l=[],p=0;for(let s of n){let h=s.tolist().map(Number);s.dims=[1,...s.dims];let f;d?(f=d[p],f.dims=[1,...f.dims]):f=prepareAttentionMask(y,s);let a={input:s,model_input_ids:s,attention_mask:f,prev_model_outputs:null,output_token_ids:h,num_output_tokens:u,done:!1,score:0,id:p++};l.push(a)}return l}async function decoderRunBeam(y,n){var p;let 
u=new BigInt64Array(n.output_token_ids.length).fill(1n),d={input_ids:n.model_input_ids,attention_mask:new Tensor("int64",u,[1,u.length]),past_key_values:(p=n.prev_model_outputs)==null?void 0:p.past_key_values},l=await y.forward(d);return n.prev_model_outputs=l,l}function decoderUpdatebeam(y,n){y.output_token_ids=[...y.output_token_ids,n],y.model_input_ids=new Tensor("int64",[BigInt(n)],[1,1])}class PreTrainedModel extends Callable{constructor(n,u){super(),this.config=n,this.session=u}async dispose(){let n=[];for(let u of Object.keys(this)){let d=this[u];d instanceof InferenceSession&&n.push(d.handler.dispose())}return await Promise.all(n)}static async from_pretrained(n,{quantized:u=!0,progress_callback:d=null,config:l=null,cache_dir:p=null,local_files_only:s=!1,revision:h="main",model_file_name:f=null}={}){let a={quantized:u,progress_callback:d,config:l,cache_dir:p,local_files_only:s,revision:h,model_file_name:f},o=MODEL_TYPE_MAPPING.get(this.name),t;if(o===DecoderOnlyModelType)t=await Promise.all([AutoConfig.from_pretrained(n,a),constructSession(n,a.model_file_name??"decoder_model_merged",a)]);else if(o===Seq2SeqModelType)t=await Promise.all([AutoConfig.from_pretrained(n,a),constructSession(n,"encoder_model",a),constructSession(n,"decoder_model_merged",a),getModelJSON(n,"generation_config.json",!1,a)]);else if(o===EncoderDecoderModelType)t=await Promise.all([AutoConfig.from_pretrained(n,a),constructSession(n,"encoder_model",a),constructSession(n,"decoder_model_merged",a)]);else if(o===EncoderOnlyModelType)t=await Promise.all([AutoConfig.from_pretrained(n,a),constructSession(n,a.model_file_name??"model",a)]);else throw console.warn("Malformed class definition.",this),Error(`Unable to load model: ${n}. Please report this bug at https://github.com/xenova/transformers.js/issues/new/choose.`);return new this(...t)}async _call(n){return await this.forward(n)}async forward(n){return await forward(this,n)}_get_logits_processor(n,u,d=null){const l=new LogitsProcessorList;if(n.repetition_penalty!==null&&n.repetition_penalty!==1&&l.push(new RepetitionPenaltyLogitsProcessor(n.repetition_penalty)),n.no_repeat_ngram_size!==null&&n.no_repeat_ngram_size>0&&l.push(new NoRepeatNGramLogitsProcessor(n.no_repeat_ngram_size)),n.forced_bos_token_id!==null&&l.push(new ForcedBOSTokenLogitsProcessor(n.forced_bos_token_id)),n.forced_eos_token_id!==null&&l.push(new ForcedEOSTokenLogitsProcessor(n.max_length,n.forced_eos_token_id)),n.begin_suppress_tokens!==null){let p=u>1||n.forced_bos_token_id===null?u:u+1;n.forced_decoder_ids!==null&&(p+=n.forced_decoder_ids[n.forced_decoder_ids.length-1][0]),l.push(new SuppressTokensAtBeginLogitsProcessor(n.begin_suppress_tokens,p))}return n.forced_decoder_ids!==null&&l.push(new ForceTokensLogitsProcessor(n.forced_decoder_ids)),d!==null&&l.extend(d),l}_get_generation_config(n){let u=new GenerationConfig;return"generation_config"in this&&Object.assign(u,this.generation_config),n!==null&&Object.assign(u,n),u}async generate(n,u=null,d=null,{inputs_attention_mask:l=null}={}){if(!(n instanceof Tensor)&&!isTypedArray(n)&&!Array.isArray(n))throw Error(`\`inputs\` must be a Tensor, TypedArray, or Array, but is "${n.constructor.name}".`);let p;if(this.config.is_encoder_decoder)p=0;else if(p=n instanceof Tensor?n.dims[0]:n.length,p===0)throw Error("Must supply a non-empty array of input token ids.");u=this._get_generation_config(u),d=d??new LogitsProcessorList,d=this._get_logits_processor(u,p,d);let s=1;const 
h=s+(u.max_new_tokens??1/0),f=Number.isInteger(u.max_length)&&(u.max_new_tokens??null)===null;let a=Sampler.getSampler(u),o=this.getStartBeams(n,s,l);for(;o.some(i=>!i.done)&&s=u.max_length){c.done=!0,i.push(c);continue}let g=await this.runBeam(c);u.output_attentions&&this.addAttentionsToBeam(c,g),u.output_scores;let m=g.logits.slice(null,-1,null);d(c.output_token_ids,m);let b=a(m);for(let[_,w]of b){let v={...c};this.updateBeam(v,_),v.score+=w,_===this.config.eos_token_id&&(v.done=!0),i.push(v)}}++s,i=this.groupBeams(i).map(c=>c.sort((g,m)=>m.score-g.score).slice(0,u.num_beams)),o=i.flat(),u.callback_function&&u.callback_function(o)}const t=this.groupBeams(o),e=i=>t.map(c=>u.num_return_sequences>1?c.slice(0,u.num_return_sequences).map(g=>g[i]):[c[0][i]]).flat(),r=e("output_token_ids");if(u.return_dict_in_generate){const i=e("decoder_attentions"),c=e("cross_attentions");return{sequences:r,decoder_attentions:i,cross_attentions:c}}else return r}addAttentionsToBeam(n,u){if(this.config.is_encoder_decoder){if(!u.cross_attentions||u.cross_attentions.length===0)throw Error("`output_attentions` is true, but the model did not produce cross-attentions. This is most likely because the model was not exported with `output_attentions=True`.");n.cross_attentions||(n.cross_attentions=[]),n.cross_attentions.push(u.cross_attentions)}if(!u.decoder_attentions||u.decoder_attentions.length===0)throw Error("`output_attentions` is true, but the model did not produce decoder-attentions. This is most likely because the model was not exported with `output_attentions=True`.");n.decoder_attentions||(n.decoder_attentions=[]),n.decoder_attentions.push(u.decoder_attentions)}groupBeams(n){const u=Object.create(null);for(const d of n)u[d.id]===void 0?u[d.id]=[d]:u[d.id].push(d);return Object.values(u)}getPastKeyValues(n,u){const d=Object.create(null);for(const l in n)if(l.startsWith("present")){let p=l.replace("present","past_key_values");u&&l.includes("encoder")?d[p]=u[p]:d[p]=n[l]}return d}getAttentions(n){const u=Object.create(null);for(const d of["cross_attentions","decoder_attentions"]){const l=[];for(const p in n)if(p.startsWith(d)){const s=p.split(".").pop();l[s]=n[p]}u[d]=l}return u}addPastKeyValues(n,u,d=!1){if(u)Object.assign(n,u);else if(d){let l=[1,this.num_encoder_heads,0,this.encoder_dim_kv];for(let s=0;s{let a=Array.from({length:this.config.decoder_layers},(c,g)=>cat(f.map(m=>m[g]),2)),o=stack(u.map(([c,g])=>a[c].slice(null,g)));o=o.transpose(1,0,2,3);let[t,e]=std_mean(o,-2,0,!0),r=o.clone();for(let c=0;co[g+1]-o[g]),r=mergeArrays([1],e).map(c=>!!c),i=[];for(let c=0;c{let n=TOKENIZER_MAPPINGS.get(y.data.model_id);n||(n=AutoTokenizer.from_pretrained(y.data.model_id),TOKENIZER_MAPPINGS.set(y.data.model_id,new Promise(a=>{n.then(o=>{switch(o.constructor.name){case"LlamaTokenizer":o.decoder.decoders.pop();break;case"T5Tokenizer":o.decoder.addPrefixSpace=!1;break}a(o)})})));const u=await n,d=y.data.text,l=performance.now(),p=u.encode(d),s=performance.now();console.log("[INFO]",`Tokenized ${d.length} characters in ${(s-l).toFixed(2)}ms`);let h=p.map(a=>u.decode([a])),f=[];switch(u.constructor.name){case"BertTokenizer":f=h.map((a,o)=>o===0||a.startsWith("##")?0:8),h=h.map(a=>a.replace("##",""));break;case"T5Tokenizer":h.length>0&&h.length!==" "&&(h[0]=h[0].replace(/^ /,""));break}self.postMessage({token_ids:p,decoded:h,margins:f})})})(); diff --git a/spaces/XzJosh/TianDou-Bert-VITS2/resample.py b/spaces/XzJosh/TianDou-Bert-VITS2/resample.py deleted file mode 100644 index 
2ed1685654a371c5722168e9987809b05b1cb224..0000000000000000000000000000000000000000 --- a/spaces/XzJosh/TianDou-Bert-VITS2/resample.py +++ /dev/null @@ -1,42 +0,0 @@ -import os -import argparse -import librosa -import numpy as np -from multiprocessing import Pool, cpu_count - -import soundfile -from scipy.io import wavfile -from tqdm import tqdm - - -def process(item): - spkdir, wav_name, args = item - speaker = spkdir.replace("\\", "/").split("/")[-1] - wav_path = os.path.join(args.in_dir, speaker, wav_name) - if os.path.exists(wav_path) and '.wav' in wav_path: - os.makedirs(os.path.join(args.out_dir, speaker), exist_ok=True) - wav, sr = librosa.load(wav_path, sr=args.sr) - soundfile.write( - os.path.join(args.out_dir, speaker, wav_name), - wav, - sr - ) - - - -if __name__ == "__main__": - parser = argparse.ArgumentParser() - parser.add_argument("--sr", type=int, default=44100, help="sampling rate") - parser.add_argument("--in_dir", type=str, default="./raw", help="path to source dir") - parser.add_argument("--out_dir", type=str, default="./dataset", help="path to target dir") - args = parser.parse_args() - # processs = 8 - processs = cpu_count()-2 if cpu_count() >4 else 1 - pool = Pool(processes=processs) - - for speaker in os.listdir(args.in_dir): - spk_dir = os.path.join(args.in_dir, speaker) - if os.path.isdir(spk_dir): - print(spk_dir) - for _ in tqdm(pool.imap_unordered(process, [(spk_dir, i, args) for i in os.listdir(spk_dir) if i.endswith("wav")])): - pass diff --git a/spaces/XzJosh/yoyo-Bert-VITS2/data_utils.py b/spaces/XzJosh/yoyo-Bert-VITS2/data_utils.py deleted file mode 100644 index be3a29a93188c5b3386f22e5db29e5e96d78109a..0000000000000000000000000000000000000000 --- a/spaces/XzJosh/yoyo-Bert-VITS2/data_utils.py +++ /dev/null @@ -1,321 +0,0 @@ -import time -import os -import random -import numpy as np -import torch -import torch.utils.data -import commons -from mel_processing import spectrogram_torch, mel_spectrogram_torch, spec_to_mel_torch -from utils import load_wav_to_torch, load_filepaths_and_text -from text import cleaned_text_to_sequence, get_bert - -"""Multi speaker version""" - - -class TextAudioSpeakerLoader(torch.utils.data.Dataset): - """ - 1) loads audio, speaker_id, text pairs - 2) normalizes text and converts them to sequences of integers - 3) computes spectrograms from audio files. 
- """ - - def __init__(self, audiopaths_sid_text, hparams): - self.audiopaths_sid_text = load_filepaths_and_text(audiopaths_sid_text) - self.max_wav_value = hparams.max_wav_value - self.sampling_rate = hparams.sampling_rate - self.filter_length = hparams.filter_length - self.hop_length = hparams.hop_length - self.win_length = hparams.win_length - self.sampling_rate = hparams.sampling_rate - self.spk_map = hparams.spk2id - self.hparams = hparams - - self.use_mel_spec_posterior = getattr(hparams, "use_mel_posterior_encoder", False) - if self.use_mel_spec_posterior: - self.n_mel_channels = getattr(hparams, "n_mel_channels", 80) - - self.cleaned_text = getattr(hparams, "cleaned_text", False) - - self.add_blank = hparams.add_blank - self.min_text_len = getattr(hparams, "min_text_len", 1) - self.max_text_len = getattr(hparams, "max_text_len", 300) - - random.seed(1234) - random.shuffle(self.audiopaths_sid_text) - self._filter() - - def _filter(self): - """ - Filter text & store spec lengths - """ - # Store spectrogram lengths for Bucketing - # wav_length ~= file_size / (wav_channels * Bytes per dim) = file_size / (1 * 2) - # spec_length = wav_length // hop_length - - audiopaths_sid_text_new = [] - lengths = [] - skipped = 0 - for _id, spk, language, text, phones, tone, word2ph in self.audiopaths_sid_text: - audiopath = f'{_id}' - if self.min_text_len <= len(phones) and len(phones) <= self.max_text_len: - phones = phones.split(" ") - tone = [int(i) for i in tone.split(" ")] - word2ph = [int(i) for i in word2ph.split(" ")] - audiopaths_sid_text_new.append([audiopath, spk, language, text, phones, tone, word2ph]) - lengths.append(os.path.getsize(audiopath) // (2 * self.hop_length)) - else: - skipped += 1 - print("skipped: ", skipped, ", total: ", len(self.audiopaths_sid_text)) - self.audiopaths_sid_text = audiopaths_sid_text_new - self.lengths = lengths - - def get_audio_text_speaker_pair(self, audiopath_sid_text): - # separate filename, speaker_id and text - audiopath, sid, language, text, phones, tone, word2ph = audiopath_sid_text - - bert, phones, tone, language = self.get_text(text, word2ph, phones, tone, language, audiopath) - - spec, wav = self.get_audio(audiopath) - sid = torch.LongTensor([int(self.spk_map[sid])]) - return (phones, spec, wav, sid, tone, language, bert) - - def get_audio(self, filename): - audio, sampling_rate = load_wav_to_torch(filename) - if sampling_rate != self.sampling_rate: - raise ValueError("{} {} SR doesn't match target {} SR".format( - sampling_rate, self.sampling_rate)) - audio_norm = audio / self.max_wav_value - audio_norm = audio_norm.unsqueeze(0) - spec_filename = filename.replace(".wav", ".spec.pt") - if self.use_mel_spec_posterior: - spec_filename = spec_filename.replace(".spec.pt", ".mel.pt") - try: - spec = torch.load(spec_filename) - except: - if self.use_mel_spec_posterior: - spec = mel_spectrogram_torch(audio_norm, self.filter_length, - self.n_mel_channels, self.sampling_rate, self.hop_length, - self.win_length, self.hparams.mel_fmin, self.hparams.mel_fmax, center=False) - else: - spec = spectrogram_torch(audio_norm, self.filter_length, - self.sampling_rate, self.hop_length, self.win_length, - center=False) - spec = torch.squeeze(spec, 0) - torch.save(spec, spec_filename) - return spec, audio_norm - - def get_text(self, text, word2ph, phone, tone, language_str, wav_path): - pold = phone - w2pho = [i for i in word2ph] - word2ph = [i for i in word2ph] - phone, tone, language = cleaned_text_to_sequence(phone, tone, language_str) - pold2 = phone - - if 
self.add_blank: - p1 = len(phone) - phone = commons.intersperse(phone, 0) - p2 = len(phone) - t1 = len(tone) - tone = commons.intersperse(tone, 0) - t2 = len(tone) - language = commons.intersperse(language, 0) - for i in range(len(word2ph)): - word2ph[i] = word2ph[i] * 2 - word2ph[0] += 1 - bert_path = wav_path.replace(".wav", ".bert.pt") - try: - bert = torch.load(bert_path) - assert bert.shape[-1] == len(phone) - except: - bert = get_bert(text, word2ph, language_str) - torch.save(bert, bert_path) - #print(bert.shape[-1], bert_path, text, pold) - assert bert.shape[-1] == len(phone) - - assert bert.shape[-1] == len(phone), ( - bert.shape, len(phone), sum(word2ph), p1, p2, t1, t2, pold, pold2, word2ph, text, w2pho) - phone = torch.LongTensor(phone) - tone = torch.LongTensor(tone) - language = torch.LongTensor(language) - return bert, phone, tone, language - - def get_sid(self, sid): - sid = torch.LongTensor([int(sid)]) - return sid - - def __getitem__(self, index): - return self.get_audio_text_speaker_pair(self.audiopaths_sid_text[index]) - - def __len__(self): - return len(self.audiopaths_sid_text) - - -class TextAudioSpeakerCollate(): - """ Zero-pads model inputs and targets - """ - - def __init__(self, return_ids=False): - self.return_ids = return_ids - - def __call__(self, batch): - """Collate's training batch from normalized text, audio and speaker identities - PARAMS - ------ - batch: [text_normalized, spec_normalized, wav_normalized, sid] - """ - # Right zero-pad all one-hot text sequences to max input length - _, ids_sorted_decreasing = torch.sort( - torch.LongTensor([x[1].size(1) for x in batch]), - dim=0, descending=True) - - max_text_len = max([len(x[0]) for x in batch]) - max_spec_len = max([x[1].size(1) for x in batch]) - max_wav_len = max([x[2].size(1) for x in batch]) - - text_lengths = torch.LongTensor(len(batch)) - spec_lengths = torch.LongTensor(len(batch)) - wav_lengths = torch.LongTensor(len(batch)) - sid = torch.LongTensor(len(batch)) - - text_padded = torch.LongTensor(len(batch), max_text_len) - tone_padded = torch.LongTensor(len(batch), max_text_len) - language_padded = torch.LongTensor(len(batch), max_text_len) - bert_padded = torch.FloatTensor(len(batch), 1024, max_text_len) - - spec_padded = torch.FloatTensor(len(batch), batch[0][1].size(0), max_spec_len) - wav_padded = torch.FloatTensor(len(batch), 1, max_wav_len) - text_padded.zero_() - tone_padded.zero_() - language_padded.zero_() - spec_padded.zero_() - wav_padded.zero_() - bert_padded.zero_() - for i in range(len(ids_sorted_decreasing)): - row = batch[ids_sorted_decreasing[i]] - - text = row[0] - text_padded[i, :text.size(0)] = text - text_lengths[i] = text.size(0) - - spec = row[1] - spec_padded[i, :, :spec.size(1)] = spec - spec_lengths[i] = spec.size(1) - - wav = row[2] - wav_padded[i, :, :wav.size(1)] = wav - wav_lengths[i] = wav.size(1) - - sid[i] = row[3] - - tone = row[4] - tone_padded[i, :tone.size(0)] = tone - - language = row[5] - language_padded[i, :language.size(0)] = language - - bert = row[6] - bert_padded[i, :, :bert.size(1)] = bert - - return text_padded, text_lengths, spec_padded, spec_lengths, wav_padded, wav_lengths, sid, tone_padded, language_padded, bert_padded - - -class DistributedBucketSampler(torch.utils.data.distributed.DistributedSampler): - """ - Maintain similar input lengths in a batch. - Length groups are specified by boundaries. - Ex) boundaries = [b1, b2, b3] -> any batch is included either {x | b1 < length(x) <=b2} or {x | b2 < length(x) <= b3}. 
- - It removes samples which are not included in the boundaries. - Ex) boundaries = [b1, b2, b3] -> any x s.t. length(x) <= b1 or length(x) > b3 are discarded. - """ - - def __init__(self, dataset, batch_size, boundaries, num_replicas=None, rank=None, shuffle=True): - super().__init__(dataset, num_replicas=num_replicas, rank=rank, shuffle=shuffle) - self.lengths = dataset.lengths - self.batch_size = batch_size - self.boundaries = boundaries - - self.buckets, self.num_samples_per_bucket = self._create_buckets() - self.total_size = sum(self.num_samples_per_bucket) - self.num_samples = self.total_size // self.num_replicas - - def _create_buckets(self): - buckets = [[] for _ in range(len(self.boundaries) - 1)] - for i in range(len(self.lengths)): - length = self.lengths[i] - idx_bucket = self._bisect(length) - if idx_bucket != -1: - buckets[idx_bucket].append(i) - - for i in range(len(buckets) - 1, 0, -1): - if len(buckets[i]) == 0: - buckets.pop(i) - self.boundaries.pop(i + 1) - - num_samples_per_bucket = [] - for i in range(len(buckets)): - len_bucket = len(buckets[i]) - total_batch_size = self.num_replicas * self.batch_size - rem = (total_batch_size - (len_bucket % total_batch_size)) % total_batch_size - num_samples_per_bucket.append(len_bucket + rem) - return buckets, num_samples_per_bucket - - def __iter__(self): - # deterministically shuffle based on epoch - g = torch.Generator() - g.manual_seed(self.epoch) - - indices = [] - if self.shuffle: - for bucket in self.buckets: - indices.append(torch.randperm(len(bucket), generator=g).tolist()) - else: - for bucket in self.buckets: - indices.append(list(range(len(bucket)))) - - batches = [] - for i in range(len(self.buckets)): - bucket = self.buckets[i] - len_bucket = len(bucket) - if (len_bucket == 0): - continue - ids_bucket = indices[i] - num_samples_bucket = self.num_samples_per_bucket[i] - - # add extra samples to make it evenly divisible - rem = num_samples_bucket - len_bucket - ids_bucket = ids_bucket + ids_bucket * (rem // len_bucket) + ids_bucket[:(rem % len_bucket)] - - # subsample - ids_bucket = ids_bucket[self.rank::self.num_replicas] - - # batching - for j in range(len(ids_bucket) // self.batch_size): - batch = [bucket[idx] for idx in ids_bucket[j * self.batch_size:(j + 1) * self.batch_size]] - batches.append(batch) - - if self.shuffle: - batch_ids = torch.randperm(len(batches), generator=g).tolist() - batches = [batches[i] for i in batch_ids] - self.batches = batches - - assert len(self.batches) * self.batch_size == self.num_samples - return iter(self.batches) - - def _bisect(self, x, lo=0, hi=None): - if hi is None: - hi = len(self.boundaries) - 1 - - if hi > lo: - mid = (hi + lo) // 2 - if self.boundaries[mid] < x and x <= self.boundaries[mid + 1]: - return mid - elif x <= self.boundaries[mid]: - return self._bisect(x, lo, mid) - else: - return self._bisect(x, mid + 1, hi) - else: - return -1 - - def __len__(self): - return self.num_samples // self.batch_size diff --git a/spaces/YanzBotz/YanzBotz-Models/app.py b/spaces/YanzBotz/YanzBotz-Models/app.py deleted file mode 100644 index 3c20380efe56273cd11e0cd600783768195595bf..0000000000000000000000000000000000000000 --- a/spaces/YanzBotz/YanzBotz-Models/app.py +++ /dev/null @@ -1,504 +0,0 @@ -import os -import glob -import json -import argparse -import traceback -import logging -import gradio as gr -import numpy as np -import librosa -import torch -import asyncio -import edge_tts -import yt_dlp -import ffmpeg -import subprocess -import sys -import io -import wave -from datetime 
import datetime -from fairseq import checkpoint_utils -from lib.infer_pack.models import ( - SynthesizerTrnMs256NSFsid, - SynthesizerTrnMs256NSFsid_nono, - SynthesizerTrnMs768NSFsid, - SynthesizerTrnMs768NSFsid_nono, -) -from vc_infer_pipeline import VC -from config import Config -config = Config() -logging.getLogger("numba").setLevel(logging.WARNING) -limitation = os.getenv("SYSTEM") == "spaces" # limit audio length in huggingface spaces - -audio_mode = [] -f0method_mode = [] -f0method_info = "" - -if limitation is True: - audio_mode = ["Upload audio", "TTS Audio"] - f0method_mode = ["pm", "harvest"] - f0method_info = "PM is fast, Harvest is good but extremely slow, Rvmpe is alternative to harvest (might be better). (Default: PM)" -else: - audio_mode = ["Input path", "Upload audio", "Youtube", "TTS Audio"] - f0method_mode = ["pm", "harvest", "crepe"] - f0method_info = "PM is fast, Harvest is good but extremely slow, Rvmpe is alternative to harvest (might be better), and Crepe effect is good but requires GPU (Default: PM)" - -if os.path.isfile("rmvpe.pt"): - f0method_mode.insert(2, "rmvpe") - -def create_vc_fn(model_name, tgt_sr, net_g, vc, if_f0, version, file_index): - def vc_fn( - vc_audio_mode, - vc_input, - vc_upload, - tts_text, - tts_voice, - f0_up_key, - f0_method, - index_rate, - filter_radius, - resample_sr, - rms_mix_rate, - protect - ): - try: - print(f"Converting using {model_name}...") - if vc_audio_mode == "Input path" or "Youtube" and vc_input != "": - audio, sr = librosa.load(vc_input, sr=16000, mono=True) - elif vc_audio_mode == "Upload audio": - if vc_upload is None: - return "You need to upload an audio", None - sampling_rate, audio = vc_upload - duration = audio.shape[0] / sampling_rate - if duration > 20 and limitation: - return "Please upload an audio file that is less than 20 seconds. 
If you need to generate a longer audio file, please use Colab.", None - audio = (audio / np.iinfo(audio.dtype).max).astype(np.float32) - if len(audio.shape) > 1: - audio = librosa.to_mono(audio.transpose(1, 0)) - if sampling_rate != 16000: - audio = librosa.resample(audio, orig_sr=sampling_rate, target_sr=16000) - elif vc_audio_mode == "TTS Audio": - if len(tts_text) > 100 and limitation: - return "Text is too long", None - if tts_text is None or tts_voice is None: - return "You need to enter text and select a voice", None - asyncio.run(edge_tts.Communicate(tts_text, "-".join(tts_voice.split('-')[:-1])).save("tts.mp3")) - audio, sr = librosa.load("tts.mp3", sr=16000, mono=True) - vc_input = "tts.mp3" - times = [0, 0, 0] - f0_up_key = int(f0_up_key) - audio_opt = vc.pipeline( - hubert_model, - net_g, - 0, - audio, - vc_input, - times, - f0_up_key, - f0_method, - file_index, - # file_big_npy, - index_rate, - if_f0, - filter_radius, - tgt_sr, - resample_sr, - rms_mix_rate, - version, - protect, - f0_file=None, - ) - info = f"[{datetime.now().strftime('%Y-%m-%d %H:%M')}]: npy: {times[0]}, f0: {times[1]}s, infer: {times[2]}s" - print(info) - return info, (tgt_sr, audio_opt) - except: - info = traceback.format_exc() - print(f"{model_name} | {info}") - return info, (tgt_sr, audio_opt) - return vc_fn - -def load_model(): - models = [] - with open(f"weights/model_info.json", "r", encoding="utf-8") as f: - models_info = json.load(f) - for character_name, info in models_info.items(): - if not info['enable']: - continue - model_title = info['title'] - model_name = info['model_path'] - model_author = info.get("author", None) - model_cover = f"weights/{character_name}/{info['cover']}" - model_index = f"weights/{character_name}/{info['feature_retrieval_library']}" - cpt = torch.load(f"weights/{character_name}/{model_name}", map_location="cpu") - tgt_sr = cpt["config"][-1] - cpt["config"][-3] = cpt["weight"]["emb_g.weight"].shape[0] # n_spk - if_f0 = cpt.get("f0", 1) - version = cpt.get("version", "v1") - if version == "v1": - if if_f0 == 1: - net_g = SynthesizerTrnMs256NSFsid(*cpt["config"], is_half=config.is_half) - else: - net_g = SynthesizerTrnMs256NSFsid_nono(*cpt["config"]) - model_version = "V1" - elif version == "v2": - if if_f0 == 1: - net_g = SynthesizerTrnMs768NSFsid(*cpt["config"], is_half=config.is_half) - else: - net_g = SynthesizerTrnMs768NSFsid_nono(*cpt["config"]) - model_version = "V2" - del net_g.enc_q - print(net_g.load_state_dict(cpt["weight"], strict=False)) - net_g.eval().to(config.device) - if config.is_half: - net_g = net_g.half() - else: - net_g = net_g.float() - vc = VC(tgt_sr, config) - print(f"Model loaded: {character_name} / {info['feature_retrieval_library']} | ({model_version})") - models.append((character_name, model_title, model_author, model_cover, model_version, create_vc_fn(model_title, tgt_sr, net_g, vc, if_f0, version, model_index))) - return models - -def cut_vocal_and_inst(url, audio_provider, split_model): - if url != "": - if not os.path.exists("dl_audio"): - os.mkdir("dl_audio") - if audio_provider == "Youtube": - ydl_opts = { - 'noplaylist': True, - 'format': 'bestaudio/best', - 'postprocessors': [{ - 'key': 'FFmpegExtractAudio', - 'preferredcodec': 'wav', - }], - "outtmpl": 'dl_audio/youtube_audio', - } - with yt_dlp.YoutubeDL(ydl_opts) as ydl: - ydl.download([url]) - audio_path = "dl_audio/youtube_audio.wav" - if split_model == "htdemucs": - command = f"demucs --two-stems=vocals {audio_path} -o output" - result = subprocess.run(command.split(), 
stdout=subprocess.PIPE) - print(result.stdout.decode()) - return "output/htdemucs/youtube_audio/vocals.wav", "output/htdemucs/youtube_audio/no_vocals.wav", audio_path, "output/htdemucs/youtube_audio/vocals.wav" - else: - command = f"demucs --two-stems=vocals -n mdx_extra_q {audio_path} -o output" - result = subprocess.run(command.split(), stdout=subprocess.PIPE) - print(result.stdout.decode()) - return "output/mdx_extra_q/youtube_audio/vocals.wav", "output/mdx_extra_q/youtube_audio/no_vocals.wav", audio_path, "output/mdx_extra_q/youtube_audio/vocals.wav" - else: - raise gr.Error("URL Required!") - return None, None, None, None - -def combine_vocal_and_inst(audio_data, audio_volume, split_model): - if not os.path.exists("output/result"): - os.mkdir("output/result") - vocal_path = "output/result/output.wav" - output_path = "output/result/combine.mp3" - if split_model == "htdemucs": - inst_path = "output/htdemucs/youtube_audio/no_vocals.wav" - else: - inst_path = "output/mdx_extra_q/youtube_audio/no_vocals.wav" - with wave.open(vocal_path, "w") as wave_file: - wave_file.setnchannels(1) - wave_file.setsampwidth(2) - wave_file.setframerate(audio_data[0]) - wave_file.writeframes(audio_data[1].tobytes()) - command = f'ffmpeg -y -i {inst_path} -i {vocal_path} -filter_complex [1:a]volume={audio_volume}dB[v];[0:a][v]amix=inputs=2:duration=longest -b:a 320k -c:a libmp3lame {output_path}' - result = subprocess.run(command.split(), stdout=subprocess.PIPE) - print(result.stdout.decode()) - return output_path - -def load_hubert(): - global hubert_model - models, _, _ = checkpoint_utils.load_model_ensemble_and_task( - ["hubert_base.pt"], - suffix="", - ) - hubert_model = models[0] - hubert_model = hubert_model.to(config.device) - if config.is_half: - hubert_model = hubert_model.half() - else: - hubert_model = hubert_model.float() - hubert_model.eval() - -def change_audio_mode(vc_audio_mode): - if vc_audio_mode == "Input path": - return ( - # Input & Upload - gr.Textbox.update(visible=True), - gr.Checkbox.update(visible=False), - gr.Audio.update(visible=False), - # Youtube - gr.Dropdown.update(visible=False), - gr.Textbox.update(visible=False), - gr.Dropdown.update(visible=False), - gr.Button.update(visible=False), - gr.Audio.update(visible=False), - gr.Audio.update(visible=False), - gr.Audio.update(visible=False), - gr.Slider.update(visible=False), - gr.Audio.update(visible=False), - gr.Button.update(visible=False), - # TTS - gr.Textbox.update(visible=False), - gr.Dropdown.update(visible=False) - ) - elif vc_audio_mode == "Upload audio": - return ( - # Input & Upload - gr.Textbox.update(visible=False), - gr.Checkbox.update(visible=True), - gr.Audio.update(visible=True), - # Youtube - gr.Dropdown.update(visible=False), - gr.Textbox.update(visible=False), - gr.Dropdown.update(visible=False), - gr.Button.update(visible=False), - gr.Audio.update(visible=False), - gr.Audio.update(visible=False), - gr.Audio.update(visible=False), - gr.Slider.update(visible=False), - gr.Audio.update(visible=False), - gr.Button.update(visible=False), - # TTS - gr.Textbox.update(visible=False), - gr.Dropdown.update(visible=False) - ) - elif vc_audio_mode == "Youtube": - return ( - # Input & Upload - gr.Textbox.update(visible=False), - gr.Checkbox.update(visible=False), - gr.Audio.update(visible=False), - # Youtube - gr.Dropdown.update(visible=True), - gr.Textbox.update(visible=True), - gr.Dropdown.update(visible=True), - gr.Button.update(visible=True), - gr.Audio.update(visible=True), - gr.Audio.update(visible=True), - 
gr.Audio.update(visible=True), - gr.Slider.update(visible=True), - gr.Audio.update(visible=True), - gr.Button.update(visible=True), - # TTS - gr.Textbox.update(visible=False), - gr.Dropdown.update(visible=False) - ) - elif vc_audio_mode == "TTS Audio": - return ( - # Input & Upload - gr.Textbox.update(visible=False), - gr.Checkbox.update(visible=False), - gr.Audio.update(visible=False), - # Youtube - gr.Dropdown.update(visible=False), - gr.Textbox.update(visible=False), - gr.Dropdown.update(visible=False), - gr.Button.update(visible=False), - gr.Audio.update(visible=False), - gr.Audio.update(visible=False), - gr.Audio.update(visible=False), - gr.Slider.update(visible=False), - gr.Audio.update(visible=False), - gr.Button.update(visible=False), - # TTS - gr.Textbox.update(visible=True), - gr.Dropdown.update(visible=True) - ) - else: - return ( - # Input & Upload - gr.Textbox.update(visible=False), - gr.Checkbox.update(visible=True), - gr.Audio.update(visible=True), - # Youtube - gr.Dropdown.update(visible=False), - gr.Textbox.update(visible=False), - gr.Dropdown.update(visible=False), - gr.Button.update(visible=False), - gr.Audio.update(visible=False), - gr.Audio.update(visible=False), - gr.Audio.update(visible=False), - gr.Slider.update(visible=False), - gr.Audio.update(visible=False), - gr.Button.update(visible=False), - # TTS - gr.Textbox.update(visible=False), - gr.Dropdown.update(visible=False) - ) - -def use_microphone(microphone): - if microphone == True: - return gr.Audio.update(source="microphone") - else: - return gr.Audio.update(source="upload") - -if __name__ == '__main__': - load_hubert() - models = load_model() - tts_voice_list = asyncio.new_event_loop().run_until_complete(edge_tts.list_voices()) - voices = [f"{v['ShortName']}-{v['Gender']}" for v in tts_voice_list] - with gr.Blocks() as app: - gr.Markdown( - "#
    Combined Genshin Impact RVC Models\n" - "##
    The input audio should be a clean, pure voice without background music.\n" - "###
    It is recommended to use google colab for more features. \n" - "[![image](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1Tgr6q9kKiB5P37rUitrB3CsNl8JP9iQZ?usp=sharing)\n\n" - "[![Original Repo](https://badgen.net/badge/icon/github?icon=github&label=Original%20Repo)](https://github.com/RVC-Project/Retrieval-based-Voice-Conversion-WebUI)" - ) - with gr.Tabs(): - for (name, title, author, cover, model_version, vc_fn) in models: - with gr.TabItem(name): - with gr.Row(): - gr.Markdown( - '
    ' - f'
    {title}
    \n'+ - f'
    RVC {model_version} Model
    \n'+ - (f'
    Model author: {author}
    ' if author else "")+ - (f'' if cover else "")+ - '
    ' - ) - with gr.Row(): - with gr.Column(): - vc_audio_mode = gr.Dropdown(label="Input voice", choices=audio_mode, allow_custom_value=False, value="Upload audio") - # Input - vc_input = gr.Textbox(label="Input audio path", visible=False) - # Upload - vc_microphone_mode = gr.Checkbox(label="Use Microphone", value=False, visible=True, interactive=True) - vc_upload = gr.Audio(label="Upload audio file", source="upload", visible=True, interactive=True) - # Youtube - vc_download_audio = gr.Dropdown(label="Provider", choices=["Youtube"], allow_custom_value=False, visible=False, value="Youtube", info="Select provider (Default: Youtube)") - vc_link = gr.Textbox(label="Youtube URL", visible=False, info="Example: https://www.youtube.com/watch?v=Nc0sB1Bmf-A", placeholder="https://www.youtube.com/watch?v=...") - vc_split_model = gr.Dropdown(label="Splitter Model", choices=["htdemucs", "mdx_extra_q"], allow_custom_value=False, visible=False, value="htdemucs", info="Select the splitter model (Default: htdemucs)") - vc_split = gr.Button("Split Audio", variant="primary", visible=False) - vc_vocal_preview = gr.Audio(label="Vocal Preview", visible=False) - vc_inst_preview = gr.Audio(label="Instrumental Preview", visible=False) - vc_audio_preview = gr.Audio(label="Audio Preview", visible=False) - # TTS - tts_text = gr.Textbox(visible=False, label="TTS text", info="Text to speech input") - tts_voice = gr.Dropdown(label="Edge-tts speaker", choices=voices, visible=False, allow_custom_value=False, value="en-US-AnaNeural-Female") - with gr.Column(): - vc_transform0 = gr.Number(label="Transpose", value=0, info='Type "12" to change from male to female voice. Type "-12" to change female to male voice') - f0method0 = gr.Radio( - label="Pitch extraction algorithm", - info=f0method_info, - choices=f0method_mode, - value="pm", - interactive=True - ) - index_rate1 = gr.Slider( - minimum=0, - maximum=1, - label="Retrieval feature ratio", - info="(Default: 0.7)", - value=0.7, - interactive=True, - ) - filter_radius0 = gr.Slider( - minimum=0, - maximum=7, - label="Apply Median Filtering", - info="The value represents the filter radius and can reduce breathiness.", - value=3, - step=1, - interactive=True, - ) - resample_sr0 = gr.Slider( - minimum=0, - maximum=48000, - label="Resample the output audio", - info="Resample the output audio in post-processing to the final sample rate. Set to 0 for no resampling", - value=0, - step=1, - interactive=True, - ) - rms_mix_rate0 = gr.Slider( - minimum=0, - maximum=1, - label="Volume Envelope", - info="Use the volume envelope of the input to replace or mix with the volume envelope of the output. The closer the ratio is to 1, the more the output envelope is used", - value=1, - interactive=True, - ) - protect0 = gr.Slider( - minimum=0, - maximum=0.5, - label="Voice Protection", - info="Protect voiceless consonants and breath sounds to prevent artifacts such as tearing in electronic music. Set to 0.5 to disable. 
Decrease the value to increase protection, but it may reduce indexing accuracy", - value=0.5, - step=0.01, - interactive=True, - ) - with gr.Column(): - vc_log = gr.Textbox(label="Output Information", interactive=False) - vc_output = gr.Audio(label="Output Audio", interactive=False) - vc_convert = gr.Button("Convert", variant="primary") - vc_volume = gr.Slider( - minimum=0, - maximum=10, - label="Vocal volume", - value=4, - interactive=True, - step=1, - info="Adjust vocal volume (Default: 4}", - visible=False - ) - vc_combined_output = gr.Audio(label="Output Combined Audio", visible=False) - vc_combine = gr.Button("Combine",variant="primary", visible=False) - vc_convert.click( - fn=vc_fn, - inputs=[ - vc_audio_mode, - vc_input, - vc_upload, - tts_text, - tts_voice, - vc_transform0, - f0method0, - index_rate1, - filter_radius0, - resample_sr0, - rms_mix_rate0, - protect0, - ], - outputs=[vc_log ,vc_output] - ) - vc_split.click( - fn=cut_vocal_and_inst, - inputs=[vc_link, vc_download_audio, vc_split_model], - outputs=[vc_vocal_preview, vc_inst_preview, vc_audio_preview, vc_input] - ) - vc_combine.click( - fn=combine_vocal_and_inst, - inputs=[vc_output, vc_volume, vc_split_model], - outputs=[vc_combined_output] - ) - vc_microphone_mode.change( - fn=use_microphone, - inputs=vc_microphone_mode, - outputs=vc_upload - ) - vc_audio_mode.change( - fn=change_audio_mode, - inputs=[vc_audio_mode], - outputs=[ - vc_input, - vc_microphone_mode, - vc_upload, - vc_download_audio, - vc_link, - vc_split_model, - vc_split, - vc_vocal_preview, - vc_inst_preview, - vc_audio_preview, - vc_volume, - vc_combined_output, - vc_combine, - tts_text, - tts_voice - ] - ) - app.queue(concurrency_count=1, max_size=20, api_open=config.api).launch(share=config.colab) \ No newline at end of file diff --git a/spaces/Yarumo/whisper/README.md b/spaces/Yarumo/whisper/README.md deleted file mode 100644 index b9f0c4d111d7db6010cefee476710dbaf1dc6b65..0000000000000000000000000000000000000000 --- a/spaces/Yarumo/whisper/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Whisper -emoji: 📉 -colorFrom: pink -colorTo: yellow -sdk: gradio -sdk_version: 3.3.1 -app_file: app.py -pinned: false -duplicated_from: openai/whisper ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Yassine/Stego/common.cpp b/spaces/Yassine/Stego/common.cpp deleted file mode 100644 index f95ebd16483e46bd474f34db7ace35eefcbbd6a3..0000000000000000000000000000000000000000 --- a/spaces/Yassine/Stego/common.cpp +++ /dev/null @@ -1,177 +0,0 @@ -#include -#include -#include -#include "common.h" - -#include -#include -#include - -u32 mats[] = { -0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, -109, 71, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, -109, 79, 83, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, -89, 127, 99, 69, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, -95, 75, 121, 71, 109, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, -71, 117, 127, 75, 89, 109, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, -111, 83, 127, 97, 77, 117, 89, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, -113, 111, 87, 93, 99, 73, 117, 123, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, -89, 97, 115, 81, 77, 117, 87, 127, 123, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, -95, 107, 109, 79, 117, 67, 121, 123, 103, 81, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, -117, 71, 109, 79, 101, 115, 123, 81, 77, 95, 87, 0, 0, 0, 0, 0, 0, 0, 0, 0, -119, 73, 81, 125, 123, 103, 99, 127, 109, 69, 89, 107, 0, 0, 0, 0, 0, 0, 0, 0, -87, 127, 117, 81, 
97, 67, 101, 93, 105, 109, 75, 115, 123, 0, 0, 0, 0, 0, 0, 0, -93, 107, 115, 95, 121, 81, 75, 99, 111, 85, 79, 119, 105, 65, 0, 0, 0, 0, 0, 0, -123, 85, 79, 87, 127, 65, 115, 93, 101, 111, 73, 119, 105, 99, 91, 0, 0, 0, 0, 0, -127, 99, 121, 111, 71, 109, 103, 117, 113, 65, 105, 87, 101, 75, 93, 123, 0, 0, 0, 0, -89, 93, 111, 117, 103, 127, 77, 95, 85, 105, 67, 69, 113, 123, 99, 75, 119, 0, 0, 0, -65, 99, 77, 85, 101, 91, 125, 103, 127, 111, 69, 93, 75, 95, 119, 113, 105, 115, 0, 0, -91, 117, 77, 107, 101, 127, 115, 83, 85, 119, 105, 113, 93, 71, 111, 121, 97, 73, 81, 0, -95, 111, 117, 83, 97, 75, 87, 127, 85, 93, 105, 115, 77, 101, 99, 89, 71, 121, 67, 123, -0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, -247, 149, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, -143, 187, 233, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, -235, 141, 161, 207, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, -219, 185, 151, 255, 197, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, -251, 159, 217, 167, 221, 133, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, -201, 143, 231, 251, 189, 169, 155, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, -143, 245, 177, 253, 217, 163, 155, 197, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, -233, 145, 219, 185, 231, 215, 173, 129, 243, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, -139, 201, 177, 167, 213, 253, 227, 199, 185, 159, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, -183, 145, 223, 199, 245, 139, 187, 157, 217, 237, 163, 0, 0, 0, 0, 0, 0, 0, 0, 0, -223, 145, 137, 219, 197, 243, 247, 189, 135, 181, 207, 235, 0, 0, 0, 0, 0, 0, 0, 0, -229, 205, 237, 187, 135, 241, 183, 163, 151, 243, 213, 137, 159, 0, 0, 0, 0, 0, 0, 0, -205, 165, 239, 211, 231, 247, 133, 227, 219, 189, 249, 185, 149, 129, 0, 0, 0, 0, 0, 0, -131, 213, 255, 207, 227, 221, 173, 185, 197, 147, 235, 247, 217, 143, 229, 0, 0, 0, 0, 0, -247, 139, 157, 223, 187, 147, 177, 249, 165, 153, 161, 227, 237, 255, 207, 197, 0, 0, 0, 0, -205, 139, 239, 183, 147, 187, 249, 225, 253, 163, 173, 233, 209, 159, 255, 149, 197, 0, 0, 0, -177, 173, 195, 137, 211, 249, 191, 135, 175, 155, 229, 215, 203, 225, 247, 237, 221, 227, 0, 0, -159, 189, 195, 163, 255, 147, 219, 247, 231, 157, 139, 173, 185, 197, 207, 245, 193, 241, 233, 0, -235, 179, 219, 253, 241, 131, 213, 231, 247, 223, 201, 193, 191, 249, 145, 237, 155, 165, 141, 173, -0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, -339, 489, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, -469, 441, 379, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, -371, 439, 277, 479, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, -413, 489, 443, 327, 357, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, -509, 453, 363, 409, 425, 303, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, -377, 337, 443, 487, 467, 421, 299, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, -497, 349, 279, 395, 365, 427, 399, 297, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, -435, 373, 395, 507, 441, 325, 279, 289, 319, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, -301, 379, 509, 411, 293, 467, 455, 261, 343, 447, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, -367, 289, 445, 397, 491, 279, 373, 315, 435, 473, 327, 0, 0, 0, 0, 0, 0, 0, 0, 0, -465, 379, 319, 275, 293, 407, 373, 427, 445, 497, 347, 417, 0, 0, 0, 0, 0, 0, 0, 0, -473, 401, 267, 311, 359, 347, 333, 441, 405, 381, 497, 463, 269, 0, 0, 0, 0, 0, 0, 0, -467, 283, 405, 303, 269, 337, 385, 441, 511, 361, 455, 355, 353, 311, 0, 0, 0, 0, 0, 0, -489, 311, 259, 287, 445, 471, 419, 345, 289, 391, 405, 411, 371, 457, 331, 0, 0, 0, 0, 0, -493, 427, 305, 309, 339, 447, 381, 335, 323, 423, 453, 457, 443, 313, 371, 353, 0, 0, 0, 0, 
-271, 301, 483, 401, 369, 367, 435, 329, 319, 473, 441, 491, 325, 455, 389, 341, 317, 0, 0, 0, -333, 311, 509, 319, 391, 441, 279, 467, 263, 487, 393, 405, 473, 303, 353, 337, 451, 365, 0, 0, -301, 477, 361, 445, 505, 363, 375, 277, 271, 353, 337, 503, 457, 357, 287, 323, 435, 345, 497, 0, -281, 361, 413, 287, 475, 359, 483, 351, 337, 425, 453, 423, 301, 309, 331, 499, 507, 277, 375, 471, -0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, -519, 885, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, -579, 943, 781, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, -685, 663, 947, 805, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, -959, 729, 679, 609, 843, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, -959, 973, 793, 747, 573, 659, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, -631, 559, 1023, 805, 709, 913, 979, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, -607, 867, 731, 1013, 625, 973, 825, 925, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, -743, 727, 851, 961, 813, 605, 527, 563, 867, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, -863, 921, 943, 523, 653, 969, 563, 597, 753, 621, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, -729, 747, 901, 839, 815, 935, 777, 641, 1011, 603, 973, 0, 0, 0, 0, 0, 0, 0, 0, 0, -581, 831, 659, 877, 781, 929, 1003, 1021, 655, 729, 983, 611, 0, 0, 0, 0, 0, 0, 0, 0, -873, 1013, 859, 887, 579, 697, 769, 927, 679, 683, 911, 753, 733, 0, 0, 0, 0, 0, 0, 0, -991, 767, 845, 977, 923, 609, 633, 769, 533, 829, 859, 759, 687, 657, 0, 0, 0, 0, 0, 0, -781, 663, 731, 829, 851, 941, 601, 997, 719, 675, 947, 939, 657, 549, 647, 0, 0, 0, 0, 0, -619, 879, 681, 601, 1015, 797, 737, 841, 839, 869, 931, 789, 767, 547, 823, 635, 0, 0, 0, 0, -855, 567, 591, 1019, 745, 945, 769, 671, 803, 799, 925, 701, 517, 653, 885, 731, 581, 0, 0, 0, -887, 643, 785, 611, 905, 669, 703, 1017, 575, 763, 625, 869, 731, 861, 847, 941, 933, 577, 0, 0, -867, 991, 1021, 709, 599, 741, 933, 921, 619, 789, 957, 791, 969, 525, 591, 763, 657, 683, 829, 0, -1009, 1003, 901, 715, 643, 803, 805, 975, 667, 619, 569, 769, 685, 767, 853, 671, 881, 907, 955, 523, -0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, -1655, 1493, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, -1859, 1481, 1119, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, -1395, 1737, 1973, 1259, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, -1339, 1067, 1679, 1641, 2021, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, -1657, 1331, 1783, 2043, 1097, 1485, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, -1611, 1141, 1849, 2001, 1511, 1359, 1245, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, -1215, 1733, 1461, 2025, 1251, 1945, 1649, 1851, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, -1275, 1373, 1841, 1509, 1631, 1737, 1055, 1891, 1041, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, -1715, 1117, 1503, 2025, 1027, 1959, 1365, 1739, 1301, 1233, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, -1101, 1127, 1145, 1157, 1195, 1747, 1885, 1527, 1325, 2033, 1935, 0, 0, 0, 0, 0, 0, 0, 0, 0, -1369, 1255, 1809, 1889, 1183, 1495, 1223, 1781, 2029, 1327, 1075, 1065, 0, 0, 0, 0, 0, 0, 0, 0, -1157, 1499, 1871, 1365, 1559, 1149, 1293, 1571, 1641, 1971, 1807, 1673, 2023, 0, 0, 0, 0, 0, 0, 0, -1929, 1533, 1135, 1359, 1547, 1723, 1529, 1107, 1273, 1879, 1709, 1141, 1897, 1161, 0, 0, 0, 0, 0, 0, -1861, 1801, 1675, 1699, 1103, 1665, 1657, 1287, 1459, 2047, 1181, 1835, 1085, 1377, 1511, 0, 0, 0, 0, 0, -1915, 1753, 1945, 1391, 1205, 1867, 1895, 1439, 1719, 1185, 1685, 1139, 1229, 1791, 1821, 1295, 0, 0, 0, 0, -1193, 1951, 1469, 1737, 1047, 1227, 1989, 1717, 1735, 1643, 1857, 1965, 1405, 1575, 1907, 1173, 1299, 0, 0, 0, -1641, 1887, 1129, 
1357, 1543, 1279, 1687, 1975, 1839, 1775, 1109, 1337, 1081, 1435, 1603, 2037, 1249, 1153, 0, 0, -1999, 1065, 1387, 1977, 1555, 1915, 1219, 1469, 1889, 1933, 1819, 1315, 1319, 1693, 1143, 1361, 1815, 1109, 1631, 0, -1253, 1051, 1827, 1871, 1613, 1759, 2015, 1229, 1585, 1057, 1409, 1831, 1943, 1491, 1557, 1195, 1339, 1449, 1675, 1679, -0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, -3475, 2685, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, -3865, 2883, 2519, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, -4019, 3383, 3029, 2397, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, -2725, 3703, 3391, 2235, 2669, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, -2489, 3151, 2695, 3353, 4029, 3867, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, -2467, 2137, 3047, 3881, 3125, 2683, 3631, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, -2739, 3163, 2137, 4031, 2967, 3413, 3749, 2301, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, -3443, 2305, 3365, 2231, 2127, 3697, 3535, 4041, 2621, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, -3641, 2777, 2789, 2357, 3003, 2729, 3229, 2925, 3443, 2291, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, -3567, 2361, 2061, 2219, 3905, 2285, 2871, 3187, 2455, 2783, 2685, 0, 0, 0, 0, 0, 0, 0, 0, 0, -4043, 2615, 2385, 3911, 3267, 2871, 3667, 3037, 2905, 2921, 2129, 2299, 0, 0, 0, 0, 0, 0, 0, 0, -2315, 2997, 3743, 2729, 3117, 2297, 2585, 3141, 3283, 3943, 3613, 3345, 4047, 0, 0, 0, 0, 0, 0, 0, -3967, 3069, 3377, 3909, 3691, 2439, 2533, 3075, 2129, 3319, 3433, 3035, 2745, 2631, 0, 0, 0, 0, 0, 0, -3023, 3349, 2111, 2385, 3907, 3959, 3425, 3801, 2135, 2671, 2637, 2977, 2999, 3107, 2277, 0, 0, 0, 0, 0, -2713, 2695, 3447, 2537, 2685, 3755, 3953, 3901, 3193, 3107, 2407, 3485, 2097, 3091, 2139, 2261, 0, 0, 0, 0, -3065, 4059, 2813, 3043, 2849, 3477, 3205, 3381, 2747, 3203, 3937, 3603, 3625, 3559, 3831, 2243, 2343, 0, 0, 0, -3999, 3183, 2717, 2307, 2103, 3353, 2761, 2541, 2375, 2327, 3277, 2607, 3867, 3037, 2163, 2261, 3649, 2929, 0, 0, -2543, 2415, 3867, 3709, 3161, 2369, 4087, 2205, 3785, 2515, 2133, 2913, 3941, 3371, 2605, 3269, 3385, 3025, 2323, 0, -2939, 2775, 3663, 2413, 2573, 2205, 3821, 3513, 2699, 3379, 2479, 2663, 2367, 2517, 3027, 3201, 3177, 3281, 4069, 2069, -}; - -u32 *getMatrix(int width, int height) { - u32 *cols; - cols = (u32*)malloc(width * sizeof(u32)); - - if(width >= 2 && width <= 20 && height >= 7 && height <= 12) { // get it from the array - memcpy(cols, &mats[(height - 7) * 400 + (width - 1) * 20], width * sizeof(u32)); - } else { // generate a random one - int i, j; - u32 r, mask, bop; - - /* This was here because random submatrices designed with the same columns are known to be bad. But sometimes the - * payload is so small that there is no other way. - * - * Modified by Tomas Filler. - */ - - boost::mt19937 generator( 1 ); - boost::variate_generator< boost::mt19937&, boost::uniform_int< > > rng( generator, boost::uniform_int< >( 0, RAND_MAX ) ); - - mask = (1 << (height - 2)) - 1; - bop = (1 << (height - 1)) + 1; - if((1 << (height - 2)) < width) { - // fprintf(stderr, "Cannot generate matrix for this payload. Choose a higher constraint height.\n"); - // generate the columns randomly but let first and last row be full of 1s. - // I know, there will be identical columns. 
- for(i = 0; i < width; i++) { - r = ((rng() & mask) << 1) + bop; - cols[i] = r; - } - } else { - for(i = 0; i < width; i++) { - for(j = -1; j < i;) { - r = ((rng() & mask) << 1) + bop; - for(j = 0; j < i; j++) { - if(cols[j] == r) - break; - } - } - cols[i] = r; - } - } - - } - return cols; -} diff --git a/spaces/Yuelili/RealNagrse/realesrgan/models/__init__.py b/spaces/Yuelili/RealNagrse/realesrgan/models/__init__.py deleted file mode 100644 index 0be7105dc75d150c49976396724085f678dc0675..0000000000000000000000000000000000000000 --- a/spaces/Yuelili/RealNagrse/realesrgan/models/__init__.py +++ /dev/null @@ -1,10 +0,0 @@ -import importlib -from basicsr.utils import scandir -from os import path as osp - -# automatically scan and import model modules for registry -# scan all the files that end with '_model.py' under the model folder -model_folder = osp.dirname(osp.abspath(__file__)) -model_filenames = [osp.splitext(osp.basename(v))[0] for v in scandir(model_folder) if v.endswith('_model.py')] -# import all the model modules -_model_modules = [importlib.import_module(f'realesrgan.models.{file_name}') for file_name in model_filenames] diff --git a/spaces/Zaixi/ICLR_FLAG/utils/chem.py b/spaces/Zaixi/ICLR_FLAG/utils/chem.py deleted file mode 100644 index 64c279fe507ee337e259e62bfd25e6d07e6a2003..0000000000000000000000000000000000000000 --- a/spaces/Zaixi/ICLR_FLAG/utils/chem.py +++ /dev/null @@ -1,119 +0,0 @@ -import copy -import torch -from io import BytesIO -from openbabel import openbabel -from torch_geometric.utils import to_networkx -from torch_geometric.data import Data -from torch_scatter import scatter -from rdkit import Chem -from rdkit.Chem.rdchem import Mol, HybridizationType, BondType -from rdkit.Chem.rdchem import BondType as BT - - -BOND_TYPES = {t: i for i, t in enumerate(BT.names.values())} -BOND_NAMES = {i: t for i, t in enumerate(BT.names.keys())} - - - -def rdmol_to_data(mol, smiles=None): - assert mol.GetNumConformers() == 1 - N = mol.GetNumAtoms() - - pos = torch.tensor(mol.GetConformer(0).GetPositions(), dtype=torch.float32) - - atomic_number = [] - aromatic = [] - sp = [] - sp2 = [] - sp3 = [] - num_hs = [] - for atom in mol.GetAtoms(): - atomic_number.append(atom.GetAtomicNum()) - aromatic.append(1 if atom.GetIsAromatic() else 0) - hybridization = atom.GetHybridization() - sp.append(1 if hybridization == HybridizationType.SP else 0) - sp2.append(1 if hybridization == HybridizationType.SP2 else 0) - sp3.append(1 if hybridization == HybridizationType.SP3 else 0) - - z = torch.tensor(atomic_number, dtype=torch.long) - - row, col, edge_type = [], [], [] - for bond in mol.GetBonds(): - start, end = bond.GetBeginAtomIdx(), bond.GetEndAtomIdx() - row += [start, end] - col += [end, start] - edge_type += 2 * [BOND_TYPES[bond.GetBondType()]] - - edge_index = torch.tensor([row, col], dtype=torch.long) - edge_type = torch.tensor(edge_type) - - perm = (edge_index[0] * N + edge_index[1]).argsort() - edge_index = edge_index[:, perm] - edge_type = edge_type[perm] - - row, col = edge_index - hs = (z == 1).to(torch.float32) - - num_hs = scatter(hs[row], col, dim_size=N, reduce='sum').tolist() - - if smiles is None: - smiles = Chem.MolToSmiles(Chem.RemoveHs(mol)) - - data = Data(atom_type=z, pos=pos, edge_index=edge_index, edge_type=edge_type, - rdmol=copy.deepcopy(mol), smiles=smiles) - data.nx = to_networkx(data, to_undirected=True) - - return data - - -def generated_to_xyz(data): - ptable = Chem.GetPeriodicTable() - - num_atoms = data.ligand_context_element.size(0) - xyz = "%d\n\n" % 
(num_atoms, ) - for i in range(num_atoms): - symb = ptable.GetElementSymbol(data.ligand_context_element[i].item()) - x, y, z = data.ligand_context_pos[i].clone().cpu().tolist() - xyz += "%s %.8f %.8f %.8f\n" % (symb, x, y, z) - - return xyz - - -def generated_to_sdf(data): - xyz = generated_to_xyz(data) - obConversion = openbabel.OBConversion() - obConversion.SetInAndOutFormats("xyz", "sdf") - - mol = openbabel.OBMol() - obConversion.ReadString(mol, xyz) - sdf = obConversion.WriteString(mol) - return sdf - - -def sdf_to_rdmol(sdf): - stream = BytesIO(sdf.encode()) - suppl = Chem.ForwardSDMolSupplier(stream) - for mol in suppl: - return mol - return None - -def generated_to_rdmol(data): - sdf = generated_to_sdf(data) - return sdf_to_rdmol(sdf) - - -def filter_rd_mol(rdmol): - ring_info = rdmol.GetRingInfo() - ring_info.AtomRings() - rings = [set(r) for r in ring_info.AtomRings()] - - # 3-3 ring intersection - for i, ring_a in enumerate(rings): - if len(ring_a) != 3:continue - for j, ring_b in enumerate(rings): - if i <= j: continue - inter = ring_a.intersection(ring_b) - if (len(ring_b) == 3) and (len(inter) > 0): - return False - - return True \ No newline at end of file diff --git a/spaces/Zengyf-CVer/watermarking_lab/style.css b/spaces/Zengyf-CVer/watermarking_lab/style.css deleted file mode 100644 index ec3ee34e87dd302756e8746fe264d70f4f454454..0000000000000000000000000000000000000000 --- a/spaces/Zengyf-CVer/watermarking_lab/style.css +++ /dev/null @@ -1,7 +0,0 @@ -h1 { - text-align: center; -} - -#content_align { - text-align: center; -} diff --git a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet_null/core/post_processing/merge_augs.py b/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet_null/core/post_processing/merge_augs.py deleted file mode 100644 index dbcf79d1ac20ddc32cb1605e06d253803250c855..0000000000000000000000000000000000000000 --- a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet_null/core/post_processing/merge_augs.py +++ /dev/null @@ -1,150 +0,0 @@ -import copy -import warnings - -import numpy as np -import torch -from mmcv import ConfigDict -from mmcv.ops import nms - -from ..bbox import bbox_mapping_back - - -def merge_aug_proposals(aug_proposals, img_metas, cfg): - """Merge augmented proposals (multiscale, flip, etc.) - - Args: - aug_proposals (list[Tensor]): proposals from different testing - schemes, shape (n, 5). Note that they are not rescaled to the - original image size. - - img_metas (list[dict]): list of image info dict where each dict has: - 'img_shape', 'scale_factor', 'flip', and may also contain - 'filename', 'ori_shape', 'pad_shape', and 'img_norm_cfg'. - For details on the values of these keys see - `mmdet/datasets/pipelines/formatting.py:Collect`. - - cfg (dict): rpn test config. - - Returns: - Tensor: shape (n, 4), proposals corresponding to original image scale. 
- """ - - cfg = copy.deepcopy(cfg) - - # deprecate arguments warning - if 'nms' not in cfg or 'max_num' in cfg or 'nms_thr' in cfg: - warnings.warn( - 'In rpn_proposal or test_cfg, ' - 'nms_thr has been moved to a dict named nms as ' - 'iou_threshold, max_num has been renamed as max_per_img, ' - 'name of original arguments and the way to specify ' - 'iou_threshold of NMS will be deprecated.') - if 'nms' not in cfg: - cfg.nms = ConfigDict(dict(type='nms', iou_threshold=cfg.nms_thr)) - if 'max_num' in cfg: - if 'max_per_img' in cfg: - assert cfg.max_num == cfg.max_per_img, f'You set max_num and ' \ - f'max_per_img at the same time, but get {cfg.max_num} ' \ - f'and {cfg.max_per_img} respectively' \ - f'Please delete max_num which will be deprecated.' - else: - cfg.max_per_img = cfg.max_num - if 'nms_thr' in cfg: - assert cfg.nms.iou_threshold == cfg.nms_thr, f'You set ' \ - f'iou_threshold in nms and ' \ - f'nms_thr at the same time, but get ' \ - f'{cfg.nms.iou_threshold} and {cfg.nms_thr}' \ - f' respectively. Please delete the nms_thr ' \ - f'which will be deprecated.' - - recovered_proposals = [] - for proposals, img_info in zip(aug_proposals, img_metas): - img_shape = img_info['img_shape'] - scale_factor = img_info['scale_factor'] - flip = img_info['flip'] - flip_direction = img_info['flip_direction'] - _proposals = proposals.clone() - _proposals[:, :4] = bbox_mapping_back(_proposals[:, :4], img_shape, - scale_factor, flip, - flip_direction) - recovered_proposals.append(_proposals) - aug_proposals = torch.cat(recovered_proposals, dim=0) - merged_proposals, _ = nms(aug_proposals[:, :4].contiguous(), - aug_proposals[:, -1].contiguous(), - cfg.nms.iou_threshold) - scores = merged_proposals[:, 4] - _, order = scores.sort(0, descending=True) - num = min(cfg.max_per_img, merged_proposals.shape[0]) - order = order[:num] - merged_proposals = merged_proposals[order, :] - return merged_proposals - - -def merge_aug_bboxes(aug_bboxes, aug_scores, img_metas, rcnn_test_cfg): - """Merge augmented detection bboxes and scores. - - Args: - aug_bboxes (list[Tensor]): shape (n, 4*#class) - aug_scores (list[Tensor] or None): shape (n, #class) - img_shapes (list[Tensor]): shape (3, ). - rcnn_test_cfg (dict): rcnn test config. - - Returns: - tuple: (bboxes, scores) - """ - recovered_bboxes = [] - for bboxes, img_info in zip(aug_bboxes, img_metas): - img_shape = img_info[0]['img_shape'] - scale_factor = img_info[0]['scale_factor'] - flip = img_info[0]['flip'] - flip_direction = img_info[0]['flip_direction'] - bboxes = bbox_mapping_back(bboxes, img_shape, scale_factor, flip, - flip_direction) - recovered_bboxes.append(bboxes) - bboxes = torch.stack(recovered_bboxes).mean(dim=0) - if aug_scores is None: - return bboxes - else: - scores = torch.stack(aug_scores).mean(dim=0) - return bboxes, scores - - -def merge_aug_scores(aug_scores): - """Merge augmented bbox scores.""" - if isinstance(aug_scores[0], torch.Tensor): - return torch.mean(torch.stack(aug_scores), dim=0) - else: - return np.mean(aug_scores, axis=0) - - -def merge_aug_masks(aug_masks, img_metas, rcnn_test_cfg, weights=None): - """Merge augmented mask prediction. - - Args: - aug_masks (list[ndarray]): shape (n, #class, h, w) - img_shapes (list[ndarray]): shape (3, ). - rcnn_test_cfg (dict): rcnn test config. 
- - Returns: - tuple: (bboxes, scores) - """ - recovered_masks = [] - for mask, img_info in zip(aug_masks, img_metas): - flip = img_info[0]['flip'] - flip_direction = img_info[0]['flip_direction'] - if flip: - if flip_direction == 'horizontal': - mask = mask[:, :, :, ::-1] - elif flip_direction == 'vertical': - mask = mask[:, :, ::-1, :] - else: - raise ValueError( - f"Invalid flipping direction '{flip_direction}'") - recovered_masks.append(mask) - - if weights is None: - merged_masks = np.mean(recovered_masks, axis=0) - else: - merged_masks = np.average( - np.array(recovered_masks), axis=0, weights=np.array(weights)) - return merged_masks diff --git a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet_null/models/backbones/detectors_resnet.py b/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet_null/models/backbones/detectors_resnet.py deleted file mode 100644 index 519db464493c7c7b60fc34be1d21add2235ec341..0000000000000000000000000000000000000000 --- a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet_null/models/backbones/detectors_resnet.py +++ /dev/null @@ -1,305 +0,0 @@ -import torch.nn as nn -import torch.utils.checkpoint as cp -from mmcv.cnn import build_conv_layer, build_norm_layer, constant_init - -from ..builder import BACKBONES -from .resnet import Bottleneck as _Bottleneck -from .resnet import ResNet - - -class Bottleneck(_Bottleneck): - r"""Bottleneck for the ResNet backbone in `DetectoRS - `_. - - This bottleneck allows the users to specify whether to use - SAC (Switchable Atrous Convolution) and RFP (Recursive Feature Pyramid). - - Args: - inplanes (int): The number of input channels. - planes (int): The number of output channels before expansion. - rfp_inplanes (int, optional): The number of channels from RFP. - Default: None. If specified, an additional conv layer will be - added for ``rfp_feat``. Otherwise, the structure is the same as - base class. - sac (dict, optional): Dictionary to construct SAC. Default: None. 
- """ - expansion = 4 - - def __init__(self, - inplanes, - planes, - rfp_inplanes=None, - sac=None, - **kwargs): - super(Bottleneck, self).__init__(inplanes, planes, **kwargs) - - assert sac is None or isinstance(sac, dict) - self.sac = sac - self.with_sac = sac is not None - if self.with_sac: - self.conv2 = build_conv_layer( - self.sac, - planes, - planes, - kernel_size=3, - stride=self.conv2_stride, - padding=self.dilation, - dilation=self.dilation, - bias=False) - - self.rfp_inplanes = rfp_inplanes - if self.rfp_inplanes: - self.rfp_conv = build_conv_layer( - None, - self.rfp_inplanes, - planes * self.expansion, - 1, - stride=1, - bias=True) - self.init_weights() - - def init_weights(self): - """Initialize the weights.""" - if self.rfp_inplanes: - constant_init(self.rfp_conv, 0) - - def rfp_forward(self, x, rfp_feat): - """The forward function that also takes the RFP features as input.""" - - def _inner_forward(x): - identity = x - - out = self.conv1(x) - out = self.norm1(out) - out = self.relu(out) - - if self.with_plugins: - out = self.forward_plugin(out, self.after_conv1_plugin_names) - - out = self.conv2(out) - out = self.norm2(out) - out = self.relu(out) - - if self.with_plugins: - out = self.forward_plugin(out, self.after_conv2_plugin_names) - - out = self.conv3(out) - out = self.norm3(out) - - if self.with_plugins: - out = self.forward_plugin(out, self.after_conv3_plugin_names) - - if self.downsample is not None: - identity = self.downsample(x) - - out += identity - - return out - - if self.with_cp and x.requires_grad: - out = cp.checkpoint(_inner_forward, x) - else: - out = _inner_forward(x) - - if self.rfp_inplanes: - rfp_feat = self.rfp_conv(rfp_feat) - out = out + rfp_feat - - out = self.relu(out) - - return out - - -class ResLayer(nn.Sequential): - """ResLayer to build ResNet style backbone for RPF in detectoRS. - - The difference between this module and base class is that we pass - ``rfp_inplanes`` to the first block. - - Args: - block (nn.Module): block used to build ResLayer. - inplanes (int): inplanes of block. - planes (int): planes of block. - num_blocks (int): number of blocks. - stride (int): stride of the first block. Default: 1 - avg_down (bool): Use AvgPool instead of stride conv when - downsampling in the bottleneck. Default: False - conv_cfg (dict): dictionary to construct and config conv layer. - Default: None - norm_cfg (dict): dictionary to construct and config norm layer. - Default: dict(type='BN') - downsample_first (bool): Downsample at the first block or last block. - False for Hourglass, True for ResNet. Default: True - rfp_inplanes (int, optional): The number of channels from RFP. - Default: None. If specified, an additional conv layer will be - added for ``rfp_feat``. Otherwise, the structure is the same as - base class. 
- """ - - def __init__(self, - block, - inplanes, - planes, - num_blocks, - stride=1, - avg_down=False, - conv_cfg=None, - norm_cfg=dict(type='BN'), - downsample_first=True, - rfp_inplanes=None, - **kwargs): - self.block = block - assert downsample_first, f'downsample_first={downsample_first} is ' \ - 'not supported in DetectoRS' - - downsample = None - if stride != 1 or inplanes != planes * block.expansion: - downsample = [] - conv_stride = stride - if avg_down and stride != 1: - conv_stride = 1 - downsample.append( - nn.AvgPool2d( - kernel_size=stride, - stride=stride, - ceil_mode=True, - count_include_pad=False)) - downsample.extend([ - build_conv_layer( - conv_cfg, - inplanes, - planes * block.expansion, - kernel_size=1, - stride=conv_stride, - bias=False), - build_norm_layer(norm_cfg, planes * block.expansion)[1] - ]) - downsample = nn.Sequential(*downsample) - - layers = [] - layers.append( - block( - inplanes=inplanes, - planes=planes, - stride=stride, - downsample=downsample, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - rfp_inplanes=rfp_inplanes, - **kwargs)) - inplanes = planes * block.expansion - for _ in range(1, num_blocks): - layers.append( - block( - inplanes=inplanes, - planes=planes, - stride=1, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - **kwargs)) - - super(ResLayer, self).__init__(*layers) - - -@BACKBONES.register_module() -class DetectoRS_ResNet(ResNet): - """ResNet backbone for DetectoRS. - - Args: - sac (dict, optional): Dictionary to construct SAC (Switchable Atrous - Convolution). Default: None. - stage_with_sac (list): Which stage to use sac. Default: (False, False, - False, False). - rfp_inplanes (int, optional): The number of channels from RFP. - Default: None. If specified, an additional conv layer will be - added for ``rfp_feat``. Otherwise, the structure is the same as - base class. - output_img (bool): If ``True``, the input image will be inserted into - the starting position of output. Default: False. - pretrained (str, optional): The pretrained model to load. 
- """ - - arch_settings = { - 50: (Bottleneck, (3, 4, 6, 3)), - 101: (Bottleneck, (3, 4, 23, 3)), - 152: (Bottleneck, (3, 8, 36, 3)) - } - - def __init__(self, - sac=None, - stage_with_sac=(False, False, False, False), - rfp_inplanes=None, - output_img=False, - pretrained=None, - **kwargs): - self.sac = sac - self.stage_with_sac = stage_with_sac - self.rfp_inplanes = rfp_inplanes - self.output_img = output_img - self.pretrained = pretrained - super(DetectoRS_ResNet, self).__init__(**kwargs) - - self.inplanes = self.stem_channels - self.res_layers = [] - for i, num_blocks in enumerate(self.stage_blocks): - stride = self.strides[i] - dilation = self.dilations[i] - dcn = self.dcn if self.stage_with_dcn[i] else None - sac = self.sac if self.stage_with_sac[i] else None - if self.plugins is not None: - stage_plugins = self.make_stage_plugins(self.plugins, i) - else: - stage_plugins = None - planes = self.base_channels * 2**i - res_layer = self.make_res_layer( - block=self.block, - inplanes=self.inplanes, - planes=planes, - num_blocks=num_blocks, - stride=stride, - dilation=dilation, - style=self.style, - avg_down=self.avg_down, - with_cp=self.with_cp, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - dcn=dcn, - sac=sac, - rfp_inplanes=rfp_inplanes if i > 0 else None, - plugins=stage_plugins) - self.inplanes = planes * self.block.expansion - layer_name = f'layer{i + 1}' - self.add_module(layer_name, res_layer) - self.res_layers.append(layer_name) - - self._freeze_stages() - - def make_res_layer(self, **kwargs): - """Pack all blocks in a stage into a ``ResLayer`` for DetectoRS.""" - return ResLayer(**kwargs) - - def forward(self, x): - """Forward function.""" - outs = list(super(DetectoRS_ResNet, self).forward(x)) - if self.output_img: - outs.insert(0, x) - return tuple(outs) - - def rfp_forward(self, x, rfp_feats): - """Forward function for RFP.""" - if self.deep_stem: - x = self.stem(x) - else: - x = self.conv1(x) - x = self.norm1(x) - x = self.relu(x) - x = self.maxpool(x) - outs = [] - for i, layer_name in enumerate(self.res_layers): - res_layer = getattr(self, layer_name) - rfp_feat = rfp_feats[i] if i > 0 else None - for layer in res_layer: - x = layer.rfp_forward(x, rfp_feat) - if i in self.out_indices: - outs.append(x) - return tuple(outs) diff --git a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmseg/models/segmentors/cascade_encoder_decoder.py b/spaces/abhishek/sketch-to-image/annotator/uniformer/mmseg/models/segmentors/cascade_encoder_decoder.py deleted file mode 100644 index 873957d8d6468147c994493d92ff5c1b15bfb703..0000000000000000000000000000000000000000 --- a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmseg/models/segmentors/cascade_encoder_decoder.py +++ /dev/null @@ -1,98 +0,0 @@ -from torch import nn - -from annotator.uniformer.mmseg.core import add_prefix -from annotator.uniformer.mmseg.ops import resize -from .. import builder -from ..builder import SEGMENTORS -from .encoder_decoder import EncoderDecoder - - -@SEGMENTORS.register_module() -class CascadeEncoderDecoder(EncoderDecoder): - """Cascade Encoder Decoder segmentors. - - CascadeEncoderDecoder almost the same as EncoderDecoder, while decoders of - CascadeEncoderDecoder are cascaded. The output of previous decoder_head - will be the input of next decoder_head. 
- """ - - def __init__(self, - num_stages, - backbone, - decode_head, - neck=None, - auxiliary_head=None, - train_cfg=None, - test_cfg=None, - pretrained=None): - self.num_stages = num_stages - super(CascadeEncoderDecoder, self).__init__( - backbone=backbone, - decode_head=decode_head, - neck=neck, - auxiliary_head=auxiliary_head, - train_cfg=train_cfg, - test_cfg=test_cfg, - pretrained=pretrained) - - def _init_decode_head(self, decode_head): - """Initialize ``decode_head``""" - assert isinstance(decode_head, list) - assert len(decode_head) == self.num_stages - self.decode_head = nn.ModuleList() - for i in range(self.num_stages): - self.decode_head.append(builder.build_head(decode_head[i])) - self.align_corners = self.decode_head[-1].align_corners - self.num_classes = self.decode_head[-1].num_classes - - def init_weights(self, pretrained=None): - """Initialize the weights in backbone and heads. - - Args: - pretrained (str, optional): Path to pre-trained weights. - Defaults to None. - """ - self.backbone.init_weights(pretrained=pretrained) - for i in range(self.num_stages): - self.decode_head[i].init_weights() - if self.with_auxiliary_head: - if isinstance(self.auxiliary_head, nn.ModuleList): - for aux_head in self.auxiliary_head: - aux_head.init_weights() - else: - self.auxiliary_head.init_weights() - - def encode_decode(self, img, img_metas): - """Encode images with backbone and decode into a semantic segmentation - map of the same size as input.""" - x = self.extract_feat(img) - out = self.decode_head[0].forward_test(x, img_metas, self.test_cfg) - for i in range(1, self.num_stages): - out = self.decode_head[i].forward_test(x, out, img_metas, - self.test_cfg) - out = resize( - input=out, - size=img.shape[2:], - mode='bilinear', - align_corners=self.align_corners) - return out - - def _decode_head_forward_train(self, x, img_metas, gt_semantic_seg): - """Run forward function and calculate loss for decode head in - training.""" - losses = dict() - - loss_decode = self.decode_head[0].forward_train( - x, img_metas, gt_semantic_seg, self.train_cfg) - - losses.update(add_prefix(loss_decode, 'decode_0')) - - for i in range(1, self.num_stages): - # forward test again, maybe unnecessary for most methods. - prev_outputs = self.decode_head[i - 1].forward_test( - x, img_metas, self.test_cfg) - loss_decode = self.decode_head[i].forward_train( - x, prev_outputs, img_metas, gt_semantic_seg, self.train_cfg) - losses.update(add_prefix(loss_decode, f'decode_{i}')) - - return losses diff --git a/spaces/abhishek/sketch-to-image/annotator/uniformer_base/mmcv/ops/roi_align_rotated.py b/spaces/abhishek/sketch-to-image/annotator/uniformer_base/mmcv/ops/roi_align_rotated.py deleted file mode 100644 index 0ce4961a3555d4da8bc3e32f1f7d5ad50036587d..0000000000000000000000000000000000000000 --- a/spaces/abhishek/sketch-to-image/annotator/uniformer_base/mmcv/ops/roi_align_rotated.py +++ /dev/null @@ -1,177 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-import torch.nn as nn -from torch.autograd import Function - -from ..utils import ext_loader - -ext_module = ext_loader.load_ext( - '_ext', ['roi_align_rotated_forward', 'roi_align_rotated_backward']) - - -class RoIAlignRotatedFunction(Function): - - @staticmethod - def symbolic(g, features, rois, out_size, spatial_scale, sample_num, - aligned, clockwise): - if isinstance(out_size, int): - out_h = out_size - out_w = out_size - elif isinstance(out_size, tuple): - assert len(out_size) == 2 - assert isinstance(out_size[0], int) - assert isinstance(out_size[1], int) - out_h, out_w = out_size - else: - raise TypeError( - '"out_size" must be an integer or tuple of integers') - return g.op( - 'mmcv::MMCVRoIAlignRotated', - features, - rois, - output_height_i=out_h, - output_width_i=out_h, - spatial_scale_f=spatial_scale, - sampling_ratio_i=sample_num, - aligned_i=aligned, - clockwise_i=clockwise) - - @staticmethod - def forward(ctx, - features, - rois, - out_size, - spatial_scale, - sample_num=0, - aligned=True, - clockwise=False): - if isinstance(out_size, int): - out_h = out_size - out_w = out_size - elif isinstance(out_size, tuple): - assert len(out_size) == 2 - assert isinstance(out_size[0], int) - assert isinstance(out_size[1], int) - out_h, out_w = out_size - else: - raise TypeError( - '"out_size" must be an integer or tuple of integers') - ctx.spatial_scale = spatial_scale - ctx.sample_num = sample_num - ctx.aligned = aligned - ctx.clockwise = clockwise - ctx.save_for_backward(rois) - ctx.feature_size = features.size() - - batch_size, num_channels, data_height, data_width = features.size() - num_rois = rois.size(0) - - output = features.new_zeros(num_rois, num_channels, out_h, out_w) - ext_module.roi_align_rotated_forward( - features, - rois, - output, - pooled_height=out_h, - pooled_width=out_w, - spatial_scale=spatial_scale, - sample_num=sample_num, - aligned=aligned, - clockwise=clockwise) - return output - - @staticmethod - def backward(ctx, grad_output): - feature_size = ctx.feature_size - spatial_scale = ctx.spatial_scale - aligned = ctx.aligned - clockwise = ctx.clockwise - sample_num = ctx.sample_num - rois = ctx.saved_tensors[0] - assert feature_size is not None - batch_size, num_channels, data_height, data_width = feature_size - - out_w = grad_output.size(3) - out_h = grad_output.size(2) - - grad_input = grad_rois = None - - if ctx.needs_input_grad[0]: - grad_input = rois.new_zeros(batch_size, num_channels, data_height, - data_width) - ext_module.roi_align_rotated_backward( - grad_output.contiguous(), - rois, - grad_input, - pooled_height=out_h, - pooled_width=out_w, - spatial_scale=spatial_scale, - sample_num=sample_num, - aligned=aligned, - clockwise=clockwise) - return grad_input, grad_rois, None, None, None, None, None - - -roi_align_rotated = RoIAlignRotatedFunction.apply - - -class RoIAlignRotated(nn.Module): - """RoI align pooling layer for rotated proposals. - - It accepts a feature map of shape (N, C, H, W) and rois with shape - (n, 6) with each roi decoded as (batch_index, center_x, center_y, - w, h, angle). The angle is in radian. - - Args: - out_size (tuple): h, w - spatial_scale (float): scale the input boxes by this number - sample_num (int): number of inputs samples to take for each - output sample. 0 to take samples densely for current models. - aligned (bool): if False, use the legacy implementation in - MMDetection. If True, align the results more perfectly. - Default: True. 
- clockwise (bool): If True, the angle in each proposal follows a - clockwise fashion in image space, otherwise, the angle is - counterclockwise. Default: False. - - Note: - The implementation of RoIAlign when aligned=True is modified from - https://github.com/facebookresearch/detectron2/ - - The meaning of aligned=True: - - Given a continuous coordinate c, its two neighboring pixel - indices (in our pixel model) are computed by floor(c - 0.5) and - ceil(c - 0.5). For example, c=1.3 has pixel neighbors with discrete - indices [0] and [1] (which are sampled from the underlying signal - at continuous coordinates 0.5 and 1.5). But the original roi_align - (aligned=False) does not subtract the 0.5 when computing - neighboring pixel indices and therefore it uses pixels with a - slightly incorrect alignment (relative to our pixel model) when - performing bilinear interpolation. - - With `aligned=True`, - we first appropriately scale the ROI and then shift it by -0.5 - prior to calling roi_align. This produces the correct neighbors; - - The difference does not make a difference to the model's - performance if ROIAlign is used together with conv layers. - """ - - def __init__(self, - out_size, - spatial_scale, - sample_num=0, - aligned=True, - clockwise=False): - super(RoIAlignRotated, self).__init__() - - self.out_size = out_size - self.spatial_scale = float(spatial_scale) - self.sample_num = int(sample_num) - self.aligned = aligned - self.clockwise = clockwise - - def forward(self, features, rois): - return RoIAlignRotatedFunction.apply(features, rois, self.out_size, - self.spatial_scale, - self.sample_num, self.aligned, - self.clockwise) diff --git a/spaces/airus/ss/README.md b/spaces/airus/ss/README.md deleted file mode 100644 index a1d8f85510b85899dc7d22770ab7859484075847..0000000000000000000000000000000000000000 --- a/spaces/airus/ss/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Real CUGAN -emoji: 🔥 -colorFrom: pink -colorTo: yellow -sdk: gradio -app_file: app.py -pinned: false -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference diff --git a/spaces/ajitrajasekharan/Qualitative-pretrained-model-evaluation/README.md b/spaces/ajitrajasekharan/Qualitative-pretrained-model-evaluation/README.md deleted file mode 100644 index c8b524f6113cf7b889806541822ae8941115bf0a..0000000000000000000000000000000000000000 --- a/spaces/ajitrajasekharan/Qualitative-pretrained-model-evaluation/README.md +++ /dev/null @@ -1,46 +0,0 @@ ---- -title: Qualitative Pretrained Model Evaluation -emoji: 🌍 -colorFrom: green -colorTo: yellow -sdk: streamlit -app_file: app.py -pinned: false -license: mit ---- - -# Configuration - -`title`: _string_ -Display title for the Space - -`emoji`: _string_ -Space emoji (emoji-only character allowed) - -`colorFrom`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`colorTo`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`sdk`: _string_ -Can be either `gradio`, `streamlit`, or `static` - -`sdk_version` : _string_ -Only applicable for `streamlit` SDK. -See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions. - -`app_file`: _string_ -Path to your main application file (which contains either `gradio` or `streamlit` Python code, or `static` html code). -Path is relative to the root of the repository. 
- -`models`: _List[string]_ -HF model IDs (like "gpt2" or "deepset/roberta-base-squad2") used in the Space. -Will be parsed automatically from your code if not specified here. - -`datasets`: _List[string]_ -HF dataset IDs (like "common_voice" or "oscar-corpus/OSCAR-2109") used in the Space. -Will be parsed automatically from your code if not specified here. - -`pinned`: _boolean_ -Whether the Space stays on top of your list. diff --git a/spaces/ajitrajasekharan/self-supervised-ner-biomedical/README.md b/spaces/ajitrajasekharan/self-supervised-ner-biomedical/README.md deleted file mode 100644 index 35af7c494bda23c8bf05665c8ddd773c06d54aa7..0000000000000000000000000000000000000000 --- a/spaces/ajitrajasekharan/self-supervised-ner-biomedical/README.md +++ /dev/null @@ -1,46 +0,0 @@ ---- -title: Self Supervised Ner Biomedical -emoji: 📈 -colorFrom: blue -colorTo: red -sdk: streamlit -app_file: app.py -pinned: false -license: mit ---- - -# Configuration - -`title`: _string_ -Display title for the Space - -`emoji`: _string_ -Space emoji (emoji-only character allowed) - -`colorFrom`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`colorTo`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`sdk`: _string_ -Can be either `gradio`, `streamlit`, or `static` - -`sdk_version` : _string_ -Only applicable for `streamlit` SDK. -See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions. - -`app_file`: _string_ -Path to your main application file (which contains either `gradio` or `streamlit` Python code, or `static` html code). -Path is relative to the root of the repository. - -`models`: _List[string]_ -HF model IDs (like "gpt2" or "deepset/roberta-base-squad2") used in the Space. -Will be parsed automatically from your code if not specified here. - -`datasets`: _List[string]_ -HF dataset IDs (like "common_voice" or "oscar-corpus/OSCAR-2109") used in the Space. -Will be parsed automatically from your code if not specified here. - -`pinned`: _boolean_ -Whether the Space stays on top of your list. diff --git a/spaces/akhaliq/DialoGPT-small/app.py b/spaces/akhaliq/DialoGPT-small/app.py deleted file mode 100644 index a8356b553029fcda635e18f9bdfad6327aeeb2ea..0000000000000000000000000000000000000000 --- a/spaces/akhaliq/DialoGPT-small/app.py +++ /dev/null @@ -1,52 +0,0 @@ -import os -os.system('pip install gradio==2.3.5b0') -from transformers import AutoModelForCausalLM, AutoTokenizer -import torch -import gradio as gr - -tokenizer = AutoTokenizer.from_pretrained("microsoft/DialoGPT-small") -model = AutoModelForCausalLM.from_pretrained("microsoft/DialoGPT-small") - - - -def dialogpt(text): - history = gr.get_state() or [] - # encode the new user input, add the eos_token and return a tensor in Pytorch - for step in range(50000): - - new_user_input_ids = tokenizer.encode(text + tokenizer.eos_token, return_tensors='pt') - - # append the new user input tokens to the chat history - bot_input_ids = torch.cat([chat_history_ids, new_user_input_ids], dim=-1) if step > 0 else new_user_input_ids - - # generated a response while limiting the total chat history to 1000 tokens, - chat_history_ids = model.generate(bot_input_ids, max_length=100, pad_token_id=tokenizer.eos_token_id) - response = tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][0], skip_special_tokens=True) - history.append((text, response)) - gr.set_state(history) - # pretty print last ouput tokens from bot - html = "
    " - for user_msg, resp_msg in history: - html += f"
    {user_msg}
    " - html += f"
    {resp_msg}
    " - html += "
    " - return html - - -inputs = gr.inputs.Textbox(lines=1, label="Input Text") -outputs = gr.outputs.Textbox(label="DialoGPT") - -title = "DialoGPT" -description = "Gradio demo for Microsoft DialoGPT: A State-of-the-Art Large-scale Pretrained Response Generation Model. To use it, simply input text or click one of the examples text to load them. Read more at the links below." -article = "

    DialoGPT: Large-Scale Generative Pre-training for Conversational Response Generation | Github Repo

    " -examples = [ - ["Hi, how are you?"], - ["How far away is the moon?"], -] - -gr.Interface(dialogpt, inputs, "html", title=title, description=description, article=article, examples=examples,css=""" - .chatbox {display:flex;flex-direction:column} - .user_msg, .resp_msg {padding:4px;margin-bottom:4px;border-radius:4px;width:80%} - .user_msg {background-color:cornflowerblue;color:white;align-self:start} - .resp_msg {background-color:lightgray;align-self:self-end} -""").launch(debug=True) \ No newline at end of file diff --git a/spaces/akhaliq/Kapao/utils/downloads.py b/spaces/akhaliq/Kapao/utils/downloads.py deleted file mode 100644 index 13ab159015d2f2094adc0dab342c3118b2e5f263..0000000000000000000000000000000000000000 --- a/spaces/akhaliq/Kapao/utils/downloads.py +++ /dev/null @@ -1,149 +0,0 @@ -# YOLOv5 🚀 by Ultralytics, GPL-3.0 license -""" -Download utils -""" - -import os -import platform -import subprocess -import time -import urllib -from pathlib import Path - -import requests -import torch - - -def gsutil_getsize(url=''): - # gs://bucket/file size https://cloud.google.com/storage/docs/gsutil/commands/du - s = subprocess.check_output(f'gsutil du {url}', shell=True).decode('utf-8') - return eval(s.split(' ')[0]) if len(s) else 0 # bytes - - -def safe_download(file, url, url2=None, min_bytes=1E0, error_msg=''): - # Attempts to download file from url or url2, checks and removes incomplete downloads < min_bytes - file = Path(file) - assert_msg = f"Downloaded file '{file}' does not exist or size is < min_bytes={min_bytes}" - try: # url1 - print(f'Downloading {url} to {file}...') - torch.hub.download_url_to_file(url, str(file)) - assert file.exists() and file.stat().st_size > min_bytes, assert_msg # check - except Exception as e: # url2 - file.unlink(missing_ok=True) # remove partial downloads - print(f'ERROR: {e}\nRe-attempting {url2 or url} to {file}...') - os.system(f"curl -L '{url2 or url}' -o '{file}' --retry 3 -C -") # curl download, retry and resume on fail - finally: - if not file.exists() or file.stat().st_size < min_bytes: # check - file.unlink(missing_ok=True) # remove partial downloads - print(f"ERROR: {assert_msg}\n{error_msg}") - print('') - - -def attempt_download(file, repo='ultralytics/yolov5'): # from utils.downloads import *; attempt_download() - # Attempt file download if does not exist - file = Path(str(file).strip().replace("'", '')) - - if not file.exists(): - # URL specified - name = Path(urllib.parse.unquote(str(file))).name # decode '%2F' to '/' etc. - if str(file).startswith(('http:/', 'https:/')): # download - url = str(file).replace(':/', '://') # Pathlib turns :// -> :/ - name = name.split('?')[0] # parse authentication https://url.com/file.txt?auth... - safe_download(file=name, url=url, min_bytes=1E5) - return name - - # GitHub assets - file.parent.mkdir(parents=True, exist_ok=True) # make parent dir (if required) - try: - response = requests.get(f'https://api.github.com/repos/{repo}/releases/latest').json() # github api - assets = [x['name'] for x in response['assets']] # release assets, i.e. ['yolov5s.pt', 'yolov5m.pt', ...] - tag = response['tag_name'] # i.e. 
'v1.0' - except: # fallback plan - assets = ['yolov5s.pt', 'yolov5m.pt', 'yolov5l.pt', 'yolov5x.pt', - 'yolov5s6.pt', 'yolov5m6.pt', 'yolov5l6.pt', 'yolov5x6.pt'] - try: - tag = subprocess.check_output('git tag', shell=True, stderr=subprocess.STDOUT).decode().split()[-1] - except: - tag = 'v5.0' # current release - tag = 'v5.0' # download v5.0 models - if name in assets: - safe_download(file, - url=f'https://github.com/{repo}/releases/download/{tag}/{name}', - # url2=f'https://storage.googleapis.com/{repo}/ckpt/{name}', # backup url (optional) - min_bytes=1E5, - error_msg=f'{file} missing, try downloading from https://github.com/{repo}/releases/') - - return str(file) - - -def gdrive_download(id='16TiPfZj7htmTyhntwcZyEEAejOUxuT6m', file='tmp.zip'): - # Downloads a file from Google Drive. from yolov5.utils.downloads import *; gdrive_download() - t = time.time() - file = Path(file) - cookie = Path('cookie') # gdrive cookie - print(f'Downloading https://drive.google.com/uc?export=download&id={id} as {file}... ', end='') - file.unlink(missing_ok=True) # remove existing file - cookie.unlink(missing_ok=True) # remove existing cookie - - # Attempt file download - out = "NUL" if platform.system() == "Windows" else "/dev/null" - os.system(f'curl -c ./cookie -s -L "drive.google.com/uc?export=download&id={id}" > {out}') - if os.path.exists('cookie'): # large file - s = f'curl -Lb ./cookie "drive.google.com/uc?export=download&confirm={get_token()}&id={id}" -o {file}' - else: # small file - s = f'curl -s -L -o {file} "drive.google.com/uc?export=download&id={id}"' - r = os.system(s) # execute, capture return - cookie.unlink(missing_ok=True) # remove existing cookie - - # Error check - if r != 0: - file.unlink(missing_ok=True) # remove partial - print('Download error ') # raise Exception('Download error') - return r - - # Unzip if archive - if file.suffix == '.zip': - print('unzipping... 
', end='') - os.system(f'unzip -q {file}') # unzip - file.unlink() # remove zip to free space - - print(f'Done ({time.time() - t:.1f}s)') - return r - - -def get_token(cookie="./cookie"): - with open(cookie) as f: - for line in f: - if "download" in line: - return line.split()[-1] - return "" - -# Google utils: https://cloud.google.com/storage/docs/reference/libraries ---------------------------------------------- -# -# -# def upload_blob(bucket_name, source_file_name, destination_blob_name): -# # Uploads a file to a bucket -# # https://cloud.google.com/storage/docs/uploading-objects#storage-upload-object-python -# -# storage_client = storage.Client() -# bucket = storage_client.get_bucket(bucket_name) -# blob = bucket.blob(destination_blob_name) -# -# blob.upload_from_filename(source_file_name) -# -# print('File {} uploaded to {}.'.format( -# source_file_name, -# destination_blob_name)) -# -# -# def download_blob(bucket_name, source_blob_name, destination_file_name): -# # Uploads a blob from a bucket -# storage_client = storage.Client() -# bucket = storage_client.get_bucket(bucket_name) -# blob = bucket.blob(source_blob_name) -# -# blob.download_to_filename(destination_file_name) -# -# print('Blob {} downloaded to {}.'.format( -# source_blob_name, -# destination_file_name)) diff --git a/spaces/akhaliq/lama/saicinpainting/training/losses/feature_matching.py b/spaces/akhaliq/lama/saicinpainting/training/losses/feature_matching.py deleted file mode 100644 index c019895c9178817837d1a6773367b178a861dc61..0000000000000000000000000000000000000000 --- a/spaces/akhaliq/lama/saicinpainting/training/losses/feature_matching.py +++ /dev/null @@ -1,33 +0,0 @@ -from typing import List - -import torch -import torch.nn.functional as F - - -def masked_l2_loss(pred, target, mask, weight_known, weight_missing): - per_pixel_l2 = F.mse_loss(pred, target, reduction='none') - pixel_weights = mask * weight_missing + (1 - mask) * weight_known - return (pixel_weights * per_pixel_l2).mean() - - -def masked_l1_loss(pred, target, mask, weight_known, weight_missing): - per_pixel_l1 = F.l1_loss(pred, target, reduction='none') - pixel_weights = mask * weight_missing + (1 - mask) * weight_known - return (pixel_weights * per_pixel_l1).mean() - - -def feature_matching_loss(fake_features: List[torch.Tensor], target_features: List[torch.Tensor], mask=None): - if mask is None: - res = torch.stack([F.mse_loss(fake_feat, target_feat) - for fake_feat, target_feat in zip(fake_features, target_features)]).mean() - else: - res = 0 - norm = 0 - for fake_feat, target_feat in zip(fake_features, target_features): - cur_mask = F.interpolate(mask, size=fake_feat.shape[-2:], mode='bilinear', align_corners=False) - error_weights = 1 - cur_mask - cur_val = ((fake_feat - target_feat).pow(2) * error_weights).mean() - res = res + cur_val - norm += 1 - res = res / norm - return res diff --git a/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_vendor/msgpack/exceptions.py b/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_vendor/msgpack/exceptions.py deleted file mode 100644 index d6d2615cfdd0b914d064cdf7eecd45761e4bcaf6..0000000000000000000000000000000000000000 --- a/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_vendor/msgpack/exceptions.py +++ /dev/null @@ -1,48 +0,0 @@ -class UnpackException(Exception): - """Base class for some exceptions raised while unpacking. - - NOTE: unpack may raise exception other than subclass of - UnpackException. 
If you want to catch all error, catch - Exception instead. - """ - - -class BufferFull(UnpackException): - pass - - -class OutOfData(UnpackException): - pass - - -class FormatError(ValueError, UnpackException): - """Invalid msgpack format""" - - -class StackError(ValueError, UnpackException): - """Too nested""" - - -# Deprecated. Use ValueError instead -UnpackValueError = ValueError - - -class ExtraData(UnpackValueError): - """ExtraData is raised when there is trailing data. - - This exception is raised while only one-shot (not streaming) - unpack. - """ - - def __init__(self, unpacked, extra): - self.unpacked = unpacked - self.extra = extra - - def __str__(self): - return "unpack(b) received extra data." - - -# Deprecated. Use Exception instead to catch all exception during packing. -PackException = Exception -PackValueError = ValueError -PackOverflowError = OverflowError diff --git a/spaces/am4nsolanki/hateful-memes/demo_data_generator.py b/spaces/am4nsolanki/hateful-memes/demo_data_generator.py deleted file mode 100644 index 43656edbf5026c74fb7b269a1d6aba5fbd87e1f3..0000000000000000000000000000000000000000 --- a/spaces/am4nsolanki/hateful-memes/demo_data_generator.py +++ /dev/null @@ -1,17 +0,0 @@ -import pandas as pd -from PIL import Image -import PIL - -image_path = '/Users/amansolanki/datasets/hateful-memes-images/' -image_save_path = '/Users/amansolanki/hateful-memes/images/' -test_seen_original = pd.read_csv('/Users/amansolanki/PycharmProjects/hateful-memes-challenge/data/test_seen.csv') - -demo_data = test_seen_original.sample(50, random_state=7) - -# Save Images -for image in demo_data['image_id']: - picture = Image.open(image_path+image) - picture = picture.save(image_save_path+image) - -# Save CSV -demo_data.to_csv('demo_data.csv', index=False) diff --git a/spaces/amarchheda/ChordDuplicate/portaudio/examples/paex_record_file.c b/spaces/amarchheda/ChordDuplicate/portaudio/examples/paex_record_file.c deleted file mode 100644 index 562a8e9038761a2b6ce8f6b468fcfc906f7740ed..0000000000000000000000000000000000000000 --- a/spaces/amarchheda/ChordDuplicate/portaudio/examples/paex_record_file.c +++ /dev/null @@ -1,457 +0,0 @@ -/** @file paex_record_file.c - @ingroup examples_src - @brief Record input into a file, then playback recorded data from file (Windows only at the moment) - @author Robert Bielik -*/ -/* - * $Id: paex_record_file.c 1752 2011-09-08 03:21:55Z philburk $ - * - * This program uses the PortAudio Portable Audio Library. - * For more information see: http://www.portaudio.com - * Copyright (c) 1999-2000 Ross Bencina and Phil Burk - * - * Permission is hereby granted, free of charge, to any person obtaining - * a copy of this software and associated documentation files - * (the "Software"), to deal in the Software without restriction, - * including without limitation the rights to use, copy, modify, merge, - * publish, distribute, sublicense, and/or sell copies of the Software, - * and to permit persons to whom the Software is furnished to do so, - * subject to the following conditions: - * - * The above copyright notice and this permission notice shall be - * included in all copies or substantial portions of the Software. - * - * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, - * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF - * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
- * IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR - * ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF - * CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION - * WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. - */ - -/* - * The text above constitutes the entire PortAudio license; however, - * the PortAudio community also makes the following non-binding requests: - * - * Any person wishing to distribute modifications to the Software is - * requested to send the modifications to the original developer so that - * they can be incorporated into the canonical version. It is also - * requested that these non-binding requests be included along with the - * license above. - */ - -#include -#include -#include "portaudio.h" -#include "pa_ringbuffer.h" -#include "pa_util.h" - -#ifdef _WIN32 -#include -#include -#endif - -static ring_buffer_size_t rbs_min(ring_buffer_size_t a, ring_buffer_size_t b) -{ - return (a < b) ? a : b; -} - -/* #define SAMPLE_RATE (17932) // Test failure to open with this value. */ -#define FILE_NAME "audio_data.raw" -#define SAMPLE_RATE (44100) -#define FRAMES_PER_BUFFER (512) -#define NUM_SECONDS (10) -#define NUM_CHANNELS (2) -#define NUM_WRITES_PER_BUFFER (4) -/* #define DITHER_FLAG (paDitherOff) */ -#define DITHER_FLAG (0) /**/ - - -/* Select sample format. */ -#if 1 -#define PA_SAMPLE_TYPE paFloat32 -typedef float SAMPLE; -#define SAMPLE_SILENCE (0.0f) -#define PRINTF_S_FORMAT "%.8f" -#elif 1 -#define PA_SAMPLE_TYPE paInt16 -typedef short SAMPLE; -#define SAMPLE_SILENCE (0) -#define PRINTF_S_FORMAT "%d" -#elif 0 -#define PA_SAMPLE_TYPE paInt8 -typedef char SAMPLE; -#define SAMPLE_SILENCE (0) -#define PRINTF_S_FORMAT "%d" -#else -#define PA_SAMPLE_TYPE paUInt8 -typedef unsigned char SAMPLE; -#define SAMPLE_SILENCE (128) -#define PRINTF_S_FORMAT "%d" -#endif - -typedef struct -{ - unsigned frameIndex; - int threadSyncFlag; - SAMPLE *ringBufferData; - PaUtilRingBuffer ringBuffer; - FILE *file; - void *threadHandle; -} -paTestData; - -/* This routine is run in a separate thread to write data from the ring buffer into a file (during Recording) */ -static int threadFunctionWriteToRawFile(void* ptr) -{ - paTestData* pData = (paTestData*)ptr; - - /* Mark thread started */ - pData->threadSyncFlag = 0; - - while (1) - { - ring_buffer_size_t elementsInBuffer = PaUtil_GetRingBufferReadAvailable(&pData->ringBuffer); - if ( (elementsInBuffer >= pData->ringBuffer.bufferSize / NUM_WRITES_PER_BUFFER) || - pData->threadSyncFlag ) - { - void* ptr[2] = {0}; - ring_buffer_size_t sizes[2] = {0}; - - /* By using PaUtil_GetRingBufferReadRegions, we can read directly from the ring buffer */ - ring_buffer_size_t elementsRead = PaUtil_GetRingBufferReadRegions(&pData->ringBuffer, elementsInBuffer, ptr + 0, sizes + 0, ptr + 1, sizes + 1); - if (elementsRead > 0) - { - int i; - for (i = 0; i < 2 && ptr[i] != NULL; ++i) - { - fwrite(ptr[i], pData->ringBuffer.elementSizeBytes, sizes[i], pData->file); - } - PaUtil_AdvanceRingBufferReadIndex(&pData->ringBuffer, elementsRead); - } - - if (pData->threadSyncFlag) - { - break; - } - } - - /* Sleep a little while... */ - Pa_Sleep(20); - } - - pData->threadSyncFlag = 0; - - return 0; -} - -/* This routine is run in a separate thread to read data from file into the ring buffer (during Playback). 
When the file - has reached EOF, a flag is set so that the play PA callback can return paComplete */ -static int threadFunctionReadFromRawFile(void* ptr) -{ - paTestData* pData = (paTestData*)ptr; - - while (1) - { - ring_buffer_size_t elementsInBuffer = PaUtil_GetRingBufferWriteAvailable(&pData->ringBuffer); - - if (elementsInBuffer >= pData->ringBuffer.bufferSize / NUM_WRITES_PER_BUFFER) - { - void* ptr[2] = {0}; - ring_buffer_size_t sizes[2] = {0}; - - /* By using PaUtil_GetRingBufferWriteRegions, we can write directly into the ring buffer */ - PaUtil_GetRingBufferWriteRegions(&pData->ringBuffer, elementsInBuffer, ptr + 0, sizes + 0, ptr + 1, sizes + 1); - - if (!feof(pData->file)) - { - ring_buffer_size_t itemsReadFromFile = 0; - int i; - for (i = 0; i < 2 && ptr[i] != NULL; ++i) - { - itemsReadFromFile += (ring_buffer_size_t)fread(ptr[i], pData->ringBuffer.elementSizeBytes, sizes[i], pData->file); - } - PaUtil_AdvanceRingBufferWriteIndex(&pData->ringBuffer, itemsReadFromFile); - - /* Mark thread started here, that way we "prime" the ring buffer before playback */ - pData->threadSyncFlag = 0; - } - else - { - /* No more data to read */ - pData->threadSyncFlag = 1; - break; - } - } - - /* Sleep a little while... */ - Pa_Sleep(20); - } - - return 0; -} - -typedef int (*ThreadFunctionType)(void*); - -/* Start up a new thread in the given function, at the moment only Windows, but should be very easy to extend - to posix type OSs (Linux/Mac) */ -static PaError startThread( paTestData* pData, ThreadFunctionType fn ) -{ -#ifdef _WIN32 - typedef unsigned (__stdcall* WinThreadFunctionType)(void*); - pData->threadHandle = (void*)_beginthreadex(NULL, 0, (WinThreadFunctionType)fn, pData, CREATE_SUSPENDED, NULL); - if (pData->threadHandle == NULL) return paUnanticipatedHostError; - - /* Set file thread to a little higher prio than normal */ - SetThreadPriority(pData->threadHandle, THREAD_PRIORITY_ABOVE_NORMAL); - - /* Start it up */ - pData->threadSyncFlag = 1; - ResumeThread(pData->threadHandle); - -#endif - - /* Wait for thread to startup */ - while (pData->threadSyncFlag) { - Pa_Sleep(10); - } - - return paNoError; -} - -static int stopThread( paTestData* pData ) -{ - pData->threadSyncFlag = 1; - /* Wait for thread to stop */ - while (pData->threadSyncFlag) { - Pa_Sleep(10); - } -#ifdef _WIN32 - CloseHandle(pData->threadHandle); - pData->threadHandle = 0; -#endif - - return paNoError; -} - - -/* This routine will be called by the PortAudio engine when audio is needed. -** It may be called at interrupt level on some machines so don't do anything -** that could mess up the system like calling malloc() or free(). -*/ -static int recordCallback( const void *inputBuffer, void *outputBuffer, - unsigned long framesPerBuffer, - const PaStreamCallbackTimeInfo* timeInfo, - PaStreamCallbackFlags statusFlags, - void *userData ) -{ - paTestData *data = (paTestData*)userData; - ring_buffer_size_t elementsWriteable = PaUtil_GetRingBufferWriteAvailable(&data->ringBuffer); - ring_buffer_size_t elementsToWrite = rbs_min(elementsWriteable, (ring_buffer_size_t)(framesPerBuffer * NUM_CHANNELS)); - const SAMPLE *rptr = (const SAMPLE*)inputBuffer; - - (void) outputBuffer; /* Prevent unused variable warnings. */ - (void) timeInfo; - (void) statusFlags; - (void) userData; - - data->frameIndex += PaUtil_WriteRingBuffer(&data->ringBuffer, rptr, elementsToWrite); - - return paContinue; -} - -/* This routine will be called by the PortAudio engine when audio is needed. 
-** It may be called at interrupt level on some machines so don't do anything -** that could mess up the system like calling malloc() or free(). -*/ -static int playCallback( const void *inputBuffer, void *outputBuffer, - unsigned long framesPerBuffer, - const PaStreamCallbackTimeInfo* timeInfo, - PaStreamCallbackFlags statusFlags, - void *userData ) -{ - paTestData *data = (paTestData*)userData; - ring_buffer_size_t elementsToPlay = PaUtil_GetRingBufferReadAvailable(&data->ringBuffer); - ring_buffer_size_t elementsToRead = rbs_min(elementsToPlay, (ring_buffer_size_t)(framesPerBuffer * NUM_CHANNELS)); - SAMPLE* wptr = (SAMPLE*)outputBuffer; - - (void) inputBuffer; /* Prevent unused variable warnings. */ - (void) timeInfo; - (void) statusFlags; - (void) userData; - - data->frameIndex += PaUtil_ReadRingBuffer(&data->ringBuffer, wptr, elementsToRead); - - return data->threadSyncFlag ? paComplete : paContinue; -} - -static unsigned NextPowerOf2(unsigned val) -{ - val--; - val = (val >> 1) | val; - val = (val >> 2) | val; - val = (val >> 4) | val; - val = (val >> 8) | val; - val = (val >> 16) | val; - return ++val; -} - -/*******************************************************************/ -int main(void); -int main(void) -{ - PaStreamParameters inputParameters, - outputParameters; - PaStream* stream; - PaError err = paNoError; - paTestData data = {0}; - unsigned delayCntr; - unsigned numSamples; - unsigned numBytes; - - printf("patest_record.c\n"); fflush(stdout); - - /* We set the ring buffer size to about 500 ms */ - numSamples = NextPowerOf2((unsigned)(SAMPLE_RATE * 0.5 * NUM_CHANNELS)); - numBytes = numSamples * sizeof(SAMPLE); - data.ringBufferData = (SAMPLE *) PaUtil_AllocateMemory( numBytes ); - if( data.ringBufferData == NULL ) - { - printf("Could not allocate ring buffer data.\n"); - goto done; - } - - if (PaUtil_InitializeRingBuffer(&data.ringBuffer, sizeof(SAMPLE), numSamples, data.ringBufferData) < 0) - { - printf("Failed to initialize ring buffer. Size is not power of 2 ??\n"); - goto done; - } - - err = Pa_Initialize(); - if( err != paNoError ) goto done; - - inputParameters.device = Pa_GetDefaultInputDevice(); /* default input device */ - if (inputParameters.device == paNoDevice) { - fprintf(stderr,"Error: No default input device.\n"); - goto done; - } - inputParameters.channelCount = 2; /* stereo input */ - inputParameters.sampleFormat = PA_SAMPLE_TYPE; - inputParameters.suggestedLatency = Pa_GetDeviceInfo( inputParameters.device )->defaultLowInputLatency; - inputParameters.hostApiSpecificStreamInfo = NULL; - - /* Record some audio. -------------------------------------------- */ - err = Pa_OpenStream( - &stream, - &inputParameters, - NULL, /* &outputParameters, */ - SAMPLE_RATE, - FRAMES_PER_BUFFER, - paClipOff, /* we won't output out of range samples so don't bother clipping them */ - recordCallback, - &data ); - if( err != paNoError ) goto done; - - /* Open the raw audio 'cache' file... */ - data.file = fopen(FILE_NAME, "wb"); - if (data.file == 0) goto done; - - /* Start the file writing thread */ - err = startThread(&data, threadFunctionWriteToRawFile); - if( err != paNoError ) goto done; - - err = Pa_StartStream( stream ); - if( err != paNoError ) goto done; - printf("\n=== Now recording to '" FILE_NAME "' for %d seconds!! Please speak into the microphone. 
===\n", NUM_SECONDS); fflush(stdout); - - /* Note that the RECORDING part is limited with TIME, not size of the file and/or buffer, so you can - increase NUM_SECONDS until you run out of disk */ - delayCntr = 0; - while( delayCntr++ < NUM_SECONDS ) - { - printf("index = %d\n", data.frameIndex ); fflush(stdout); - Pa_Sleep(1000); - } - if( err < 0 ) goto done; - - err = Pa_CloseStream( stream ); - if( err != paNoError ) goto done; - - /* Stop the thread */ - err = stopThread(&data); - if( err != paNoError ) goto done; - - /* Close file */ - fclose(data.file); - data.file = 0; - - /* Playback recorded data. -------------------------------------------- */ - data.frameIndex = 0; - - outputParameters.device = Pa_GetDefaultOutputDevice(); /* default output device */ - if (outputParameters.device == paNoDevice) { - fprintf(stderr,"Error: No default output device.\n"); - goto done; - } - outputParameters.channelCount = 2; /* stereo output */ - outputParameters.sampleFormat = PA_SAMPLE_TYPE; - outputParameters.suggestedLatency = Pa_GetDeviceInfo( outputParameters.device )->defaultLowOutputLatency; - outputParameters.hostApiSpecificStreamInfo = NULL; - - printf("\n=== Now playing back from file '" FILE_NAME "' until end-of-file is reached ===\n"); fflush(stdout); - err = Pa_OpenStream( - &stream, - NULL, /* no input */ - &outputParameters, - SAMPLE_RATE, - FRAMES_PER_BUFFER, - paClipOff, /* we won't output out of range samples so don't bother clipping them */ - playCallback, - &data ); - if( err != paNoError ) goto done; - - if( stream ) - { - /* Open file again for reading */ - data.file = fopen(FILE_NAME, "rb"); - if (data.file != 0) - { - /* Start the file reading thread */ - err = startThread(&data, threadFunctionReadFromRawFile); - if( err != paNoError ) goto done; - - err = Pa_StartStream( stream ); - if( err != paNoError ) goto done; - - printf("Waiting for playback to finish.\n"); fflush(stdout); - - /* The playback will end when EOF is reached */ - while( ( err = Pa_IsStreamActive( stream ) ) == 1 ) { - printf("index = %d\n", data.frameIndex ); fflush(stdout); - Pa_Sleep(1000); - } - if( err < 0 ) goto done; - } - - err = Pa_CloseStream( stream ); - if( err != paNoError ) goto done; - - fclose(data.file); - - printf("Done.\n"); fflush(stdout); - } - -done: - Pa_Terminate(); - if( data.ringBufferData ) /* Sure it is NULL or valid. */ - PaUtil_FreeMemory( data.ringBufferData ); - if( err != paNoError ) - { - fprintf( stderr, "An error occurred while using the portaudio stream\n" ); - fprintf( stderr, "Error number: %d\n", err ); - fprintf( stderr, "Error message: %s\n", Pa_GetErrorText( err ) ); - err = 1; /* Always return 0 or 1, but no other return codes. 
*/ - } - return err; -} diff --git a/spaces/amirDev/crowd-counting-p2p/crowd_datasets/SHHA/__init__.py b/spaces/amirDev/crowd-counting-p2p/crowd_datasets/SHHA/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/antonovmaxim/text-generation-webui-space/css/html_instruct_style.css b/spaces/antonovmaxim/text-generation-webui-space/css/html_instruct_style.css deleted file mode 100644 index 2fd751d5672e6bc58d9f0e37d4ed79a501530d3d..0000000000000000000000000000000000000000 --- a/spaces/antonovmaxim/text-generation-webui-space/css/html_instruct_style.css +++ /dev/null @@ -1,83 +0,0 @@ -.chat { - margin-left: auto; - margin-right: auto; - max-width: 800px; - height: calc(100vh - 306px); - overflow-y: auto; - padding-right: 20px; - display: flex; - flex-direction: column-reverse; - word-break: break-word; - overflow-wrap: anywhere; -} - -.message { - display: grid; - grid-template-columns: 60px 1fr; - padding-bottom: 25px; - font-size: 15px; - font-family: Helvetica, Arial, sans-serif; - line-height: 1.428571429; -} - -.username { - display: none; -} - -.message-body {} - -.message-body p { - font-size: 15px !important; - line-height: 1.75 !important; - margin-bottom: 1.25em !important; -} - -.message-body li { - margin-top: 0.5em !important; - margin-bottom: 0.5em !important; -} - -.message-body li > p { - display: inline !important; -} - -.message-body code { - overflow-x: auto; -} -.message-body :not(pre) > code { - white-space: normal !important; -} - -.dark .message-body p em { - color: rgb(198, 202, 214) !important; -} - -.message-body p em { - color: rgb(110, 110, 110) !important; -} - -.gradio-container .chat .assistant-message { - padding: 15px; - border-radius: 20px; - background-color: #0000000f; - margin-top: 9px !important; - margin-bottom: 18px !important; -} - -.gradio-container .chat .user-message { - padding: 15px; - border-radius: 20px; - margin-bottom: 9px !important; -} - -.dark .chat .assistant-message { - background-color: #374151; -} - -code { - background-color: white !important; -} - -.dark code { - background-color: #1a212f !important; -} \ No newline at end of file diff --git a/spaces/arbml/Ashaar/poetry_diacritizer/__init__.py b/spaces/arbml/Ashaar/poetry_diacritizer/__init__.py deleted file mode 100644 index 42dcd7aa19e499d4ac240deb5d7e68bcf33795ed..0000000000000000000000000000000000000000 --- a/spaces/arbml/Ashaar/poetry_diacritizer/__init__.py +++ /dev/null @@ -1 +0,0 @@ -from poetry_diacritizer import predict \ No newline at end of file diff --git a/spaces/aseifert/ExplaiNER/src/load.py b/spaces/aseifert/ExplaiNER/src/load.py deleted file mode 100644 index 4609782b1ed195a39d40c6aa00cdc0125287940d..0000000000000000000000000000000000000000 --- a/spaces/aseifert/ExplaiNER/src/load.py +++ /dev/null @@ -1,101 +0,0 @@ -from typing import Optional - -import pandas as pd -import streamlit as st -from datasets import Dataset # type: ignore - -from src.data import encode_dataset, get_collator, get_data, predict -from src.model import get_encoder, get_model, get_tokenizer -from src.subpages import Context -from src.utils import align_sample, device, explode_df - -_TOKENIZER_NAME = ( - "xlm-roberta-base", - "gagan3012/bert-tiny-finetuned-ner", - "distilbert-base-german-cased", -)[0] - - -def _load_models_and_tokenizer( - encoder_model_name: str, - model_name: str, - tokenizer_name: Optional[str], - device: str = "cpu", -): - sentence_encoder = get_encoder(encoder_model_name, 
device=device) - tokenizer = get_tokenizer(tokenizer_name if tokenizer_name else model_name) - labels = "O B-COMMA".split() if "comma" in model_name else None - model = get_model(model_name, labels=labels) - return sentence_encoder, model, tokenizer - - -@st.cache(allow_output_mutation=True) -def load_context( - encoder_model_name: str, - model_name: str, - ds_name: str, - ds_config_name: str, - ds_split_name: str, - split_sample_size: int, - randomize_sample: bool, - **kw_args, -) -> Context: - """Utility method loading (almost) everything we need for the application. - This exists just because we want to cache the results of this function. - - Args: - encoder_model_name (str): Name of the sentence encoder to load. - model_name (str): Name of the NER model to load. - ds_name (str): Dataset name or path. - ds_config_name (str): Dataset config name. - ds_split_name (str): Dataset split name. - split_sample_size (int): Number of examples to load from the split. - - Returns: - Context: An object containing everything we need for the application. - """ - - sentence_encoder, model, tokenizer = _load_models_and_tokenizer( - encoder_model_name=encoder_model_name, - model_name=model_name, - tokenizer_name=_TOKENIZER_NAME if "comma" in model_name else None, - device=str(device), - ) - collator = get_collator(tokenizer) - - # load data related stuff - split: Dataset = get_data( - ds_name, ds_config_name, ds_split_name, split_sample_size, randomize_sample - ) - tags = split.features["ner_tags"].feature - split_encoded, word_ids, ids = encode_dataset(split, tokenizer) - - # transform into dataframe - df = predict(split_encoded, model, tokenizer, collator, tags) - df["word_ids"] = word_ids - df["ids"] = ids - - # explode, clean, merge - df_tokens = explode_df(df) - df_tokens_cleaned = df_tokens.query("labels != 'IGN'") - df_merged = pd.DataFrame(df.apply(align_sample, axis=1).tolist()) - df_tokens_merged = explode_df(df_merged) - - return Context( - **{ - "model": model, - "tokenizer": tokenizer, - "sentence_encoder": sentence_encoder, - "df": df, - "df_tokens": df_tokens, - "df_tokens_cleaned": df_tokens_cleaned, - "df_tokens_merged": df_tokens_merged, - "tags": tags, - "labels": tags.names, - "split_sample_size": split_sample_size, - "ds_name": ds_name, - "ds_config_name": ds_config_name, - "ds_split_name": ds_split_name, - "split": split, - } - ) diff --git a/spaces/atsantiago/Monocular_Depth_Filter/layers.py b/spaces/atsantiago/Monocular_Depth_Filter/layers.py deleted file mode 100644 index 5d5cab6c73fc414ca0b85745382c7a77fa335996..0000000000000000000000000000000000000000 --- a/spaces/atsantiago/Monocular_Depth_Filter/layers.py +++ /dev/null @@ -1,55 +0,0 @@ -from tensorflow.keras.layers import Layer, InputSpec -import keras.utils.conv_utils as conv_utils -import tensorflow as tf -import tensorflow.keras.backend as K - - -def normalize_data_format(value): - if value is None: - value = K.image_data_format() - data_format = value.lower() - if data_format not in {'channels_first', 'channels_last'}: - raise ValueError('The `data_format` argument must be one of ' - '"channels_first", "channels_last". 
Received: ' + - str(value)) - return data_format - - -class BilinearUpSampling2D(Layer): - def __init__(self, size=(2, 2), data_format=None, **kwargs): - super(BilinearUpSampling2D, self).__init__(**kwargs) - self.data_format = normalize_data_format(data_format) - self.size = conv_utils.normalize_tuple(size, 2, 'size') - self.input_spec = InputSpec(ndim=4) - - def compute_output_shape(self, input_shape): - if self.data_format == 'channels_first': - height = self.size[0] * input_shape[2] if input_shape[2] is not None else None - width = self.size[1] * input_shape[3] if input_shape[3] is not None else None - return (input_shape[0], - input_shape[1], - height, - width) - elif self.data_format == 'channels_last': - height = self.size[0] * input_shape[1] if input_shape[1] is not None else None - width = self.size[1] * input_shape[2] if input_shape[2] is not None else None - return (input_shape[0], - height, - width, - input_shape[3]) - - def call(self, inputs): - input_shape = K.shape(inputs) - if self.data_format == 'channels_first': - height = self.size[0] * input_shape[2] if input_shape[2] is not None else None - width = self.size[1] * input_shape[3] if input_shape[3] is not None else None - elif self.data_format == 'channels_last': - height = self.size[0] * input_shape[1] if input_shape[1] is not None else None - width = self.size[1] * input_shape[2] if input_shape[2] is not None else None - - return tf.image.resize(inputs, [height, width], method=tf.image.ResizeMethod.BILINEAR) - - def get_config(self): - config = {'size': self.size, 'data_format': self.data_format} - base_config = super(BilinearUpSampling2D, self).get_config() - return dict(list(base_config.items()) + list(config.items())) \ No newline at end of file diff --git a/spaces/autonomousvision/projected_gan/README.md b/spaces/autonomousvision/projected_gan/README.md deleted file mode 100644 index 54df255e5bb93901c75945cf428c9c3055fe37d0..0000000000000000000000000000000000000000 --- a/spaces/autonomousvision/projected_gan/README.md +++ /dev/null @@ -1,46 +0,0 @@ ---- -title: Projected_gan -emoji: 👁 -colorFrom: purple -colorTo: yellow -sdk: gradio -app_file: app.py -pinned: false -license: mit ---- - -# Configuration - -`title`: _string_ -Display title for the Space - -`emoji`: _string_ -Space emoji (emoji-only character allowed) - -`colorFrom`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`colorTo`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`sdk`: _string_ -Can be either `gradio`, `streamlit`, or `static` - -`sdk_version` : _string_ -Only applicable for `streamlit` SDK. -See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions. - -`app_file`: _string_ -Path to your main application file (which contains either `gradio` or `streamlit` Python code, or `static` html code). -Path is relative to the root of the repository. - -`models`: _List[string]_ -HF model IDs (like "gpt2" or "deepset/roberta-base-squad2") used in the Space. -Will be parsed automatically from your code if not specified here. - -`datasets`: _List[string]_ -HF dataset IDs (like "common_voice" or "oscar-corpus/OSCAR-2109") used in the Space. -Will be parsed automatically from your code if not specified here. - -`pinned`: _boolean_ -Whether the Space stays on top of your list. 
diff --git a/spaces/awacke1/03-AW-ChatbotBlenderbot/app.py b/spaces/awacke1/03-AW-ChatbotBlenderbot/app.py deleted file mode 100644 index 81a521248e8f7cdad40078742a14e97db5f9cc8b..0000000000000000000000000000000000000000 --- a/spaces/awacke1/03-AW-ChatbotBlenderbot/app.py +++ /dev/null @@ -1,137 +0,0 @@ -from transformers import BlenderbotTokenizer, BlenderbotForConditionalGeneration -import torch -import gradio as gr - - -# PersistDataset ----- -import os -import csv -import gradio as gr -from gradio import inputs, outputs -import huggingface_hub -from huggingface_hub import Repository, hf_hub_download, upload_file -from datetime import datetime -DATASET_REPO_URL = "https://huggingface.co/datasets/awacke1/Carddata.csv" -DATASET_REPO_ID = "awacke1/Carddata.csv" -DATA_FILENAME = "Carddata.csv" -DATA_FILE = os.path.join("data", DATA_FILENAME) -HF_TOKEN = os.environ.get("HF_TOKEN") - -SCRIPT = """ - -""" - -try: - hf_hub_download( - repo_id=DATASET_REPO_ID, - filename=DATA_FILENAME, - cache_dir=DATA_DIRNAME, - force_filename=DATA_FILENAME - ) -except: - print("file not found") -repo = Repository( - local_dir="data", clone_from=DATASET_REPO_URL, use_auth_token=HF_TOKEN -) - -def generate_html() -> str: - with open(DATA_FILE) as csvfile: - reader = csv.DictReader(csvfile) - rows = [] - for row in reader: - rows.append(row) - rows.reverse() - if len(rows) == 0: - return "no messages yet" - else: - html = "
    " - for row in rows: - html += "
    " - html += f"{row['inputs']}" - html += f"{row['outputs']}" - html += "
    " - html += "
    " - return html - -def store_message(name: str, message: str): - if name and message: - with open(DATA_FILE, "a") as csvfile: - writer = csv.DictWriter(csvfile, fieldnames=["name", "message", "time"]) - writer.writerow( - {"name": name.strip(), "message": message.strip(), "time": str(datetime.now())} - ) - commit_url = repo.push_to_hub() - return "" - -iface = gr.Interface( - store_message, - [ - inputs.Textbox(placeholder="Your name"), - inputs.Textbox(placeholder="Your message", lines=2), - ], - "html", - css=""" - .message {background-color:cornflowerblue;color:white; padding:4px;margin:4px;border-radius:4px; } - """, - title="Reading/writing to a HuggingFace dataset repo from Spaces", - description=f"This is a demo of how to do simple *shared data persistence* in a Gradio Space, backed by a dataset repo.", - article=f"The dataset repo is [{DATASET_REPO_URL}]({DATASET_REPO_URL})", -) - - -mname = "facebook/blenderbot-400M-distill" -model = BlenderbotForConditionalGeneration.from_pretrained(mname) -tokenizer = BlenderbotTokenizer.from_pretrained(mname) - -def take_last_tokens(inputs, note_history, history): - """Filter the last 128 tokens""" - if inputs['input_ids'].shape[1] > 128: - inputs['input_ids'] = torch.tensor([inputs['input_ids'][0][-128:].tolist()]) - inputs['attention_mask'] = torch.tensor([inputs['attention_mask'][0][-128:].tolist()]) - note_history = [' '.join(note_history[0].split(' ')[2:])] - history = history[1:] - return inputs, note_history, history - -def add_note_to_history(note, note_history): - """Add a note to the historical information""" - note_history.append(note) - note_history = ' '.join(note_history) - return [note_history] - -title = "Chatbot State of the Art now with Memory Saved to Dataset" -description = """Chatbot With Memory""" - -def chat(message, history): - history = history or [] - if history: - history_useful = [' '.join([str(a[0])+' '+str(a[1]) for a in history])] - else: - history_useful = [] - history_useful = add_note_to_history(message, history_useful) - inputs = tokenizer(history_useful, return_tensors="pt") - inputs, history_useful, history = take_last_tokens(inputs, history_useful, history) - reply_ids = model.generate(**inputs) - response = tokenizer.batch_decode(reply_ids, skip_special_tokens=True)[0] - history_useful = add_note_to_history(response, history_useful) - list_history = history_useful[0].split(' ') - history.append((list_history[-2], list_history[-1])) - store_message(message, response) # Save to dataset - return history, history - -gr.Interface( - fn=chat, - theme="huggingface", - css=".footer {display:none !important}", - inputs=["text", "state"], - outputs=["chatbot", "state"], - title=title, - allow_flagging="never", - description=f"Gradio chatbot backed by memory in a dataset repository.", - article=f"The dataset repo is [{DATASET_REPO_URL}]({DATASET_REPO_URL})" - ).launch() \ No newline at end of file diff --git a/spaces/awacke1/ChatGPT-Genius-Assistant-4Writers/app.py b/spaces/awacke1/ChatGPT-Genius-Assistant-4Writers/app.py deleted file mode 100644 index 6e86abff95351769056a696503ff05e34c7117c9..0000000000000000000000000000000000000000 --- a/spaces/awacke1/ChatGPT-Genius-Assistant-4Writers/app.py +++ /dev/null @@ -1,442 +0,0 @@ -import streamlit as st -import openai -import os -import base64 -import glob -import json -import mistune -import pytz -import math -import requests -import time -import re -import textract - -from datetime import datetime -from openai import ChatCompletion -from xml.etree import 
ElementTree as ET -from bs4 import BeautifulSoup -from collections import deque -from audio_recorder_streamlit import audio_recorder - -from dotenv import load_dotenv -from PyPDF2 import PdfReader -from langchain.text_splitter import CharacterTextSplitter -from langchain.embeddings import OpenAIEmbeddings -from langchain.vectorstores import FAISS -from langchain.chat_models import ChatOpenAI -from langchain.memory import ConversationBufferMemory -from langchain.chains import ConversationalRetrievalChain -from templates import css, bot_template, user_template - - - -def generate_filename(prompt, file_type): - central = pytz.timezone('US/Central') - safe_date_time = datetime.now(central).strftime("%m%d_%H%M") # Date and time DD-HHMM - safe_prompt = "".join(x for x in prompt if x.isalnum())[:90] # Limit file name size and trim whitespace - return f"{safe_date_time}_{safe_prompt}.{file_type}" # Return a safe file name - - -def transcribe_audio(openai_key, file_path, model): - OPENAI_API_URL = "https://api.openai.com/v1/audio/transcriptions" - headers = { - "Authorization": f"Bearer {openai_key}", - } - with open(file_path, 'rb') as f: - data = {'file': f} - response = requests.post(OPENAI_API_URL, headers=headers, files=data, data={'model': model}) - if response.status_code == 200: - st.write(response.json()) - chatResponse = chat_with_model(response.json().get('text'), '') # ************************************* - transcript = response.json().get('text') - #st.write('Responses:') - #st.write(chatResponse) - filename = generate_filename(transcript, 'txt') - create_file(filename, transcript, chatResponse) - return transcript - else: - st.write(response.json()) - st.error("Error in API call.") - return None - -def save_and_play_audio(audio_recorder): - audio_bytes = audio_recorder() - if audio_bytes: - filename = generate_filename("Recording", "wav") - with open(filename, 'wb') as f: - f.write(audio_bytes) - st.audio(audio_bytes, format="audio/wav") - return filename - return None - -def create_file(filename, prompt, response): - if filename.endswith(".txt"): - with open(filename, 'w') as file: - file.write(f"{prompt}\n{response}") - elif filename.endswith(".htm"): - with open(filename, 'w') as file: - file.write(f"{prompt} {response}") - elif filename.endswith(".md"): - with open(filename, 'w') as file: - file.write(f"{prompt}\n\n{response}") - -def truncate_document(document, length): - return document[:length] -def divide_document(document, max_length): - return [document[i:i+max_length] for i in range(0, len(document), max_length)] - -def get_table_download_link(file_path): - with open(file_path, 'r') as file: - try: - data = file.read() - except: - st.write('') - return file_path - b64 = base64.b64encode(data.encode()).decode() - file_name = os.path.basename(file_path) - ext = os.path.splitext(file_name)[1] # get the file extension - if ext == '.txt': - mime_type = 'text/plain' - elif ext == '.py': - mime_type = 'text/plain' - elif ext == '.xlsx': - mime_type = 'text/plain' - elif ext == '.csv': - mime_type = 'text/plain' - elif ext == '.htm': - mime_type = 'text/html' - elif ext == '.md': - mime_type = 'text/markdown' - else: - mime_type = 'application/octet-stream' # general binary data type - href = f'{file_name}' - return href - -def CompressXML(xml_text): - root = ET.fromstring(xml_text) - for elem in list(root.iter()): - if isinstance(elem.tag, str) and 'Comment' in elem.tag: - elem.parent.remove(elem) - return ET.tostring(root, encoding='unicode', method="xml") - -def 
read_file_content(file,max_length): - if file.type == "application/json": - content = json.load(file) - return str(content) - elif file.type == "text/html" or file.type == "text/htm": - content = BeautifulSoup(file, "html.parser") - return content.text - elif file.type == "application/xml" or file.type == "text/xml": - tree = ET.parse(file) - root = tree.getroot() - xml = CompressXML(ET.tostring(root, encoding='unicode')) - return xml - elif file.type == "text/markdown" or file.type == "text/md": - md = mistune.create_markdown() - content = md(file.read().decode()) - return content - elif file.type == "text/plain": - return file.getvalue().decode() - else: - return "" - -def chat_with_model(prompt, document_section, model_choice='gpt-3.5-turbo'): - model = model_choice - conversation = [{'role': 'system', 'content': 'You are a helpful assistant.'}] - conversation.append({'role': 'user', 'content': prompt}) - if len(document_section)>0: - conversation.append({'role': 'assistant', 'content': document_section}) - - start_time = time.time() - report = [] - res_box = st.empty() - collected_chunks = [] - collected_messages = [] - - for chunk in openai.ChatCompletion.create( - model='gpt-3.5-turbo', - messages=conversation, - temperature=0.5, - stream=True - ): - - collected_chunks.append(chunk) # save the event response - chunk_message = chunk['choices'][0]['delta'] # extract the message - collected_messages.append(chunk_message) # save the message - - content=chunk["choices"][0].get("delta",{}).get("content") - - try: - report.append(content) - if len(content) > 0: - result = "".join(report).strip() - #result = result.replace("\n", "") - res_box.markdown(f'*{result}*') - except: - st.write(' ') - - full_reply_content = ''.join([m.get('content', '') for m in collected_messages]) - st.write("Elapsed time:") - st.write(time.time() - start_time) - return full_reply_content - -def chat_with_file_contents(prompt, file_content, model_choice='gpt-3.5-turbo'): - conversation = [{'role': 'system', 'content': 'You are a helpful assistant.'}] - conversation.append({'role': 'user', 'content': prompt}) - if len(file_content)>0: - conversation.append({'role': 'assistant', 'content': file_content}) - response = openai.ChatCompletion.create(model=model_choice, messages=conversation) - return response['choices'][0]['message']['content'] - -def extract_mime_type(file): - # Check if the input is a string - if isinstance(file, str): - pattern = r"type='(.*?)'" - match = re.search(pattern, file) - if match: - return match.group(1) - else: - raise ValueError(f"Unable to extract MIME type from {file}") - # If it's not a string, assume it's a streamlit.UploadedFile object - elif isinstance(file, streamlit.UploadedFile): - return file.type - else: - raise TypeError("Input should be a string or a streamlit.UploadedFile object") - -from io import BytesIO -import re - -def extract_file_extension(file): - # get the file name directly from the UploadedFile object - file_name = file.name - pattern = r".*?\.(.*?)$" - match = re.search(pattern, file_name) - if match: - return match.group(1) - else: - raise ValueError(f"Unable to extract file extension from {file_name}") - -def pdf2txt(docs): - text = "" - for file in docs: - file_extension = extract_file_extension(file) - # print the file extension - st.write(f"File type extension: {file_extension}") - - # read the file according to its extension - try: - if file_extension.lower() in ['py', 'txt', 'html', 'htm', 'xml', 'json']: - text += file.getvalue().decode('utf-8') - elif 
file_extension.lower() == 'pdf': - from PyPDF2 import PdfReader - pdf = PdfReader(BytesIO(file.getvalue())) - for page in range(len(pdf.pages)): - text += pdf.pages[page].extract_text() # new PyPDF2 syntax - except Exception as e: - st.write(f"Error processing file {file.name}: {e}") - - return text - -def pdf2txt_old(pdf_docs): - st.write(pdf_docs) - for file in pdf_docs: - mime_type = extract_mime_type(file) - st.write(f"MIME type of file: {mime_type}") - - text = "" - for pdf in pdf_docs: - pdf_reader = PdfReader(pdf) - for page in pdf_reader.pages: - text += page.extract_text() - return text - -def txt2chunks(text): - text_splitter = CharacterTextSplitter(separator="\n", chunk_size=1000, chunk_overlap=200, length_function=len) - return text_splitter.split_text(text) - -def vector_store(text_chunks): - key = os.getenv('OPENAI_API_KEY') - embeddings = OpenAIEmbeddings(openai_api_key=key) - return FAISS.from_texts(texts=text_chunks, embedding=embeddings) - -def get_chain(vectorstore): - llm = ChatOpenAI() - memory = ConversationBufferMemory(memory_key='chat_history', return_messages=True) - return ConversationalRetrievalChain.from_llm(llm=llm, retriever=vectorstore.as_retriever(), memory=memory) - -def process_user_input(user_question): - response = st.session_state.conversation({'question': user_question}) - st.session_state.chat_history = response['chat_history'] - for i, message in enumerate(st.session_state.chat_history): - template = user_template if i % 2 == 0 else bot_template - st.write(template.replace("{{MSG}}", message.content), unsafe_allow_html=True) - # Save file output from PDF query results - filename = generate_filename(user_question, 'txt') - create_file(filename, user_question, message.content) - - #st.sidebar.markdown(get_table_download_link(filename), unsafe_allow_html=True) - -def divide_prompt(prompt, max_length): - words = prompt.split() - chunks = [] - current_chunk = [] - current_length = 0 - for word in words: - if len(word) + current_length <= max_length: - current_length += len(word) + 1 # Adding 1 to account for spaces - current_chunk.append(word) - else: - chunks.append(' '.join(current_chunk)) - current_chunk = [word] - current_length = len(word) - chunks.append(' '.join(current_chunk)) # Append the final chunk - return chunks - -def main(): - # Sidebar and global - openai.api_key = os.getenv('OPENAI_API_KEY') - st.set_page_config(page_title="GPT Streamlit Document Reasoner",layout="wide") - - # File type for output, model choice - menu = ["txt", "htm", "xlsx", "csv", "md", "py"] #619 - choice = st.sidebar.selectbox("Output File Type:", menu) - model_choice = st.sidebar.radio("Select Model:", ('gpt-3.5-turbo', 'gpt-3.5-turbo-0301')) - - # Audio, transcribe, GPT: - filename = save_and_play_audio(audio_recorder) - if filename is not None: - transcription = transcribe_audio(openai.api_key, filename, "whisper-1") - st.sidebar.markdown(get_table_download_link(filename), unsafe_allow_html=True) - filename=None # since transcription is finished next time just use the saved transcript - - # prompt interfaces - user_prompt = st.text_area("Enter prompts, instructions & questions:", '', height=100) - - # file section interface for prompts against large documents as context - collength, colupload = st.columns([2,3]) # adjust the ratio as needed - with collength: - max_length = st.slider("File section length for large files", min_value=1000, max_value=128000, value=12000, step=1000) - with colupload: - uploaded_file = st.file_uploader("Add a file for context:", 
type=["pdf", "xml", "json", "xlsx","csv","html", "htm", "md", "txt"]) - - # Document section chat - document_sections = deque() - document_responses = {} - if uploaded_file is not None: - file_content = read_file_content(uploaded_file, max_length) - document_sections.extend(divide_document(file_content, max_length)) - if len(document_sections) > 0: - if st.button("👁️ View Upload"): - st.markdown("**Sections of the uploaded file:**") - for i, section in enumerate(list(document_sections)): - st.markdown(f"**Section {i+1}**\n{section}") - st.markdown("**Chat with the model:**") - for i, section in enumerate(list(document_sections)): - if i in document_responses: - st.markdown(f"**Section {i+1}**\n{document_responses[i]}") - else: - if st.button(f"Chat about Section {i+1}"): - st.write('Reasoning with your inputs...') - response = chat_with_model(user_prompt, section, model_choice) # ************************************* - st.write('Response:') - st.write(response) - document_responses[i] = response - filename = generate_filename(f"{user_prompt}_section_{i+1}", choice) - create_file(filename, user_prompt, response) - st.sidebar.markdown(get_table_download_link(filename), unsafe_allow_html=True) - - if st.button('💬 Chat'): - st.write('Reasoning with your inputs...') - - #response = chat_with_model(user_prompt, ''.join(list(document_sections,)), model_choice) # ************************************* - - # Divide the user_prompt into smaller sections - user_prompt_sections = divide_prompt(user_prompt, max_length) - full_response = '' - for prompt_section in user_prompt_sections: - # Process each section with the model - response = chat_with_model(prompt_section, ''.join(list(document_sections)), model_choice) - full_response += response + '\n' # Combine the responses - - #st.write('Response:') - #st.write(full_response) - - response = full_response - st.write('Response:') - st.write(response) - - filename = generate_filename(user_prompt, choice) - create_file(filename, user_prompt, response) - st.sidebar.markdown(get_table_download_link(filename), unsafe_allow_html=True) - - all_files = glob.glob("*.*") - all_files = [file for file in all_files if len(os.path.splitext(file)[0]) >= 20] # exclude files with short names - all_files.sort(key=lambda x: (os.path.splitext(x)[1], x), reverse=True) # sort by file type and file name in descending order - - # sidebar of files - file_contents='' - next_action='' - for file in all_files: - col1, col2, col3, col4, col5 = st.sidebar.columns([1,6,1,1,1]) # adjust the ratio as needed - with col1: - if st.button("🌐", key="md_"+file): # md emoji button - with open(file, 'r') as f: - file_contents = f.read() - next_action='md' - with col2: - st.markdown(get_table_download_link(file), unsafe_allow_html=True) - with col3: - if st.button("📂", key="open_"+file): # open emoji button - with open(file, 'r') as f: - file_contents = f.read() - next_action='open' - with col4: - if st.button("🔍", key="read_"+file): # search emoji button - with open(file, 'r') as f: - file_contents = f.read() - next_action='search' - with col5: - if st.button("🗑", key="delete_"+file): - os.remove(file) - st.experimental_rerun() - - if len(file_contents) > 0: - if next_action=='open': - file_content_area = st.text_area("File Contents:", file_contents, height=500) - if next_action=='md': - st.markdown(file_contents) - if next_action=='search': - file_content_area = st.text_area("File Contents:", file_contents, height=500) - st.write('Reasoning with your inputs...') - response = 
chat_with_model(user_prompt, file_contents, model_choice) - filename = generate_filename(file_contents, choice) - create_file(filename, file_contents, response) - - st.experimental_rerun() - #st.sidebar.markdown(get_table_download_link(filename), unsafe_allow_html=True) - -if __name__ == "__main__": - main() - -load_dotenv() -st.write(css, unsafe_allow_html=True) - -st.header("Chat with documents :books:") -user_question = st.text_input("Ask a question about your documents:") -if user_question: - process_user_input(user_question) - -with st.sidebar: - st.subheader("Your documents") - docs = st.file_uploader("import documents", accept_multiple_files=True) - with st.spinner("Processing"): - raw = pdf2txt(docs) - if len(raw) > 0: - length = str(len(raw)) - text_chunks = txt2chunks(raw) - vectorstore = vector_store(text_chunks) - st.session_state.conversation = get_chain(vectorstore) - st.markdown('# AI Search Index of Length:' + length + ' Created.') # add timing - filename = generate_filename(raw, 'txt') - create_file(filename, raw, '') \ No newline at end of file diff --git a/spaces/awacke1/Image-to-Text-Salesforce-blip-image-captioning-base/app.py b/spaces/awacke1/Image-to-Text-Salesforce-blip-image-captioning-base/app.py deleted file mode 100644 index 73f3a256fe9e3bc1899b0d5b5e38da4d58b6648b..0000000000000000000000000000000000000000 --- a/spaces/awacke1/Image-to-Text-Salesforce-blip-image-captioning-base/app.py +++ /dev/null @@ -1,3 +0,0 @@ -import gradio as gr - -gr.Interface.load("models/Salesforce/blip-image-captioning-base").launch() \ No newline at end of file diff --git a/spaces/awacke1/YouTubeTranscript2Insights/README.md b/spaces/awacke1/YouTubeTranscript2Insights/README.md deleted file mode 100644 index 38ac7264b9783365083f18e64fde0bb8bc2567de..0000000000000000000000000000000000000000 --- a/spaces/awacke1/YouTubeTranscript2Insights/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: YouTubeTranscript2Insights -emoji: 📺📜🔍 -colorFrom: yellow -colorTo: pink -sdk: streamlit -sdk_version: 1.26.0 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/baguioni/Voice-Activity-Detection/app.py b/spaces/baguioni/Voice-Activity-Detection/app.py deleted file mode 100644 index 5504ef57761e5cf5e4dec587ceeab2ce39be1456..0000000000000000000000000000000000000000 --- a/spaces/baguioni/Voice-Activity-Detection/app.py +++ /dev/null @@ -1,43 +0,0 @@ -import gradio as gr -import ast - -model = gr.Interface.load("huggingface/pyannote/voice-activity-detection") - -def format_inference(output): - if output: - timestamps = [] - for out in output: - timestamps.append(f"Start: {out['start']}s; Stop: {out['stop']}s") - return "\n".join(timestamps) - else: - return "No voice activity detected." - -def inference(audio_file): - output = model(audio_file) - output_list = ast.literal_eval(output) - return format_inference(output_list) - -inputs = gr.inputs.Audio(label="Input Audio", type="filepath", source="upload") -outputs = gr.outputs.Textbox(label="Voice timestamps", type="auto") -title = "Voice Activity Detection" -description = "

    Upload an audio file and detected voices will be timestamped.

    " -article = "

    Model by pyannote, https://github.com/pyannote/pyannote-audio

    " -examples = [["talk.wav"], - ["talk2.wav"], - ["silence.wav"],] - -gr.Interface(inference, - inputs, - outputs, - title=title, - description=description, - article=article, - examples=examples, - theme="grass", - allow_flagging=False, - ).launch(debug=True) - - - - - diff --git a/spaces/banana-dev/demo-mistral-7b-instruct-v0.1/app.py b/spaces/banana-dev/demo-mistral-7b-instruct-v0.1/app.py deleted file mode 100644 index 41485fabc5ffd4c8ec8b2469484bab98aa4634d8..0000000000000000000000000000000000000000 --- a/spaces/banana-dev/demo-mistral-7b-instruct-v0.1/app.py +++ /dev/null @@ -1,59 +0,0 @@ -import banana_dev as client -import os -from dotenv import load_dotenv -import gradio as gr - -load_dotenv() - -BANANA_API_KEY = os.getenv("BANANA_API_KEY") -BANANA_MODEL_KEY = os.getenv("BANANA_MODEL_KEY") -BANANA_URL = os.getenv("BANANA_URL") - -def send_request(message): - - my_model = client.Client( - api_key=BANANA_API_KEY, - model_key=BANANA_MODEL_KEY, - url=BANANA_URL, - ) - inputs = { - "prompt": message, - "max_new_tokens": 256 - } - result, meta = my_model.call("/", inputs) - return result["outputs"] - -def get_response(output): - # Remove the trailing "
    " - output = output.rstrip("") - # Split the string into a list of sentences - sentences = output.split("[/INST]") - # The last message is the last element in the list - last_message = sentences[-1].strip() - return last_message - -def random_response(message, history): - message_history_string = "" - if not history: - message = "[INST] " + message + " [/INST]" - output = send_request(message) - output_final = get_response(output) - return output_final - else: - for i in history: - if history.index(i) == 0: - message_history_string += "[INST] " + i[0] + " [/INST] " + i[1] + "" - else: - message_history_string += " [INST] " + i[0] + " [/INST]" + i[1] + "" - message_history_string += " [INST] " + message + " [/INST]" - output = send_request(message_history_string) - output_final = get_response(output) - return output_final - -with gr.Blocks() as demo: - title_with_logo = gr.Markdown("# Powered by Banana ") - #!removed slider because of styling issues - #slider = gr.components.Slider(minimum=100, maximum=1000, value=256, label="Max New Tokens") - chat = gr.ChatInterface(random_response) - -demo.launch() \ No newline at end of file diff --git a/spaces/banana-projects/web3d/node_modules/three/examples/js/nodes/misc/TextureCubeNode.js b/spaces/banana-projects/web3d/node_modules/three/examples/js/nodes/misc/TextureCubeNode.js deleted file mode 100644 index 0d900201688f72f189efd57ec16f9e7289675c43..0000000000000000000000000000000000000000 --- a/spaces/banana-projects/web3d/node_modules/three/examples/js/nodes/misc/TextureCubeNode.js +++ /dev/null @@ -1,64 +0,0 @@ -/** - * @author sunag / http://www.sunag.com.br/ - */ - -import { TempNode } from '../core/TempNode.js'; -import { TextureCubeUVNode } from './TextureCubeUVNode.js'; - -function TextureCubeNode( value, uv ) { - - TempNode.call( this, 'v4' ); - - this.value = value; - this.uv = uv || new TextureCubeUVNode(); - -} - -TextureCubeNode.prototype = Object.create( TempNode.prototype ); -TextureCubeNode.prototype.constructor = TextureCubeNode; -TextureCubeNode.prototype.nodeType = "TextureCube"; - -TextureCubeNode.prototype.generate = function ( builder, output ) { - - if ( builder.isShader( 'fragment' ) ) { - - var uv_10 = this.uv.build( builder ) + '.uv_10', - uv_20 = this.uv.build( builder ) + '.uv_20', - t = this.uv.build( builder ) + '.t'; - - var color10 = builder.getTexelDecodingFunctionFromTexture( 'texture2D( ' + this.value.build( builder, 'sampler2D' ) + ', ' + uv_10 + ' )', this.value.value ), - color20 = builder.getTexelDecodingFunctionFromTexture( 'texture2D( ' + this.value.build( builder, 'sampler2D' ) + ', ' + uv_20 + ' )', this.value.value ); - - return builder.format( 'vec4( mix( ' + color10 + ', ' + color20 + ', ' + t + ' ).rgb, 1.0 )', this.getType( builder ), output ); - - } else { - - console.warn( "THREE.TextureCubeNode is not compatible with " + builder.shader + " shader." ); - - return builder.format( 'vec4( 0.0 )', this.getType( builder ), output ); - - } - -}; - -TextureCubeNode.prototype.toJSON = function ( meta ) { - - var data = this.getJSONNode( meta ); - - if ( ! 
data ) { - - data = this.createJSONNode( meta ); - - data.uv = this.uv.toJSON( meta ).uuid; - data.textureSize = this.textureSize.toJSON( meta ).uuid; - data.blinnExponentToRoughness = this.blinnExponentToRoughness.toJSON( meta ).uuid; - - if ( this.roughness ) data.roughness = this.roughness.toJSON( meta ).uuid; - - } - - return data; - -}; - -export { TextureCubeNode }; diff --git a/spaces/banana-projects/web3d/node_modules/three/src/lights/LightShadow.d.ts b/spaces/banana-projects/web3d/node_modules/three/src/lights/LightShadow.d.ts deleted file mode 100644 index d588b0b7873c6529f9d763ac58059fbbcb348281..0000000000000000000000000000000000000000 --- a/spaces/banana-projects/web3d/node_modules/three/src/lights/LightShadow.d.ts +++ /dev/null @@ -1,19 +0,0 @@ -import { Camera } from './../cameras/Camera'; -import { Vector2 } from './../math/Vector2'; -import { Matrix4 } from './../math/Matrix4'; -import { RenderTarget } from '../renderers/webgl/WebGLRenderLists'; - -export class LightShadow { - constructor(camera: Camera); - - camera: Camera; - bias: number; - radius: number; - mapSize: Vector2; - map: RenderTarget; - matrix: Matrix4; - - copy(source: LightShadow): this; - clone(recursive?: boolean): this; - toJSON(): any; -} diff --git a/spaces/banana-projects/web3d/node_modules/three/src/objects/LineLoop.d.ts b/spaces/banana-projects/web3d/node_modules/three/src/objects/LineLoop.d.ts deleted file mode 100644 index e6a5afa8a692bafce2210fe433de92356e88e185..0000000000000000000000000000000000000000 --- a/spaces/banana-projects/web3d/node_modules/three/src/objects/LineLoop.d.ts +++ /dev/null @@ -1,14 +0,0 @@ -import { Line } from './Line'; -import { Geometry } from './../core/Geometry'; -import { Material } from './../materials/Material'; -import { BufferGeometry } from '../core/BufferGeometry'; - -export class LineLoop extends Line { - constructor( - geometry?: Geometry | BufferGeometry, - material?: Material | Material[] - ); - - type: 'LineLoop'; - isLineLoop: true; -} diff --git a/spaces/banana-projects/web3d/node_modules/three/src/renderers/shaders/ShaderLib/normal_frag.glsl.js b/spaces/banana-projects/web3d/node_modules/three/src/renderers/shaders/ShaderLib/normal_frag.glsl.js deleted file mode 100644 index 9c57a54a9e465d212e0cb0440a261201fba8e394..0000000000000000000000000000000000000000 --- a/spaces/banana-projects/web3d/node_modules/three/src/renderers/shaders/ShaderLib/normal_frag.glsl.js +++ /dev/null @@ -1,40 +0,0 @@ -export default /* glsl */` -#define NORMAL - -uniform float opacity; - -#if defined( FLAT_SHADED ) || defined( USE_BUMPMAP ) || ( defined( USE_NORMALMAP ) && ! 
defined( OBJECTSPACE_NORMALMAP ) ) - - varying vec3 vViewPosition; - -#endif - -#ifndef FLAT_SHADED - - varying vec3 vNormal; - - #ifdef USE_TANGENT - - varying vec3 vTangent; - varying vec3 vBitangent; - - #endif - -#endif - -#include -#include -#include -#include -#include - -void main() { - - #include - #include - #include - - gl_FragColor = vec4( packNormalToRGB( normal ), opacity ); - -} -`; diff --git a/spaces/banana-projects/web3d/node_modules/three/src/textures/CompressedTexture.js b/spaces/banana-projects/web3d/node_modules/three/src/textures/CompressedTexture.js deleted file mode 100644 index 9e800e59746f68833e9d4791f93387081e1a143b..0000000000000000000000000000000000000000 --- a/spaces/banana-projects/web3d/node_modules/three/src/textures/CompressedTexture.js +++ /dev/null @@ -1,32 +0,0 @@ -/** - * @author alteredq / http://alteredqualia.com/ - */ - -import { Texture } from './Texture.js'; - -function CompressedTexture( mipmaps, width, height, format, type, mapping, wrapS, wrapT, magFilter, minFilter, anisotropy, encoding ) { - - Texture.call( this, null, mapping, wrapS, wrapT, magFilter, minFilter, format, type, anisotropy, encoding ); - - this.image = { width: width, height: height }; - this.mipmaps = mipmaps; - - // no flipping for cube textures - // (also flipping doesn't work for compressed textures ) - - this.flipY = false; - - // can't generate mipmaps for compressed textures - // mips must be embedded in DDS files - - this.generateMipmaps = false; - -} - -CompressedTexture.prototype = Object.create( Texture.prototype ); -CompressedTexture.prototype.constructor = CompressedTexture; - -CompressedTexture.prototype.isCompressedTexture = true; - - -export { CompressedTexture }; diff --git a/spaces/bigPear/digitalWDF/examples/train_rm.sh b/spaces/bigPear/digitalWDF/examples/train_rm.sh deleted file mode 100644 index 39fc14bb08018513988aea205f6ad7354aa273c4..0000000000000000000000000000000000000000 --- a/spaces/bigPear/digitalWDF/examples/train_rm.sh +++ /dev/null @@ -1,17 +0,0 @@ -#!/bin/bash - -CUDA_VISIBLE_DEVICES=0 python ../src/train_rm.py \ - --do_train \ - --dataset comparison_gpt4_zh \ - --dataset_dir ../data \ - --finetuning_type lora \ - --output_dir path_to_rm_checkpoint \ - --overwrite_cache \ - --per_device_train_batch_size 4 \ - --gradient_accumulation_steps 4 \ - --lr_scheduler_type cosine \ - --logging_steps 10 \ - --save_steps 1000 \ - --learning_rate 1e-5 \ - --num_train_epochs 1.0 \ - --fp16 diff --git a/spaces/bigscience/ethical-charter/README.md b/spaces/bigscience/ethical-charter/README.md deleted file mode 100644 index 9c2c70b3be8b9b3f6f8916ad2a87ffde7a37ac5b..0000000000000000000000000000000000000000 --- a/spaces/bigscience/ethical-charter/README.md +++ /dev/null @@ -1,10 +0,0 @@ ---- -title: Ethical Charter -emoji: ✒️ -colorFrom: blue -colorTo: green -sdk: static -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/bioriAsaeru/text-to-voice/Adobe Photoshop CS6 13.0.1 Final Multilanguage Keygen.md b/spaces/bioriAsaeru/text-to-voice/Adobe Photoshop CS6 13.0.1 Final Multilanguage Keygen.md deleted file mode 100644 index 70bb4a66ce71c468a500439e13bf8a8ab6d96586..0000000000000000000000000000000000000000 --- a/spaces/bioriAsaeru/text-to-voice/Adobe Photoshop CS6 13.0.1 Final Multilanguage Keygen.md +++ /dev/null @@ -1,22 +0,0 @@ - -

    How to Download and Install Adobe Photoshop CS6 13.0.1 Final Multilanguage with Keygen

    -

    Adobe Photoshop CS6 is one of the most popular and powerful photo editing software in the world. It offers a wide range of features and tools to create stunning images, graphics, and designs. However, it is also quite expensive and requires a license key to activate.

    -

    If you want to use Adobe Photoshop CS6 for free, you can download and install the final multilanguage version with a keygen. A keygen is a program that generates a valid license key for a software. In this article, we will show you how to do that step by step.

    -

    Adobe Photoshop CS6 13.0.1 Final Multilanguage keygen


    Download Zip ✸✸✸ https://urloso.com/2uyOD4



    -

    Step 1: Download Adobe Photoshop CS6 13.0.1 Final Multilanguage

    -

    The first thing you need to do is to download the Adobe Photoshop CS6 13.0.1 final multilanguage setup file from a reliable source. You can use the link below to download it:

    -https://example.com/download/Adobe_Photoshop_CS6_13_0_1_Final_Multilanguage.rar -

    The file size is about 1.8 GB, so it may take some time depending on your internet speed. Once the download is complete, you need to extract the rar file using a program like WinRAR or 7-Zip.

    -

    Step 2: Install Adobe Photoshop CS6 13.0.1 Final Multilanguage

    -

    After extracting the rar file, you will see a folder named "Adobe Photoshop CS6 13.0.1 Final Multilanguage". Open it and double-click on the "Set-up.exe" file to start the installation process.

    -

    Follow the instructions on the screen and choose your preferred language and destination folder. When you reach the screen that asks for a serial number, click on "Try". Do not enter any serial number or close the window.

    -

    Step 3: Run Adobe Photoshop CS6 13.0.1 Final Multilanguage Keygen

    -

    Now, go back to the folder where you extracted the rar file and open the subfolder named "Keygen". You will see a file named "Adobe.Photoshop.CS6.v13.0.Keymaker-CORE.exe". Right-click on it and choose "Run as administrator".

    -

    A window will pop up with a list of Adobe products. Select "Adobe Photoshop CS6" from the drop-down menu and click on "Generate". A serial number will be generated for you. Copy it and paste it into the installation window where it asks for a serial number.

    -

    -

    Click on "Next" and complete the installation process. You have successfully installed Adobe Photoshop CS6 13.0.1 final multilanguage with a keygen.

    -

    Step 4: Enjoy Adobe Photoshop CS6 13.0.1 Final Multilanguage

    -

    You can now launch Adobe Photoshop CS6 from your desktop or start menu and enjoy its full features and functions. You can create and edit amazing photos, graphics, and designs with this powerful software.

    -

    Note: This method is for educational purposes only. We do not support or encourage piracy or illegal use of software. If you like Adobe Photoshop CS6, please buy it from the official website and support the developers.

    d5da3c52bf
    -
    -
\ No newline at end of file diff --git a/spaces/bioriAsaeru/text-to-voice/Codici-Di-Attivazione-Per-Chefmate-International24.md b/spaces/bioriAsaeru/text-to-voice/Codici-Di-Attivazione-Per-Chefmate-International24.md deleted file mode 100644 index 5b05d94e57baba5a5fd23be11d7ea946e3e46b86..0000000000000000000000000000000000000000 --- a/spaces/bioriAsaeru/text-to-voice/Codici-Di-Attivazione-Per-Chefmate-International24.md +++ /dev/null @@ -1,108 +0,0 @@ -## activation codes for chefmate international.24 - -**Download File >> [https://venemena.blogspot.com/?download=2txSWq](https://venemena.blogspot.com/?download=2txSWq)** - -# Activation codes for Chefmate International.24: how to get them and how to use them - -Chefmate International.24 is a management and organization software package for the restaurant business that lets you keep track of stock, menus, suppliers, reservations, and much more. To use the software, however, you need an activation code that confirms your license. - -If you bought the software online or from an authorized store, you should have received the activation code by email or printed on the box. If instead you downloaded the software from an unofficial source, you may not have an activation code, or you may have an invalid one. - -In that case, you can try requesting a free activation code through the official Chefmate International.24 website. To do so, follow these steps: - -1. Go to [www.chefmate-international.com](https://www.chefmate-international.com) and click "Request activation code". -2. Enter your personal details and your email address. -3. Select the type of license you want: base, premium, or professional. -4. Accept the software's terms and conditions of use. -5. Click "Send request". - -After submitting the request, you should receive your activation code by email within 24 hours. If you do not receive the code, check your spam folder or contact Chefmate International.24 customer service. - -Once you have the activation code, you can enter it in the software to unlock all of its features. To do so, follow these steps: - -1. Open the Chefmate International.24 software on your computer. -2. Click "Activate license". -3. Enter your activation code in the appropriate field. -4. Click "Confirm". - -You can now enjoy Chefmate International.24 and manage your restaurant at its best. Remember that the activation code is personal and cannot be shared with other users. If the code is lost or stolen, you can request a new one through the official website. - -Chefmate International.24 is an innovative, easy-to-use software package that adapts to the needs of every kind of restaurant. Whether you run a small bistro or a large restaurant chain, Chefmate International.24 helps you streamline your work and improve your service. - -With Chefmate International.24 you can: - -- Manage stock and supplier orders simply and quickly. -- Create and edit menus based on seasonality, availability, and customer preferences. -- Track your restaurant's income and expenses and get a clear view of your financial situation. -- Manage reservations and table assignments efficiently and without errors. -- Communicate with your staff and your customers directly and professionally. -- Analyze your restaurant's data and statistics and receive suggestions for improving your performance. - -Chefmate International.24 is compatible with all major operating systems and mobile devices. You can access the software from anywhere, at any time, thanks to its secure and reliable cloud connection. You can also customize the software to fit your needs and preferences by choosing among the different license options available. - -Chefmate International.24 is the ideal software for anyone who wants to take their restaurant to the next level. Don't waste time: request your free activation code right away through the official website. You will discover all the advantages of Chefmate International.24 and won't be able to do without it. - - dfd1c89656 - - - diff --git a/spaces/bioriAsaeru/text-to-voice/Hidden Object For Mac The Best Games to Test Your Observation Skills.md b/spaces/bioriAsaeru/text-to-voice/Hidden Object For Mac The Best Games to Test Your Observation Skills.md deleted file mode 100644 index 8b28eaa981546080179fbe263bd61a81695682b8..0000000000000000000000000000000000000000 --- a/spaces/bioriAsaeru/text-to-voice/Hidden Object For Mac The Best Games to Test Your Observation Skills.md +++ /dev/null @@ -1,32 +0,0 @@ -

Choose to show or hide objects from the Selection Pane. To hide an object, click the eye icon in the Selection Pane indicating that the object is Showing. The icon will change to a simple icon indicating that the object is Hidden from view. To show the object once again, simply click the Hidden icon, and the object will reappear.

    -

    Hidden Object For Mac


    Download === https://urloso.com/2uyRum



    -


    -

    Released: August 2011 (1st chapter) to March 2021 (20th chapter).
Available on PC, Mac, tablet and phone. For more details: Grim Tales Games.

More Popular Hidden Object Series

If you love hidden object games, you might also like:

    -

    Group or ungroup objects listed in the Selection Pane. If you select multiple objects by using Command + Click, you can then group them or ungroup them by selecting Group Objects on the ribbon in the Format tab.

    -

    -

    Awakening is a casual hidden object puzzle adventure game series developed by Boomzap Entertainment and published by Big Fish Games. In order to progress through each game, the player must solve puzzles and find hidden objects. It is available on PC, Mac, iPhone and iPad platforms. The games are released in several languages with both Standard Editions and Collector's Editions, the latter including additional features.

    -

    As of August 2014, games from the Awakening series have been downloaded over 17 million times on PC, Mac and mobile devices.[6] It is considered "one of the oldest and most successful hidden object adventure franchises."[7]

    -

    Awakening: The Dreamless Castle was released as an exclusive game on Big Fish Games on February 14, 2010.[13] Awakening is an adventure/hidden object PC (also available for Mac) casual game set in a light fantasy setting. It debuted as the #3 game on the Big Fish Games Top 100 downloads list, quickly rising to #1 for all formats.[14] Awakening has also been released in German, Spanish, French, and Japanese - and has ranked at #1 in every language. According to a review by About.com, "Awakening: The Dreamless Castle is a superb example of a hidden object/adventure game."[15] Awakening: The Dreamless Castle was released on the iPad on December 16, 2010[16] and on the iPhone on December 18, 2010.[17]

    -

    Awakening: The Redleaf Forest: Collector's Edition is the sixth game in main Awakening series. It was released on Big Fish Games on May 31, 2014 for PC and Mac,[3] and May 6, 2015 for iPhone and iPad devices.[40] As the series finale, it received much feedback from players and is considered "the most beautiful, most challenging, most creative and most fabulous Awakening game."[7] It was also commended for being able to "reinvigorate the hidden object genre".[41] On its first week, it has reached the #1 spot in both PC and Mac charts and stayed there for two weeks.[42][43]

    -

You can combine several objects into a group so that they are treated as a single unit. You can then move or transform the objects without affecting their individual positions or attributes. For example, you might group the objects in a logo design so that you can move and scale the logo as one unit.

    -

    When you group objects on different layers, InDesign groups the objects on the topmost layer containing at least one object in the group. However, InDesign remembers the layers to which each object belongs. This implies that, by default, if you then ungroup the objects, all the objects are restored back to their original layers.

    -

As long as an object is locked, it cannot be moved. However, you can select locked objects if you turn off the Prevent Selection Of Locked Objects option in General preferences. When you select a locked object, you can change attributes such as color.

    -

You can duplicate an object each time you change its position, orientation, or proportions. For example, you can create a flower by drawing one petal, setting its reference point at the base of the petal, and repeatedly rotating at incremental angles, simultaneously duplicating to leave behind a new copy of the petal at each angle.

    -

    If you're looking for something to really challenge your sense of perception, you'll want to jump into one of the best hidden object games. These digital versions of Where's Waldo? offer up some awesomely complex brain teasers, testing your capacity to spy details in detailed environments.

    -

    As simple of a concept as the hidden object game is, the genre has enjoyed something of a renaissance in recent years, with some starting to introduce adventure mechanics or more robust storytelling. But, as always, there's a simple joy to be found in solving these visual puzzles. So keep on reading to find our pick of the 10 best hidden object games that you can play today.

    -

    Big Fish Studios is currently the, erm, big fish of the hidden object games market, but the majority of its titles can be discounted as nothing more than hastily thrown together pieces of shovelware. There are, however, a few exceptions to this rule - the Drawn series being one of them. The second game in the trilogy, Dark Flight, is by far the best of the bunch, picking up right where the last title left off in its whimsical fable-like tale.

    -

    To make something genuinely scary out of a hidden object game is quite the accomplishment. It is a genre that tends to be nearly devoid of animation, after all. And yet, somehow, developer Goblinz has managed to craft a deeply unsettling, impressively scream-worthy experience with True Fear: Forsaken Souls - a title that delivers more frights than some of the most expensive horror games out there.

    -

    Players must navigate the labyrinthine complex of an ancient crypt using only their wits and any nearby resources at hand, all while soaking up the devilishly moody atmosphere, laden with mystery and menace. The Room Two has more layers than a traditional hidden object game, throwing complex environmental puzzles and contextual riddles into the mix. In doing so, however, it expands the confines of the genre to devise a brain strain that is entirely its own thing, design conventions be damned.

    -

    The gameplay involves completing hidden object levels to unlock chapters. You complete quests, uncover clues about your friend's disappearance and the shadow cult, and try to explore various locations within the city before time runs out.

    -

    Hidden City is an ad-free, hidden-object, mystery adventure game that'll help you utilize your sleuthing skills. And speaking of sleuthing skills, you can practice them with friends and family through virtual murder mystery game websites.

    -

    Clockwork Tales: Of Glass and Ink is a steampunk hidden adventure game where the cities around the world are crumbling due to strange earthquakes. You play as Evangeline Glass, who must save her friend Dr. Ambrose Ink who has disappeared.

    -

    You'll investigate the town of Gottland, searching locations and interacting with characters. In addition to hidden object puzzles, there are minigames you must play to advance in the story. The game has voice-overs, giving it a realistic feel, with various difficulty levels, stunning backgrounds, and haunting music.

    -

    You can move around in the game by using the onscreen arrow keys. Click on objects to interact with them. Items you pick up go into your inventory, and it's up to you to figure out where to use them. There are no in-game guides, so you must rely on logic. You can also use hints if you're stuck.

    -

    Cradle of Empires is an adventure game of the match-3 puzzle variety. However, it's more than just a puzzle game. Your goal is to improve your empire through the resources you earn while matching objects, which you use to build structures. You also need to uncover your empire's story while protecting it from an evil sorcerer.

    -

    Alternatively, you may need to make some space on your Mac and think that deleting some of these hidden files might be a good way to do so. In that case we have a number of tips in How to free space on a Mac and How to delete Other storage on a Mac: our advice is not to delete these hidden files unless you really know what you are doing!

    -

    In Finder, you can click your hard drive under Locations, then open your Macintosh HD folder. Press Command + Shift + . (period) to make the hidden files appear. You can also do the same from inside the Documents, Applications, and Desktop folders.

    -

    Once both lines of code run, you should see your hidden files in Finder and any temporary files saved on the desktop. When you want to hide these files again, replace the value true with false, which would look like:
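Assuming the standard macOS `defaults` and `killall` commands this kind of tip normally relies on, switching the value from true back to false would look roughly like this:

```bash
# Tell Finder to stop showing hidden files (use "true" instead of "false" to show them)
defaults write com.apple.finder AppleShowAllFiles false

# Relaunch Finder so the changed setting takes effect
killall Finder
```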

    -

    This method might seem less helpful than going through Finder, but Terminal can also help you hide individual files and folders on your computer. This would be most helpful if you have password-protected files or just want to prevent anyone who uses your Mac from messing around with something that's not already hidden. Open Terminal and write the following:
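A command commonly used for this is `chflags`; the path below is only a placeholder, so substitute the file or folder you actually want to hide:

```bash
# Hide a specific file or folder from Finder (placeholder path)
chflags hidden ~/Documents/private-notes.txt

# Make it visible again later with the matching "nohidden" flag
chflags nohidden ~/Documents/private-notes.txt
```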

    aaccfb2cb3
    -
    -
    \ No newline at end of file diff --git a/spaces/bookbot/Grad-TTS-Weildan-Playground/Grad-TTS/train.py b/spaces/bookbot/Grad-TTS-Weildan-Playground/Grad-TTS/train.py deleted file mode 100644 index b2f3c73ed6c344475c65ce7b2e94337bb0f2cb6b..0000000000000000000000000000000000000000 --- a/spaces/bookbot/Grad-TTS-Weildan-Playground/Grad-TTS/train.py +++ /dev/null @@ -1,177 +0,0 @@ -# Copyright (C) 2021. Huawei Technologies Co., Ltd. All rights reserved. -# This program is free software; you can redistribute it and/or modify -# it under the terms of the MIT License. -# This program is distributed in the hope that it will be useful, -# but WITHOUT ANY WARRANTY; without even the implied warranty of -# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the -# MIT License for more details. - -import numpy as np -from tqdm import tqdm - -import torch -from torch.utils.data import DataLoader -from torch.utils.tensorboard import SummaryWriter - -import params -from model import GradTTS -from data import TextMelDataset, TextMelBatchCollate -from utils import plot_tensor, save_plot -from text.symbols import symbols - - -train_filelist_path = params.train_filelist_path -valid_filelist_path = params.valid_filelist_path -cmudict_path = params.cmudict_path -add_blank = params.add_blank - -log_dir = params.log_dir -n_epochs = params.n_epochs -batch_size = params.batch_size -out_size = params.out_size -learning_rate = params.learning_rate -random_seed = params.seed -n_workers = params.n_workers - -nsymbols = len(symbols) + 1 if add_blank else len(symbols) -n_enc_channels = params.n_enc_channels -filter_channels = params.filter_channels -filter_channels_dp = params.filter_channels_dp -n_enc_layers = params.n_enc_layers -enc_kernel = params.enc_kernel -enc_dropout = params.enc_dropout -n_heads = params.n_heads -window_size = params.window_size - -n_feats = params.n_feats -n_fft = params.n_fft -sample_rate = params.sample_rate -hop_length = params.hop_length -win_length = params.win_length -f_min = params.f_min -f_max = params.f_max - -dec_dim = params.dec_dim -beta_min = params.beta_min -beta_max = params.beta_max -pe_scale = params.pe_scale - -num_workers = params.num_workers - -if __name__ == "__main__": - torch.manual_seed(random_seed) - np.random.seed(random_seed) - - print('Initializing logger...') - logger = SummaryWriter(log_dir=log_dir) - - print('Initializing data loaders...') - train_dataset = TextMelDataset(train_filelist_path, cmudict_path, add_blank, - n_fft, n_feats, sample_rate, hop_length, - win_length, f_min, f_max) - batch_collate = TextMelBatchCollate() - loader = DataLoader(dataset=train_dataset, batch_size=batch_size, - collate_fn=batch_collate, drop_last=True, - num_workers=num_workers, shuffle=False) - test_dataset = TextMelDataset(valid_filelist_path, cmudict_path, add_blank, - n_fft, n_feats, sample_rate, hop_length, - win_length, f_min, f_max) - - print('Initializing model...') - model = GradTTS(nsymbols, 1, None, n_enc_channels, filter_channels, filter_channels_dp, - n_heads, n_enc_layers, enc_kernel, enc_dropout, window_size, - n_feats, dec_dim, beta_min, beta_max, pe_scale).cuda() - print('Number of encoder + duration predictor parameters: %.2fm' % (model.encoder.nparams/1e6)) - print('Number of decoder parameters: %.2fm' % (model.decoder.nparams/1e6)) - print('Total parameters: %.2fm' % (model.nparams/1e6)) - - print('Initializing optimizer...') - optimizer = torch.optim.Adam(params=model.parameters(), lr=learning_rate) - - print('Logging test batch...') - test_batch = 
test_dataset.sample_test_batch(size=params.test_size) - for i, item in enumerate(test_batch): - mel = item['y'] - logger.add_image(f'image_{i}/ground_truth', plot_tensor(mel.squeeze()), - global_step=0, dataformats='HWC') - save_plot(mel.squeeze(), f'{log_dir}/original_{i}.png') - - print('Start training...') - iteration = 0 - for epoch in range(1, n_epochs + 1): - model.train() - dur_losses = [] - prior_losses = [] - diff_losses = [] - with tqdm(loader, total=len(train_dataset)//batch_size) as progress_bar: - for batch_idx, batch in enumerate(progress_bar): - model.zero_grad() - x, x_lengths = batch['x'].cuda(), batch['x_lengths'].cuda() - y, y_lengths = batch['y'].cuda(), batch['y_lengths'].cuda() - dur_loss, prior_loss, diff_loss = model.compute_loss(x, x_lengths, - y, y_lengths, - out_size=out_size) - loss = sum([dur_loss, prior_loss, diff_loss]) - loss.backward() - - enc_grad_norm = torch.nn.utils.clip_grad_norm_(model.encoder.parameters(), - max_norm=1) - dec_grad_norm = torch.nn.utils.clip_grad_norm_(model.decoder.parameters(), - max_norm=1) - optimizer.step() - - logger.add_scalar('training/duration_loss', dur_loss.item(), - global_step=iteration) - logger.add_scalar('training/prior_loss', prior_loss.item(), - global_step=iteration) - logger.add_scalar('training/diffusion_loss', diff_loss.item(), - global_step=iteration) - logger.add_scalar('training/encoder_grad_norm', enc_grad_norm, - global_step=iteration) - logger.add_scalar('training/decoder_grad_norm', dec_grad_norm, - global_step=iteration) - - dur_losses.append(dur_loss.item()) - prior_losses.append(prior_loss.item()) - diff_losses.append(diff_loss.item()) - - if batch_idx % 5 == 0: - msg = f'Epoch: {epoch}, iteration: {iteration} | dur_loss: {dur_loss.item()}, prior_loss: {prior_loss.item()}, diff_loss: {diff_loss.item()}' - progress_bar.set_description(msg) - - iteration += 1 - - log_msg = 'Epoch %d: duration loss = %.3f ' % (epoch, np.mean(dur_losses)) - log_msg += '| prior loss = %.3f ' % np.mean(prior_losses) - log_msg += '| diffusion loss = %.3f\n' % np.mean(diff_losses) - with open(f'{log_dir}/train.log', 'a') as f: - f.write(log_msg) - - if epoch % params.save_every > 0: - continue - - model.eval() - print('Synthesis...') - with torch.no_grad(): - for i, item in enumerate(test_batch): - x = item['x'].to(torch.long).unsqueeze(0).cuda() - x_lengths = torch.LongTensor([x.shape[-1]]).cuda() - y_enc, y_dec, attn = model(x, x_lengths, n_timesteps=50) - logger.add_image(f'image_{i}/generated_enc', - plot_tensor(y_enc.squeeze().cpu()), - global_step=iteration, dataformats='HWC') - logger.add_image(f'image_{i}/generated_dec', - plot_tensor(y_dec.squeeze().cpu()), - global_step=iteration, dataformats='HWC') - logger.add_image(f'image_{i}/alignment', - plot_tensor(attn.squeeze().cpu()), - global_step=iteration, dataformats='HWC') - save_plot(y_enc.squeeze().cpu(), - f'{log_dir}/generated_enc_{i}.png') - save_plot(y_dec.squeeze().cpu(), - f'{log_dir}/generated_dec_{i}.png') - save_plot(attn.squeeze().cpu(), - f'{log_dir}/alignment_{i}.png') - - ckpt = model.state_dict() - torch.save(ckpt, f=f"{log_dir}/grad_{epoch}.pt") diff --git a/spaces/brjathu/HMR2.0/vendor/detectron2/detectron2/layers/roi_align_rotated.py b/spaces/brjathu/HMR2.0/vendor/detectron2/detectron2/layers/roi_align_rotated.py deleted file mode 100644 index 2a523992e7c736262ad5a158f209aae7875f6f0b..0000000000000000000000000000000000000000 --- a/spaces/brjathu/HMR2.0/vendor/detectron2/detectron2/layers/roi_align_rotated.py +++ /dev/null @@ -1,100 +0,0 @@ -# 
Copyright (c) Facebook, Inc. and its affiliates. -import torch -from torch import nn -from torch.autograd import Function -from torch.autograd.function import once_differentiable -from torch.nn.modules.utils import _pair - - -class _ROIAlignRotated(Function): - @staticmethod - def forward(ctx, input, roi, output_size, spatial_scale, sampling_ratio): - ctx.save_for_backward(roi) - ctx.output_size = _pair(output_size) - ctx.spatial_scale = spatial_scale - ctx.sampling_ratio = sampling_ratio - ctx.input_shape = input.size() - output = torch.ops.detectron2.roi_align_rotated_forward( - input, roi, spatial_scale, output_size[0], output_size[1], sampling_ratio - ) - return output - - @staticmethod - @once_differentiable - def backward(ctx, grad_output): - (rois,) = ctx.saved_tensors - output_size = ctx.output_size - spatial_scale = ctx.spatial_scale - sampling_ratio = ctx.sampling_ratio - bs, ch, h, w = ctx.input_shape - grad_input = torch.ops.detectron2.roi_align_rotated_backward( - grad_output, - rois, - spatial_scale, - output_size[0], - output_size[1], - bs, - ch, - h, - w, - sampling_ratio, - ) - return grad_input, None, None, None, None, None - - -roi_align_rotated = _ROIAlignRotated.apply - - -class ROIAlignRotated(nn.Module): - def __init__(self, output_size, spatial_scale, sampling_ratio): - """ - Args: - output_size (tuple): h, w - spatial_scale (float): scale the input boxes by this number - sampling_ratio (int): number of inputs samples to take for each output - sample. 0 to take samples densely. - - Note: - ROIAlignRotated supports continuous coordinate by default: - Given a continuous coordinate c, its two neighboring pixel indices (in our - pixel model) are computed by floor(c - 0.5) and ceil(c - 0.5). For example, - c=1.3 has pixel neighbors with discrete indices [0] and [1] (which are sampled - from the underlying signal at continuous coordinates 0.5 and 1.5). - """ - super(ROIAlignRotated, self).__init__() - self.output_size = output_size - self.spatial_scale = spatial_scale - self.sampling_ratio = sampling_ratio - - def forward(self, input, rois): - """ - Args: - input: NCHW images - rois: Bx6 boxes. First column is the index into N. - The other 5 columns are (x_ctr, y_ctr, width, height, angle_degrees). - """ - assert rois.dim() == 2 and rois.size(1) == 6 - orig_dtype = input.dtype - if orig_dtype == torch.float16: - input = input.float() - rois = rois.float() - output_size = _pair(self.output_size) - - # Scripting for Autograd is currently unsupported. 
- # This is a quick fix without having to rewrite code on the C++ side - if torch.jit.is_scripting() or torch.jit.is_tracing(): - return torch.ops.detectron2.roi_align_rotated_forward( - input, rois, self.spatial_scale, output_size[0], output_size[1], self.sampling_ratio - ).to(dtype=orig_dtype) - - return roi_align_rotated( - input, rois, self.output_size, self.spatial_scale, self.sampling_ratio - ).to(dtype=orig_dtype) - - def __repr__(self): - tmpstr = self.__class__.__name__ + "(" - tmpstr += "output_size=" + str(self.output_size) - tmpstr += ", spatial_scale=" + str(self.spatial_scale) - tmpstr += ", sampling_ratio=" + str(self.sampling_ratio) - tmpstr += ")" - return tmpstr diff --git a/spaces/brjathu/HMR2.0/vendor/detectron2/detectron2/modeling/poolers.py b/spaces/brjathu/HMR2.0/vendor/detectron2/detectron2/modeling/poolers.py deleted file mode 100644 index 3393794507c6504bf6ac1bfae7a1c80a0d81725e..0000000000000000000000000000000000000000 --- a/spaces/brjathu/HMR2.0/vendor/detectron2/detectron2/modeling/poolers.py +++ /dev/null @@ -1,263 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -import math -from typing import List, Optional -import torch -from torch import nn -from torchvision.ops import RoIPool - -from detectron2.layers import ROIAlign, ROIAlignRotated, cat, nonzero_tuple, shapes_to_tensor -from detectron2.structures import Boxes -from detectron2.utils.tracing import assert_fx_safe, is_fx_tracing - -""" -To export ROIPooler to torchscript, in this file, variables that should be annotated with -`Union[List[Boxes], List[RotatedBoxes]]` are only annotated with `List[Boxes]`. - -TODO: Correct these annotations when torchscript support `Union`. -https://github.com/pytorch/pytorch/issues/41412 -""" - -__all__ = ["ROIPooler"] - - -def assign_boxes_to_levels( - box_lists: List[Boxes], - min_level: int, - max_level: int, - canonical_box_size: int, - canonical_level: int, -): - """ - Map each box in `box_lists` to a feature map level index and return the assignment - vector. - - Args: - box_lists (list[Boxes] | list[RotatedBoxes]): A list of N Boxes or N RotatedBoxes, - where N is the number of images in the batch. - min_level (int): Smallest feature map level index. The input is considered index 0, - the output of stage 1 is index 1, and so. - max_level (int): Largest feature map level index. - canonical_box_size (int): A canonical box size in pixels (sqrt(box area)). - canonical_level (int): The feature map level index on which a canonically-sized box - should be placed. - - Returns: - A tensor of length M, where M is the total number of boxes aggregated over all - N batch images. The memory layout corresponds to the concatenation of boxes - from all images. Each element is the feature map index, as an offset from - `self.min_level`, for the corresponding box (so value i means the box is at - `self.min_level + i`). 
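# A minimal numeric sketch of the level-assignment rule described above (Eqn. (1) of the
# FPN paper), assuming detectron2's defaults canonical_box_size=224 and canonical_level=4;
# the box sizes and the min/max levels below are illustrative assumptions, not values
# taken from this file.
import math

def example_level_for_box(box_area, canonical_box_size=224, canonical_level=4,
                          min_level=2, max_level=5):
    # sqrt(area) == canonical_box_size maps to canonical_level; every doubling of
    # sqrt(area) moves the box one level up the pyramid, clamped to the available levels.
    level = math.floor(canonical_level
                       + math.log2(math.sqrt(box_area) / canonical_box_size + 1e-8))
    return min(max(level, min_level), max_level)

print(example_level_for_box(112 * 112))  # -> 3
print(example_level_for_box(224 * 224))  # -> 4
print(example_level_for_box(448 * 448))  # -> 5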
- """ - box_sizes = torch.sqrt(cat([boxes.area() for boxes in box_lists])) - # Eqn.(1) in FPN paper - level_assignments = torch.floor( - canonical_level + torch.log2(box_sizes / canonical_box_size + 1e-8) - ) - # clamp level to (min, max), in case the box size is too large or too small - # for the available feature maps - level_assignments = torch.clamp(level_assignments, min=min_level, max=max_level) - return level_assignments.to(torch.int64) - min_level - - -# script the module to avoid hardcoded device type -@torch.jit.script_if_tracing -def _convert_boxes_to_pooler_format(boxes: torch.Tensor, sizes: torch.Tensor) -> torch.Tensor: - sizes = sizes.to(device=boxes.device) - indices = torch.repeat_interleave( - torch.arange(len(sizes), dtype=boxes.dtype, device=boxes.device), sizes - ) - return cat([indices[:, None], boxes], dim=1) - - -def convert_boxes_to_pooler_format(box_lists: List[Boxes]): - """ - Convert all boxes in `box_lists` to the low-level format used by ROI pooling ops - (see description under Returns). - - Args: - box_lists (list[Boxes] | list[RotatedBoxes]): - A list of N Boxes or N RotatedBoxes, where N is the number of images in the batch. - - Returns: - When input is list[Boxes]: - A tensor of shape (M, 5), where M is the total number of boxes aggregated over all - N batch images. - The 5 columns are (batch index, x0, y0, x1, y1), where batch index - is the index in [0, N) identifying which batch image the box with corners at - (x0, y0, x1, y1) comes from. - When input is list[RotatedBoxes]: - A tensor of shape (M, 6), where M is the total number of boxes aggregated over all - N batch images. - The 6 columns are (batch index, x_ctr, y_ctr, width, height, angle_degrees), - where batch index is the index in [0, N) identifying which batch image the - rotated box (x_ctr, y_ctr, width, height, angle_degrees) comes from. - """ - boxes = torch.cat([x.tensor for x in box_lists], dim=0) - # __len__ returns Tensor in tracing. - sizes = shapes_to_tensor([x.__len__() for x in box_lists]) - return _convert_boxes_to_pooler_format(boxes, sizes) - - -@torch.jit.script_if_tracing -def _create_zeros( - batch_target: Optional[torch.Tensor], - channels: int, - height: int, - width: int, - like_tensor: torch.Tensor, -) -> torch.Tensor: - batches = batch_target.shape[0] if batch_target is not None else 0 - sizes = (batches, channels, height, width) - return torch.zeros(sizes, dtype=like_tensor.dtype, device=like_tensor.device) - - -class ROIPooler(nn.Module): - """ - Region of interest feature map pooler that supports pooling from one or more - feature maps. - """ - - def __init__( - self, - output_size, - scales, - sampling_ratio, - pooler_type, - canonical_box_size=224, - canonical_level=4, - ): - """ - Args: - output_size (int, tuple[int] or list[int]): output size of the pooled region, - e.g., 14 x 14. If tuple or list is given, the length must be 2. - scales (list[float]): The scale for each low-level pooling op relative to - the input image. For a feature map with stride s relative to the input - image, scale is defined as 1/s. The stride must be power of 2. - When there are multiple scales, they must form a pyramid, i.e. they must be - a monotically decreasing geometric sequence with a factor of 1/2. - sampling_ratio (int): The `sampling_ratio` parameter for the ROIAlign op. - pooler_type (string): Name of the type of pooling operation that should be applied. - For instance, "ROIPool" or "ROIAlignV2". - canonical_box_size (int): A canonical box size in pixels (sqrt(box area)). 
The default - is heuristically defined as 224 pixels in the FPN paper (based on ImageNet - pre-training). - canonical_level (int): The feature map level index from which a canonically-sized box - should be placed. The default is defined as level 4 (stride=16) in the FPN paper, - i.e., a box of size 224x224 will be placed on the feature with stride=16. - The box placement for all boxes will be determined from their sizes w.r.t - canonical_box_size. For example, a box whose area is 4x that of a canonical box - should be used to pool features from feature level ``canonical_level+1``. - - Note that the actual input feature maps given to this module may not have - sufficiently many levels for the input boxes. If the boxes are too large or too - small for the input feature maps, the closest level will be used. - """ - super().__init__() - - if isinstance(output_size, int): - output_size = (output_size, output_size) - assert len(output_size) == 2 - assert isinstance(output_size[0], int) and isinstance(output_size[1], int) - self.output_size = output_size - - if pooler_type == "ROIAlign": - self.level_poolers = nn.ModuleList( - ROIAlign( - output_size, spatial_scale=scale, sampling_ratio=sampling_ratio, aligned=False - ) - for scale in scales - ) - elif pooler_type == "ROIAlignV2": - self.level_poolers = nn.ModuleList( - ROIAlign( - output_size, spatial_scale=scale, sampling_ratio=sampling_ratio, aligned=True - ) - for scale in scales - ) - elif pooler_type == "ROIPool": - self.level_poolers = nn.ModuleList( - RoIPool(output_size, spatial_scale=scale) for scale in scales - ) - elif pooler_type == "ROIAlignRotated": - self.level_poolers = nn.ModuleList( - ROIAlignRotated(output_size, spatial_scale=scale, sampling_ratio=sampling_ratio) - for scale in scales - ) - else: - raise ValueError("Unknown pooler type: {}".format(pooler_type)) - - # Map scale (defined as 1 / stride) to its feature map level under the - # assumption that stride is a power of 2. - min_level = -(math.log2(scales[0])) - max_level = -(math.log2(scales[-1])) - assert math.isclose(min_level, int(min_level)) and math.isclose( - max_level, int(max_level) - ), "Featuremap stride is not power of 2!" - self.min_level = int(min_level) - self.max_level = int(max_level) - assert ( - len(scales) == self.max_level - self.min_level + 1 - ), "[ROIPooler] Sizes of input featuremaps do not form a pyramid!" - assert 0 <= self.min_level and self.min_level <= self.max_level - self.canonical_level = canonical_level - assert canonical_box_size > 0 - self.canonical_box_size = canonical_box_size - - def forward(self, x: List[torch.Tensor], box_lists: List[Boxes]): - """ - Args: - x (list[Tensor]): A list of feature maps of NCHW shape, with scales matching those - used to construct this module. - box_lists (list[Boxes] | list[RotatedBoxes]): - A list of N Boxes or N RotatedBoxes, where N is the number of images in the batch. - The box coordinates are defined on the original image and - will be scaled by the `scales` argument of :class:`ROIPooler`. - - Returns: - Tensor: - A tensor of shape (M, C, output_size, output_size) where M is the total number of - boxes aggregated over all N batch images and C is the number of channels in `x`. 
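# A hedged usage sketch of the ROIPooler defined in this file; the feature shapes,
# scales and box coordinates are made-up illustration values, not taken from the source.
import torch

from detectron2.modeling.poolers import ROIPooler
from detectron2.structures import Boxes

# Two pyramid levels with strides 4 and 8 (scales 1/4 and 1/8) for one 64x64 image.
features = [torch.randn(1, 256, 16, 16), torch.randn(1, 256, 8, 8)]
pooler = ROIPooler(output_size=7, scales=(1.0 / 4, 1.0 / 8),
                   sampling_ratio=0, pooler_type="ROIAlignV2")
# One list entry per image in the batch; box coordinates are given in image space.
boxes = [Boxes(torch.tensor([[4.0, 4.0, 36.0, 36.0]]))]
pooled = pooler(features, boxes)  # tensor of shape (1, 256, 7, 7)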
- """ - num_level_assignments = len(self.level_poolers) - - if not is_fx_tracing(): - torch._assert( - isinstance(x, list) and isinstance(box_lists, list), - "Arguments to pooler must be lists", - ) - assert_fx_safe( - len(x) == num_level_assignments, - "unequal value, num_level_assignments={}, but x is list of {} Tensors".format( - num_level_assignments, len(x) - ), - ) - assert_fx_safe( - len(box_lists) == x[0].size(0), - "unequal value, x[0] batch dim 0 is {}, but box_list has length {}".format( - x[0].size(0), len(box_lists) - ), - ) - if len(box_lists) == 0: - return _create_zeros(None, x[0].shape[1], *self.output_size, x[0]) - - pooler_fmt_boxes = convert_boxes_to_pooler_format(box_lists) - - if num_level_assignments == 1: - return self.level_poolers[0](x[0], pooler_fmt_boxes) - - level_assignments = assign_boxes_to_levels( - box_lists, self.min_level, self.max_level, self.canonical_box_size, self.canonical_level - ) - - num_channels = x[0].shape[1] - output_size = self.output_size[0] - - output = _create_zeros(pooler_fmt_boxes, num_channels, output_size, output_size, x[0]) - - for level, pooler in enumerate(self.level_poolers): - inds = nonzero_tuple(level_assignments == level)[0] - pooler_fmt_boxes_level = pooler_fmt_boxes[inds] - # Use index_put_ instead of advance indexing, to avoid pytorch/issues/49852 - output.index_put_((inds,), pooler(x[level], pooler_fmt_boxes_level)) - - return output diff --git a/spaces/camilosegura/traductor-multilenguaje/Lib/site-packages/PIL/PdfParser.py b/spaces/camilosegura/traductor-multilenguaje/Lib/site-packages/PIL/PdfParser.py deleted file mode 100644 index dc1012f54d3d0d683e96fed41ee7ace492904e71..0000000000000000000000000000000000000000 --- a/spaces/camilosegura/traductor-multilenguaje/Lib/site-packages/PIL/PdfParser.py +++ /dev/null @@ -1,996 +0,0 @@ -import calendar -import codecs -import collections -import mmap -import os -import re -import time -import zlib - - -# see 7.9.2.2 Text String Type on page 86 and D.3 PDFDocEncoding Character Set -# on page 656 -def encode_text(s): - return codecs.BOM_UTF16_BE + s.encode("utf_16_be") - - -PDFDocEncoding = { - 0x16: "\u0017", - 0x18: "\u02D8", - 0x19: "\u02C7", - 0x1A: "\u02C6", - 0x1B: "\u02D9", - 0x1C: "\u02DD", - 0x1D: "\u02DB", - 0x1E: "\u02DA", - 0x1F: "\u02DC", - 0x80: "\u2022", - 0x81: "\u2020", - 0x82: "\u2021", - 0x83: "\u2026", - 0x84: "\u2014", - 0x85: "\u2013", - 0x86: "\u0192", - 0x87: "\u2044", - 0x88: "\u2039", - 0x89: "\u203A", - 0x8A: "\u2212", - 0x8B: "\u2030", - 0x8C: "\u201E", - 0x8D: "\u201C", - 0x8E: "\u201D", - 0x8F: "\u2018", - 0x90: "\u2019", - 0x91: "\u201A", - 0x92: "\u2122", - 0x93: "\uFB01", - 0x94: "\uFB02", - 0x95: "\u0141", - 0x96: "\u0152", - 0x97: "\u0160", - 0x98: "\u0178", - 0x99: "\u017D", - 0x9A: "\u0131", - 0x9B: "\u0142", - 0x9C: "\u0153", - 0x9D: "\u0161", - 0x9E: "\u017E", - 0xA0: "\u20AC", -} - - -def decode_text(b): - if b[: len(codecs.BOM_UTF16_BE)] == codecs.BOM_UTF16_BE: - return b[len(codecs.BOM_UTF16_BE) :].decode("utf_16_be") - else: - return "".join(PDFDocEncoding.get(byte, chr(byte)) for byte in b) - - -class PdfFormatError(RuntimeError): - """An error that probably indicates a syntactic or semantic error in the - PDF file structure""" - - pass - - -def check_format_condition(condition, error_message): - if not condition: - raise PdfFormatError(error_message) - - -class IndirectReference( - collections.namedtuple("IndirectReferenceTuple", ["object_id", "generation"]) -): - def __str__(self): - return "%s %s R" % self - - def __bytes__(self): - 
return self.__str__().encode("us-ascii") - - def __eq__(self, other): - return ( - other.__class__ is self.__class__ - and other.object_id == self.object_id - and other.generation == self.generation - ) - - def __ne__(self, other): - return not (self == other) - - def __hash__(self): - return hash((self.object_id, self.generation)) - - -class IndirectObjectDef(IndirectReference): - def __str__(self): - return "%s %s obj" % self - - -class XrefTable: - def __init__(self): - self.existing_entries = {} # object ID => (offset, generation) - self.new_entries = {} # object ID => (offset, generation) - self.deleted_entries = {0: 65536} # object ID => generation - self.reading_finished = False - - def __setitem__(self, key, value): - if self.reading_finished: - self.new_entries[key] = value - else: - self.existing_entries[key] = value - if key in self.deleted_entries: - del self.deleted_entries[key] - - def __getitem__(self, key): - try: - return self.new_entries[key] - except KeyError: - return self.existing_entries[key] - - def __delitem__(self, key): - if key in self.new_entries: - generation = self.new_entries[key][1] + 1 - del self.new_entries[key] - self.deleted_entries[key] = generation - elif key in self.existing_entries: - generation = self.existing_entries[key][1] + 1 - self.deleted_entries[key] = generation - elif key in self.deleted_entries: - generation = self.deleted_entries[key] - else: - msg = ( - "object ID " + str(key) + " cannot be deleted because it doesn't exist" - ) - raise IndexError(msg) - - def __contains__(self, key): - return key in self.existing_entries or key in self.new_entries - - def __len__(self): - return len( - set(self.existing_entries.keys()) - | set(self.new_entries.keys()) - | set(self.deleted_entries.keys()) - ) - - def keys(self): - return ( - set(self.existing_entries.keys()) - set(self.deleted_entries.keys()) - ) | set(self.new_entries.keys()) - - def write(self, f): - keys = sorted(set(self.new_entries.keys()) | set(self.deleted_entries.keys())) - deleted_keys = sorted(set(self.deleted_entries.keys())) - startxref = f.tell() - f.write(b"xref\n") - while keys: - # find a contiguous sequence of object IDs - prev = None - for index, key in enumerate(keys): - if prev is None or prev + 1 == key: - prev = key - else: - contiguous_keys = keys[:index] - keys = keys[index:] - break - else: - contiguous_keys = keys - keys = None - f.write(b"%d %d\n" % (contiguous_keys[0], len(contiguous_keys))) - for object_id in contiguous_keys: - if object_id in self.new_entries: - f.write(b"%010d %05d n \n" % self.new_entries[object_id]) - else: - this_deleted_object_id = deleted_keys.pop(0) - check_format_condition( - object_id == this_deleted_object_id, - f"expected the next deleted object ID to be {object_id}, " - f"instead found {this_deleted_object_id}", - ) - try: - next_in_linked_list = deleted_keys[0] - except IndexError: - next_in_linked_list = 0 - f.write( - b"%010d %05d f \n" - % (next_in_linked_list, self.deleted_entries[object_id]) - ) - return startxref - - -class PdfName: - def __init__(self, name): - if isinstance(name, PdfName): - self.name = name.name - elif isinstance(name, bytes): - self.name = name - else: - self.name = name.encode("us-ascii") - - def name_as_str(self): - return self.name.decode("us-ascii") - - def __eq__(self, other): - return ( - isinstance(other, PdfName) and other.name == self.name - ) or other == self.name - - def __hash__(self): - return hash(self.name) - - def __repr__(self): - return f"PdfName({repr(self.name)})" - - @classmethod - 
def from_pdf_stream(cls, data): - return cls(PdfParser.interpret_name(data)) - - allowed_chars = set(range(33, 127)) - {ord(c) for c in "#%/()<>[]{}"} - - def __bytes__(self): - result = bytearray(b"/") - for b in self.name: - if b in self.allowed_chars: - result.append(b) - else: - result.extend(b"#%02X" % b) - return bytes(result) - - -class PdfArray(list): - def __bytes__(self): - return b"[ " + b" ".join(pdf_repr(x) for x in self) + b" ]" - - -class PdfDict(collections.UserDict): - def __setattr__(self, key, value): - if key == "data": - collections.UserDict.__setattr__(self, key, value) - else: - self[key.encode("us-ascii")] = value - - def __getattr__(self, key): - try: - value = self[key.encode("us-ascii")] - except KeyError as e: - raise AttributeError(key) from e - if isinstance(value, bytes): - value = decode_text(value) - if key.endswith("Date"): - if value.startswith("D:"): - value = value[2:] - - relationship = "Z" - if len(value) > 17: - relationship = value[14] - offset = int(value[15:17]) * 60 - if len(value) > 20: - offset += int(value[18:20]) - - format = "%Y%m%d%H%M%S"[: len(value) - 2] - value = time.strptime(value[: len(format) + 2], format) - if relationship in ["+", "-"]: - offset *= 60 - if relationship == "+": - offset *= -1 - value = time.gmtime(calendar.timegm(value) + offset) - return value - - def __bytes__(self): - out = bytearray(b"<<") - for key, value in self.items(): - if value is None: - continue - value = pdf_repr(value) - out.extend(b"\n") - out.extend(bytes(PdfName(key))) - out.extend(b" ") - out.extend(value) - out.extend(b"\n>>") - return bytes(out) - - -class PdfBinary: - def __init__(self, data): - self.data = data - - def __bytes__(self): - return b"<%s>" % b"".join(b"%02X" % b for b in self.data) - - -class PdfStream: - def __init__(self, dictionary, buf): - self.dictionary = dictionary - self.buf = buf - - def decode(self): - try: - filter = self.dictionary.Filter - except AttributeError: - return self.buf - if filter == b"FlateDecode": - try: - expected_length = self.dictionary.DL - except AttributeError: - expected_length = self.dictionary.Length - return zlib.decompress(self.buf, bufsize=int(expected_length)) - else: - msg = f"stream filter {repr(self.dictionary.Filter)} unknown/unsupported" - raise NotImplementedError(msg) - - -def pdf_repr(x): - if x is True: - return b"true" - elif x is False: - return b"false" - elif x is None: - return b"null" - elif isinstance(x, (PdfName, PdfDict, PdfArray, PdfBinary)): - return bytes(x) - elif isinstance(x, (int, float)): - return str(x).encode("us-ascii") - elif isinstance(x, time.struct_time): - return b"(D:" + time.strftime("%Y%m%d%H%M%SZ", x).encode("us-ascii") + b")" - elif isinstance(x, dict): - return bytes(PdfDict(x)) - elif isinstance(x, list): - return bytes(PdfArray(x)) - elif isinstance(x, str): - return pdf_repr(encode_text(x)) - elif isinstance(x, bytes): - # XXX escape more chars? 
handle binary garbage - x = x.replace(b"\\", b"\\\\") - x = x.replace(b"(", b"\\(") - x = x.replace(b")", b"\\)") - return b"(" + x + b")" - else: - return bytes(x) - - -class PdfParser: - """Based on - https://www.adobe.com/content/dam/acom/en/devnet/acrobat/pdfs/PDF32000_2008.pdf - Supports PDF up to 1.4 - """ - - def __init__(self, filename=None, f=None, buf=None, start_offset=0, mode="rb"): - if buf and f: - msg = "specify buf or f or filename, but not both buf and f" - raise RuntimeError(msg) - self.filename = filename - self.buf = buf - self.f = f - self.start_offset = start_offset - self.should_close_buf = False - self.should_close_file = False - if filename is not None and f is None: - self.f = f = open(filename, mode) - self.should_close_file = True - if f is not None: - self.buf = buf = self.get_buf_from_file(f) - self.should_close_buf = True - if not filename and hasattr(f, "name"): - self.filename = f.name - self.cached_objects = {} - if buf: - self.read_pdf_info() - else: - self.file_size_total = self.file_size_this = 0 - self.root = PdfDict() - self.root_ref = None - self.info = PdfDict() - self.info_ref = None - self.page_tree_root = {} - self.pages = [] - self.orig_pages = [] - self.pages_ref = None - self.last_xref_section_offset = None - self.trailer_dict = {} - self.xref_table = XrefTable() - self.xref_table.reading_finished = True - if f: - self.seek_end() - - def __enter__(self): - return self - - def __exit__(self, exc_type, exc_value, traceback): - self.close() - return False # do not suppress exceptions - - def start_writing(self): - self.close_buf() - self.seek_end() - - def close_buf(self): - try: - self.buf.close() - except AttributeError: - pass - self.buf = None - - def close(self): - if self.should_close_buf: - self.close_buf() - if self.f is not None and self.should_close_file: - self.f.close() - self.f = None - - def seek_end(self): - self.f.seek(0, os.SEEK_END) - - def write_header(self): - self.f.write(b"%PDF-1.4\n") - - def write_comment(self, s): - self.f.write(f"% {s}\n".encode()) - - def write_catalog(self): - self.del_root() - self.root_ref = self.next_object_id(self.f.tell()) - self.pages_ref = self.next_object_id(0) - self.rewrite_pages() - self.write_obj(self.root_ref, Type=PdfName(b"Catalog"), Pages=self.pages_ref) - self.write_obj( - self.pages_ref, - Type=PdfName(b"Pages"), - Count=len(self.pages), - Kids=self.pages, - ) - return self.root_ref - - def rewrite_pages(self): - pages_tree_nodes_to_delete = [] - for i, page_ref in enumerate(self.orig_pages): - page_info = self.cached_objects[page_ref] - del self.xref_table[page_ref.object_id] - pages_tree_nodes_to_delete.append(page_info[PdfName(b"Parent")]) - if page_ref not in self.pages: - # the page has been deleted - continue - # make dict keys into strings for passing to write_page - stringified_page_info = {} - for key, value in page_info.items(): - # key should be a PdfName - stringified_page_info[key.name_as_str()] = value - stringified_page_info["Parent"] = self.pages_ref - new_page_ref = self.write_page(None, **stringified_page_info) - for j, cur_page_ref in enumerate(self.pages): - if cur_page_ref == page_ref: - # replace the page reference with the new one - self.pages[j] = new_page_ref - # delete redundant Pages tree nodes from xref table - for pages_tree_node_ref in pages_tree_nodes_to_delete: - while pages_tree_node_ref: - pages_tree_node = self.cached_objects[pages_tree_node_ref] - if pages_tree_node_ref.object_id in self.xref_table: - del 
self.xref_table[pages_tree_node_ref.object_id] - pages_tree_node_ref = pages_tree_node.get(b"Parent", None) - self.orig_pages = [] - - def write_xref_and_trailer(self, new_root_ref=None): - if new_root_ref: - self.del_root() - self.root_ref = new_root_ref - if self.info: - self.info_ref = self.write_obj(None, self.info) - start_xref = self.xref_table.write(self.f) - num_entries = len(self.xref_table) - trailer_dict = {b"Root": self.root_ref, b"Size": num_entries} - if self.last_xref_section_offset is not None: - trailer_dict[b"Prev"] = self.last_xref_section_offset - if self.info: - trailer_dict[b"Info"] = self.info_ref - self.last_xref_section_offset = start_xref - self.f.write( - b"trailer\n" - + bytes(PdfDict(trailer_dict)) - + b"\nstartxref\n%d\n%%%%EOF" % start_xref - ) - - def write_page(self, ref, *objs, **dict_obj): - if isinstance(ref, int): - ref = self.pages[ref] - if "Type" not in dict_obj: - dict_obj["Type"] = PdfName(b"Page") - if "Parent" not in dict_obj: - dict_obj["Parent"] = self.pages_ref - return self.write_obj(ref, *objs, **dict_obj) - - def write_obj(self, ref, *objs, **dict_obj): - f = self.f - if ref is None: - ref = self.next_object_id(f.tell()) - else: - self.xref_table[ref.object_id] = (f.tell(), ref.generation) - f.write(bytes(IndirectObjectDef(*ref))) - stream = dict_obj.pop("stream", None) - if stream is not None: - dict_obj["Length"] = len(stream) - if dict_obj: - f.write(pdf_repr(dict_obj)) - for obj in objs: - f.write(pdf_repr(obj)) - if stream is not None: - f.write(b"stream\n") - f.write(stream) - f.write(b"\nendstream\n") - f.write(b"endobj\n") - return ref - - def del_root(self): - if self.root_ref is None: - return - del self.xref_table[self.root_ref.object_id] - del self.xref_table[self.root[b"Pages"].object_id] - - @staticmethod - def get_buf_from_file(f): - if hasattr(f, "getbuffer"): - return f.getbuffer() - elif hasattr(f, "getvalue"): - return f.getvalue() - else: - try: - return mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) - except ValueError: # cannot mmap an empty file - return b"" - - def read_pdf_info(self): - self.file_size_total = len(self.buf) - self.file_size_this = self.file_size_total - self.start_offset - self.read_trailer() - self.root_ref = self.trailer_dict[b"Root"] - self.info_ref = self.trailer_dict.get(b"Info", None) - self.root = PdfDict(self.read_indirect(self.root_ref)) - if self.info_ref is None: - self.info = PdfDict() - else: - self.info = PdfDict(self.read_indirect(self.info_ref)) - check_format_condition(b"Type" in self.root, "/Type missing in Root") - check_format_condition( - self.root[b"Type"] == b"Catalog", "/Type in Root is not /Catalog" - ) - check_format_condition(b"Pages" in self.root, "/Pages missing in Root") - check_format_condition( - isinstance(self.root[b"Pages"], IndirectReference), - "/Pages in Root is not an indirect reference", - ) - self.pages_ref = self.root[b"Pages"] - self.page_tree_root = self.read_indirect(self.pages_ref) - self.pages = self.linearize_page_tree(self.page_tree_root) - # save the original list of page references - # in case the user modifies, adds or deletes some pages - # and we need to rewrite the pages and their list - self.orig_pages = self.pages[:] - - def next_object_id(self, offset=None): - try: - # TODO: support reuse of deleted objects - reference = IndirectReference(max(self.xref_table.keys()) + 1, 0) - except ValueError: - reference = IndirectReference(1, 0) - if offset is not None: - self.xref_table[reference.object_id] = (offset, 0) - return reference - - delimiter 
= rb"[][()<>{}/%]" - delimiter_or_ws = rb"[][()<>{}/%\000\011\012\014\015\040]" - whitespace = rb"[\000\011\012\014\015\040]" - whitespace_or_hex = rb"[\000\011\012\014\015\0400-9a-fA-F]" - whitespace_optional = whitespace + b"*" - whitespace_mandatory = whitespace + b"+" - # No "\012" aka "\n" or "\015" aka "\r": - whitespace_optional_no_nl = rb"[\000\011\014\040]*" - newline_only = rb"[\r\n]+" - newline = whitespace_optional_no_nl + newline_only + whitespace_optional_no_nl - re_trailer_end = re.compile( - whitespace_mandatory - + rb"trailer" - + whitespace_optional - + rb"<<(.*>>)" - + newline - + rb"startxref" - + newline - + rb"([0-9]+)" - + newline - + rb"%%EOF" - + whitespace_optional - + rb"$", - re.DOTALL, - ) - re_trailer_prev = re.compile( - whitespace_optional - + rb"trailer" - + whitespace_optional - + rb"<<(.*?>>)" - + newline - + rb"startxref" - + newline - + rb"([0-9]+)" - + newline - + rb"%%EOF" - + whitespace_optional, - re.DOTALL, - ) - - def read_trailer(self): - search_start_offset = len(self.buf) - 16384 - if search_start_offset < self.start_offset: - search_start_offset = self.start_offset - m = self.re_trailer_end.search(self.buf, search_start_offset) - check_format_condition(m, "trailer end not found") - # make sure we found the LAST trailer - last_match = m - while m: - last_match = m - m = self.re_trailer_end.search(self.buf, m.start() + 16) - if not m: - m = last_match - trailer_data = m.group(1) - self.last_xref_section_offset = int(m.group(2)) - self.trailer_dict = self.interpret_trailer(trailer_data) - self.xref_table = XrefTable() - self.read_xref_table(xref_section_offset=self.last_xref_section_offset) - if b"Prev" in self.trailer_dict: - self.read_prev_trailer(self.trailer_dict[b"Prev"]) - - def read_prev_trailer(self, xref_section_offset): - trailer_offset = self.read_xref_table(xref_section_offset=xref_section_offset) - m = self.re_trailer_prev.search( - self.buf[trailer_offset : trailer_offset + 16384] - ) - check_format_condition(m, "previous trailer not found") - trailer_data = m.group(1) - check_format_condition( - int(m.group(2)) == xref_section_offset, - "xref section offset in previous trailer doesn't match what was expected", - ) - trailer_dict = self.interpret_trailer(trailer_data) - if b"Prev" in trailer_dict: - self.read_prev_trailer(trailer_dict[b"Prev"]) - - re_whitespace_optional = re.compile(whitespace_optional) - re_name = re.compile( - whitespace_optional - + rb"/([!-$&'*-.0-;=?-Z\\^-z|~]+)(?=" - + delimiter_or_ws - + rb")" - ) - re_dict_start = re.compile(whitespace_optional + rb"<<") - re_dict_end = re.compile(whitespace_optional + rb">>" + whitespace_optional) - - @classmethod - def interpret_trailer(cls, trailer_data): - trailer = {} - offset = 0 - while True: - m = cls.re_name.match(trailer_data, offset) - if not m: - m = cls.re_dict_end.match(trailer_data, offset) - check_format_condition( - m and m.end() == len(trailer_data), - "name not found in trailer, remaining data: " - + repr(trailer_data[offset:]), - ) - break - key = cls.interpret_name(m.group(1)) - value, offset = cls.get_value(trailer_data, m.end()) - trailer[key] = value - check_format_condition( - b"Size" in trailer and isinstance(trailer[b"Size"], int), - "/Size not in trailer or not an integer", - ) - check_format_condition( - b"Root" in trailer and isinstance(trailer[b"Root"], IndirectReference), - "/Root not in trailer or not an indirect reference", - ) - return trailer - - re_hashes_in_name = re.compile(rb"([^#]*)(#([0-9a-fA-F]{2}))?") - - @classmethod - def 
interpret_name(cls, raw, as_text=False): - name = b"" - for m in cls.re_hashes_in_name.finditer(raw): - if m.group(3): - name += m.group(1) + bytearray.fromhex(m.group(3).decode("us-ascii")) - else: - name += m.group(1) - if as_text: - return name.decode("utf-8") - else: - return bytes(name) - - re_null = re.compile(whitespace_optional + rb"null(?=" + delimiter_or_ws + rb")") - re_true = re.compile(whitespace_optional + rb"true(?=" + delimiter_or_ws + rb")") - re_false = re.compile(whitespace_optional + rb"false(?=" + delimiter_or_ws + rb")") - re_int = re.compile( - whitespace_optional + rb"([-+]?[0-9]+)(?=" + delimiter_or_ws + rb")" - ) - re_real = re.compile( - whitespace_optional - + rb"([-+]?([0-9]+\.[0-9]*|[0-9]*\.[0-9]+))(?=" - + delimiter_or_ws - + rb")" - ) - re_array_start = re.compile(whitespace_optional + rb"\[") - re_array_end = re.compile(whitespace_optional + rb"]") - re_string_hex = re.compile( - whitespace_optional + rb"<(" + whitespace_or_hex + rb"*)>" - ) - re_string_lit = re.compile(whitespace_optional + rb"\(") - re_indirect_reference = re.compile( - whitespace_optional - + rb"([-+]?[0-9]+)" - + whitespace_mandatory - + rb"([-+]?[0-9]+)" - + whitespace_mandatory - + rb"R(?=" - + delimiter_or_ws - + rb")" - ) - re_indirect_def_start = re.compile( - whitespace_optional - + rb"([-+]?[0-9]+)" - + whitespace_mandatory - + rb"([-+]?[0-9]+)" - + whitespace_mandatory - + rb"obj(?=" - + delimiter_or_ws - + rb")" - ) - re_indirect_def_end = re.compile( - whitespace_optional + rb"endobj(?=" + delimiter_or_ws + rb")" - ) - re_comment = re.compile( - rb"(" + whitespace_optional + rb"%[^\r\n]*" + newline + rb")*" - ) - re_stream_start = re.compile(whitespace_optional + rb"stream\r?\n") - re_stream_end = re.compile( - whitespace_optional + rb"endstream(?=" + delimiter_or_ws + rb")" - ) - - @classmethod - def get_value(cls, data, offset, expect_indirect=None, max_nesting=-1): - if max_nesting == 0: - return None, None - m = cls.re_comment.match(data, offset) - if m: - offset = m.end() - m = cls.re_indirect_def_start.match(data, offset) - if m: - check_format_condition( - int(m.group(1)) > 0, - "indirect object definition: object ID must be greater than 0", - ) - check_format_condition( - int(m.group(2)) >= 0, - "indirect object definition: generation must be non-negative", - ) - check_format_condition( - expect_indirect is None - or expect_indirect - == IndirectReference(int(m.group(1)), int(m.group(2))), - "indirect object definition different than expected", - ) - object, offset = cls.get_value(data, m.end(), max_nesting=max_nesting - 1) - if offset is None: - return object, None - m = cls.re_indirect_def_end.match(data, offset) - check_format_condition(m, "indirect object definition end not found") - return object, m.end() - check_format_condition( - not expect_indirect, "indirect object definition not found" - ) - m = cls.re_indirect_reference.match(data, offset) - if m: - check_format_condition( - int(m.group(1)) > 0, - "indirect object reference: object ID must be greater than 0", - ) - check_format_condition( - int(m.group(2)) >= 0, - "indirect object reference: generation must be non-negative", - ) - return IndirectReference(int(m.group(1)), int(m.group(2))), m.end() - m = cls.re_dict_start.match(data, offset) - if m: - offset = m.end() - result = {} - m = cls.re_dict_end.match(data, offset) - while not m: - key, offset = cls.get_value(data, offset, max_nesting=max_nesting - 1) - if offset is None: - return result, None - value, offset = cls.get_value(data, offset, 
max_nesting=max_nesting - 1) - result[key] = value - if offset is None: - return result, None - m = cls.re_dict_end.match(data, offset) - offset = m.end() - m = cls.re_stream_start.match(data, offset) - if m: - try: - stream_len = int(result[b"Length"]) - except (TypeError, KeyError, ValueError) as e: - msg = "bad or missing Length in stream dict (%r)" % result.get( - b"Length", None - ) - raise PdfFormatError(msg) from e - stream_data = data[m.end() : m.end() + stream_len] - m = cls.re_stream_end.match(data, m.end() + stream_len) - check_format_condition(m, "stream end not found") - offset = m.end() - result = PdfStream(PdfDict(result), stream_data) - else: - result = PdfDict(result) - return result, offset - m = cls.re_array_start.match(data, offset) - if m: - offset = m.end() - result = [] - m = cls.re_array_end.match(data, offset) - while not m: - value, offset = cls.get_value(data, offset, max_nesting=max_nesting - 1) - result.append(value) - if offset is None: - return result, None - m = cls.re_array_end.match(data, offset) - return result, m.end() - m = cls.re_null.match(data, offset) - if m: - return None, m.end() - m = cls.re_true.match(data, offset) - if m: - return True, m.end() - m = cls.re_false.match(data, offset) - if m: - return False, m.end() - m = cls.re_name.match(data, offset) - if m: - return PdfName(cls.interpret_name(m.group(1))), m.end() - m = cls.re_int.match(data, offset) - if m: - return int(m.group(1)), m.end() - m = cls.re_real.match(data, offset) - if m: - # XXX Decimal instead of float??? - return float(m.group(1)), m.end() - m = cls.re_string_hex.match(data, offset) - if m: - # filter out whitespace - hex_string = bytearray( - b for b in m.group(1) if b in b"0123456789abcdefABCDEF" - ) - if len(hex_string) % 2 == 1: - # append a 0 if the length is not even - yes, at the end - hex_string.append(ord(b"0")) - return bytearray.fromhex(hex_string.decode("us-ascii")), m.end() - m = cls.re_string_lit.match(data, offset) - if m: - return cls.get_literal_string(data, m.end()) - # return None, offset # fallback (only for debugging) - msg = "unrecognized object: " + repr(data[offset : offset + 32]) - raise PdfFormatError(msg) - - re_lit_str_token = re.compile( - rb"(\\[nrtbf()\\])|(\\[0-9]{1,3})|(\\(\r\n|\r|\n))|(\r\n|\r|\n)|(\()|(\))" - ) - escaped_chars = { - b"n": b"\n", - b"r": b"\r", - b"t": b"\t", - b"b": b"\b", - b"f": b"\f", - b"(": b"(", - b")": b")", - b"\\": b"\\", - ord(b"n"): b"\n", - ord(b"r"): b"\r", - ord(b"t"): b"\t", - ord(b"b"): b"\b", - ord(b"f"): b"\f", - ord(b"("): b"(", - ord(b")"): b")", - ord(b"\\"): b"\\", - } - - @classmethod - def get_literal_string(cls, data, offset): - nesting_depth = 0 - result = bytearray() - for m in cls.re_lit_str_token.finditer(data, offset): - result.extend(data[offset : m.start()]) - if m.group(1): - result.extend(cls.escaped_chars[m.group(1)[1]]) - elif m.group(2): - result.append(int(m.group(2)[1:], 8)) - elif m.group(3): - pass - elif m.group(5): - result.extend(b"\n") - elif m.group(6): - result.extend(b"(") - nesting_depth += 1 - elif m.group(7): - if nesting_depth == 0: - return bytes(result), m.end() - result.extend(b")") - nesting_depth -= 1 - offset = m.end() - msg = "unfinished literal string" - raise PdfFormatError(msg) - - re_xref_section_start = re.compile(whitespace_optional + rb"xref" + newline) - re_xref_subsection_start = re.compile( - whitespace_optional - + rb"([0-9]+)" - + whitespace_mandatory - + rb"([0-9]+)" - + whitespace_optional - + newline_only - ) - re_xref_entry = 
re.compile(rb"([0-9]{10}) ([0-9]{5}) ([fn])( \r| \n|\r\n)") - - def read_xref_table(self, xref_section_offset): - subsection_found = False - m = self.re_xref_section_start.match( - self.buf, xref_section_offset + self.start_offset - ) - check_format_condition(m, "xref section start not found") - offset = m.end() - while True: - m = self.re_xref_subsection_start.match(self.buf, offset) - if not m: - check_format_condition( - subsection_found, "xref subsection start not found" - ) - break - subsection_found = True - offset = m.end() - first_object = int(m.group(1)) - num_objects = int(m.group(2)) - for i in range(first_object, first_object + num_objects): - m = self.re_xref_entry.match(self.buf, offset) - check_format_condition(m, "xref entry not found") - offset = m.end() - is_free = m.group(3) == b"f" - if not is_free: - generation = int(m.group(2)) - new_entry = (int(m.group(1)), generation) - if i not in self.xref_table: - self.xref_table[i] = new_entry - return offset - - def read_indirect(self, ref, max_nesting=-1): - offset, generation = self.xref_table[ref[0]] - check_format_condition( - generation == ref[1], - f"expected to find generation {ref[1]} for object ID {ref[0]} in xref " - f"table, instead found generation {generation} at offset {offset}", - ) - value = self.get_value( - self.buf, - offset + self.start_offset, - expect_indirect=IndirectReference(*ref), - max_nesting=max_nesting, - )[0] - self.cached_objects[ref] = value - return value - - def linearize_page_tree(self, node=None): - if node is None: - node = self.page_tree_root - check_format_condition( - node[b"Type"] == b"Pages", "/Type of page tree node is not /Pages" - ) - pages = [] - for kid in node[b"Kids"]: - kid_object = self.read_indirect(kid) - if kid_object[b"Type"] == b"Page": - pages.append(kid) - else: - pages.extend(self.linearize_page_tree(node=kid_object)) - return pages diff --git a/spaces/carlosalonso/Detection-video/carpeta_deteccion/detectron2/export/torchscript.py b/spaces/carlosalonso/Detection-video/carpeta_deteccion/detectron2/export/torchscript.py deleted file mode 100644 index 24fe59bda44225324928542df3f2ef1745375dfd..0000000000000000000000000000000000000000 --- a/spaces/carlosalonso/Detection-video/carpeta_deteccion/detectron2/export/torchscript.py +++ /dev/null @@ -1,132 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. - -import os -import torch - -from detectron2.utils.file_io import PathManager - -from .torchscript_patch import freeze_training_mode, patch_instances - -__all__ = ["scripting_with_instances", "dump_torchscript_IR"] - - -def scripting_with_instances(model, fields): - """ - Run :func:`torch.jit.script` on a model that uses the :class:`Instances` class. Since - attributes of :class:`Instances` are "dynamically" added in eager mode,it is difficult - for scripting to support it out of the box. This function is made to support scripting - a model that uses :class:`Instances`. It does the following: - - 1. Create a scriptable ``new_Instances`` class which behaves similarly to ``Instances``, - but with all attributes been "static". - The attributes need to be statically declared in the ``fields`` argument. - 2. Register ``new_Instances``, and force scripting compiler to - use it when trying to compile ``Instances``. - - After this function, the process will be reverted. User should be able to script another model - using different fields. 
- - Example: - Assume that ``Instances`` in the model consist of two attributes named - ``proposal_boxes`` and ``objectness_logits`` with type :class:`Boxes` and - :class:`Tensor` respectively during inference. You can call this function like: - :: - fields = {"proposal_boxes": Boxes, "objectness_logits": torch.Tensor} - torchscipt_model = scripting_with_instances(model, fields) - - Note: - It only support models in evaluation mode. - - Args: - model (nn.Module): The input model to be exported by scripting. - fields (Dict[str, type]): Attribute names and corresponding type that - ``Instances`` will use in the model. Note that all attributes used in ``Instances`` - need to be added, regardless of whether they are inputs/outputs of the model. - Data type not defined in detectron2 is not supported for now. - - Returns: - torch.jit.ScriptModule: the model in torchscript format - """ - assert ( - not model.training - ), "Currently we only support exporting models in evaluation mode to torchscript" - - with freeze_training_mode(model), patch_instances(fields): - scripted_model = torch.jit.script(model) - return scripted_model - - -# alias for old name -export_torchscript_with_instances = scripting_with_instances - - -def dump_torchscript_IR(model, dir): - """ - Dump IR of a TracedModule/ScriptModule/Function in various format (code, graph, - inlined graph). Useful for debugging. - - Args: - model (TracedModule/ScriptModule/ScriptFUnction): traced or scripted module - dir (str): output directory to dump files. - """ - dir = os.path.expanduser(dir) - PathManager.mkdirs(dir) - - def _get_script_mod(mod): - if isinstance(mod, torch.jit.TracedModule): - return mod._actual_script_module - return mod - - # Dump pretty-printed code: https://pytorch.org/docs/stable/jit.html#inspecting-code - with PathManager.open(os.path.join(dir, "model_ts_code.txt"), "w") as f: - - def get_code(mod): - # Try a few ways to get code using private attributes. - try: - # This contains more information than just `mod.code` - return _get_script_mod(mod)._c.code - except AttributeError: - pass - try: - return mod.code - except AttributeError: - return None - - def dump_code(prefix, mod): - code = get_code(mod) - name = prefix or "root model" - if code is None: - f.write(f"Could not found code for {name} (type={mod.original_name})\n") - f.write("\n") - else: - f.write(f"\nCode for {name}, type={mod.original_name}:\n") - f.write(code) - f.write("\n") - f.write("-" * 80) - - for name, m in mod.named_children(): - dump_code(prefix + "." 
+ name, m) - - if isinstance(model, torch.jit.ScriptFunction): - f.write(get_code(model)) - else: - dump_code("", model) - - def _get_graph(model): - try: - # Recursively dump IR of all modules - return _get_script_mod(model)._c.dump_to_str(True, False, False) - except AttributeError: - return model.graph.str() - - with PathManager.open(os.path.join(dir, "model_ts_IR.txt"), "w") as f: - f.write(_get_graph(model)) - - # Dump IR of the entire graph (all submodules inlined) - with PathManager.open(os.path.join(dir, "model_ts_IR_inlined.txt"), "w") as f: - f.write(str(model.inlined_graph)) - - if not isinstance(model, torch.jit.ScriptFunction): - # Dump the model structure in pytorch style - with PathManager.open(os.path.join(dir, "model.txt"), "w") as f: - f.write(str(model)) diff --git a/spaces/chatpdfdemo/demo/app.py b/spaces/chatpdfdemo/demo/app.py deleted file mode 100644 index 910d227990c32067b078ec40e142349c2e4309be..0000000000000000000000000000000000000000 --- a/spaces/chatpdfdemo/demo/app.py +++ /dev/null @@ -1,85 +0,0 @@ -import streamlit as st -from dotenv import load_dotenv -import pickle -from PyPDF2 import PdfReader -from streamlit_extras.add_vertical_space import add_vertical_space -from langchain.text_splitter import RecursiveCharacterTextSplitter -from langchain.embeddings.openai import OpenAIEmbeddings -from langchain.vectorstores import FAISS -from langchain.llms import OpenAI -from langchain.chains.question_answering import load_qa_chain -from langchain.callbacks import get_openai_callback -import os - -# Sidebar contents -with st.sidebar: - st.title('🤗💬 LLM Chat App') - st.markdown(''' - ## About - This app is an LLM-powered chatbot built using: - - [Streamlit](https://streamlit.io/) - - [LangChain](https://python.langchain.com/) - - [OpenAI](https://platform.openai.com/docs/models) LLM model - - ''') - add_vertical_space(5) - st.write('Made with ❤️ by [Prompt Engineer](https://youtube.com/@engineerprompt)') - -load_dotenv() - -def main(): - st.header("Chat with PDF 💬") - - - # upload a PDF file - pdf = st.file_uploader("Upload your PDF", type='pdf') - - # st.write(pdf) - if pdf is not None: - pdf_reader = PdfReader(pdf) - - text = "" - for page in pdf_reader.pages: - text += page.extract_text() - - text_splitter = RecursiveCharacterTextSplitter( - chunk_size=1000, - chunk_overlap=200, - length_function=len - ) - chunks = text_splitter.split_text(text=text) - - # # embeddings - store_name = pdf.name[:-4] - st.write(f'{store_name}') - # st.write(chunks) - - if os.path.exists(f"{store_name}.pkl"): - with open(f"{store_name}.pkl", "rb") as f: - VectorStore = pickle.load(f) - # st.write('Embeddings Loaded from the Disk')s - else: - embeddings = OpenAIEmbeddings() - VectorStore = FAISS.from_texts(chunks, embedding=embeddings) - with open(f"{store_name}.pkl", "wb") as f: - pickle.dump(VectorStore, f) - - # embeddings = OpenAIEmbeddings() - # VectorStore = FAISS.from_texts(chunks, embedding=embeddings) - - # Accept user questions/query - query = st.text_input("Ask questions about your PDF file:") - # st.write(query) - - if query: - docs = VectorStore.similarity_search(query=query, k=3) - - llm = OpenAI(model_name='gpt-3.5-turbo') - chain = load_qa_chain(llm=llm, chain_type="stuff") - with get_openai_callback() as cb: - response = chain.run(input_documents=docs, question=query) - print(cb) - st.write(response) - -if __name__ == '__main__': - main() \ No newline at end of file diff --git a/spaces/chendl/compositional_test/multimodal/YOLOX/setup.py 
b/spaces/chendl/compositional_test/multimodal/YOLOX/setup.py deleted file mode 100644 index 5fec79764f284e49947e9b343b59fe3249fa04ed..0000000000000000000000000000000000000000 --- a/spaces/chendl/compositional_test/multimodal/YOLOX/setup.py +++ /dev/null @@ -1,88 +0,0 @@ -#!/usr/bin/env python -# Copyright (c) Megvii, Inc. and its affiliates. All Rights Reserved - -import re -import setuptools -import sys - -TORCH_AVAILABLE = True -try: - import torch - from torch.utils import cpp_extension -except ImportError: - TORCH_AVAILABLE = False - print("[WARNING] Unable to import torch, pre-compiling ops will be disabled.") - - -def get_package_dir(): - pkg_dir = { - "yolox.tools": "tools", - "yolox.exp.default": "exps/default", - } - return pkg_dir - - -def get_install_requirements(): - with open("requirements.txt", "r", encoding="utf-8") as f: - reqs = [x.strip() for x in f.read().splitlines()] - reqs = [x for x in reqs if not x.startswith("#")] - return reqs - - -def get_yolox_version(): - with open("yolox/__init__.py", "r") as f: - version = re.search( - r'^__version__\s*=\s*[\'"]([^\'"]*)[\'"]', - f.read(), re.MULTILINE - ).group(1) - return version - - -def get_long_description(): - with open("README.md", "r", encoding="utf-8") as f: - long_description = f.read() - return long_description - - -def get_ext_modules(): - ext_module = [] - if sys.platform != "win32": # pre-compile ops on linux - assert TORCH_AVAILABLE, "torch is required for pre-compiling ops, please install it first." - # if any other op is added, please also add it here - from yolox.layers import FastCOCOEvalOp - ext_module.append(FastCOCOEvalOp().build_op()) - return ext_module - - -def get_cmd_class(): - cmdclass = {} - if TORCH_AVAILABLE: - cmdclass["build_ext"] = cpp_extension.BuildExtension - return cmdclass - - -setuptools.setup( - name="yolox", - version=get_yolox_version(), - author="megvii basedet team", - url="https://github.com/Megvii-BaseDetection/YOLOX", - package_dir=get_package_dir(), - packages=setuptools.find_packages(exclude=("tests", "tools")) + list(get_package_dir().keys()), - python_requires=">=3.6", - install_requires=get_install_requirements(), - setup_requires=["wheel"], # avoid building error when pip is not updated - long_description=get_long_description(), - long_description_content_type="text/markdown", - include_package_data=True, # include files in MANIFEST.in - ext_modules=get_ext_modules(), - cmdclass=get_cmd_class(), - classifiers=[ - "Programming Language :: Python :: 3", "Operating System :: OS Independent", - "License :: OSI Approved :: Apache Software License", - ], - project_urls={ - "Documentation": "https://yolox.readthedocs.io", - "Source": "https://github.com/Megvii-BaseDetection/YOLOX", - "Tracker": "https://github.com/Megvii-BaseDetection/YOLOX/issues", - }, -) diff --git a/spaces/chendl/compositional_test/transformers/src/transformers/generation/tf_logits_process.py b/spaces/chendl/compositional_test/transformers/src/transformers/generation/tf_logits_process.py deleted file mode 100644 index 369c969526a4f7d3c338e6b9b915660033a3c5a1..0000000000000000000000000000000000000000 --- a/spaces/chendl/compositional_test/transformers/src/transformers/generation/tf_logits_process.py +++ /dev/null @@ -1,586 +0,0 @@ -# coding=utf-8 -# Copyright 2022 The HuggingFace Inc. team -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. 
-# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -import inspect -from typing import List, Tuple - -import numpy as np -import tensorflow as tf - -from ..tf_utils import stable_softmax -from ..utils import add_start_docstrings -from ..utils.logging import get_logger - - -logger = get_logger(__name__) - - -TF_LOGITS_PROCESSOR_INPUTS_DOCSTRING = r""" - Args: - input_ids (`tf.Tensor` of shape `(batch_size, sequence_length)`): - Indices of input sequence tokens in the vocabulary. - - Indices can be obtained using [`PreTrainedTokenizer`]. See [`PreTrainedTokenizer.encode`] and - [`PreTrainedTokenizer.__call__`] for details. - - [What are input IDs?](../glossary#input-ids) - scores (`tf.Tensor` of shape `(batch_size, config.vocab_size)`): - Prediction scores of a language modeling head. These can be logits for each vocabulary when not using beam - search or log softmax for each vocabulary token when using beam search. - cur_len (`int`): - The current length of valid input sequence tokens. In the TF implementation, the input_ids' sequence length - is the maximum length generate can produce, and we need to know which of its tokens are valid. - kwargs: - Additional logits processor specific kwargs. - - Return: - `tf.Tensor` of shape `(batch_size, config.vocab_size)`: The processed prediction scores. -""" - - -class TFLogitsProcessor: - """Abstract base class for all logit processors that can be applied during generation.""" - - @add_start_docstrings(TF_LOGITS_PROCESSOR_INPUTS_DOCSTRING) - def __call__(self, input_ids: tf.Tensor, scores: tf.Tensor, cur_len: int) -> tf.Tensor: - """TF method for processing logits.""" - raise NotImplementedError( - f"{self.__class__} is an abstract class. Only classes inheriting this class can be called." - ) - - -class TFLogitsWarper: - """Abstract base class for all logit warpers that can be applied during generation with multinomial sampling.""" - - @add_start_docstrings(TF_LOGITS_PROCESSOR_INPUTS_DOCSTRING) - def __call__(self, input_ids: tf.Tensor, scores: tf.Tensor, cur_len: int) -> tf.Tensor: - """TF method for warping logits.""" - raise NotImplementedError( - f"{self.__class__} is an abstract class. Only classes inheriting this class can be called." - ) - - -class TFLogitsProcessorList(list): - """ - This class can be used to create a list of [`TFLogitsProcessor`] to subsequently process a `scores` input tensor. - This class inherits from list and adds a specific *__call__* method to apply each [`TFLogitsProcessor`] to the - inputs. - """ - - @add_start_docstrings(TF_LOGITS_PROCESSOR_INPUTS_DOCSTRING) - def __call__(self, input_ids: tf.Tensor, scores: tf.Tensor, cur_len: int, **kwargs) -> tf.Tensor: - for processor in self: - function_args = inspect.signature(processor.__call__).parameters - if len(function_args) > 3: - if not all(arg in kwargs for arg in list(function_args.keys())[2:]): - raise ValueError( - f"Make sure that all the required parameters: {list(function_args.keys())} for " - f"{processor.__class__} are passed to the logits processor." 
- ) - scores = processor(input_ids, scores, cur_len, **kwargs) - else: - scores = processor(input_ids, scores, cur_len) - return scores - - -class TFTemperatureLogitsWarper(TFLogitsWarper): - r""" - [`TFLogitsWarper`] for temperature (exponential scaling output probability distribution). - - Args: - temperature (`float`): - The value used to module the logits distribution. - """ - - def __init__(self, temperature: float): - if not isinstance(temperature, float) or not (temperature > 0): - raise ValueError(f"`temperature` has to be a strictly positive float, but is {temperature}") - - self.temperature = temperature - - def __call__(self, input_ids: tf.Tensor, scores: tf.Tensor, cur_len: int) -> tf.Tensor: - scores = scores / self.temperature - return scores - - -class TFTopKLogitsWarper(TFLogitsWarper): - r""" - [`TFLogitsWarper`] that performs top-k, i.e. restricting to the k highest probability elements. - - Args: - top_k (`int`): - The number of highest probability vocabulary tokens to keep for top-k-filtering. - filter_value (`float`, *optional*, defaults to `-float("Inf")`): - All filtered values will be set to this float value. - min_tokens_to_keep (`int`, *optional*, defaults to 1): - Minimum number of tokens that cannot be filtered. - """ - - def __init__(self, top_k: int, filter_value: float = -float("Inf"), min_tokens_to_keep: int = 1): - if not isinstance(top_k, int) or top_k <= 0: - raise ValueError(f"`top_k` has to be a strictly positive integer, but is {top_k}") - - self.top_k = max(top_k, min_tokens_to_keep) - self.filter_value = filter_value - - def __call__(self, input_ids: tf.Tensor, scores: tf.Tensor, cur_len: int) -> tf.Tensor: - top_k = min(self.top_k, scores.shape[-1]) # Safety check - # Boolean mask containing all tokens with a probability less than the last token of the top-k - indices_to_remove = scores < tf.math.top_k(scores, k=top_k)[0][..., -1:] - next_scores = tf.where(indices_to_remove, self.filter_value, scores) - return next_scores - - -class TFTopPLogitsWarper(TFLogitsWarper): - """ - [`TFLogitsWarper`] that performs top-p, i.e. restricting to top tokens summing to <= prob_cut_off. - - Args: - top_p (`float`): - If set to < 1, only the smallest set of most probable tokens with probabilities that add up to `top_p` or - higher are kept for generation. - filter_value (`float`, *optional*, defaults to `-float("Inf")`): - All filtered values will be set to this float value. - min_tokens_to_keep (`int`, *optional*, defaults to 1): - Minimum number of tokens that cannot be filtered. 
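# A small hedged illustration of the nucleus (top-p) filtering described above; the toy
# logits, vocabulary size and import path are assumptions for the example, not taken
# from this file.
import tensorflow as tf

from transformers.generation.tf_logits_process import TFTopPLogitsWarper

# Toy logits over a 4-token vocabulary; softmax probs are roughly [0.64, 0.24, 0.09, 0.03].
scores = tf.constant([[4.0, 3.0, 2.0, 1.0]])
warper = TFTopPLogitsWarper(top_p=0.8)
filtered = warper(input_ids=tf.constant([[0]]), scores=scores, cur_len=1)
# The smallest set of tokens whose cumulative probability reaches 0.8 is the first two,
# so the logits 2.0 and 1.0 are replaced by -inf and only 4.0 and 3.0 survive.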
- """ - - def __init__(self, top_p: float, filter_value: float = -float("Inf"), min_tokens_to_keep: int = 1): - if not isinstance(top_p, float) or (top_p < 0 or top_p > 1.0): - raise ValueError(f"`top_p` has to be a float > 0 and < 1, but is {top_p}") - - self.top_p = top_p - self.filter_value = filter_value - self.min_tokens_to_keep = min_tokens_to_keep - - def __call__(self, input_ids: tf.Tensor, scores: tf.Tensor, cur_len: int) -> tf.Tensor: - topk_scores, topk_indices = tf.math.top_k(scores, scores.shape[-1]) - - mask_scores = tf.fill(scores.shape, self.filter_value) - cumulative_probs = tf.math.cumsum(stable_softmax(topk_scores, axis=-1), axis=-1) - score_mask = cumulative_probs < self.top_p - - # Also include the token that is higher than top_p (the first false = shift and insert a True on the left) - score_mask = tf.concat((tf.ones([score_mask.shape[0], 1], dtype=tf.bool), score_mask[:, :-1]), axis=-1) - - # Ensure min tokens to keep - score_mask = tf.concat( - ( - tf.ones([score_mask.shape[0], self.min_tokens_to_keep], dtype=tf.bool), - score_mask[:, self.min_tokens_to_keep :], - ), - axis=-1, - ) - - # Mask the values that do not fit the criteria - topk_next_scores = tf.where(score_mask, topk_scores, mask_scores) - - # Undo the topk sorting: converts the 2D matrix of per-row original indices of shape (batch_size, vocab_size) - # to a 3D tensor of shape (batch_size, vocab_size, 2) containing the original score coordinate, from which we - # can scatter (i.e. `scatter_indices[row, col, :]` is a tensor containing `[row, topk_indices[row, col]]`) - scatter_rows = tf.tile(tf.expand_dims(tf.range(topk_indices.shape[0]), axis=-1), [1, topk_indices.shape[-1]]) - scatter_indices = tf.stack((scatter_rows, topk_indices), axis=-1) - next_scores = tf.scatter_nd(scatter_indices, topk_next_scores, shape=topk_next_scores.shape) - - return next_scores - - -class TFMinLengthLogitsProcessor(TFLogitsProcessor): - r""" - [`TFLogitsProcessor`] enforcing a min-length by setting EOS probability to 0. - - Args: - min_length (`int`): - The minimum length below which the score of `eos_token_id` is set to `-float("Inf")`. - eos_token_id (`int`): - The id of the *end-of-sequence* token. - """ - - def __init__(self, min_length: int, eos_token_id: int): - if not isinstance(min_length, int) or min_length < 0: - raise ValueError(f"`min_length` has to be a positive integer, but is {min_length}") - - if not isinstance(eos_token_id, int) or eos_token_id < 0: - raise ValueError(f"`eos_token_id` has to be a positive integer, but is {eos_token_id}") - - self.min_length = min_length - self.eos_token_id = eos_token_id - - def _apply_eos_token_mask(self, scores: tf.Tensor) -> tf.Tensor: - eos_token_id_mask = tf.range(scores.shape[-1]) == self.eos_token_id - scores = tf.where(eos_token_id_mask, float("-inf"), scores) - return scores - - def __call__(self, input_ids: tf.Tensor, scores: tf.Tensor, cur_len: int) -> tf.Tensor: - # applies eos token masking if the first argument is true - scores = tf.cond( - tf.less(cur_len, self.min_length), - lambda: self._apply_eos_token_mask(scores), - lambda: tf.identity(scores), - ) - return scores - - -class TFRepetitionPenaltyLogitsProcessor(TFLogitsProcessor): - r""" - [`TFLogitsProcessor`] enforcing an exponential penalty on repeated sequences. - - Args: - repetition_penalty (`float`): - The parameter for repetition penalty. 1.0 means no penalty. See [this - paper](https://arxiv.org/pdf/1909.05858.pdf) for more details. 
- """ - - def __init__(self, penalty: float): - if not isinstance(penalty, float) or not (penalty > 0): - raise ValueError(f"`penalty` has to be a strictly positive float, but is {penalty}") - - self.penalty = penalty - - def _create_score_penalties(self, input_ids: tf.Tensor, logits: tf.Tensor) -> tf.Tensor: - # We want to populate the penalties in the positions of `input_ids`. Since XLA can't handle shapes unknown - # before runtime, `tf.unique` can't be used. Therefore, we may have redundant updates, when a given row has - # the same token multiple times. - - # Gathers the penalties to apply - logit_penalties = tf.gather(logits, input_ids, axis=1, batch_dims=1) - logit_penalties = tf.where(logit_penalties > 0, 1 / self.penalty, logit_penalties) - logit_penalties = tf.where(logit_penalties < 0, self.penalty, logit_penalties) - - # Scatters the penalties - token_penalties = tf.ones(logits.shape) - batch_size = input_ids.shape[0] - seq_len = tf.shape(input_ids)[1] # the sequence length has dynamic size, hence the dynamic shape - indexable_prev_input_ids = tf.concat( - ( - tf.expand_dims(tf.repeat(tf.range(batch_size), seq_len), axis=-1), - tf.expand_dims(tf.reshape(input_ids, [-1]), axis=-1), - ), - axis=1, - ) - token_penalties = tf.tensor_scatter_nd_update( - token_penalties, indices=indexable_prev_input_ids, updates=tf.reshape(logit_penalties, [-1]) - ) - return token_penalties - - def __call__(self, input_ids: tf.Tensor, scores: tf.Tensor, cur_len: int) -> tf.Tensor: - score_penalties = self._create_score_penalties(input_ids[:, :cur_len], scores) - - scores = tf.math.multiply(scores, score_penalties) - - return scores - - -class TFNoBadWordsLogitsProcessor(TFLogitsProcessor): - """ - [`TFLogitsProcessor`] that enforces that specified sequences will never be sampled. - - Args: - bad_words_ids (`List[List[int]]`): - List of list of token ids that are not allowed to be generated. In order to get the tokens of the words - that should not appear in the generated text, use `tokenizer(bad_word, add_prefix_space=True).input_ids`. - eos_token_id (`int`): - The id of the *end-of-sequence* token. - """ - - def __init__(self, bad_words_ids: List[List[int]], eos_token_id: int): - if not isinstance(bad_words_ids, List) or len(bad_words_ids) == 0: - raise ValueError(f"`bad_words_ids` has to be a non-empty list, but is {bad_words_ids}.") - if any(not isinstance(bad_word_ids, list) for bad_word_ids in bad_words_ids): - raise ValueError(f"`bad_words_ids` has to be a list of lists, but is {bad_words_ids}.") - if any( - any((not isinstance(token_id, (int, np.integer)) or token_id < 0) for token_id in bad_word_ids) - for bad_word_ids in bad_words_ids - ): - raise ValueError( - f"Each list in `bad_words_ids` has to be a list of positive integers, but is {bad_words_ids}." - ) - - # stores the information about bad words in three tensors: - # 1. a rectangular tensor with the forbidden sequences (padded with `-1`), for full data comparisons - self.bad_word_seqs_ids = tf.ragged.constant(bad_words_ids).to_tensor(default_value=-1) - # 2. a tensor with the unpadded length of each forbidden sequence, for quick length comparisons - bad_word_seqs_len = [len(bad_words) for bad_words in bad_words_ids] - if any([word_len == 0 for word_len in bad_word_seqs_len]): - raise ValueError(f"Banned words token sequences {bad_words_ids} cannot have an empty list") - self.bad_word_seqs_len = tf.convert_to_tensor(bad_word_seqs_len, dtype=tf.int32) - # 3. 
a tensor containing the last token for each sequence, for easy access to the tokens that may be banned - self.seq_forbidden_tokens = tf.convert_to_tensor([bad_words[-1] for bad_words in bad_words_ids]) - - def _calc_row_banned_bad_tokens(self, row_input_ids: tf.Tensor) -> tf.Tensor: - def _tokens_match(bad_word_seq_number): - def _len_one(): - # If the bad sequence only has one token, always mask it - return tf.cond( - tf.math.equal(self.bad_word_seqs_len[bad_word_seq_number], 1), - lambda: tf.ones((), dtype=tf.bool), - _len_greater_than_cur_len, - ) - - def _len_greater_than_cur_len(): - # Otherwise, if the bad sequence is longer than the current length they can't ever match - return tf.cond( - tf.math.greater(self.bad_word_seqs_len[bad_word_seq_number], tf.shape(row_input_ids)[0]), - lambda: tf.zeros((), dtype=tf.bool), - _match_found, - ) - - def _match_found(): - # Finaly, runs the actual comparison. Can only be called if the previous comparisons do not yield - # an answer (otherwise we get indexing exceptions) - compare_len = self.bad_word_seqs_len[bad_word_seq_number] - 1 - return tf.cond( - tf.math.reduce_all( - tf.math.equal( - row_input_ids[-compare_len:], self.bad_word_seqs_ids[bad_word_seq_number, :compare_len] - ) - ), - lambda: tf.ones((), dtype=tf.bool), - lambda: tf.zeros((), dtype=tf.bool), - ) - - match = _len_one() - return match - - # Compares the current row against all bad word sequences, obtaining a mask with the matches. - match_mask = tf.map_fn(_tokens_match, tf.range(self.bad_word_seqs_ids.shape[0]), fn_output_signature=tf.bool) - row_banned_tokens = self.seq_forbidden_tokens[match_mask] - return row_banned_tokens - - def __call__(self, input_ids: tf.Tensor, scores: tf.Tensor, cur_len: int) -> tf.Tensor: - # We want to mask some banned tokens, at a score level. Since the banned tokens depend on the previous - # `input_ids`, they may have a different length for each row, and they may even be empty for some rows. - # To remain simple and XLA-compatible, we work on a per-row fashion. - # TODO (Joao): this function might trigger XLA retracing as `cur_len` increases. Fix it if it becomes - # a frequent choke point. (make `cur_len` a tensor?) - def _get_row_updated_score(row_inputs: Tuple[tf.Tensor]) -> tf.Tensor: - row_input_ids, row_score = row_inputs - banned_tokens = self._calc_row_banned_bad_tokens(row_input_ids[:cur_len]) - banned_tokens_mask = tf.scatter_nd( - indices=tf.expand_dims(banned_tokens, axis=-1), - updates=tf.ones_like(banned_tokens, dtype=tf.bool), - shape=row_score.shape, - ) - row_score = tf.where(banned_tokens_mask, -float("inf"), row_score) - return row_score - - scores = tf.map_fn(_get_row_updated_score, (input_ids, scores), fn_output_signature=tf.float32) - return scores - - -class TFNoRepeatNGramLogitsProcessor(TFLogitsProcessor): - r""" - [`TFLogitsProcessor`] that enforces no repetition of n-grams. See - [Fairseq](https://github.com/pytorch/fairseq/blob/a07cb6f40480928c9e0548b737aadd36ee66ac76/fairseq/sequence_generator.py#L345). - - Args: - ngram_size (`int`): - All ngrams of size `ngram_size` can only occur once. 
- """ - - def __init__(self, ngram_size: int): - if not isinstance(ngram_size, int) or ngram_size <= 0: - raise ValueError(f"`ngram_size` has to be a strictly positive integer, but is {ngram_size}") - self.ngram_size = ngram_size - - def calc_banned_ngram_tokens(self, input_ids, num_hypos, cur_len): - # Copied from fairseq for no_repeat_ngram in beam_search - if cur_len + 1 < self.ngram_size: - # return no banned tokens if we haven't generated ngram_size tokens yet - return [[] for _ in range(num_hypos)] - generated_ngrams = [{} for _ in range(num_hypos)] - prev_input_ids = input_ids[:, :cur_len] - for idx in range(num_hypos): - gen_tokens = prev_input_ids[idx].numpy().tolist() - generated_ngram = generated_ngrams[idx] - for ngram in zip(*[gen_tokens[i:] for i in range(self.ngram_size)]): - prev_ngram_tuple = tuple(ngram[:-1]) - generated_ngram[prev_ngram_tuple] = generated_ngram.get(prev_ngram_tuple, []) + [ngram[-1]] - - def _get_generated_ngrams(hypo_idx): - # Before decoding the next token, prevent decoding of ngrams that have already appeared - start_idx = cur_len + 1 - self.ngram_size - ngram_idx = tuple(prev_input_ids[hypo_idx, start_idx:cur_len].numpy().tolist()) - return generated_ngrams[hypo_idx].get(ngram_idx, []) - - banned_tokens = [_get_generated_ngrams(hypo_idx) for hypo_idx in range(num_hypos)] - - return banned_tokens - - def __call__(self, input_ids: tf.Tensor, scores: tf.Tensor, cur_len: int) -> tf.Tensor: - # TODO (joao): enable XLA on this logits processor. See discussion and attempts in - # https://github.com/huggingface/transformers/pull/16974 - if not tf.executing_eagerly(): - raise NotImplementedError("TFNoRepeatNGramLogitsProcessor is only implemented for eager execution.") - - batch_size, vocab_size = scores.shape - banned_tokens = self.calc_banned_ngram_tokens(input_ids, batch_size, cur_len) - - # create banned_tokens boolean mask - banned_tokens_indices_mask = [] - for banned_tokens_slice in banned_tokens: - banned_tokens_indices_mask.append( - [True if token in banned_tokens_slice else False for token in range(vocab_size)] - ) - - scores = tf.where(tf.convert_to_tensor(banned_tokens_indices_mask, dtype=tf.bool), -float("inf"), scores) - - return scores - - -class TFForcedBOSTokenLogitsProcessor(TFLogitsProcessor): - r""" - [`TFLogitsProcessor`] that enforces the specified token as the first generated token. - - Args: - bos_token_id (`int`): - The id of the token to force as the first generated token. - """ - - def __init__(self, bos_token_id: int): - if bos_token_id < 0: - raise ValueError(f"The forced bos token id must be a non-negative integer, got {bos_token_id}") - self.bos_token_id = bos_token_id - - def __call__(self, input_ids: tf.Tensor, scores: tf.Tensor, cur_len: int) -> tf.Tensor: - if cur_len == 1: - batch_size, num_tokens = scores.shape - # sets the score to 0 in the bos_token_id column - scores = tf.zeros((batch_size, 1)) - # sets the score to -inf everywhere else - if self.bos_token_id > 0: - scores = tf.concat((tf.broadcast_to(-float("inf"), (batch_size, self.bos_token_id)), scores), axis=-1) - if self.bos_token_id < (num_tokens - 1): - scores = tf.concat( - (scores, tf.broadcast_to(-float("inf"), (batch_size, (num_tokens - 1) - self.bos_token_id))), - axis=-1, - ) - return scores - - -class TFForcedEOSTokenLogitsProcessor(TFLogitsProcessor): - r""" - [`TFLogitsProcessor`] that enforces the specified token as the last generated token when `max_length` is reached. 
- - Args: - max_length (`int`): - The maximum length of the sequence to be generated. - eos_token_id (`int`): - The id of the token to force as the last generated token when `max_length` is reached. - """ - - def __init__(self, max_length: int, eos_token_id: int): - self.max_length = max_length - if eos_token_id < 0: - raise ValueError(f"The forced eos token id must be a non-negative integer, got {eos_token_id}") - self.eos_token_id = eos_token_id - - def __call__(self, input_ids: tf.Tensor, scores: tf.Tensor, cur_len: int) -> tf.Tensor: - if cur_len == self.max_length - 1: - batch_size, num_tokens = scores.shape - # sets the score to 0 in the eos_token_id column - scores = tf.zeros((batch_size, 1)) - # sets the score to -inf everywhere else - if self.eos_token_id > 0: - scores = tf.concat((tf.broadcast_to(-float("inf"), (batch_size, self.eos_token_id)), scores), axis=-1) - if self.eos_token_id < (num_tokens - 1): - scores = tf.concat( - (scores, tf.broadcast_to(-float("inf"), (batch_size, (num_tokens - 1) - self.eos_token_id))), - axis=-1, - ) - return scores - - -class TFSuppressTokensAtBeginLogitsProcessor(TFLogitsProcessor): - r""" - [`TFSuppressTokensAtBeginLogitsProcessor`] suppresses a list of tokens as soon as the `generate` function starts - generating using `begin_index` tokens. This should ensure that the tokens defined by `begin_suppress_tokens` at not - sampled at the begining of the generation. - """ - - def __init__(self, begin_suppress_tokens, begin_index): - self.begin_suppress_tokens = list(begin_suppress_tokens) - self.begin_index = begin_index - - def __call__(self, input_ids: tf.Tensor, scores: tf.Tensor, cur_len: int) -> tf.Tensor: - scores = tf.cond( - tf.equal(cur_len, self.begin_index), - lambda: tf.tensor_scatter_nd_update( - scores, - indices=[[i, token] for i in range(scores.shape[0]) for token in self.begin_suppress_tokens], - updates=[-float("inf") for _ in range(scores.shape[0] * len(self.begin_suppress_tokens))], - ), - lambda: scores, - ) - return scores - - -class TFSuppressTokensLogitsProcessor(TFLogitsProcessor): - r"""This processor can be used to suppress a list of tokens. The processor will set their log probs to `-inf` so that they - are not sampled.""" - - def __init__(self, suppress_tokens): - self.suppress_tokens = list(suppress_tokens) - - def __call__(self, input_ids: tf.Tensor, scores: tf.Tensor, cur_len: int) -> tf.Tensor: - scores = tf.tensor_scatter_nd_update( - scores, - indices=[[i, token] for i in range(scores.shape[0]) for token in self.suppress_tokens], - updates=[-float("inf") for _ in range(scores.shape[0] * len(self.suppress_tokens))], - ) - return scores - - -class TFForceTokensLogitsProcessor(TFLogitsProcessor): - r"""This processor takes a list of pairs of integers which indicates a mapping from generation indices to token - indices that will be forced before sampling. The processor will set their log probs to `0` and all other tokens to - `-inf` so that they are sampled at their corresponding index.""" - - def __init__(self, force_token_map: List[List[int]]): - force_token_map = dict(force_token_map) - # Converts the dictionary of format {index: token} containing the tokens to be forced to an array, where the - # index of the array corresponds to the index of the token to be forced, for XLA compatibility. - # Indexes without forced tokens will have an negative value. 
- force_token_array = np.ones((max(force_token_map.keys()) + 1), dtype=np.int32) * -1 - for index, token in force_token_map.items(): - if token is not None: - force_token_array[index] = token - self.force_token_array = tf.convert_to_tensor(force_token_array, dtype=tf.int32) - - def __call__(self, input_ids: tf.Tensor, scores: tf.Tensor, cur_len: int) -> tf.Tensor: - def _force_token(generation_idx): - batch_size = scores.shape[0] - current_token = self.force_token_array[generation_idx] - - new_scores = tf.ones_like(scores, dtype=scores.dtype) * -float("inf") - indices = tf.stack((tf.range(batch_size), tf.tile([current_token], [batch_size])), axis=1) - updates = tf.zeros((batch_size,), dtype=scores.dtype) - new_scores = tf.tensor_scatter_nd_update(new_scores, indices, updates) - return new_scores - - scores = tf.cond( - tf.greater_equal(cur_len, tf.shape(self.force_token_array)[0]), - # If the current length is geq than the length of force_token_array, the processor does nothing. - lambda: tf.identity(scores), - # Otherwise, it may force a certain token. - lambda: tf.cond( - tf.greater_equal(self.force_token_array[cur_len], 0), - # Only valid (positive) tokens are forced - lambda: _force_token(cur_len), - # Otherwise, the processor does nothing. - lambda: scores, - ), - ) - return scores diff --git a/spaces/chenmgtea/cn_tts/text/symbols.py b/spaces/chenmgtea/cn_tts/text/symbols.py deleted file mode 100644 index 80fd41ea8ee57725ce0f76aa5347a3a1fdd0047d..0000000000000000000000000000000000000000 --- a/spaces/chenmgtea/cn_tts/text/symbols.py +++ /dev/null @@ -1,71 +0,0 @@ -_pause = ["sil", "eos", "sp", "#0", "#1", "#2", "#3"] - -_initials = [ - "^", - "b", - "c", - "ch", - "d", - "f", - "g", - "h", - "j", - "k", - "l", - "m", - "n", - "p", - "q", - "r", - "s", - "sh", - "t", - "x", - "z", - "zh", -] - -_tones = ["1", "2", "3", "4", "5"] - -_finals = [ - "a", - "ai", - "an", - "ang", - "ao", - "e", - "ei", - "en", - "eng", - "er", - "i", - "ia", - "ian", - "iang", - "iao", - "ie", - "ii", - "iii", - "in", - "ing", - "iong", - "iou", - "o", - "ong", - "ou", - "u", - "ua", - "uai", - "uan", - "uang", - "uei", - "uen", - "ueng", - "uo", - "v", - "van", - "ve", - "vn", -] - -symbols = _pause + _initials + [i + j for i in _finals for j in _tones] \ No newline at end of file diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/charset_normalizer/__init__.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/charset_normalizer/__init__.py deleted file mode 100644 index 55991fc38062b9c800805437ee49b0cf42b98103..0000000000000000000000000000000000000000 --- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/charset_normalizer/__init__.py +++ /dev/null @@ -1,46 +0,0 @@ -# -*- coding: utf-8 -*- -""" -Charset-Normalizer -~~~~~~~~~~~~~~ -The Real First Universal Charset Detector. -A library that helps you read text from an unknown charset encoding. -Motivated by chardet, This package is trying to resolve the issue by taking a new approach. -All IANA character set names for which the Python core library provides codecs are supported. - -Basic usage: - >>> from charset_normalizer import from_bytes - >>> results = from_bytes('Bсеки човек има право на образование. Oбразованието!'.encode('utf_8')) - >>> best_guess = results.best() - >>> str(best_guess) - 'Bсеки човек има право на образование. Oбразованието!' - -Others methods and usages are available - see the full documentation -at . 
-:copyright: (c) 2021 by Ahmed TAHRI -:license: MIT, see LICENSE for more details. -""" -import logging - -from .api import from_bytes, from_fp, from_path, is_binary -from .legacy import detect -from .models import CharsetMatch, CharsetMatches -from .utils import set_logging_handler -from .version import VERSION, __version__ - -__all__ = ( - "from_fp", - "from_path", - "from_bytes", - "is_binary", - "detect", - "CharsetMatch", - "CharsetMatches", - "__version__", - "VERSION", - "set_logging_handler", -) - -# Attach a NullHandler to the top level logger by default -# https://docs.python.org/3.3/howto/logging.html#configuring-logging-for-a-library - -logging.getLogger("charset_normalizer").addHandler(logging.NullHandler()) diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/docx/opc/pkgwriter.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/docx/opc/pkgwriter.py deleted file mode 100644 index fccda6cd820f0492e59a0235b95a03d7120fe7e5..0000000000000000000000000000000000000000 --- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/docx/opc/pkgwriter.py +++ /dev/null @@ -1,125 +0,0 @@ -# encoding: utf-8 - -""" -Provides a low-level, write-only API to a serialized Open Packaging -Convention (OPC) package, essentially an implementation of OpcPackage.save() -""" - -from __future__ import absolute_import - -from .constants import CONTENT_TYPE as CT -from .oxml import CT_Types, serialize_part_xml -from .packuri import CONTENT_TYPES_URI, PACKAGE_URI -from .phys_pkg import PhysPkgWriter -from .shared import CaseInsensitiveDict -from .spec import default_content_types - - -class PackageWriter(object): - """ - Writes a zip-format OPC package to *pkg_file*, where *pkg_file* can be - either a path to a zip file (a string) or a file-like object. Its single - API method, :meth:`write`, is static, so this class is not intended to - be instantiated. - """ - @staticmethod - def write(pkg_file, pkg_rels, parts): - """ - Write a physical package (.pptx file) to *pkg_file* containing - *pkg_rels* and *parts* and a content types stream based on the - content types of the parts. - """ - phys_writer = PhysPkgWriter(pkg_file) - PackageWriter._write_content_types_stream(phys_writer, parts) - PackageWriter._write_pkg_rels(phys_writer, pkg_rels) - PackageWriter._write_parts(phys_writer, parts) - phys_writer.close() - - @staticmethod - def _write_content_types_stream(phys_writer, parts): - """ - Write ``[Content_Types].xml`` part to the physical package with an - appropriate content type lookup target for each part in *parts*. - """ - cti = _ContentTypesItem.from_parts(parts) - phys_writer.write(CONTENT_TYPES_URI, cti.blob) - - @staticmethod - def _write_parts(phys_writer, parts): - """ - Write the blob of each part in *parts* to the package, along with a - rels item for its relationships if and only if it has any. - """ - for part in parts: - phys_writer.write(part.partname, part.blob) - if len(part._rels): - phys_writer.write(part.partname.rels_uri, part._rels.xml) - - @staticmethod - def _write_pkg_rels(phys_writer, pkg_rels): - """ - Write the XML rels item for *pkg_rels* ('/_rels/.rels') to the - package. - """ - phys_writer.write(PACKAGE_URI.rels_uri, pkg_rels.xml) - - -class _ContentTypesItem(object): - """ - Service class that composes a content types item ([Content_Types].xml) - based on a list of parts. Not meant to be instantiated directly, its - single interface method is xml_for(), e.g. 
- ``_ContentTypesItem.xml_for(parts)``. - """ - def __init__(self): - self._defaults = CaseInsensitiveDict() - self._overrides = dict() - - @property - def blob(self): - """ - Return XML form of this content types item, suitable for storage as - ``[Content_Types].xml`` in an OPC package. - """ - return serialize_part_xml(self._element) - - @classmethod - def from_parts(cls, parts): - """ - Return content types XML mapping each part in *parts* to the - appropriate content type and suitable for storage as - ``[Content_Types].xml`` in an OPC package. - """ - cti = cls() - cti._defaults['rels'] = CT.OPC_RELATIONSHIPS - cti._defaults['xml'] = CT.XML - for part in parts: - cti._add_content_type(part.partname, part.content_type) - return cti - - def _add_content_type(self, partname, content_type): - """ - Add a content type for the part with *partname* and *content_type*, - using a default or override as appropriate. - """ - ext = partname.ext - if (ext.lower(), content_type) in default_content_types: - self._defaults[ext] = content_type - else: - self._overrides[partname] = content_type - - @property - def _element(self): - """ - Return XML form of this content types item, suitable for storage as - ``[Content_Types].xml`` in an OPC package. Although the sequence of - elements is not strictly significant, as an aid to testing and - readability Default elements are sorted by extension and Override - elements are sorted by partname. - """ - _types_elm = CT_Types.new() - for ext in sorted(self._defaults.keys()): - _types_elm.add_default(ext, self._defaults[ext]) - for partname in sorted(self._overrides.keys()): - _types_elm.add_override(partname, self._overrides[partname]) - return _types_elm diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/docx/parts/__init__.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/docx/parts/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/cihyFjudo/fairness-paper-search/EMERGENCY 20 Update V4 2 0-PLAZA Crack Free The Ultimate Guide to the Best Emergency Simulation Game.md b/spaces/cihyFjudo/fairness-paper-search/EMERGENCY 20 Update V4 2 0-PLAZA Crack Free The Ultimate Guide to the Best Emergency Simulation Game.md deleted file mode 100644 index 15aa6eb5a87a2151a677b7874cee0d11acb04c20..0000000000000000000000000000000000000000 --- a/spaces/cihyFjudo/fairness-paper-search/EMERGENCY 20 Update V4 2 0-PLAZA Crack Free The Ultimate Guide to the Best Emergency Simulation Game.md +++ /dev/null @@ -1,6 +0,0 @@ -

    EMERGENCY 20 Update V4 2 0-PLAZA Crack Free


    Download Ziphttps://tinurli.com/2uwhBe



    - - aaccfb2cb3
    -
    -
    -

    diff --git a/spaces/cihyFjudo/fairness-paper-search/Onone Perfect Photo Suite 8 Torrent Mac LINK.md b/spaces/cihyFjudo/fairness-paper-search/Onone Perfect Photo Suite 8 Torrent Mac LINK.md deleted file mode 100644 index 0371e1c4f4d4315418cdf5eef8c52e53d2ba89de..0000000000000000000000000000000000000000 --- a/spaces/cihyFjudo/fairness-paper-search/Onone Perfect Photo Suite 8 Torrent Mac LINK.md +++ /dev/null @@ -1,6 +0,0 @@ -
    -

    Since 2017, Portland-based ON1 Software's flagship product has been ON1 Photo Raw, an all-in-one photographic toolbox. This suite gives you pretty much everything you'll need to import, organize, edit and print your photos. With year's end fast approaching, another major release of the app has just landed and it's jam-packed with new features.

    -

    As you can tell from the name, Aurora HDR is designed for High Dynamic Range (HDR) photography. The app automatically aligns and merges multiple exposures with the help of Artificial Intelligence to create a single tone-mapped image. For single-exposure photos, Aurora creates a tonal map that allows you to achieve outstanding results by bringing out more information to work with. Aurora HDR offers over 20 core tools for tweaking your HDR photos to perfection, including Dodge & Burn, Denoise, Tone Curve, LUT Mapping, HSL, and a Polarizing filter. It also boasts a number of automated tools like HDR Denoise, HDR Smart Structure, and HDR Clarity that automatically recognize and fix flaws in your photos.

    -

    Onone Perfect Photo Suite 8 Torrent Mac


    Download Zip ->->->-> https://tinurli.com/2uwiFE



    aaccfb2cb3
    -
    -
    \ No newline at end of file diff --git a/spaces/cihyFjudo/fairness-paper-search/Ragnarok Mobile Mod Offline ((BETTER)).md b/spaces/cihyFjudo/fairness-paper-search/Ragnarok Mobile Mod Offline ((BETTER)).md deleted file mode 100644 index 88597a090d6d88fc4f1b865e01cfd448a09f2a94..0000000000000000000000000000000000000000 --- a/spaces/cihyFjudo/fairness-paper-search/Ragnarok Mobile Mod Offline ((BETTER)).md +++ /dev/null @@ -1,33 +0,0 @@ -
    -

Ragnarok M: Eternal Love's latest update, RO 2.0, includes an offline mode where players can now grind in the game even without going online. This is very helpful for players who have no time to grind in the game.

    -

Are you an anime fan who loves extremely adorable characters and is eager to own an anime character that is truly unique? Or do you like exciting, dramatic love stories and fiery battles? If you cannot find them in real life, why not come to the mobile gaming world to experience them? Ragnarok M: Eternal Love (English version), a brand new game from the publisher Heartbeat Network (a Chinese publisher), promises to be one of the hottest games of 2018 and will satisfy all of these desires: anime-style characters, exciting love stories, and fierce battles that never stop.

    -

    Ragnarok Mobile Mod Offline


    Download File ✯✯✯ https://tinurli.com/2uwkyv



    -

Also, there are healing remedies and abilities that help you survive better when faced with them. You will have to use the money you earn to upgrade your equipment as well as your skills, so it is essential to spend it properly. At the start of the game some features will be locked, and as you progress through missions and promotions, those features will open up. So take your time, enjoy the game, and have plenty of fun developing your character offline.

    -

Although currently available only in China and Korea, the publisher promises to deliver an English version so that players around the world can understand and take part in the game. So, if you love adventure RPGs that do not shy away from romance, download it now and experience this great game offline!

    -

There are times when you might want to appear offline in Roblox. You may want to play some game alone, you might be playing with some random people online, or you may have other reasons. Whatever the case may be, the feature to hide your visibility from your Friends is needed in Roblox. And since it is a massive game platform where you can find a ton of games to play, you are bound to meet some friends online. So in this guide let us find out whether you can appear Offline in Roblox and how to do it.

    -

    That sums up this guide on whether you can appear offline in Roblox and how to do that. If you are new to Roblox then you should find our other guides helpful like how to whisper and how to give Robux.

    -


This package is a good demonstration of how everything should be configured by yourself to make it work.
    We can find here a good guide on how to configure everything by yourself:
    -how-to-setup-offline-server-for-personal-development-use/

    -

    No, you do not need a paid PS Plus account to access any part of God of War Ragnarok on PlayStation 4 and PlayStation 5. With it being an offline, single-player experience, PS Plus is not a requirement.

    -

    In terms of flexibility in gaming, no other platform can match the PC. The hardware comes with a lot of advantages for those that can overcome the often-daunting price of setting up a computer. As a bonus, while consoles require a subscription fee for online gaming, the majority of PC games have free online. Regardless, many people find the most enjoyment in offline PC games.

    -

    -

    Whether searching for triple-A open-world behemoths or indie darlings that use pixel art, PC players are spoiled for choice. New games are released on services such as Steam on a daily basis, and while they might not all be classics, there is never a shortage of great titles. What are the best offline games for PC?

    -

PvP can be a big part of the Elden Ring experience, provided a player wants it to be. While online certainly adds another dimension to FromSoftware's masterpiece, Elden Ring also offers an offline mode for those who prefer to focus solely on PvE or do not have access to the internet at that moment. An absolutely massive action RPG that implements the Souls formula in a proper open-world setting, Elden Ring has garnered almost universal acclaim, receiving praise for its combat, exploration, and replayability.

    -

    Far Cry 6 is perhaps at its best when played in co-op, which does require an internet connection. However, the game can be played offline without much issue, and it delivers a fun open-world Ubisoft experience in the process.

    -

    For the most part, JRPGs tend to be (lengthy) single-player experiences, so they are typically a safe bet for anyone looking for offline games for PC. Bandai Namco's Tales of Arise is the most recent entry in the long-running real-time action RPG franchise, and it is one of the best in the series.

    -

Initially, Ember Lab's Kena: Bridge of Spirits did not support offline mode on PC; however, the feature was patched in later down the line. A rare example of a modern AA title, Kena is a beautiful adventure game about a spirit guide investigating a corrupted forest and the Rot she collects along the way. Kena takes a few pages from a number of other adventure games like Tomb Raider, using this inspiration to craft a familiar but engaging experience that does not overstay its welcome.

    -

    System Shock 2 tells an engrossing narrative featuring a contender for the greatest villain in gaming history, SHODAN. The plot is built up nicely through environmental storytelling and cutscenes, culminating in an unforgettable final act. System Shock 2 is available on Steam and GOG, but the latter is a slightly better pick for offline play.

    -

    This 2022 release absolutely nails its gameplay, combat, and environments. Visually, Tunic is gorgeous by any metric, and that extends to its enemy designs. If someone is searching for one of the best offline games for PC, this title needs to be in contention.

    -

    The scale of the game is massive. Dialogue is engaging and most players can find something to appreciate in the game. Using a controller is easy and straightforward with this offline PC game. It's a must-have PC RPG.

    -

    Putting aside a few spin-offs, Resident Evil games are generally designed to be single-player experiences, and the remake is perfectly playable offline. For anyone who has yet to dip their toes into Capcom's license or even horror games in general, Resident Evil 2 is one of the best places to start.

    -

    As an offline-only game, what The Witcher 3 does, it does perfectly. There is no need for online as it would only take away from the game. If you only play one role-playing game this year, The Witcher 3: Wild Hunt is a solid choice.

    -

    Neatly packaged in one collection on consoles but sold separately on Steam, the BioShock: The Collection offers remastered versions of all three games. The BioShock franchise is considered one of the greatest single-player trilogies in gaming. Its refined gameplay mechanics combined with its story that twists and turns make it a must-play offline PC game.

    -

    Valve's Half-Life 2 is generally considered one of the best, if not the best offline PC game of all time. Launching in 2004 and spawning two episodes, Half-Life 2 expands everything from its predecessor, turning a claustrophobic shooter into a blockbuster. With Earth succumbing to a multidimensional force called the Combine, Gordon Freeman goes on a cross-country trip as part of humanity's resistance unit.

    -

    After all this time, Half-Life 2 naturally shows its age; however, the game remains fun and exhilarating to play. While a VR release, 2020's Half-Life: Alyx is also a great offline game.

    -

    Rockstar hit the jackpot with Grand Theft Auto Online, crafting a multiplayer package that continues to be a huge success nearly a decade following its introduction. While the single-player campaign has been overshadowed by the online version, Grand Theft Auto 5 nevertheless offers one of the best offline PC gaming experiences on the market.

    -

    Fans of the series are eagerly waiting for The Elder Scrolls 6 since the franchise is arguably at its best when focusing on offline single-player content. The Elder Scrolls Online has a sizable fan base, but the difference between The Elder Scrolls V: Skyrim and The Elder Scrolls Online is night and day.

    -


    -

The main feature of God of War: Mobile Edition is its battles with gigantic bosses. Each boss has its own characteristics, and this part of the game deserves special attention: the bosses are well drawn and can genuinely frighten players. They are also reminiscent of ancient Greek myths, which helps to immerse the player in the atmosphere.

    -

In this article, you will find the link to the God of War Ragnarok APK + OBB download for Android so you can try this game on your devices. Also, make sure to share this page with your friends to let them know about the God of War Ragnarok APK for mobile.

    -

The addition of new enemies and characters can easily be seen in this game as you travel through the nine realms in search of new answers. This is great work by the modders and creators of this game, letting people enjoy the God of War Ragnarok APK + OBB download for Android.

    aaccfb2cb3
    -
    -
    \ No newline at end of file diff --git a/spaces/cihyFjudo/fairness-paper-search/The Kis Kisko Pyaar Karoon Full Movie In Hindi Download Utorrent For Free.md b/spaces/cihyFjudo/fairness-paper-search/The Kis Kisko Pyaar Karoon Full Movie In Hindi Download Utorrent For Free.md deleted file mode 100644 index 4d92e6252a03bae8b161d4265aeca038bbf2014c..0000000000000000000000000000000000000000 --- a/spaces/cihyFjudo/fairness-paper-search/The Kis Kisko Pyaar Karoon Full Movie In Hindi Download Utorrent For Free.md +++ /dev/null @@ -1,6 +0,0 @@ -

    the Kis Kisko Pyaar Karoon full movie in hindi download utorrent for free


    Download File 🗸 https://tinurli.com/2uwhSd



    -
    - aaccfb2cb3
    -
    -
    -

    diff --git a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/fontTools/misc/visitor.py b/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/fontTools/misc/visitor.py deleted file mode 100644 index 3d28135fad3a951c447d03b7f2b08403cb24a12e..0000000000000000000000000000000000000000 --- a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/fontTools/misc/visitor.py +++ /dev/null @@ -1,143 +0,0 @@ -"""Generic visitor pattern implementation for Python objects.""" - -import enum - - -class Visitor(object): - - defaultStop = False - - @classmethod - def _register(celf, clazzes_attrs): - assert celf != Visitor, "Subclass Visitor instead." - if "_visitors" not in celf.__dict__: - celf._visitors = {} - - def wrapper(method): - assert method.__name__ == "visit" - for clazzes, attrs in clazzes_attrs: - if type(clazzes) != tuple: - clazzes = (clazzes,) - if type(attrs) == str: - attrs = (attrs,) - for clazz in clazzes: - _visitors = celf._visitors.setdefault(clazz, {}) - for attr in attrs: - assert attr not in _visitors, ( - "Oops, class '%s' has visitor function for '%s' defined already." - % (clazz.__name__, attr) - ) - _visitors[attr] = method - return None - - return wrapper - - @classmethod - def register(celf, clazzes): - if type(clazzes) != tuple: - clazzes = (clazzes,) - return celf._register([(clazzes, (None,))]) - - @classmethod - def register_attr(celf, clazzes, attrs): - clazzes_attrs = [] - if type(clazzes) != tuple: - clazzes = (clazzes,) - if type(attrs) == str: - attrs = (attrs,) - for clazz in clazzes: - clazzes_attrs.append((clazz, attrs)) - return celf._register(clazzes_attrs) - - @classmethod - def register_attrs(celf, clazzes_attrs): - return celf._register(clazzes_attrs) - - @classmethod - def _visitorsFor(celf, thing, _default={}): - typ = type(thing) - - for celf in celf.mro(): - - _visitors = getattr(celf, "_visitors", None) - if _visitors is None: - break - - m = celf._visitors.get(typ, None) - if m is not None: - return m - - return _default - - def visitObject(self, obj, *args, **kwargs): - """Called to visit an object. This function loops over all non-private - attributes of the objects and calls any user-registered (via - @register_attr() or @register_attrs()) visit() functions. 
- - If there is no user-registered visit function, of if there is and it - returns True, or it returns None (or doesn't return anything) and - visitor.defaultStop is False (default), then the visitor will proceed - to call self.visitAttr()""" - - keys = sorted(vars(obj).keys()) - _visitors = self._visitorsFor(obj) - defaultVisitor = _visitors.get("*", None) - for key in keys: - if key[0] == "_": - continue - value = getattr(obj, key) - visitorFunc = _visitors.get(key, defaultVisitor) - if visitorFunc is not None: - ret = visitorFunc(self, obj, key, value, *args, **kwargs) - if ret == False or (ret is None and self.defaultStop): - continue - self.visitAttr(obj, key, value, *args, **kwargs) - - def visitAttr(self, obj, attr, value, *args, **kwargs): - """Called to visit an attribute of an object.""" - self.visit(value, *args, **kwargs) - - def visitList(self, obj, *args, **kwargs): - """Called to visit any value that is a list.""" - for value in obj: - self.visit(value, *args, **kwargs) - - def visitDict(self, obj, *args, **kwargs): - """Called to visit any value that is a dictionary.""" - for value in obj.values(): - self.visit(value, *args, **kwargs) - - def visitLeaf(self, obj, *args, **kwargs): - """Called to visit any value that is not an object, list, - or dictionary.""" - pass - - def visit(self, obj, *args, **kwargs): - """This is the main entry to the visitor. The visitor will visit object - obj. - - The visitor will first determine if there is a registered (via - @register()) visit function for the type of object. If there is, it - will be called, and (visitor, obj, *args, **kwargs) will be passed to - the user visit function. - - If there is no user-registered visit function, of if there is and it - returns True, or it returns None (or doesn't return anything) and - visitor.defaultStop is False (default), then the visitor will proceed - to dispatch to one of self.visitObject(), self.visitList(), - self.visitDict(), or self.visitLeaf() (any of which can be overriden in - a subclass).""" - - visitorFunc = self._visitorsFor(obj).get(None, None) - if visitorFunc is not None: - ret = visitorFunc(self, obj, *args, **kwargs) - if ret == False or (ret is None and self.defaultStop): - return - if hasattr(obj, "__dict__") and not isinstance(obj, enum.Enum): - self.visitObject(obj, *args, **kwargs) - elif isinstance(obj, list): - self.visitList(obj, *args, **kwargs) - elif isinstance(obj, dict): - self.visitDict(obj, *args, **kwargs) - else: - self.visitLeaf(obj, *args, **kwargs) diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/asvdec.c b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/asvdec.c deleted file mode 100644 index 699aab9f8f9865f7a74ede24a48787b9638141bb..0000000000000000000000000000000000000000 --- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/asvdec.c +++ /dev/null @@ -1,373 +0,0 @@ -/* - * Copyright (c) 2003 Michael Niedermayer - * - * This file is part of FFmpeg. - * - * FFmpeg is free software; you can redistribute it and/or - * modify it under the terms of the GNU Lesser General Public - * License as published by the Free Software Foundation; either - * version 2.1 of the License, or (at your option) any later version. - * - * FFmpeg is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * Lesser General Public License for more details. 
- * - * You should have received a copy of the GNU Lesser General Public - * License along with FFmpeg; if not, write to the Free Software - * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA - */ - -/** - * @file - * ASUS V1/V2 decoder. - */ - -#include "libavutil/attributes.h" -#include "libavutil/mem.h" -#include "libavutil/mem_internal.h" -#include "libavutil/thread.h" - -#include "asv.h" -#include "avcodec.h" -#include "blockdsp.h" -#include "codec_internal.h" -#include "config_components.h" -#include "decode.h" -#include "get_bits.h" -#include "idctdsp.h" -#include "mpeg12data.h" -#include "vlc.h" - -#define CCP_VLC_BITS 5 -#define DC_CCP_VLC_BITS 4 -#define AC_CCP_VLC_BITS 6 -#define ASV1_LEVEL_VLC_BITS 4 -#define ASV2_LEVEL_VLC_BITS 10 - -static VLC ccp_vlc; -static VLC level_vlc; -static VLC dc_ccp_vlc; -static VLC ac_ccp_vlc; -static VLC asv2_level_vlc; - -typedef struct ASVDecContext { - ASVCommonContext c; - - GetBitContext gb; - - BlockDSPContext bdsp; - IDCTDSPContext idsp; - uint8_t permutated_scantable[64]; - DECLARE_ALIGNED(32, int16_t, block)[6][64]; - uint16_t intra_matrix[64]; - uint8_t *bitstream_buffer; - unsigned int bitstream_buffer_size; -} ASVDecContext; - -static av_cold void init_vlcs(void) -{ - INIT_VLC_STATIC(&ccp_vlc, CCP_VLC_BITS, 17, - &ff_asv_ccp_tab[0][1], 2, 1, - &ff_asv_ccp_tab[0][0], 2, 1, 32); - INIT_LE_VLC_STATIC(&dc_ccp_vlc, DC_CCP_VLC_BITS, 8, - &ff_asv_dc_ccp_tab[0][1], 2, 1, - &ff_asv_dc_ccp_tab[0][0], 2, 1, 16); - INIT_LE_VLC_STATIC(&ac_ccp_vlc, AC_CCP_VLC_BITS, 16, - &ff_asv_ac_ccp_tab[0][1], 2, 1, - &ff_asv_ac_ccp_tab[0][0], 2, 1, 64); - INIT_VLC_STATIC(&level_vlc, ASV1_LEVEL_VLC_BITS, 7, - &ff_asv_level_tab[0][1], 2, 1, - &ff_asv_level_tab[0][0], 2, 1, 16); - INIT_LE_VLC_STATIC(&asv2_level_vlc, ASV2_LEVEL_VLC_BITS, 63, - &ff_asv2_level_tab[0][1], 4, 2, - &ff_asv2_level_tab[0][0], 4, 2, 1024); -} - -static inline int asv1_get_level(GetBitContext *gb) -{ - int code = get_vlc2(gb, level_vlc.table, ASV1_LEVEL_VLC_BITS, 1); - - if (code == 3) - return get_sbits(gb, 8); - else - return code - 3; -} - -// get_vlc2() is big-endian in this file -static inline int asv2_get_vlc2(GetBitContext *gb, const VLCElem *table, int bits) -{ - unsigned int index; - int code, n; - - OPEN_READER(re, gb); - UPDATE_CACHE_LE(re, gb); - - index = SHOW_UBITS_LE(re, gb, bits); - code = table[index].sym; - n = table[index].len; - LAST_SKIP_BITS(re, gb, n); - - CLOSE_READER(re, gb); - - return code; -} - -static inline int asv2_get_level(GetBitContext *gb) -{ - int code = asv2_get_vlc2(gb, asv2_level_vlc.table, ASV2_LEVEL_VLC_BITS); - - if (code == 31) - return (int8_t) get_bits_le(gb, 8); - else - return code - 31; -} - -static inline int asv1_decode_block(ASVDecContext *a, int16_t block[64]) -{ - int i; - - block[0] = 8 * get_bits(&a->gb, 8); - - for (i = 0; i < 11; i++) { - const int ccp = get_vlc2(&a->gb, ccp_vlc.table, CCP_VLC_BITS, 1); - - if (ccp) { - if (ccp == 16) - break; - if (ccp < 0 || i >= 10) { - av_log(a->c.avctx, AV_LOG_ERROR, "coded coeff pattern damaged\n"); - return AVERROR_INVALIDDATA; - } - - if (ccp & 8) - block[a->permutated_scantable[4 * i + 0]] = (asv1_get_level(&a->gb) * a->intra_matrix[4 * i + 0]) >> 4; - if (ccp & 4) - block[a->permutated_scantable[4 * i + 1]] = (asv1_get_level(&a->gb) * a->intra_matrix[4 * i + 1]) >> 4; - if (ccp & 2) - block[a->permutated_scantable[4 * i + 2]] = (asv1_get_level(&a->gb) * a->intra_matrix[4 * i + 2]) >> 4; - if (ccp & 1) - block[a->permutated_scantable[4 * i + 3]] = 
(asv1_get_level(&a->gb) * a->intra_matrix[4 * i + 3]) >> 4; - } - } - - return 0; -} - -static inline int asv2_decode_block(ASVDecContext *a, int16_t block[64]) -{ - int i, count, ccp; - - count = get_bits_le(&a->gb, 4); - - block[0] = 8 * get_bits_le(&a->gb, 8); - - ccp = asv2_get_vlc2(&a->gb, dc_ccp_vlc.table, DC_CCP_VLC_BITS); - if (ccp) { - if (ccp & 4) - block[a->permutated_scantable[1]] = (asv2_get_level(&a->gb) * a->intra_matrix[1]) >> 4; - if (ccp & 2) - block[a->permutated_scantable[2]] = (asv2_get_level(&a->gb) * a->intra_matrix[2]) >> 4; - if (ccp & 1) - block[a->permutated_scantable[3]] = (asv2_get_level(&a->gb) * a->intra_matrix[3]) >> 4; - } - - for (i = 1; i < count + 1; i++) { - const int ccp = asv2_get_vlc2(&a->gb, ac_ccp_vlc.table, AC_CCP_VLC_BITS); - - if (ccp) { - if (ccp & 8) - block[a->permutated_scantable[4 * i + 0]] = (asv2_get_level(&a->gb) * a->intra_matrix[4 * i + 0]) >> 4; - if (ccp & 4) - block[a->permutated_scantable[4 * i + 1]] = (asv2_get_level(&a->gb) * a->intra_matrix[4 * i + 1]) >> 4; - if (ccp & 2) - block[a->permutated_scantable[4 * i + 2]] = (asv2_get_level(&a->gb) * a->intra_matrix[4 * i + 2]) >> 4; - if (ccp & 1) - block[a->permutated_scantable[4 * i + 3]] = (asv2_get_level(&a->gb) * a->intra_matrix[4 * i + 3]) >> 4; - } - } - - return 0; -} - -static inline int decode_mb(ASVDecContext *a, int16_t block[6][64]) -{ - int i, ret; - - a->bdsp.clear_blocks(block[0]); - - if (a->c.avctx->codec_id == AV_CODEC_ID_ASV1) { - for (i = 0; i < 6; i++) { - if ((ret = asv1_decode_block(a, block[i])) < 0) - return ret; - } - } else { - for (i = 0; i < 6; i++) { - if ((ret = asv2_decode_block(a, block[i])) < 0) - return ret; - } - } - return 0; -} - -static inline void idct_put(ASVDecContext *a, AVFrame *frame, int mb_x, int mb_y) -{ - int16_t(*block)[64] = a->block; - int linesize = frame->linesize[0]; - - uint8_t *dest_y = frame->data[0] + (mb_y * 16 * linesize) + mb_x * 16; - uint8_t *dest_cb = frame->data[1] + (mb_y * 8 * frame->linesize[1]) + mb_x * 8; - uint8_t *dest_cr = frame->data[2] + (mb_y * 8 * frame->linesize[2]) + mb_x * 8; - - a->idsp.idct_put(dest_y, linesize, block[0]); - a->idsp.idct_put(dest_y + 8, linesize, block[1]); - a->idsp.idct_put(dest_y + 8 * linesize, linesize, block[2]); - a->idsp.idct_put(dest_y + 8 * linesize + 8, linesize, block[3]); - - if (!(a->c.avctx->flags & AV_CODEC_FLAG_GRAY)) { - a->idsp.idct_put(dest_cb, frame->linesize[1], block[4]); - a->idsp.idct_put(dest_cr, frame->linesize[2], block[5]); - } -} - -static int decode_frame(AVCodecContext *avctx, AVFrame *p, - int *got_frame, AVPacket *avpkt) -{ - ASVDecContext *const a = avctx->priv_data; - const ASVCommonContext *const c = &a->c; - const uint8_t *buf = avpkt->data; - int buf_size = avpkt->size; - int ret; - - if (buf_size * 8LL < c->mb_height * c->mb_width * 13LL) - return AVERROR_INVALIDDATA; - - if ((ret = ff_get_buffer(avctx, p, 0)) < 0) - return ret; - p->pict_type = AV_PICTURE_TYPE_I; - p->key_frame = 1; - - if (avctx->codec_id == AV_CODEC_ID_ASV1) { - av_fast_padded_malloc(&a->bitstream_buffer, &a->bitstream_buffer_size, - buf_size); - if (!a->bitstream_buffer) - return AVERROR(ENOMEM); - - c->bbdsp.bswap_buf((uint32_t *) a->bitstream_buffer, - (const uint32_t *) buf, buf_size / 4); - ret = init_get_bits8(&a->gb, a->bitstream_buffer, buf_size); - } else { - ret = init_get_bits8_le(&a->gb, buf, buf_size); - } - if (ret < 0) - return ret; - - for (int mb_y = 0; mb_y < c->mb_height2; mb_y++) { - for (int mb_x = 0; mb_x < c->mb_width2; mb_x++) { - if ((ret = 
decode_mb(a, a->block)) < 0) - return ret; - - idct_put(a, p, mb_x, mb_y); - } - } - - if (c->mb_width2 != c->mb_width) { - int mb_x = c->mb_width2; - for (int mb_y = 0; mb_y < c->mb_height2; mb_y++) { - if ((ret = decode_mb(a, a->block)) < 0) - return ret; - - idct_put(a, p, mb_x, mb_y); - } - } - - if (c->mb_height2 != c->mb_height) { - int mb_y = c->mb_height2; - for (int mb_x = 0; mb_x < c->mb_width; mb_x++) { - if ((ret = decode_mb(a, a->block)) < 0) - return ret; - - idct_put(a, p, mb_x, mb_y); - } - } - - *got_frame = 1; - - return (get_bits_count(&a->gb) + 31) / 32 * 4; -} - -static av_cold int decode_init(AVCodecContext *avctx) -{ - static AVOnce init_static_once = AV_ONCE_INIT; - ASVDecContext *const a = avctx->priv_data; - const int scale = avctx->codec_id == AV_CODEC_ID_ASV1 ? 1 : 2; - int inv_qscale; - int i; - - if (avctx->extradata_size < 1) { - av_log(avctx, AV_LOG_WARNING, "No extradata provided\n"); - } - - ff_asv_common_init(avctx); - ff_blockdsp_init(&a->bdsp); - ff_idctdsp_init(&a->idsp, avctx); - ff_permute_scantable(a->permutated_scantable, ff_asv_scantab, - a->idsp.idct_permutation); - avctx->pix_fmt = AV_PIX_FMT_YUV420P; - - if (avctx->extradata_size < 1 || (inv_qscale = avctx->extradata[0]) == 0) { - av_log(avctx, AV_LOG_ERROR, "illegal qscale 0\n"); - if (avctx->codec_id == AV_CODEC_ID_ASV1) - inv_qscale = 6; - else - inv_qscale = 10; - } - - for (i = 0; i < 64; i++) { - int index = ff_asv_scantab[i]; - - a->intra_matrix[i] = 64 * scale * ff_mpeg1_default_intra_matrix[index] / - inv_qscale; - } - - ff_thread_once(&init_static_once, init_vlcs); - - return 0; -} - -static av_cold int decode_end(AVCodecContext *avctx) -{ - ASVDecContext *const a = avctx->priv_data; - - av_freep(&a->bitstream_buffer); - a->bitstream_buffer_size = 0; - - return 0; -} - -#if CONFIG_ASV1_DECODER -const FFCodec ff_asv1_decoder = { - .p.name = "asv1", - CODEC_LONG_NAME("ASUS V1"), - .p.type = AVMEDIA_TYPE_VIDEO, - .p.id = AV_CODEC_ID_ASV1, - .priv_data_size = sizeof(ASVDecContext), - .init = decode_init, - .close = decode_end, - FF_CODEC_DECODE_CB(decode_frame), - .p.capabilities = AV_CODEC_CAP_DR1, -}; -#endif - -#if CONFIG_ASV2_DECODER -const FFCodec ff_asv2_decoder = { - .p.name = "asv2", - CODEC_LONG_NAME("ASUS V2"), - .p.type = AVMEDIA_TYPE_VIDEO, - .p.id = AV_CODEC_ID_ASV2, - .priv_data_size = sizeof(ASVDecContext), - .init = decode_init, - FF_CODEC_DECODE_CB(decode_frame), - .p.capabilities = AV_CODEC_CAP_DR1, -}; -#endif diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/cga_data.h b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/cga_data.h deleted file mode 100644 index 3f5281a264a3919b82a7cf79b2ff1ef751465dae..0000000000000000000000000000000000000000 --- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/cga_data.h +++ /dev/null @@ -1,47 +0,0 @@ -/* - * CGA/EGA/VGA ROM data - * - * This file is part of FFmpeg. - * - * FFmpeg is free software; you can redistribute it and/or - * modify it under the terms of the GNU Lesser General Public - * License as published by the Free Software Foundation; either - * version 2.1 of the License, or (at your option) any later version. - * - * FFmpeg is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * Lesser General Public License for more details. 
- * - * You should have received a copy of the GNU Lesser General Public - * License along with FFmpeg; if not, write to the Free Software - * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA - */ - -/** - * @file - * CGA/EGA/VGA ROM data - * @note fonts are in libavutil/xga_font_data.[ch] - */ - -#ifndef AVCODEC_CGA_DATA_H -#define AVCODEC_CGA_DATA_H - -#include - -extern const uint32_t ff_cga_palette[16]; -extern const uint32_t ff_ega_palette[64]; - -/** - * Draw CGA/EGA/VGA font to 8-bit pixel buffer - * - * @param dst Destination pixel buffer - * @param linesize Linesize (pixels) - * @param font Font table. We assume font width is always 8 pixels wide. - * @param font_height Font height (pixels) - * @param fg,bg Foreground and background palette index - * @param ch Character to draw - */ -void ff_draw_pc_font(uint8_t *dst, int linesize, const uint8_t *font, int font_height, int ch, int fg, int bg); - -#endif /* AVCODEC_CGA_DATA_H */ diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/g729postfilter.c b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/g729postfilter.c deleted file mode 100644 index 26e937f0baf454370fa33d0b115d4a91297d0858..0000000000000000000000000000000000000000 --- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/g729postfilter.c +++ /dev/null @@ -1,618 +0,0 @@ -/* - * G.729, G729 Annex D postfilter - * Copyright (c) 2008 Vladimir Voroshilov - * - * This file is part of FFmpeg. - * - * FFmpeg is free software; you can redistribute it and/or - * modify it under the terms of the GNU Lesser General Public - * License as published by the Free Software Foundation; either - * version 2.1 of the License, or (at your option) any later version. - * - * FFmpeg is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * Lesser General Public License for more details. 
- * - * You should have received a copy of the GNU Lesser General Public - * License along with FFmpeg; if not, write to the Free Software - * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA - */ - -#include -#include - -#include "libavutil/common.h" -#include "libavutil/intmath.h" - -#include "audiodsp.h" -#include "g729.h" -#include "g729postfilter.h" -#include "celp_math.h" -#include "acelp_filters.h" -#include "acelp_vectors.h" -#include "celp_filters.h" - -#define FRAC_BITS 15 -#include "mathops.h" - -/** - * short interpolation filter (of length 33, according to spec) - * for computing signal with non-integer delay - */ -static const int16_t ff_g729_interp_filt_short[(ANALYZED_FRAC_DELAYS+1)*SHORT_INT_FILT_LEN] = { - 0, 31650, 28469, 23705, 18050, 12266, 7041, 2873, - 0, -1597, -2147, -1992, -1492, -933, -484, -188, -}; - -/** - * long interpolation filter (of length 129, according to spec) - * for computing signal with non-integer delay - */ -static const int16_t ff_g729_interp_filt_long[(ANALYZED_FRAC_DELAYS+1)*LONG_INT_FILT_LEN] = { - 0, 31915, 29436, 25569, 20676, 15206, 9639, 4439, - 0, -3390, -5579, -6549, -6414, -5392, -3773, -1874, - 0, 1595, 2727, 3303, 3319, 2850, 2030, 1023, - 0, -887, -1527, -1860, -1876, -1614, -1150, -579, - 0, 501, 859, 1041, 1044, 892, 631, 315, - 0, -266, -453, -543, -538, -455, -317, -156, - 0, 130, 218, 258, 253, 212, 147, 72, - 0, -59, -101, -122, -123, -106, -77, -40, -}; - -/** - * formant_pp_factor_num_pow[i] = FORMANT_PP_FACTOR_NUM^(i+1) - */ -static const int16_t formant_pp_factor_num_pow[10]= { - /* (0.15) */ - 18022, 9912, 5451, 2998, 1649, 907, 499, 274, 151, 83 -}; - -/** - * formant_pp_factor_den_pow[i] = FORMANT_PP_FACTOR_DEN^(i+1) - */ -static const int16_t formant_pp_factor_den_pow[10] = { - /* (0.15) */ - 22938, 16057, 11240, 7868, 5508, 3856, 2699, 1889, 1322, 925 -}; - -/** - * \brief Residual signal calculation (4.2.1 if G.729) - * \param out [out] output data filtered through A(z/FORMANT_PP_FACTOR_NUM) - * \param filter_coeffs (3.12) A(z/FORMANT_PP_FACTOR_NUM) filter coefficients - * \param in input speech data to process - * \param subframe_size size of one subframe - * - * \note in buffer must contain 10 items of previous speech data before top of the buffer - * \remark It is safe to pass the same buffer for input and output. 
- */ -static void residual_filter(int16_t* out, const int16_t* filter_coeffs, const int16_t* in, - int subframe_size) -{ - int i, n; - - for (n = subframe_size - 1; n >= 0; n--) { - int sum = 0x800; - for (i = 0; i < 10; i++) - sum += filter_coeffs[i] * in[n - i - 1]; - - out[n] = in[n] + (sum >> 12); - } -} - -/** - * \brief long-term postfilter (4.2.1) - * \param dsp initialized DSP context - * \param pitch_delay_int integer part of the pitch delay in the first subframe - * \param residual filtering input data - * \param residual_filt [out] speech signal with applied A(z/FORMANT_PP_FACTOR_NUM) filter - * \param subframe_size size of subframe - * - * \return 0 if long-term prediction gain is less than 3dB, 1 - otherwise - */ -static int16_t long_term_filter(AudioDSPContext *adsp, int pitch_delay_int, - const int16_t* residual, int16_t *residual_filt, - int subframe_size) -{ - int i, k, tmp, tmp2; - int sum; - int L_temp0; - int L_temp1; - int64_t L64_temp0; - int64_t L64_temp1; - int16_t shift; - int corr_int_num, corr_int_den; - - int ener; - int16_t sh_ener; - - int16_t gain_num,gain_den; //selected signal's gain numerator and denominator - int16_t sh_gain_num, sh_gain_den; - int gain_num_square; - - int16_t gain_long_num,gain_long_den; //filtered through long interpolation filter signal's gain numerator and denominator - int16_t sh_gain_long_num, sh_gain_long_den; - - int16_t best_delay_int, best_delay_frac; - - int16_t delayed_signal_offset; - int lt_filt_factor_a, lt_filt_factor_b; - - int16_t * selected_signal; - const int16_t * selected_signal_const; //Necessary to avoid compiler warning - - int16_t sig_scaled[SUBFRAME_SIZE + RES_PREV_DATA_SIZE]; - int16_t delayed_signal[ANALYZED_FRAC_DELAYS][SUBFRAME_SIZE+1]; - int corr_den[ANALYZED_FRAC_DELAYS][2]; - - tmp = 0; - for(i=0; i 0) - for (i = 0; i < subframe_size + RES_PREV_DATA_SIZE; i++) - sig_scaled[i] = residual[i] >> shift; - else - for (i = 0; i < subframe_size + RES_PREV_DATA_SIZE; i++) - sig_scaled[i] = (unsigned)residual[i] << -shift; - - /* Start of best delay searching code */ - gain_num = 0; - - ener = adsp->scalarproduct_int16(sig_scaled + RES_PREV_DATA_SIZE, - sig_scaled + RES_PREV_DATA_SIZE, - subframe_size); - if (ener) { - sh_ener = av_log2(ener) - 14; - sh_ener = FFMAX(sh_ener, 0); - ener >>= sh_ener; - /* Search for best pitch delay. - - sum{ r(n) * r(k,n) ] }^2 - R'(k)^2 := ------------------------- - sum{ r(k,n) * r(k,n) } - - - R(T) := sum{ r(n) * r(n-T) ] } - - - where - r(n-T) is integer delayed signal with delay T - r(k,n) is non-integer delayed signal with integer delay best_delay - and fractional delay k */ - - /* Find integer delay best_delay which maximizes correlation R(T). - - This is also equals to numerator of R'(0), - since the fine search (second step) is done with 1/8 - precision around best_delay. */ - corr_int_num = 0; - best_delay_int = pitch_delay_int - 1; - for (i = pitch_delay_int - 1; i <= pitch_delay_int + 1; i++) { - sum = adsp->scalarproduct_int16(sig_scaled + RES_PREV_DATA_SIZE, - sig_scaled + RES_PREV_DATA_SIZE - i, - subframe_size); - if (sum > corr_int_num) { - corr_int_num = sum; - best_delay_int = i; - } - } - if (corr_int_num) { - /* Compute denominator of pseudo-normalized correlation R'(0). */ - corr_int_den = adsp->scalarproduct_int16(sig_scaled + RES_PREV_DATA_SIZE - best_delay_int, - sig_scaled + RES_PREV_DATA_SIZE - best_delay_int, - subframe_size); - - /* Compute signals with non-integer delay k (with 1/8 precision), - where k is in [0;6] range. 
- Entire delay is qual to best_delay+(k+1)/8 - This is archieved by applying an interpolation filter of - legth 33 to source signal. */ - for (k = 0; k < ANALYZED_FRAC_DELAYS; k++) { - ff_acelp_interpolate(&delayed_signal[k][0], - &sig_scaled[RES_PREV_DATA_SIZE - best_delay_int], - ff_g729_interp_filt_short, - ANALYZED_FRAC_DELAYS+1, - 8 - k - 1, - SHORT_INT_FILT_LEN, - subframe_size + 1); - } - - /* Compute denominator of pseudo-normalized correlation R'(k). - - corr_den[k][0] is square root of R'(k) denominator, for int(T) == int(T0) - corr_den[k][1] is square root of R'(k) denominator, for int(T) == int(T0)+1 - - Also compute maximum value of above denominators over all k. */ - tmp = corr_int_den; - for (k = 0; k < ANALYZED_FRAC_DELAYS; k++) { - sum = adsp->scalarproduct_int16(&delayed_signal[k][1], - &delayed_signal[k][1], - subframe_size - 1); - corr_den[k][0] = sum + delayed_signal[k][0 ] * delayed_signal[k][0 ]; - corr_den[k][1] = sum + delayed_signal[k][subframe_size] * delayed_signal[k][subframe_size]; - - tmp = FFMAX3(tmp, corr_den[k][0], corr_den[k][1]); - } - - sh_gain_den = av_log2(tmp) - 14; - if (sh_gain_den >= 0) { - - sh_gain_num = FFMAX(sh_gain_den, sh_ener); - /* Loop through all k and find delay that maximizes - R'(k) correlation. - Search is done in [int(T0)-1; intT(0)+1] range - with 1/8 precision. */ - delayed_signal_offset = 1; - best_delay_frac = 0; - gain_den = corr_int_den >> sh_gain_den; - gain_num = corr_int_num >> sh_gain_num; - gain_num_square = gain_num * gain_num; - for (k = 0; k < ANALYZED_FRAC_DELAYS; k++) { - for (i = 0; i < 2; i++) { - int16_t gain_num_short, gain_den_short; - int gain_num_short_square; - /* Compute numerator of pseudo-normalized - correlation R'(k). */ - sum = adsp->scalarproduct_int16(&delayed_signal[k][i], - sig_scaled + RES_PREV_DATA_SIZE, - subframe_size); - gain_num_short = FFMAX(sum >> sh_gain_num, 0); - - /* - gain_num_short_square gain_num_square - R'(T)^2 = -----------------------, max R'(T)^2= -------------- - den gain_den - */ - gain_num_short_square = gain_num_short * gain_num_short; - gain_den_short = corr_den[k][i] >> sh_gain_den; - - tmp = MULL(gain_num_short_square, gain_den, FRAC_BITS); - tmp2 = MULL(gain_num_square, gain_den_short, FRAC_BITS); - - // R'(T)^2 > max R'(T)^2 - if (tmp > tmp2) { - gain_num = gain_num_short; - gain_den = gain_den_short; - gain_num_square = gain_num_short_square; - delayed_signal_offset = i; - best_delay_frac = k + 1; - } - } - } - - /* - R'(T)^2 - 2 * --------- < 1 - R(0) - */ - L64_temp0 = (int64_t)gain_num_square << ((sh_gain_num << 1) + 1); - L64_temp1 = ((int64_t)gain_den * ener) << (sh_gain_den + sh_ener); - if (L64_temp0 < L64_temp1) - gain_num = 0; - } // if(sh_gain_den >= 0) - } // if(corr_int_num) - } // if(ener) - /* End of best delay searching code */ - - if (!gain_num) { - memcpy(residual_filt, residual + RES_PREV_DATA_SIZE, subframe_size * sizeof(int16_t)); - - /* Long-term prediction gain is less than 3dB. Long-term postfilter is disabled. */ - return 0; - } - if (best_delay_frac) { - /* Recompute delayed signal with an interpolation filter of length 129. */ - ff_acelp_interpolate(residual_filt, - &sig_scaled[RES_PREV_DATA_SIZE - best_delay_int + delayed_signal_offset], - ff_g729_interp_filt_long, - ANALYZED_FRAC_DELAYS + 1, - 8 - best_delay_frac, - LONG_INT_FILT_LEN, - subframe_size + 1); - /* Compute R'(k) correlation's numerator. 
*/ - sum = adsp->scalarproduct_int16(residual_filt, - sig_scaled + RES_PREV_DATA_SIZE, - subframe_size); - - if (sum < 0) { - gain_long_num = 0; - sh_gain_long_num = 0; - } else { - tmp = av_log2(sum) - 14; - tmp = FFMAX(tmp, 0); - sum >>= tmp; - gain_long_num = sum; - sh_gain_long_num = tmp; - } - - /* Compute R'(k) correlation's denominator. */ - sum = adsp->scalarproduct_int16(residual_filt, residual_filt, subframe_size); - - tmp = av_log2(sum) - 14; - tmp = FFMAX(tmp, 0); - sum >>= tmp; - gain_long_den = sum; - sh_gain_long_den = tmp; - - /* Select between original and delayed signal. - Delayed signal will be selected if it increases R'(k) - correlation. */ - L_temp0 = gain_num * gain_num; - L_temp0 = MULL(L_temp0, gain_long_den, FRAC_BITS); - - L_temp1 = gain_long_num * gain_long_num; - L_temp1 = MULL(L_temp1, gain_den, FRAC_BITS); - - tmp = ((sh_gain_long_num - sh_gain_num) * 2) - (sh_gain_long_den - sh_gain_den); - if (tmp > 0) - L_temp0 >>= tmp; - else - L_temp1 >>= FFMIN(-tmp, 31); - - /* Check if longer filter increases the values of R'(k). */ - if (L_temp1 > L_temp0) { - /* Select long filter. */ - selected_signal = residual_filt; - gain_num = gain_long_num; - gain_den = gain_long_den; - sh_gain_num = sh_gain_long_num; - sh_gain_den = sh_gain_long_den; - } else - /* Select short filter. */ - selected_signal = &delayed_signal[best_delay_frac-1][delayed_signal_offset]; - - /* Rescale selected signal to original value. */ - if (shift > 0) - for (i = 0; i < subframe_size; i++) - selected_signal[i] *= 1 << shift; - else - for (i = 0; i < subframe_size; i++) - selected_signal[i] >>= -shift; - - /* necessary to avoid compiler warning */ - selected_signal_const = selected_signal; - } // if(best_delay_frac) - else - selected_signal_const = residual + RES_PREV_DATA_SIZE - (best_delay_int + 1 - delayed_signal_offset); -#ifdef G729_BITEXACT - tmp = sh_gain_num - sh_gain_den; - if (tmp > 0) - gain_den >>= tmp; - else - gain_num >>= -tmp; - - if (gain_num > gain_den) - lt_filt_factor_a = MIN_LT_FILT_FACTOR_A; - else { - gain_num >>= 2; - gain_den >>= 1; - lt_filt_factor_a = (gain_den << 15) / (gain_den + gain_num); - } -#else - L64_temp0 = (((int64_t)gain_num) << sh_gain_num) >> 1; - L64_temp1 = ((int64_t)gain_den) << sh_gain_den; - lt_filt_factor_a = FFMAX((L64_temp1 << 15) / (L64_temp1 + L64_temp0), MIN_LT_FILT_FACTOR_A); -#endif - - /* Filter through selected filter. */ - lt_filt_factor_b = 32767 - lt_filt_factor_a + 1; - - ff_acelp_weighted_vector_sum(residual_filt, residual + RES_PREV_DATA_SIZE, - selected_signal_const, - lt_filt_factor_a, lt_filt_factor_b, - 1<<14, 15, subframe_size); - - // Long-term prediction gain is larger than 3dB. - return 1; -} - -/** - * \brief Calculate reflection coefficient for tilt compensation filter (4.2.3). - * \param dsp initialized DSP context - * \param lp_gn (3.12) coefficients of A(z/FORMANT_PP_FACTOR_NUM) filter - * \param lp_gd (3.12) coefficients of A(z/FORMANT_PP_FACTOR_DEN) filter - * \param speech speech to update - * \param subframe_size size of subframe - * - * \return (3.12) reflection coefficient - * - * \remark The routine also calculates the gain term for the short-term - * filter (gf) and multiplies the speech data by 1/gf. - * - * \note All members of lp_gn, except 10-19 must be equal to zero. 
- */ -static int16_t get_tilt_comp(AudioDSPContext *adsp, int16_t *lp_gn, - const int16_t *lp_gd, int16_t* speech, - int subframe_size) -{ - int rh1,rh0; // (3.12) - int temp; - int i; - int gain_term; - - lp_gn[10] = 4096; //1.0 in (3.12) - - /* Apply 1/A(z/FORMANT_PP_FACTOR_DEN) filter to hf. */ - ff_celp_lp_synthesis_filter(lp_gn + 11, lp_gd + 1, lp_gn + 11, 22, 10, 0, 0, 0x800); - /* Now lp_gn (starting with 10) contains impulse response - of A(z/FORMANT_PP_FACTOR_NUM)/A(z/FORMANT_PP_FACTOR_DEN) filter. */ - - rh0 = adsp->scalarproduct_int16(lp_gn + 10, lp_gn + 10, 20); - rh1 = adsp->scalarproduct_int16(lp_gn + 10, lp_gn + 11, 20); - - /* downscale to avoid overflow */ - temp = av_log2(rh0) - 14; - if (temp > 0) { - rh0 >>= temp; - rh1 >>= temp; - } - - if (FFABS(rh1) > rh0 || !rh0) - return 0; - - gain_term = 0; - for (i = 0; i < 20; i++) - gain_term += FFABS(lp_gn[i + 10]); - gain_term >>= 2; // (3.12) -> (5.10) - - if (gain_term > 0x400) { // 1.0 in (5.10) - temp = 0x2000000 / gain_term; // 1.0/gain_term in (0.15) - for (i = 0; i < subframe_size; i++) - speech[i] = (speech[i] * temp + 0x4000) >> 15; - } - - return -(rh1 * (1 << 15)) / rh0; -} - -/** - * \brief Apply tilt compensation filter (4.2.3). - * \param res_pst [in/out] residual signal (partially filtered) - * \param k1 (3.12) reflection coefficient - * \param subframe_size size of subframe - * \param ht_prev_data previous data for 4.2.3, equation 86 - * - * \return new value for ht_prev_data -*/ -static int16_t apply_tilt_comp(int16_t* out, int16_t* res_pst, int refl_coeff, - int subframe_size, int16_t ht_prev_data) -{ - int tmp, tmp2; - int i; - int gt, ga; - int fact, sh_fact; - - if (refl_coeff > 0) { - gt = (refl_coeff * G729_TILT_FACTOR_PLUS + 0x4000) >> 15; - fact = 0x2000; // 0.5 in (0.15) - sh_fact = 14; - } else { - gt = (refl_coeff * G729_TILT_FACTOR_MINUS + 0x4000) >> 15; - fact = 0x400; // 0.5 in (3.12) - sh_fact = 11; - } - ga = (fact << 16) / av_clip_int16(32768 - FFABS(gt)); - gt >>= 1; - - /* Apply tilt compensation filter to signal. */ - tmp = res_pst[subframe_size - 1]; - - for (i = subframe_size - 1; i >= 1; i--) { - tmp2 = (gt * res_pst[i-1]) * 2 + 0x4000; - tmp2 = res_pst[i] + (tmp2 >> 15); - - tmp2 = (tmp2 * ga + fact) >> sh_fact; - out[i] = tmp2; - } - tmp2 = (gt * ht_prev_data) * 2 + 0x4000; - tmp2 = res_pst[0] + (tmp2 >> 15); - tmp2 = (tmp2 * ga + fact) >> sh_fact; - out[0] = tmp2; - - return tmp; -} - -void ff_g729_postfilter(AudioDSPContext *adsp, int16_t* ht_prev_data, int* voicing, - const int16_t *lp_filter_coeffs, int pitch_delay_int, - int16_t* residual, int16_t* res_filter_data, - int16_t* pos_filter_data, int16_t *speech, int subframe_size) -{ - int16_t residual_filt_buf[SUBFRAME_SIZE+11]; - int16_t lp_gn[33]; // (3.12) - int16_t lp_gd[11]; // (3.12) - int tilt_comp_coeff; - int i; - - /* Zero-filling is necessary for tilt-compensation filter. */ - memset(lp_gn, 0, 33 * sizeof(int16_t)); - - /* Calculate A(z/FORMANT_PP_FACTOR_NUM) filter coefficients. */ - for (i = 0; i < 10; i++) - lp_gn[i + 11] = (lp_filter_coeffs[i + 1] * formant_pp_factor_num_pow[i] + 0x4000) >> 15; - - /* Calculate A(z/FORMANT_PP_FACTOR_DEN) filter coefficients. 
*/ - for (i = 0; i < 10; i++) - lp_gd[i + 1] = (lp_filter_coeffs[i + 1] * formant_pp_factor_den_pow[i] + 0x4000) >> 15; - - /* residual signal calculation (one-half of short-term postfilter) */ - memcpy(speech - 10, res_filter_data, 10 * sizeof(int16_t)); - residual_filter(residual + RES_PREV_DATA_SIZE, lp_gn + 11, speech, subframe_size); - /* Save data to use it in the next subframe. */ - memcpy(res_filter_data, speech + subframe_size - 10, 10 * sizeof(int16_t)); - - /* long-term filter. If long-term prediction gain is larger than 3dB (returned value is - nonzero) then declare current subframe as periodic. */ - i = long_term_filter(adsp, pitch_delay_int, - residual, residual_filt_buf + 10, - subframe_size); - *voicing = FFMAX(*voicing, i); - - /* shift residual for using in next subframe */ - memmove(residual, residual + subframe_size, RES_PREV_DATA_SIZE * sizeof(int16_t)); - - /* short-term filter tilt compensation */ - tilt_comp_coeff = get_tilt_comp(adsp, lp_gn, lp_gd, residual_filt_buf + 10, subframe_size); - - /* Apply second half of short-term postfilter: 1/A(z/FORMANT_PP_FACTOR_DEN) */ - ff_celp_lp_synthesis_filter(pos_filter_data + 10, lp_gd + 1, - residual_filt_buf + 10, - subframe_size, 10, 0, 0, 0x800); - memcpy(pos_filter_data, pos_filter_data + subframe_size, 10 * sizeof(int16_t)); - - *ht_prev_data = apply_tilt_comp(speech, pos_filter_data + 10, tilt_comp_coeff, - subframe_size, *ht_prev_data); -} - -/** - * \brief Adaptive gain control (4.2.4) - * \param gain_before gain of speech before applying postfilters - * \param gain_after gain of speech after applying postfilters - * \param speech [in/out] signal buffer - * \param subframe_size length of subframe - * \param gain_prev (3.12) previous value of gain coefficient - * - * \return (3.12) last value of gain coefficient - */ -int16_t ff_g729_adaptive_gain_control(int gain_before, int gain_after, int16_t *speech, - int subframe_size, int16_t gain_prev) -{ - int gain; // (3.12) - int n; - int exp_before, exp_after; - - if(!gain_after && gain_before) - return 0; - - if (gain_before) { - - exp_before = 14 - av_log2(gain_before); - gain_before = bidir_sal(gain_before, exp_before); - - exp_after = 14 - av_log2(gain_after); - gain_after = bidir_sal(gain_after, exp_after); - - if (gain_before < gain_after) { - gain = (gain_before << 15) / gain_after; - gain = bidir_sal(gain, exp_after - exp_before - 1); - } else { - gain = ((gain_before - gain_after) << 14) / gain_after + 0x4000; - gain = bidir_sal(gain, exp_after - exp_before); - } - gain = av_clip_int16(gain); - gain = (gain * G729_AGC_FAC1 + 0x4000) >> 15; // gain * (1-0.9875) - } else - gain = 0; - - for (n = 0; n < subframe_size; n++) { - // gain_prev = gain + 0.9875 * gain_prev - gain_prev = (G729_AGC_FACTOR * gain_prev + 0x4000) >> 15; - gain_prev = av_clip_int16(gain + gain_prev); - speech[n] = av_clip_int16((speech[n] * gain_prev + 0x2000) >> 14); - } - return gain_prev; -} diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/loongarch/h264_cabac.c b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/loongarch/h264_cabac.c deleted file mode 100644 index d88743bed7a427f7734435c77207f63ff87f16b2..0000000000000000000000000000000000000000 --- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/loongarch/h264_cabac.c +++ /dev/null @@ -1,140 +0,0 @@ -/* - * Loongson optimized cabac - * - * Copyright (c) 2021 Loongson Technology Corporation Limited - * Contributed by Hao Chen - * - * This file is part of FFmpeg. 
- * - * FFmpeg is free software; you can redistribute it and/or - * modify it under the terms of the GNU Lesser General Public - * License as published by the Free Software Foundation; either - * version 2.1 of the License, or (at your option) any later version. - * - * FFmpeg is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * Lesser General Public License for more details. - * - * You should have received a copy of the GNU Lesser General Public - * License along with FFmpeg; if not, write to the Free Software - * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA - */ - -#include "libavcodec/cabac.h" -#include "cabac.h" - -#define decode_significance decode_significance_loongarch -static int decode_significance_loongarch(CABACContext *c, int max_coeff, - uint8_t *significant_coeff_ctx_base, int *index, int64_t last_off) -{ - void *end = significant_coeff_ctx_base + max_coeff - 1; - int64_t minusstart = -(int64_t)significant_coeff_ctx_base; - int64_t minusindex = 4 - (int64_t)index; - int64_t bit, tmp0, tmp1, tmp2, one = 1; - uint8_t *state = significant_coeff_ctx_base; - - __asm__ volatile( - "3:" -#if UNCHECKED_BITSTREAM_READER - GET_CABAC_LOONGARCH_UNCBSR -#else - GET_CABAC_LOONGARCH -#endif - "blt %[bit], %[one], 4f \n\t" - "add.d %[state], %[state], %[last_off] \n\t" -#if UNCHECKED_BITSTREAM_READER - GET_CABAC_LOONGARCH_UNCBSR -#else - GET_CABAC_LOONGARCH -#endif - "sub.d %[state], %[state], %[last_off] \n\t" - "add.d %[tmp0], %[state], %[minusstart] \n\t" - "st.w %[tmp0], %[index], 0 \n\t" - "bge %[bit], %[one], 5f \n\t" - "addi.d %[index], %[index], 4 \n\t" - "4: \n\t" - "addi.d %[state], %[state], 1 \n\t" - "blt %[state], %[end], 3b \n\t" - "add.d %[tmp0], %[state], %[minusstart] \n\t" - "st.w %[tmp0], %[index], 0 \n\t" - "5: \n\t" - "add.d %[tmp0], %[index], %[minusindex] \n\t" - "srli.d %[tmp0], %[tmp0], 2 \n\t" - : [bit]"=&r"(bit), [tmp0]"=&r"(tmp0), [tmp1]"=&r"(tmp1), [tmp2]"=&r"(tmp2), - [c_range]"+&r"(c->range), [c_low]"+&r"(c->low), [state]"+&r"(state), - [c_bytestream]"+&r"(c->bytestream), [index]"+&r"(index) - : [tables]"r"(ff_h264_cabac_tables), [end]"r"(end), [one]"r"(one), - [minusstart]"r"(minusstart), [minusindex]"r"(minusindex), - [last_off]"r"(last_off), -#if !UNCHECKED_BITSTREAM_READER - [c_bytestream_end]"r"(c->bytestream_end), -#endif - [lps_off]"i"(H264_LPS_RANGE_OFFSET), - [mlps_off]"i"(H264_MLPS_STATE_OFFSET + 128), - [norm_off]"i"(H264_NORM_SHIFT_OFFSET), - [cabac_mask]"r"(CABAC_MASK) - : "memory" - ); - - return (int)tmp0; -} - -#define decode_significance_8x8 decode_significance_8x8_loongarch -static int decode_significance_8x8_loongarch( - CABACContext *c, uint8_t *significant_coeff_ctx_base, - int *index, uint8_t *last_coeff_ctx_base, const uint8_t *sig_off) -{ - int64_t minusindex = 4 - (int64_t)index; - int64_t bit, tmp0, tmp1, tmp2, one = 1, end = 63, last = 0; - uint8_t *state = 0; - int64_t flag_offset = H264_LAST_COEFF_FLAG_OFFSET_8x8_OFFSET; - - __asm__ volatile( - "3: \n\t" - "ldx.bu %[tmp0], %[sig_off], %[last] \n\t" - "add.d %[state], %[tmp0], %[significant_coeff_ctx_base] \n\t" -#if UNCHECKED_BITSTREAM_READER - GET_CABAC_LOONGARCH_UNCBSR -#else - GET_CABAC_LOONGARCH -#endif - "blt %[bit], %[one], 4f \n\t" - "add.d %[tmp0], %[tables], %[flag_offset] \n\t" - "ldx.bu %[tmp1], %[tmp0], %[last] \n\t" - "add.d %[state], %[tmp1], %[last_coeff_ctx_base] \n\t" -#if 
UNCHECKED_BITSTREAM_READER - GET_CABAC_LOONGARCH_UNCBSR -#else - GET_CABAC_LOONGARCH -#endif - "st.w %[last], %[index], 0 \n\t" - "bge %[bit], %[one], 5f \n\t" - "addi.d %[index], %[index], 4 \n\t" - "4: \n\t" - "addi.d %[last], %[last], 1 \n\t" - "blt %[last], %[end], 3b \n\t" - "st.w %[last], %[index], 0 \n\t" - "5: \n\t" - "add.d %[tmp0], %[index], %[minusindex] \n\t" - "srli.d %[tmp0], %[tmp0], 2 \n\t" - : [bit]"=&r"(bit), [tmp0]"=&r"(tmp0), [tmp1]"=&r"(tmp1), - [tmp2]"=&r"(tmp2), [c_range]"+&r"(c->range), - [c_low]"+&r"(c->low), [state]"+&r"(state), [last]"+&r"(last), - [c_bytestream]"+&r"(c->bytestream), [index]"+&r"(index) - : [tables]"r"(ff_h264_cabac_tables), [end]"r"(end), - [one]"r"(one), [minusindex]"r"(minusindex), - [last_coeff_ctx_base]"r"(last_coeff_ctx_base), - [flag_offset]"r"(flag_offset), -#if !UNCHECKED_BITSTREAM_READER - [c_bytestream_end]"r"(c->bytestream_end), -#endif - [lps_off]"i"(H264_LPS_RANGE_OFFSET), [sig_off]"r"(sig_off), - [mlps_off]"i"(H264_MLPS_STATE_OFFSET + 128), - [norm_off]"i"(H264_NORM_SHIFT_OFFSET), - [cabac_mask]"r"(CABAC_MASK), - [significant_coeff_ctx_base]"r"(significant_coeff_ctx_base) - ); - - return (int)tmp0; -} diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/loongarch/hpeldsp_lasx.h b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/loongarch/hpeldsp_lasx.h deleted file mode 100644 index 2e035eade803843ae0704ea7f79cd5f0e5d152f6..0000000000000000000000000000000000000000 --- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/loongarch/hpeldsp_lasx.h +++ /dev/null @@ -1,58 +0,0 @@ -/* - * Copyright (c) 2021 Loongson Technology Corporation Limited - * Contributed by Shiyou Yin - * - * This file is part of FFmpeg. - * - * FFmpeg is free software; you can redistribute it and/or - * modify it under the terms of the GNU Lesser General Public - * License as published by the Free Software Foundation; either - * version 2.1 of the License, or (at your option) any later version. - * - * FFmpeg is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * Lesser General Public License for more details. 
- * - * You should have received a copy of the GNU Lesser General Public - * License along with FFmpeg; if not, write to the Free Software - * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA - */ - -#ifndef AVCODEC_LOONGARCH_HPELDSP_LASX_H -#define AVCODEC_LOONGARCH_HPELDSP_LASX_H - -#include -#include -#include "libavutil/attributes.h" - -void ff_put_pixels8_8_lasx(uint8_t *block, const uint8_t *pixels, - ptrdiff_t line_size, int h); -void ff_put_pixels8_x2_8_lasx(uint8_t *block, const uint8_t *pixels, - ptrdiff_t line_size, int32_t h); -void ff_put_pixels8_y2_8_lasx(uint8_t *block, const uint8_t *pixels, - ptrdiff_t line_size, int32_t h); -void ff_put_pixels16_8_lsx(uint8_t *block, const uint8_t *pixels, - ptrdiff_t line_size, int h); -void ff_put_pixels16_x2_8_lasx(uint8_t *block, const uint8_t *pixels, - ptrdiff_t line_size, int32_t h); -void ff_put_pixels16_y2_8_lasx(uint8_t *block, const uint8_t *pixels, - ptrdiff_t line_size, int32_t h); -void ff_put_no_rnd_pixels16_x2_8_lasx(uint8_t *block, const uint8_t *pixels, - ptrdiff_t line_size, int h); -void ff_put_no_rnd_pixels16_y2_8_lasx(uint8_t *block, const uint8_t *pixels, - ptrdiff_t line_size, int h); -void ff_put_no_rnd_pixels16_xy2_8_lasx(uint8_t *block, - const uint8_t *pixels, - ptrdiff_t line_size, int h); -void ff_put_no_rnd_pixels8_x2_8_lasx(uint8_t *block, const uint8_t *pixels, - ptrdiff_t line_size, int h); -void ff_put_no_rnd_pixels8_y2_8_lasx(uint8_t *block, const uint8_t *pixels, - ptrdiff_t line_size, int h); -void ff_put_no_rnd_pixels8_xy2_8_lasx(uint8_t *block, const uint8_t *pixels, - ptrdiff_t line_size, int h); -void ff_put_pixels8_xy2_8_lasx(uint8_t *block, const uint8_t *pixels, - ptrdiff_t line_size, int h); -void ff_put_pixels16_xy2_8_lasx(uint8_t *block, const uint8_t *pixels, - ptrdiff_t line_size, int h); -#endif diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/mips/vp9_intra_msa.c b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/mips/vp9_intra_msa.c deleted file mode 100644 index 97cf21290e91a983b0f7d204320dddadbabc4cd9..0000000000000000000000000000000000000000 --- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/mips/vp9_intra_msa.c +++ /dev/null @@ -1,534 +0,0 @@ -/* - * Copyright (c) 2015 Shivraj Patil (Shivraj.Patil@imgtec.com) - * - * This file is part of FFmpeg. - * - * FFmpeg is free software; you can redistribute it and/or - * modify it under the terms of the GNU Lesser General Public - * License as published by the Free Software Foundation; either - * version 2.1 of the License, or (at your option) any later version. - * - * FFmpeg is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * Lesser General Public License for more details. 
- * - * You should have received a copy of the GNU Lesser General Public - * License along with FFmpeg; if not, write to the Free Software - * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA - */ - -#include "libavcodec/vp9dsp.h" -#include "libavutil/mips/generic_macros_msa.h" -#include "vp9dsp_mips.h" - -#define IPRED_SUBS_UH2_UH(in0, in1, out0, out1) \ -{ \ - out0 = __msa_subs_u_h(out0, in0); \ - out1 = __msa_subs_u_h(out1, in1); \ -} - -void ff_vert_16x16_msa(uint8_t *dst, ptrdiff_t dst_stride, const uint8_t *left, - const uint8_t *src) -{ - uint32_t row; - v16u8 src0; - - src0 = LD_UB(src); - - for (row = 16; row--;) { - ST_UB(src0, dst); - dst += dst_stride; - } -} - -void ff_vert_32x32_msa(uint8_t *dst, ptrdiff_t dst_stride, const uint8_t *left, - const uint8_t *src) -{ - uint32_t row; - v16u8 src1, src2; - - src1 = LD_UB(src); - src2 = LD_UB(src + 16); - - for (row = 32; row--;) { - ST_UB2(src1, src2, dst, 16); - dst += dst_stride; - } -} - -void ff_hor_16x16_msa(uint8_t *dst, ptrdiff_t dst_stride, const uint8_t *src, - const uint8_t *top) -{ - uint32_t row, inp; - v16u8 src0, src1, src2, src3; - - src += 12; - for (row = 4; row--;) { - inp = LW(src); - src -= 4; - - src0 = (v16u8) __msa_fill_b(inp >> 24); - src1 = (v16u8) __msa_fill_b(inp >> 16); - src2 = (v16u8) __msa_fill_b(inp >> 8); - src3 = (v16u8) __msa_fill_b(inp); - - ST_UB4(src0, src1, src2, src3, dst, dst_stride); - dst += (4 * dst_stride); - } -} - -void ff_hor_32x32_msa(uint8_t *dst, ptrdiff_t dst_stride, const uint8_t *src, - const uint8_t *top) -{ - uint32_t row, inp; - v16u8 src0, src1, src2, src3; - - src += 28; - for (row = 8; row--;) { - inp = LW(src); - src -= 4; - - src0 = (v16u8) __msa_fill_b(inp >> 24); - src1 = (v16u8) __msa_fill_b(inp >> 16); - src2 = (v16u8) __msa_fill_b(inp >> 8); - src3 = (v16u8) __msa_fill_b(inp); - - ST_UB2(src0, src0, dst, 16); - dst += dst_stride; - ST_UB2(src1, src1, dst, 16); - dst += dst_stride; - ST_UB2(src2, src2, dst, 16); - dst += dst_stride; - ST_UB2(src3, src3, dst, 16); - dst += dst_stride; - } -} - -void ff_dc_4x4_msa(uint8_t *dst, ptrdiff_t dst_stride, const uint8_t *src_left, - const uint8_t *src_top) -{ - uint32_t val0, val1; - v16i8 store, src = { 0 }; - v8u16 sum_h; - v4u32 sum_w; - v2u64 sum_d; - - val0 = LW(src_top); - val1 = LW(src_left); - INSERT_W2_SB(val0, val1, src); - sum_h = __msa_hadd_u_h((v16u8) src, (v16u8) src); - sum_w = __msa_hadd_u_w(sum_h, sum_h); - sum_d = __msa_hadd_u_d(sum_w, sum_w); - sum_w = (v4u32) __msa_srari_w((v4i32) sum_d, 3); - store = __msa_splati_b((v16i8) sum_w, 0); - val0 = __msa_copy_u_w((v4i32) store, 0); - - SW4(val0, val0, val0, val0, dst, dst_stride); -} - -#define INTRA_DC_TL_4x4(dir) \ -void ff_dc_##dir##_4x4_msa(uint8_t *dst, ptrdiff_t dst_stride, \ - const uint8_t *left, \ - const uint8_t *top) \ -{ \ - uint32_t val0; \ - v16i8 store, data = { 0 }; \ - v8u16 sum_h; \ - v4u32 sum_w; \ - \ - val0 = LW(dir); \ - data = (v16i8) __msa_insert_w((v4i32) data, 0, val0); \ - sum_h = __msa_hadd_u_h((v16u8) data, (v16u8) data); \ - sum_w = __msa_hadd_u_w(sum_h, sum_h); \ - sum_w = (v4u32) __msa_srari_w((v4i32) sum_w, 2); \ - store = __msa_splati_b((v16i8) sum_w, 0); \ - val0 = __msa_copy_u_w((v4i32) store, 0); \ - \ - SW4(val0, val0, val0, val0, dst, dst_stride); \ -} -INTRA_DC_TL_4x4(top); -INTRA_DC_TL_4x4(left); - -void ff_dc_8x8_msa(uint8_t *dst, ptrdiff_t dst_stride, const uint8_t *src_left, - const uint8_t *src_top) -{ - uint64_t val0, val1; - v16i8 store; - v16u8 src = { 0 }; - v8u16 sum_h; - v4u32 sum_w; 
- v2u64 sum_d; - - val0 = LD(src_top); - val1 = LD(src_left); - INSERT_D2_UB(val0, val1, src); - sum_h = __msa_hadd_u_h(src, src); - sum_w = __msa_hadd_u_w(sum_h, sum_h); - sum_d = __msa_hadd_u_d(sum_w, sum_w); - sum_w = (v4u32) __msa_pckev_w((v4i32) sum_d, (v4i32) sum_d); - sum_d = __msa_hadd_u_d(sum_w, sum_w); - sum_w = (v4u32) __msa_srari_w((v4i32) sum_d, 4); - store = __msa_splati_b((v16i8) sum_w, 0); - val0 = __msa_copy_u_d((v2i64) store, 0); - - SD4(val0, val0, val0, val0, dst, dst_stride); - dst += (4 * dst_stride); - SD4(val0, val0, val0, val0, dst, dst_stride); -} - -#define INTRA_DC_TL_8x8(dir) \ -void ff_dc_##dir##_8x8_msa(uint8_t *dst, ptrdiff_t dst_stride, \ - const uint8_t *left, \ - const uint8_t *top) \ -{ \ - uint64_t val0; \ - v16i8 store; \ - v16u8 data = { 0 }; \ - v8u16 sum_h; \ - v4u32 sum_w; \ - v2u64 sum_d; \ - \ - val0 = LD(dir); \ - data = (v16u8) __msa_insert_d((v2i64) data, 0, val0); \ - sum_h = __msa_hadd_u_h(data, data); \ - sum_w = __msa_hadd_u_w(sum_h, sum_h); \ - sum_d = __msa_hadd_u_d(sum_w, sum_w); \ - sum_w = (v4u32) __msa_srari_w((v4i32) sum_d, 3); \ - store = __msa_splati_b((v16i8) sum_w, 0); \ - val0 = __msa_copy_u_d((v2i64) store, 0); \ - \ - SD4(val0, val0, val0, val0, dst, dst_stride); \ - dst += (4 * dst_stride); \ - SD4(val0, val0, val0, val0, dst, dst_stride); \ -} - -INTRA_DC_TL_8x8(top); -INTRA_DC_TL_8x8(left); - -void ff_dc_16x16_msa(uint8_t *dst, ptrdiff_t dst_stride, - const uint8_t *src_left, const uint8_t *src_top) -{ - v16u8 top, left, out; - v8u16 sum_h, sum_top, sum_left; - v4u32 sum_w; - v2u64 sum_d; - - top = LD_UB(src_top); - left = LD_UB(src_left); - HADD_UB2_UH(top, left, sum_top, sum_left); - sum_h = sum_top + sum_left; - sum_w = __msa_hadd_u_w(sum_h, sum_h); - sum_d = __msa_hadd_u_d(sum_w, sum_w); - sum_w = (v4u32) __msa_pckev_w((v4i32) sum_d, (v4i32) sum_d); - sum_d = __msa_hadd_u_d(sum_w, sum_w); - sum_w = (v4u32) __msa_srari_w((v4i32) sum_d, 5); - out = (v16u8) __msa_splati_b((v16i8) sum_w, 0); - - ST_UB8(out, out, out, out, out, out, out, out, dst, dst_stride); - dst += (8 * dst_stride); - ST_UB8(out, out, out, out, out, out, out, out, dst, dst_stride); -} - -#define INTRA_DC_TL_16x16(dir) \ -void ff_dc_##dir##_16x16_msa(uint8_t *dst, ptrdiff_t dst_stride, \ - const uint8_t *left, \ - const uint8_t *top) \ -{ \ - v16u8 data, out; \ - v8u16 sum_h; \ - v4u32 sum_w; \ - v2u64 sum_d; \ - \ - data = LD_UB(dir); \ - sum_h = __msa_hadd_u_h(data, data); \ - sum_w = __msa_hadd_u_w(sum_h, sum_h); \ - sum_d = __msa_hadd_u_d(sum_w, sum_w); \ - sum_w = (v4u32) __msa_pckev_w((v4i32) sum_d, (v4i32) sum_d); \ - sum_d = __msa_hadd_u_d(sum_w, sum_w); \ - sum_w = (v4u32) __msa_srari_w((v4i32) sum_d, 4); \ - out = (v16u8) __msa_splati_b((v16i8) sum_w, 0); \ - \ - ST_UB8(out, out, out, out, out, out, out, out, dst, dst_stride); \ - dst += (8 * dst_stride); \ - ST_UB8(out, out, out, out, out, out, out, out, dst, dst_stride); \ -} -INTRA_DC_TL_16x16(top); -INTRA_DC_TL_16x16(left); - -void ff_dc_32x32_msa(uint8_t *dst, ptrdiff_t dst_stride, - const uint8_t *src_left, const uint8_t *src_top) -{ - uint32_t row; - v16u8 top0, top1, left0, left1, out; - v8u16 sum_h, sum_top0, sum_top1, sum_left0, sum_left1; - v4u32 sum_w; - v2u64 sum_d; - - LD_UB2(src_top, 16, top0, top1); - LD_UB2(src_left, 16, left0, left1); - HADD_UB2_UH(top0, top1, sum_top0, sum_top1); - HADD_UB2_UH(left0, left1, sum_left0, sum_left1); - sum_h = sum_top0 + sum_top1; - sum_h += sum_left0 + sum_left1; - sum_w = __msa_hadd_u_w(sum_h, sum_h); - sum_d = __msa_hadd_u_d(sum_w, sum_w); - 
sum_w = (v4u32) __msa_pckev_w((v4i32) sum_d, (v4i32) sum_d); - sum_d = __msa_hadd_u_d(sum_w, sum_w); - sum_w = (v4u32) __msa_srari_w((v4i32) sum_d, 6); - out = (v16u8) __msa_splati_b((v16i8) sum_w, 0); - - for (row = 16; row--;) - { - ST_UB2(out, out, dst, 16); - dst += dst_stride; - ST_UB2(out, out, dst, 16); - dst += dst_stride; - } -} - -#define INTRA_DC_TL_32x32(dir) \ -void ff_dc_##dir##_32x32_msa(uint8_t *dst, ptrdiff_t dst_stride, \ - const uint8_t *left, \ - const uint8_t *top) \ -{ \ - uint32_t row; \ - v16u8 data0, data1, out; \ - v8u16 sum_h, sum_data0, sum_data1; \ - v4u32 sum_w; \ - v2u64 sum_d; \ - \ - LD_UB2(dir, 16, data0, data1); \ - HADD_UB2_UH(data0, data1, sum_data0, sum_data1); \ - sum_h = sum_data0 + sum_data1; \ - sum_w = __msa_hadd_u_w(sum_h, sum_h); \ - sum_d = __msa_hadd_u_d(sum_w, sum_w); \ - sum_w = (v4u32) __msa_pckev_w((v4i32) sum_d, (v4i32) sum_d); \ - sum_d = __msa_hadd_u_d(sum_w, sum_w); \ - sum_w = (v4u32) __msa_srari_w((v4i32) sum_d, 5); \ - out = (v16u8) __msa_splati_b((v16i8) sum_w, 0); \ - \ - for (row = 16; row--;) \ - { \ - ST_UB2(out, out, dst, 16); \ - dst += dst_stride; \ - ST_UB2(out, out, dst, 16); \ - dst += dst_stride; \ - } \ -} -INTRA_DC_TL_32x32(top); -INTRA_DC_TL_32x32(left); - -#define INTRA_PREDICT_VALDC_16X16_MSA(val) \ -void ff_dc_##val##_16x16_msa(uint8_t *dst, ptrdiff_t dst_stride, \ - const uint8_t *left, const uint8_t *top) \ -{ \ - v16u8 out = (v16u8) __msa_ldi_b(val); \ - \ - ST_UB8(out, out, out, out, out, out, out, out, dst, dst_stride); \ - dst += (8 * dst_stride); \ - ST_UB8(out, out, out, out, out, out, out, out, dst, dst_stride); \ -} - -INTRA_PREDICT_VALDC_16X16_MSA(127); -INTRA_PREDICT_VALDC_16X16_MSA(128); -INTRA_PREDICT_VALDC_16X16_MSA(129); - -#define INTRA_PREDICT_VALDC_32X32_MSA(val) \ -void ff_dc_##val##_32x32_msa(uint8_t *dst, ptrdiff_t dst_stride, \ - const uint8_t *left, const uint8_t *top) \ -{ \ - uint32_t row; \ - v16u8 out = (v16u8) __msa_ldi_b(val); \ - \ - for (row = 16; row--;) \ - { \ - ST_UB2(out, out, dst, 16); \ - dst += dst_stride; \ - ST_UB2(out, out, dst, 16); \ - dst += dst_stride; \ - } \ -} - -INTRA_PREDICT_VALDC_32X32_MSA(127); -INTRA_PREDICT_VALDC_32X32_MSA(128); -INTRA_PREDICT_VALDC_32X32_MSA(129); - -void ff_tm_4x4_msa(uint8_t *dst, ptrdiff_t dst_stride, - const uint8_t *src_left, const uint8_t *src_top_ptr) -{ - uint32_t left; - uint8_t top_left = src_top_ptr[-1]; - v16i8 src_top, src_left0, src_left1, src_left2, src_left3, tmp0, tmp1; - v16u8 src0, src1, src2, src3; - v8u16 src_top_left, vec0, vec1, vec2, vec3; - - src_top_left = (v8u16) __msa_fill_h(top_left); - src_top = LD_SB(src_top_ptr); - left = LW(src_left); - src_left0 = __msa_fill_b(left >> 24); - src_left1 = __msa_fill_b(left >> 16); - src_left2 = __msa_fill_b(left >> 8); - src_left3 = __msa_fill_b(left); - - ILVR_B4_UB(src_left0, src_top, src_left1, src_top, src_left2, src_top, - src_left3, src_top, src0, src1, src2, src3); - HADD_UB4_UH(src0, src1, src2, src3, vec0, vec1, vec2, vec3); - IPRED_SUBS_UH2_UH(src_top_left, src_top_left, vec0, vec1); - IPRED_SUBS_UH2_UH(src_top_left, src_top_left, vec2, vec3); - SAT_UH4_UH(vec0, vec1, vec2, vec3, 7); - PCKEV_B2_SB(vec1, vec0, vec3, vec2, tmp0, tmp1); - ST_W2(tmp0, 0, 2, dst, dst_stride); - ST_W2(tmp1, 0, 2, dst + 2 * dst_stride, dst_stride); -} - -void ff_tm_8x8_msa(uint8_t *dst, ptrdiff_t dst_stride, - const uint8_t *src_left, const uint8_t *src_top_ptr) -{ - uint8_t top_left = src_top_ptr[-1]; - uint32_t loop_cnt, left; - v16i8 src_top, src_left0, src_left1, src_left2, src_left3, 
tmp0, tmp1; - v8u16 src_top_left, vec0, vec1, vec2, vec3; - v16u8 src0, src1, src2, src3; - - src_top = LD_SB(src_top_ptr); - src_top_left = (v8u16) __msa_fill_h(top_left); - - src_left += 4; - for (loop_cnt = 2; loop_cnt--;) { - left = LW(src_left); - src_left0 = __msa_fill_b(left >> 24); - src_left1 = __msa_fill_b(left >> 16); - src_left2 = __msa_fill_b(left >> 8); - src_left3 = __msa_fill_b(left); - src_left -= 4; - - ILVR_B4_UB(src_left0, src_top, src_left1, src_top, src_left2, src_top, - src_left3, src_top, src0, src1, src2, src3); - HADD_UB4_UH(src0, src1, src2, src3, vec0, vec1, vec2, vec3); - IPRED_SUBS_UH2_UH(src_top_left, src_top_left, vec0, vec1); - IPRED_SUBS_UH2_UH(src_top_left, src_top_left, vec2, vec3); - SAT_UH4_UH(vec0, vec1, vec2, vec3, 7); - PCKEV_B2_SB(vec1, vec0, vec3, vec2, tmp0, tmp1); - ST_D4(tmp0, tmp1, 0, 1, 0, 1, dst, dst_stride); - dst += (4 * dst_stride); - } -} - -void ff_tm_16x16_msa(uint8_t *dst, ptrdiff_t dst_stride, - const uint8_t *src_left, const uint8_t *src_top_ptr) -{ - uint8_t top_left = src_top_ptr[-1]; - uint32_t loop_cnt, left; - v16i8 src_top, src_left0, src_left1, src_left2, src_left3; - v8u16 src_top_left, res_r, res_l; - - src_top = LD_SB(src_top_ptr); - src_top_left = (v8u16) __msa_fill_h(top_left); - - src_left += 12; - for (loop_cnt = 4; loop_cnt--;) { - left = LW(src_left); - src_left0 = __msa_fill_b(left >> 24); - src_left1 = __msa_fill_b(left >> 16); - src_left2 = __msa_fill_b(left >> 8); - src_left3 = __msa_fill_b(left); - src_left -= 4; - - ILVRL_B2_UH(src_left0, src_top, res_r, res_l); - HADD_UB2_UH(res_r, res_l, res_r, res_l); - IPRED_SUBS_UH2_UH(src_top_left, src_top_left, res_r, res_l); - - SAT_UH2_UH(res_r, res_l, 7); - PCKEV_ST_SB(res_r, res_l, dst); - dst += dst_stride; - - ILVRL_B2_UH(src_left1, src_top, res_r, res_l); - HADD_UB2_UH(res_r, res_l, res_r, res_l); - IPRED_SUBS_UH2_UH(src_top_left, src_top_left, res_r, res_l); - SAT_UH2_UH(res_r, res_l, 7); - PCKEV_ST_SB(res_r, res_l, dst); - dst += dst_stride; - - ILVRL_B2_UH(src_left2, src_top, res_r, res_l); - HADD_UB2_UH(res_r, res_l, res_r, res_l); - IPRED_SUBS_UH2_UH(src_top_left, src_top_left, res_r, res_l); - SAT_UH2_UH(res_r, res_l, 7); - PCKEV_ST_SB(res_r, res_l, dst); - dst += dst_stride; - - ILVRL_B2_UH(src_left3, src_top, res_r, res_l); - HADD_UB2_UH(res_r, res_l, res_r, res_l); - IPRED_SUBS_UH2_UH(src_top_left, src_top_left, res_r, res_l); - SAT_UH2_UH(res_r, res_l, 7); - PCKEV_ST_SB(res_r, res_l, dst); - dst += dst_stride; - } -} - -void ff_tm_32x32_msa(uint8_t *dst, ptrdiff_t dst_stride, - const uint8_t *src_left, const uint8_t *src_top_ptr) -{ - uint8_t top_left = src_top_ptr[-1]; - uint32_t loop_cnt, left; - v16i8 src_top0, src_top1, src_left0, src_left1, src_left2, src_left3; - v8u16 src_top_left, res_r0, res_r1, res_l0, res_l1; - - src_top0 = LD_SB(src_top_ptr); - src_top1 = LD_SB(src_top_ptr + 16); - src_top_left = (v8u16) __msa_fill_h(top_left); - - src_left += 28; - for (loop_cnt = 8; loop_cnt--;) { - left = LW(src_left); - src_left0 = __msa_fill_b(left >> 24); - src_left1 = __msa_fill_b(left >> 16); - src_left2 = __msa_fill_b(left >> 8); - src_left3 = __msa_fill_b(left); - src_left -= 4; - - ILVR_B2_UH(src_left0, src_top0, src_left0, src_top1, res_r0, res_r1); - ILVL_B2_UH(src_left0, src_top0, src_left0, src_top1, res_l0, res_l1); - HADD_UB4_UH(res_r0, res_l0, res_r1, res_l1, res_r0, res_l0, res_r1, - res_l1); - IPRED_SUBS_UH2_UH(src_top_left, src_top_left, res_r0, res_l0); - IPRED_SUBS_UH2_UH(src_top_left, src_top_left, res_r1, res_l1); - SAT_UH4_UH(res_r0, 
res_l0, res_r1, res_l1, 7); - PCKEV_ST_SB(res_r0, res_l0, dst); - PCKEV_ST_SB(res_r1, res_l1, dst + 16); - dst += dst_stride; - - ILVR_B2_UH(src_left1, src_top0, src_left1, src_top1, res_r0, res_r1); - ILVL_B2_UH(src_left1, src_top0, src_left1, src_top1, res_l0, res_l1); - HADD_UB4_UH(res_r0, res_l0, res_r1, res_l1, res_r0, res_l0, res_r1, - res_l1); - IPRED_SUBS_UH2_UH(src_top_left, src_top_left, res_r0, res_l0); - IPRED_SUBS_UH2_UH(src_top_left, src_top_left, res_r1, res_l1); - SAT_UH4_UH(res_r0, res_l0, res_r1, res_l1, 7); - PCKEV_ST_SB(res_r0, res_l0, dst); - PCKEV_ST_SB(res_r1, res_l1, dst + 16); - dst += dst_stride; - - ILVR_B2_UH(src_left2, src_top0, src_left2, src_top1, res_r0, res_r1); - ILVL_B2_UH(src_left2, src_top0, src_left2, src_top1, res_l0, res_l1); - HADD_UB4_UH(res_r0, res_l0, res_r1, res_l1, res_r0, res_l0, res_r1, - res_l1); - IPRED_SUBS_UH2_UH(src_top_left, src_top_left, res_r0, res_l0); - IPRED_SUBS_UH2_UH(src_top_left, src_top_left, res_r1, res_l1); - SAT_UH4_UH(res_r0, res_l0, res_r1, res_l1, 7); - PCKEV_ST_SB(res_r0, res_l0, dst); - PCKEV_ST_SB(res_r1, res_l1, dst + 16); - dst += dst_stride; - - ILVR_B2_UH(src_left3, src_top0, src_left3, src_top1, res_r0, res_r1); - ILVL_B2_UH(src_left3, src_top0, src_left3, src_top1, res_l0, res_l1); - HADD_UB4_UH(res_r0, res_l0, res_r1, res_l1, res_r0, res_l0, res_r1, - res_l1); - IPRED_SUBS_UH2_UH(src_top_left, src_top_left, res_r0, res_l0); - IPRED_SUBS_UH2_UH(src_top_left, src_top_left, res_r1, res_l1); - SAT_UH4_UH(res_r0, res_l0, res_r1, res_l1, 7); - PCKEV_ST_SB(res_r0, res_l0, dst); - PCKEV_ST_SB(res_r1, res_l1, dst + 16); - dst += dst_stride; - } -} diff --git a/spaces/congsaPfin/Manga-OCR/ParayipettapanthirukulammalayalampdfPATCHED-Download.md b/spaces/congsaPfin/Manga-OCR/ParayipettapanthirukulammalayalampdfPATCHED-Download.md deleted file mode 100644 index 9863a3e96bd7b2992f5c92cfc83642c6f7e287dc..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/ParayipettapanthirukulammalayalampdfPATCHED-Download.md +++ /dev/null @@ -1,104 +0,0 @@ -## Parayipettapanthirukulammalayalampdfdownload - - - - - - ![ParayipettapanthirukulammalayalampdfPATCHED Download](https://malayalamebooks.org/wp-content/uploads/2011/04/banner-volunteers-needed.jpg) - - - - - -**Download ✶✶✶ [https://urlcod.com/2txiNp](https://urlcod.com/2txiNp)** - - - - - - - - - - - - - -# Parayi Petta Panthirukulam: A Classic Malayalam Folklore - - - -Parayi Petta Panthirukulam is a collection of stories that narrates the origin of twelve clans from a Pariah woman named Poothana. The stories are part of the oral tradition of Kerala and have been retold by many writers and poets. The most popular version is by Kottarathil Sankunni, who published it as part of his Aithihyamala (Garland of Legends) in the late 19th century. - - - -The stories are based on the premise that Vararuchi, a Sanskrit scholar and one of the nine gems in the court of King Vikramaditya, married Poothana, a Pariah woman, as per a divine prophecy. They travelled across Kerala and Poothana gave birth to a child at each place they visited. Vararuchi asked her to abandon each child if it had no mouth or take it with them if it had one. Poothana obeyed him and left eleven children on the way, who were miraculously rescued and raised by different families. The twelfth child was born with a mouth and they kept him with them. The children grew up to become the progenitors of twelve illustrious clans, each with a distinct identity and profession. 
- - - -The twelve clans and their respective professions are: - - - -- Mezhathol Agnihothri - Brahmin priest - -- Panamkuzhi Valiya Raja - King of Panamkuzhi - -- Perumthachan - Master carpenter and architect - -- Vayillakunnilappan - Temple singer - -- Uppukandam Valiya Raja - King of Uppukandam - -- Pakkanar - Maker of earthen pots - -- Rajakan - Washer of clothes - -- Naranath Bhranthan - Eccentric sage and poet - -- Karimban Valiya Raja - King of Karimban - -- Vaduthala Nair - Warrior - -- Akkitham Narayanan Bhattathiri - Astrologer - -- Vararuchi - Sanskrit scholar and poet - - - -The stories of Parayi Petta Panthirukulam are rich in cultural and historical details and reflect the social diversity and harmony of Kerala. They also showcase the values of human dignity, compassion, wisdom, courage, and faith. The stories have inspired many works of art, literature, and cinema in Malayalam and other languages. - - - -If you are interested in reading Parayi Petta Panthirukulam in Malayalam, you can download the PDF version from the following link: - - - -[Parayi Petta Panthirukulam PDF Download](https://www.malayalamebooks.org/2011/04/aithihya-mala-kottarathil-sankunni-part-1/) - - - -The stories of each child and their adventures are full of humor, wisdom, and wonder. Some of the stories are: - - - -- Mezhathol Agnihothri, the eldest son, was found by a Brahmin priest who performed a fire sacrifice near a river. He became a renowned scholar and ritualist who could control the fire and rain. He also had a miraculous cow that could produce anything he wished. - -- Pakkanar, the second son, was found by a Parayan couple who made earthen pots. He became a saintly figure who cremated the corpses of the poor and outcastes. He also composed songs and riddles that challenged the caste hierarchy and social norms. - -- Rajakan, the third son, was found by a washerman who washed clothes near a pond. He became a loyal servant of the king of Kochi and helped him in many wars and intrigues. He also had a magical cloth that could heal any wound. - -- Naranath Bhranthan, the fourth son, was found by an Ambalavasi family who performed temple services. He became an eccentric sage and poet who wandered around the hills and rolled big stones up and down. He also had a divine vision of Lord Shiva and Goddess Parvati. - -- Karimban Valiya Raja, the fifth son, was found by a royal Kshatriya woman who was childless. He became a brave and generous king of Karimban who fought against many enemies and protected his subjects. He also had a golden bow that never missed its target. - - - -The stories of Parayi Petta Panthirukulam are not only entertaining but also enlightening. They reveal the hidden potential and greatness of every human being, regardless of their birth or status. They also celebrate the diversity and unity of Kerala's culture and heritage. - - 1b8d091108 - - - - - diff --git a/spaces/congsaPfin/Manga-OCR/logs/Citra Emulator 60 FPS APK How to Install and Use Cheat Codes for More Fun.md b/spaces/congsaPfin/Manga-OCR/logs/Citra Emulator 60 FPS APK How to Install and Use Cheat Codes for More Fun.md deleted file mode 100644 index 9334702e8ea6ffe3c8011a560e1a55bdd736438a..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Citra Emulator 60 FPS APK How to Install and Use Cheat Codes for More Fun.md +++ /dev/null @@ -1,98 +0,0 @@ - -

    What is Citra Emulator and Why You Should Try It

    -

    If you are a fan of Nintendo 3DS games, you might have heard of Citra Emulator. Citra is an open-source emulator that allows you to play your favorite 3DS games on your Android device. With Citra, you can enjoy games like Pokemon, Zelda, Mario, Fire Emblem, and more with enhanced graphics, resolution, and performance.

    -

    Some of the features that make Citra stand out from other emulators are:

    -

    citra emulator 60 fps apk


    Download File ✏ ✏ ✏ https://urlca.com/2uOfmx



    -
      -
    • It supports most of the 3DS game library, with a compatibility list that is updated regularly.
    • -
    • It allows you to customize the controls, layout, and touch screen buttons according to your preference.
    • -
    • It supports multiplayer mode, where you can play with other Citra users online or locally.
    • -
    • It supports external gamepads, controllers, and keyboards for a more comfortable gaming experience.
    • -
    • It supports save states, cheats, screenshots, and video recording.
    • -
    -

    In this article, we will show you how to download and install Citra Emulator on your Android device, how to get any game in 60 FPS on Citra Emulator, and how to optimize Citra Emulator for the best performance and battery life. Let's get started!

    -

    How to Download and Install Citra Emulator on Android

    -

    Downloading and installing Citra Emulator on your Android device is very easy. Just follow these steps:

    -
      -
    1. Go to the official website of Citra Emulator at [3](https://citra-emu.org/download/).
    2. -
    3. Scroll down to the Android section and tap on the "Download" button.
    4. -
    5. You will be redirected to the Google Play Store page of Citra Emulator. Tap on the "Install" button.
    6. -
    7. Wait for the installation to finish. You will see a "Citra" icon on your home screen or app drawer.
    8. -
    9. Tap on the "Citra" icon to launch the emulator. You will see a welcome screen with some instructions and tips. Tap on "Next" until you reach the end.
    10. -
    11. You have successfully installed Citra Emulator on your Android device. Now you can start playing your favorite 3DS games!
    12. -
    -

    How to Get Any Game in 60 FPS on Citra Emulator

    -

    One of the most desired features for any gamer is to play games at 60 frames per second (FPS). FPS is a measure of how smoothly a game runs on a device: the higher the FPS, the smoother and more responsive the game feels. Most 3DS games run at 30 FPS or lower, which can cause lag, stuttering, or choppy gameplay. However, with Citra Emulator, you can unlock 60 FPS mode for many games and enjoy a much better gaming experience.
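    For a sense of scale, the frame time is simply 1000 ms divided by the frame rate, so going from 30 FPS to 60 FPS halves how long each frame stays on screen. The small C sketch below is purely illustrative (it is not part of Citra or any game):

```c
#include <stdio.h>

/* Illustrative only: how long a single frame stays on screen at a given frame rate. */
static double frame_time_ms(double fps)
{
    return 1000.0 / fps;
}

int main(void)
{
    printf("30 FPS -> %.1f ms per frame\n", frame_time_ms(30.0)); /* ~33.3 ms */
    printf("60 FPS -> %.1f ms per frame\n", frame_time_ms(60.0)); /* ~16.7 ms */
    return 0;
}
```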

    -

    Using Cheat Codes to Enable 60 FPS in Citra Games

    -

    One way to enable 60 FPS mode in Citra games is to use cheat codes. Cheat codes are special codes that modify the game's behavior or data. Some cheat codes can increase the game's frame rate from 30 FPS to 60 FPS. However, not all games have cheat codes for 60 FPS mode, and some cheat using a cooling pad or a fan. -
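    As a rough sketch of what such a cheat entry looks like: Citra accepts Gateway-style cheat codes, typically stored in a per-game text file named after the game's 16-digit title ID (the exact folder depends on your install). The cheat name and code line below are placeholders only, not a working 60 FPS code; use the code published for your specific game and region:

```
[60 FPS (placeholder name)]
D3000000 00000000
```

    Recent Citra builds also expose a per-game cheats menu, so you may be able to add such a code from inside the app rather than editing files by hand.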

  11. Avoid charging your device while playing Citra games.
  12. -
  13. Avoid playing Citra games for too long without taking breaks.
  14. - - -

    Conclusion and FAQs

    -

    Citra Emulator is a great way to play your favorite 3DS games on your Android device. With Citra, you can enjoy games with enhanced graphics, resolution, and performance. You can also use cheat codes and GPU shader cache to enable 60 FPS mode for many games and improve your gaming experience. Moreover, you can optimize Citra Emulator for Android by following some tips and tricks that can help you get the best performance and battery life. We hope this article has been helpful and informative for you. If you have any questions or feedback, please feel free to leave a comment below. Happy gaming!

    -

    Here are some FAQs that you may find useful:

    -
      -
    1. Q: Is Citra Emulator legal and safe to use?
      -A: Citra Emulator is legal and safe to use, as long as you own the original 3DS games that you want to play on Citra. You can dump your own 3DS game cartridges or digital downloads using a hacked 3DS console and a homebrew app. You can check this guide for more details on how to dump your own 3DS games.
    2. -
    3. Q: How can I get the best sound quality on Citra Emulator?
      -A: Citra Emulator supports various audio settings that can affect the sound quality of the games. You can adjust the audio settings by tapping on the menu button on the top right corner, tapping on "Settings", and tapping on "Audio". You can change the volume, output engine, output device, audio stretching, sink interpolation, etc.
    4. -
    5. Q: How can I transfer my save data from my 3DS console to Citra Emulator?
      -A: You can transfer your save data from your 3DS console to Citra Emulator by using a homebrew app called Checkpoint. Checkpoint allows you to backup and restore your save data from your 3DS game cartridges or digital downloads. You can check this guide for more details on how to use Checkpoint.
    6. -
    7. Q: How can I update my Citra Emulator to the latest version?
      -A: You can update your Citra Emulator to the latest version by going to the Google Play Store page of Citra Emulator and tapping on the "Update" button. Alternatively, you can check the official website of Citra Emulator at [3](https://citra-emu.org/download/) for the latest version and download it manually.
    8. -
    9. Q: How can I support the development of Citra Emulator?
      -A: You can support the development of Citra Emulator by becoming a patron on Patreon. By becoming a patron, you can get access to exclusive features, early builds, priority support, and more. You can also support Citra Emulator by reporting bugs, suggesting features, contributing code, or spreading the word.
    10. -

    -


    -
    -
    \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/Fighting Pride - Pacquiao Saga The Ultimate Boxing Game with MOD APK.md b/spaces/congsaPfin/Manga-OCR/logs/Fighting Pride - Pacquiao Saga The Ultimate Boxing Game with MOD APK.md deleted file mode 100644 index e43d549ce462ec458b35ae7a4358ebe39b65102c..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Fighting Pride - Pacquiao Saga The Ultimate Boxing Game with MOD APK.md +++ /dev/null @@ -1,110 +0,0 @@ - -

    Fighting Pride APK Mod: A Fighting Game Inspired by Manny Pacquiao

    -

    If you are a fan of boxing and Manny Pacquiao, you might want to check out Fighting Pride APK Mod, a fighting game that features the legendary Filipino boxer and his rivals. In this article, we will tell you what Fighting Pride APK Mod is, why you should play it, and how to download and install it on your Android device.

    -

    fighting pride apk mod


    Download Filehttps://urlca.com/2uOe86



    -

    What is Fighting Pride APK Mod?

    -

    Fighting Pride APK Mod is a modified version of Fighting Pride - Pacquiao Saga, a mobile game developed by Ranida Games and released in 2022. The game is a tribute to Manny Pacquiao, one of the greatest boxers of all time, who announced his retirement from boxing in 2021.

    -

    The original game

    -

    The original game is a 2D fighting game that lets you play as Manny Pacquiao or one of his opponents from his illustrious career. You can choose from different modes, such as story mode, arcade mode, versus mode, and online mode. You can also customize your fighter's appearance, skills, and equipment. The game features realistic graphics, animations, sound effects, and voice overs that capture the essence of boxing and Pacquiao's personality.

    -

    The modded version

    -

The modded version builds on the original game with extra features and enhancements that make it more fun and exciting. These include:

    -
      -
    • Unlimited coins and gems that you can use to buy items and upgrade your fighter.
    • -
    • Unlocked all fighters and modes that you can access without completing any requirements.
    • -
    • Removed ads that might interrupt your gameplay or consume your data.
    • -
    • Optimized performance and compatibility that make the game run smoother and faster on your device.
    • -
    -

    Why should you play Fighting Pride APK Mod?

    -

    If you are looking for a fighting game that is entertaining, challenging, and inspiring, you should play Fighting Pride APK Mod. Here are some reasons why:

    -

    Features of the game

    -

    The game has many features that make it enjoyable and addictive. Some of the features are:

    -
      -
    • A rich and diverse roster of fighters that includes Pacquiao's famous opponents such as Floyd Mayweather Jr., Juan Manuel Marquez, Oscar De La Hoya, Ricky Hatton, Miguel Cotto, Shane Mosley, Antonio Margarito, Timothy Bradley Jr., Keith Thurman, Adrien Broner, and more.
    • -
• A variety of modes that offer different challenges and rewards. You can follow Pacquiao's journey from his humble beginnings to his legendary status in story mode, fight against random opponents in arcade mode, or challenge your friends and other players in versus mode or online mode.
    • -
    • A customization system that allows you to personalize your fighter's appearance, skills, and equipment. You can change your fighter's hair style, skin color, facial features, clothing, gloves, shoes, accessories, tattoos, etc. You can also upgrade your fighter's attributes such as power, speed, stamina, defense, etc. You can also equip your fighter with special items that give you an edge in combat.
    • -
    • A realistic and immersive gameplay that simulates the thrill and excitement of boxing. The game uses realistic physics, animations, sound effects, and voice overs that make you feel like you are in the ring. The game also uses a dynamic camera system that follows the action from different angles. The game also has a commentary system that provides feedback and analysis on your performance.
    • -

    Benefits of the mod

    -

    The mod has many benefits that make it superior to the original game. Some of the benefits are:

    -

    fighting pride pacquiao saga mod apk
    -fighting pride apk mod unlimited money
    -fighting pride apk mod download
    -fighting pride apk mod latest version
    -fighting pride apk mod free shopping
    -fighting pride apk mod android 1
    -fighting pride apk mod platinmods
    -fighting pride apk mod offline
    -fighting pride apk mod no ads
    -fighting pride apk mod 1 hit kill
    -fighting pride apk mod god mode
    -fighting pride apk mod menu
    -fighting pride apk mod rexdl
    -fighting pride apk mod revdl
    -fighting pride apk mod happymod
    -fighting pride apk mod unlimited gems
    -fighting pride apk mod unlimited coins
    -fighting pride apk mod unlimited energy
    -fighting pride apk mod unlimited skills
    -fighting pride apk mod unlimited tickets
    -fighting pride apk mod vip unlocked
    -fighting pride apk mod all characters unlocked
    -fighting pride apk mod all costumes unlocked
    -fighting pride apk mod all stages unlocked
    -fighting pride apk mod all modes unlocked
    -fighting pride apk mod high damage
    -fighting pride apk mod high defense
    -fighting pride apk mod high speed
    -fighting pride apk mod high combo
    -fighting pride apk mod high score
    -fighting pride apk mod easy win
    -fighting pride apk mod anti ban
    -fighting pride apk mod no root
    -fighting pride apk mod obb data
    -fighting pride apk mod online hack
    -fighting pride apk mod cheat engine
    -fighting pride apk mod generator tool
    -fighting pride apk mod premium access
    -fighting pride apk mod pro version
    -fighting pride apk mod full unlocked

    -
      -
    • You can save your time and money by getting unlimited coins and gems that you can use to buy items and upgrade your fighter. You don't have to grind or spend real money to get them.
    • -
    • You can enjoy the full content of the game by unlocking all fighters and modes that you can access without completing any requirements. You don't have to play the same mode or fighter over and over again to unlock them.
    • -
    • You can have a smoother and faster gameplay by removing ads that might interrupt your gameplay or consume your data. You don't have to watch annoying ads or wait for them to load.
    • -
    • You can have a better compatibility and performance by optimizing the game for your device. You don't have to worry about crashes, lags, or glitches.
    • -
    -

    How to download and install Fighting Pride APK Mod?

    -

    If you are interested in playing Fighting Pride APK Mod, you need to download and install it on your Android device. Here are the requirements and precautions you need to follow before downloading and installing the game:

    -

    Requirements and precautions

    -

    Before you download and install Fighting Pride APK Mod, you need to make sure that your device meets the following requirements and precautions:

    -
      -
    • Your device must have Android 4.4 or higher operating system.
    • -
    • Your device must have at least 1 GB of RAM and 500 MB of free storage space.
    • -
    • Your device must have a stable internet connection to play online mode.
    • -
• You need to enable the installation of apps from unknown sources in your device settings. This is because Fighting Pride APK Mod is not available on the official Google Play Store, but only on third-party websites.
    • -
    • You need to disable any antivirus or security software on your device that might interfere with the installation of the game.
    • -
• You need to back up your data before installing the game, as it might overwrite or delete your existing data.
    • -
    -

    Steps to follow

    -

After you have met the requirements and precautions, you can follow these steps to download and install Fighting Pride APK Mod on your device (a command-line sideloading sketch follows the list):

    -
      -
    1. Go to a reliable website that offers Fighting Pride APK Mod for download. You can search for it on Google or use this link: .
    2. -
    3. Click on the download button and wait for the file to be downloaded on your device. The file size is about 300 MB.
    4. -
    5. Locate the downloaded file on your device storage and tap on it to start the installation process.
    6. -
    7. Follow the instructions on the screen and wait for the installation to be completed.
    8. -
    9. Launch the game from your app drawer or home screen and enjoy playing Fighting Pride APK Mod.
    10. -
    -
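If you have the Android platform tools on a computer, the same APK can also be sideloaded over USB instead of tapped on the phone. The sketch below is only an illustration: the file name is a placeholder for whatever you actually downloaded, and it assumes adb is installed and USB debugging is enabled on the device.

```python
import subprocess

APK = "fighting_pride_mod.apk"  # placeholder name for the downloaded file

def adb(*args: str) -> str:
    """Run an adb command and return its text output."""
    result = subprocess.run(["adb", *args], check=True, capture_output=True, text=True)
    return result.stdout.strip()

if __name__ == "__main__":
    # Confirm the phone is connected and authorised before installing.
    print(adb("devices"))
    # -r reinstalls over an existing copy without wiping its data.
    print(adb("install", "-r", APK))
```

If adb lists the phone as unauthorized, accept the debugging prompt on the phone and run the script again.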

    Conclusion

    -

Fighting Pride APK Mod is a fighting game that features Manny Pacquiao and his rivals. It is a modified version of Fighting Pride - Pacquiao Saga, a mobile game developed by Ranida Games. The mod adds features and enhancements that make the game more fun and exciting, such as unlimited coins and gems, all fighters and modes unlocked, no ads, and optimized performance and compatibility. You can download and install Fighting Pride APK Mod on your Android device by meeting a few requirements and precautions and then following some simple steps. If you are a fan of boxing and Manny Pacquiao, you should try Fighting Pride APK Mod today.

    -

    FAQs

    -

    Here are some frequently asked questions about Fighting Pride APK Mod:

    -

    Is Fighting Pride APK Mod safe to use?

    -

    Fighting Pride APK Mod is safe to use as long as you download it from a reliable website that does not contain any viruses or malware. However, you should always be careful when downloading and installing apps from unknown sources, as they might harm your device or compromise your privacy.

    -

    Is Fighting Pride APK Mod legal to use?

    -

    Fighting Pride APK Mod is not legal to use, as it violates the terms and conditions of the original game developer, Ranida Games. By using Fighting Pride APK Mod, you are infringing their intellectual property rights and risking legal action from them. Therefore, we do not endorse or encourage the use of Fighting Pride APK Mod, and we advise you to play the original game instead.

    -

    Does Fighting Pride APK Mod require root access?

    -

    No, Fighting Pride APK Mod does not require root access on your device. You can install and play it without rooting your device.

    -

    Can I play Fighting Pride APK Mod offline?

    -

Yes, you can play Fighting Pride APK Mod offline, except for the online mode, which requires an internet connection. You can enjoy the other modes without any network issues.

    -

    Can I update Fighting Pride APK Mod?

    -

    No, you cannot update Fighting Pride APK Mod, as it is not compatible with the official updates from the original game developer. If you try to update Fighting Pride APK Mod, you might lose your modded features or face errors and crashes. Therefore, you should avoid updating Fighting Pride APK Mod and stick to the current version.

    -
    -
    \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/Genshin Impact Xbox Release Date Will It Ever Happen?.md b/spaces/congsaPfin/Manga-OCR/logs/Genshin Impact Xbox Release Date Will It Ever Happen?.md deleted file mode 100644 index dac150f980165a329a72f7feae77213dda1862e7..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Genshin Impact Xbox Release Date Will It Ever Happen?.md +++ /dev/null @@ -1,101 +0,0 @@ -
    -

    Can You Download Genshin Impact on Xbox?

    -

    Genshin Impact is one of the most popular games of 2020 and 2021, attracting millions of players from all over the world with its stunning graphics, engaging gameplay, and charming characters. But if you're an Xbox owner, you might be wondering if you can join the fun too. Is Genshin Impact available on Xbox? And if not, when will it be? And what are some alternatives to Genshin Impact on Xbox that you can play in the meantime?

    -

    can you download genshin impact on xbox


    Download Filehttps://urlca.com/2uO4B1



    -

    In this article, we'll answer all these questions and more. We'll give you a brief introduction to what Genshin Impact is, why it's so popular, and whether it supports cross-platform play. We'll also tell you the current status of Genshin Impact's Xbox release date, based on official statements and rumors. And we'll suggest some similar games that you can play on your Xbox while waiting for Genshin Impact to arrive.

    -

    What is Genshin Impact?

    -

    Genshin Impact is an open-world action role-playing game developed by miHoYo, a Chinese studio that also created Honkai Impact 3rd. The game is set in a fantasy world called Teyvat, where seven nations are ruled by seven gods who grant their people the power to control one of seven elements: Pyro (fire), Hydro (water), Anemo (wind), Electro (lightning), Cryo (ice), Dendro (nature), and Geo (earth).

    -

You play as a traveler who arrives in Teyvat from another world, looking for your lost sibling. Along the way, you meet a diverse cast of characters who join your party as playable characters or allies. Each character has their own personality, backstory, and combat style. You can switch between four characters in your party at any time, and combine their elemental abilities to create powerful reactions and effects.

    -

    * Is genshin impact coming to xbox one or series x?
    -* How to play genshin impact on xbox with pc controller
    -* When will genshin impact be available on xbox platforms?
    -* Genshin impact xbox release date, rumors, and theories
    -* Why genshin impact is not on xbox and will it ever be?
    -* Genshin impact xbox port: possible or not?
    -* Genshin impact crossplay: can xbox players join the fun?
    -* Genshin impact next-gen consoles: xbox series x and s plans
    -* Genshin impact xbox exclusivity: did sony pay for it?
    -* Genshin impact xbox download: how to get it for free
    -* Genshin impact xbox gameplay: how does it compare to other platforms?
    -* Genshin impact xbox controller support: how to set it up
    -* Genshin impact xbox achievements: are there any?
    -* Genshin impact xbox optimization: how to improve performance
    -* Genshin impact xbox graphics: how good are they?
    -* Genshin impact xbox update: when is the next patch coming?
    -* Genshin impact xbox beta: how to sign up and play
    -* Genshin impact xbox review: is it worth playing?
    -* Genshin impact xbox reddit: where to find the latest news and discussions
    -* Genshin impact xbox discord: how to join and chat with other players
    -* Genshin impact xbox codes: how to redeem and get free rewards
    -* Genshin impact xbox characters: who are the best ones to use?
    -* Genshin impact xbox tier list: how to rank the heroes and weapons
    -* Genshin impact xbox builds: how to optimize your team and gear
    -* Genshin impact xbox guides: how to master the game mechanics and secrets
    -* Genshin impact xbox tips and tricks: how to level up faster and get more resources
    -* Genshin impact xbox glitches and bugs: how to fix them or avoid them
    -* Genshin impact xbox mods and hacks: are there any and are they safe?
    -* Genshin impact xbox cheats and exploits: how to use them or report them
    -* Genshin impact xbox events and quests: what are the current and upcoming ones?
    -* Genshin impact xbox banners and wishes: how to get more primogems and pull rates
    -* Genshin impact xbox reroll guide: how to start over with better luck
    -* Genshin impact xbox multiplayer: how to co-op with friends or strangers
    -* Genshin impact xbox pvp: is there any and how does it work?
    -* Genshin impact xbox endgame: what to do after finishing the story
    -* Genshin impact xbox skins and outfits: how to customize your characters
    -* Genshin impact xbox wallpapers and fan art: where to find and download them
    -* Genshin impact xbox memes and jokes: where to see and share them
    -* Genshin impact xbox merchandise and collectibles: where to buy and sell them
    -* Genshin impact xbox community and fandom: where to join and interact with them

    -

    The game features a vast and beautiful open world that you can explore freely, either on foot, by gliding, or by swimming. You can also climb mountains, ride boats, glide through the air, and even cook delicious meals. The world is filled with secrets, puzzles, treasures, enemies, and quests that will keep you busy for hours.

    -

    Genshin Impact is a free-to-play game that you can download and play on various platforms, such as PC, PlayStation 4, PlayStation 5, iOS, Android, and Nintendo Switch. The game is updated regularly with new content, events, characters, and features.

    -

    Why is Genshin Impact so popular?

    -

    Genshin Impact has been a huge success since its launch in September 2020, earning over $1 billion in revenue in its first six months. The game has also won several awards and accolades, such as the Google Play Best Game of 2020 and the Apple App Store Game of the Year 2020. But what makes Genshin Impact so popular among gamers and critics alike?

    -

    One of the main reasons is the game's stunning graphics and art style. Genshin Impact features a colorful and vibrant anime-inspired aesthetic that appeals to many fans of the genre. The game's world is richly detailed and immersive, with dynamic weather, lighting, and shadows. The game also boasts a high-quality soundtrack and voice acting that enhance the mood and atmosphere of the game.

    -

    Another reason is the game's engaging gameplay and mechanics. Genshin Impact offers a lot of variety and customization in how you play the game. You can choose from over 40 different characters, each with their own unique skills and weapons. You can also mix and match different elements to create different effects and strategies. The game's combat system is fast-paced and fluid, requiring you to switch between characters and elements on the fly. The game also has a lot of content and activities to keep you entertained, such as story quests, side quests, dungeons, bosses, events, mini-games, and more.

    -

    A third reason is the game's cross-platform support. Genshin Impact allows you to play with your friends across different devices, such as PC, PlayStation, iOS, Android, and Nintendo Switch. You can also save your progress across different platforms using the same account. This means you can enjoy the game anytime and anywhere you want.

    -

    Is Genshin Impact cross-platform?

    -

    As mentioned above, Genshin Impact supports cross-play and cross-save between different platforms. This means you can play with your friends regardless of what device they are using. You can also switch between different devices without losing your progress.

    -

    However, there are some limitations to this feature. First of all, cross-play only works within the same server region. There are three server regions in Genshin Impact: America, Europe, and Asia. You can only play with people who are in the same region as you. You can change your server region in the game settings, but this will reset your progress on that server.

    -

    Secondly, cross-save only works between PC, iOS, Android, and Nintendo Switch. It does not work with PlayStation 4 or PlayStation 5. This means you cannot transfer your progress between PlayStation and other platforms. You can still play with other players who are using PlayStation devices via cross-play.

    -

    Is Genshin Impact available on Xbox?

    -

    The short answer is no. Genshin Impact is not available on Xbox One or Xbox Series X/S as of now. The game has not been officially announced or confirmed for Xbox platforms by miHoYo or Microsoft.

    -

The long answer is maybe. There have been some rumors and speculations that Genshin Impact might come to Xbox in the future.

    For example, according to Reuters, Microsoft met with miHoYo early in Genshin Impact's development but failed to reach an exclusivity deal. Sony would later secure the deal for itself, making the game a PlayStation console exclusive. Microsoft reportedly regretted missing out on the opportunity and is now trying to secure more deals with independent Chinese developers.

    -

    Another example is the rumor that Genshin Impact was listed on the Xbox Game Pass website in October 2020, sparking speculation that the game was coming to Xbox soon. However, this turned out to be a mistake by a third-party vendor and was quickly removed. Microsoft also denied the rumor on Twitter, saying that there were no plans to bring Genshin Impact to Xbox Game Pass at the time.

    -

    So, as of now, there is no official confirmation or announcement that Genshin Impact is coming to Xbox. However, this does not mean that it will never happen. The game is still in development and miHoYo might decide to port it to Xbox in the future, depending on various factors such as market demand, technical feasibility, and business strategy. The game is also coming to Nintendo Switch, which could pave the way for an Xbox version as well.

    -

    What are some alternatives to Genshin Impact on Xbox?

    -

    If you are an Xbox player who wants to experience something similar to Genshin Impact, you might be disappointed by the lack of options. However, there are some games that you can try out that have some resemblance or inspiration from Genshin Impact. Here are some of them:

    -

    The Legend of Zelda: Breath of the Wild

    -

    This is probably the most obvious choice, as Genshin Impact has been widely compared and contrasted with this Nintendo Switch game. Both games feature a vast and beautiful open world that you can explore freely, with various secrets, puzzles, treasures, enemies, and quests. Both games also have a similar art style and aesthetic, with colorful and vibrant graphics and animations. Both games also have an elemental system that affects the gameplay and environment.

    -

    However, there are also some differences between the two games. For one thing, Breath of the Wild has a more realistic and mature tone than Genshin Impact, which has a more anime-like and whimsical tone. Breath of the Wild also has a more minimalist and simple approach to its story and characters, while Genshin Impact has a more complex and detailed approach to its lore and personalities. Breath of the Wild also has a more challenging and rewarding combat system than Genshin Impact, which has a more casual and accessible combat system.

    -

    Immortals Fenyx Rising

    -

    This is another open-world action-adventure game that you can play on Xbox One or Xbox Series X/S. This game is inspired by Greek mythology and features a colorful and cartoonish art style. You play as Fenyx, a winged demigod who must save the gods from the evil titan Typhon. You can explore a large and diverse world that is filled with mythical creatures, puzzles, secrets, and challenges. You can also customize your character's appearance, abilities, and gear.

    -

    Immortals Fenyx Rising has some similarities with Genshin Impact in terms of its gameplay and mechanics. Both games have a lot of variety and freedom in how you play the game. You can switch between different weapons and skills, use your wings or glider to fly around, climb mountains and structures, ride mounts, and craft items. Both games also have an elemental system that affects your combat and environment.

    -

    However, there are also some differences between the two games. For one thing, Immortals Fenyx Rising has a more humorous and lighthearted tone than Genshin Impact, which has a more serious and dramatic tone. Immortals Fenyx Rising also has a more linear and structured story than Genshin Impact, which has a more open-ended and episodic story. Immortals Fenyx Rising also has a more solo-oriented gameplay than Genshin Impact, which has a more multiplayer-oriented gameplay.

    -

    Honkai Impact 3rd

    -

    This is another game developed by miHoYo that you can play on PC or mobile devices. This game is set in a futuristic world where humanity is threatened by mysterious creatures called Honkai. You play as a member of an organization called Valkyries who fight against the Honkai using special suits and weapons. You can choose from over 30 different characters, each with their own skills and styles.

    -

Honkai Impact 3rd has some similarities with Genshin Impact in terms of its graphics and characters. Both games have a high-quality anime-inspired aesthetic that appeals to many fans of the genre. Both games also have a diverse and charismatic cast of characters that you can collect and upgrade. Both games also have a lot of content and events that keep you entertained and engaged.

However, there are also some differences between the two games. For one thing, Honkai Impact 3rd has a more sci-fi and action-oriented theme than Genshin Impact, which has a more fantasy and adventure-oriented theme. Honkai Impact 3rd also has a more linear and mission-based gameplay than Genshin Impact, which has a more open-world and exploration-based gameplay. Honkai Impact 3rd also has a more complex and challenging combat system than Genshin Impact, which has a simpler and more casual combat system.
    Final Fantasy 14

    -

    This is a massively multiplayer online role-playing game that you can play on PC, PlayStation 4, or PlayStation 5. This game is set in a fantasy world called Eorzea, where you can create your own character and choose from various classes and jobs. You can explore a vast and diverse world that is filled with quests, dungeons, raids, bosses, and other players. You can also join guilds, craft items, customize your appearance, and participate in various events.

    -

    Final Fantasy 14 has some similarities with Genshin Impact in terms of its genre and features. Both games are role-playing games that let you create your own character and choose your own playstyle. Both games also have a lot of variety and options in how you play the game. You can switch between different classes and jobs, use different skills and abilities, and combine different elements to create different effects. Both games also have a lot of content and activities to keep you busy, such as story quests, side quests, dungeons, raids, bosses, events, mini-games, and more.

    -

However, there are also some differences between the two games. For one thing, Final Fantasy 14 has a more traditional and classic tone than Genshin Impact, which has a more modern and innovative tone. Final Fantasy 14 also uses a subscription-based model, while Genshin Impact is free-to-play. Final Fantasy 14 also has more cooperative-oriented gameplay than Genshin Impact, which has more solo-oriented gameplay.

    -

    Sonic Frontiers

    -

    This is an upcoming open-world action-adventure game that will be released on Xbox One, Xbox Series X/S, PlayStation 4, PlayStation 5, Nintendo Switch, and PC in 2022. This game is based on the popular Sonic the Hedgehog franchise and features Sonic as the main protagonist. You can control Sonic as he runs, jumps, spins, and dashes through a large and dynamic world that is inspired by various locations from the Sonic series. You can also fight against enemies, collect rings, solve puzzles, and discover secrets.

    -

    Sonic Frontiers has some similarities with Genshin Impact in terms of its gameplay and style. Both games are open-world action-adventure games that let you explore freely and interact with the environment. Both games also have a similar art style and aesthetic, with colorful and vibrant graphics and animations. Both games also have an elemental system that affects the gameplay and environment.

    -

    However, there are also some differences between the two games. For one thing, Sonic Frontiers has a more fast-paced and arcade-like tone than Genshin Impact, which has a more slow-paced and immersive tone. Sonic Frontiers also has a more platforming-based gameplay than Genshin Impact, which has a more combat-based gameplay. Sonic Frontiers also has a more single-player-oriented gameplay than Genshin Impact, which has a more multiplayer-oriented gameplay.

    -

    Conclusion

    -

    Genshin Impact is an amazing game that deserves all the praise and popularity it has received. However, if you are an Xbox player who wants to play it too, you might have to wait for a while or look for other options. As of now, there is no official confirmation or announcement that Genshin Impact is coming to Xbox platforms. However, this does not mean that it will never happen. The game is still in development and miHoYo might decide to port it to Xbox in the future.

    -

In the meantime, you can try out some of the alternatives to Genshin Impact on Xbox that we have suggested in this article.

    These games are not exactly the same as Genshin Impact, but they have some similar features and aspects that you might enjoy. They are also great games in their own right, with their own strengths and weaknesses. You might find yourself having a lot of fun with them, even if they are not Genshin Impact.

    -

    We hope this article has been helpful and informative for you. If you have any questions or comments, feel free to share them with us. We would love to hear your thoughts and opinions on Genshin Impact and its alternatives on Xbox. Thank you for reading and have a wonderful day!

    -

    FAQs

    -

    Here are some of the most frequently asked questions about Genshin Impact on Xbox, along with brief answers.

    -

    Q: How do I download Genshin Impact on Xbox?

    -

    A: You can't. Genshin Impact is not available on Xbox platforms as of now. You can only download it on PC, PlayStation 4, PlayStation 5, iOS, Android, and Nintendo Switch.

    -

    Q: How much does Genshin Impact cost on Xbox?

    -

    A: Nothing. Genshin Impact is a free-to-play game that does not require any upfront payment or subscription fee. However, the game does have optional in-game purchases that you can make using real money, such as buying currency, items, or characters.

    -

    Q: Is Genshin Impact online or offline on Xbox?

    -

    A: Online. Genshin Impact is an online game that requires an internet connection to play. You cannot play it offline or without an internet connection.

    -

    Q: Can I play Genshin Impact with my friends on Xbox?

    -

    A: No. Genshin Impact does not support cross-play or cross-save with Xbox platforms as of now. You can only play with your friends who are using the same platform and server region as you.

    -

    Q: Will Genshin Impact ever come to Xbox?

    -

    A: Maybe. There is no official confirmation or announcement that Genshin Impact is coming to Xbox platforms as of now. However, this does not mean that it will never happen. The game is still in development and miHoYo might decide to port it to Xbox in the future, depending on various factors such as market demand, technical feasibility, and business strategy.

    -
    -
    \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/Summoners War for PC - Download and Install the Epic Role Playing Game.md b/spaces/congsaPfin/Manga-OCR/logs/Summoners War for PC - Download and Install the Epic Role Playing Game.md deleted file mode 100644 index f4681c0787eae41c2875dca3cc49eed90aeab7f4..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Summoners War for PC - Download and Install the Epic Role Playing Game.md +++ /dev/null @@ -1,112 +0,0 @@ - -

    How to Download and Play Summoners War: Sky Arena on PC

    -

    If you are a fan of turn-based, collectible RPGs with over 1,000 different monsters to summon, you might want to check out Summoners War: Sky Arena. This game lets you assemble the greatest team of monsters and fight for victory in the Sky Arena. But did you know that you can also play this game on your PC? In this article, we will show you how to download and play Summoners War: Sky Arena on PC using different methods. We will also tell you why playing on PC is better than playing on mobile devices, and how to check your PC specs and compatibility before downloading the game.

    -

    What is Summoners War: Sky Arena?

    -

    Summoners War: Sky Arena is a classic turn-based, collectible RPG with over 1,000 different monsters to summon. You can customize your monsters with runes, awaken them to unlock new skills and appearances, and evolve them to increase their power. You can also build your own island, decorate it with various buildings and facilities, and invite your friends to visit. The game offers various modes of gameplay, such as story missions, dungeons, raids, arena battles, guild wars, world boss fights, and special events. You can also join a guild and chat with other players from around the world.

    -

    summoners war sky arena download pc


    Download Zip ☆☆☆ https://urlca.com/2uOfWo



    -

    Summoners War: Sky Arena was released in 2014 by Com2uS, a South Korean game developer and publisher. Since then, it has become one of the most popular mobile games in the world, with over 190 million downloads and millions of active players. The game has also spawned several spin-offs, such as Summoners War: Lost Centuria, Summoners War: Chronicles, and Summoners War M. In addition, the game has been adapted into an animated series, comics, books, merchandise, and even a live-action film.

    -

    Why play Summoners War: Sky Arena on PC?

    -

    While Summoners War: Sky Arena is primarily designed for mobile devices, playing it on PC has many advantages. Here are some of them:

    -
      -
    • Bigger screen: Playing on PC allows you to enjoy the game's colorful graphics and animations on a larger screen. You can also see more details and information on the interface without squinting or zooming in.
    • -
    • Better performance: Playing on PC can improve the game's performance by reducing lag, loading time, crashes, and battery drain. You can also adjust the settings to optimize the game for your PC's capabilities.
    • -
    • Keyboard and mouse controls: Playing on PC gives you more options for controlling your monsters and navigating the menus. You can use your keyboard shortcuts or mouse clicks instead of tapping or swiping on your touchscreen.
    • -
    -

    How to download Summoners War: Sky Arena on PC using different methods

    -

    There are several ways to download and play Summoners War: Sky Arena on PC. Here are three of the most common methods:

    -

    Method 1: Using the Microsoft Store

    -

If you have a Windows 10 PC, you can download Summoners War: Sky Arena from the Microsoft Store app. Here are the steps to do so; a command-line sketch using the winget client follows the list:

    -
      -
    1. Open the Microsoft Store app on your PC. You can find it on your Start menu or taskbar.
    2. -
    3. Search for Summoners War: Sky Arena in the search box. You can also browse the categories or genres to find it.
    4. -
    5. Click on the game's icon and then click on the Get button. You may need to sign in with your Microsoft account if you haven't already.
    6. -
    7. Wait for the game to download and install on your PC. You can check the progress on the Downloads and updates section of the app.
    8. -
    9. Once the game is installed, you can launch it from the Microsoft Store app or from your Start menu.
    10. -
    -
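If you prefer a command line over the Store app, the same catalogue can usually be reached through the winget client that ships with recent Windows 10 and 11 builds (as part of App Installer). This is a hedged sketch: the package ID for Summoners War is not confirmed here, so run the search step and copy the ID it prints before uncommenting the install call.

```python
import subprocess

def winget(*args: str) -> None:
    """Invoke the Windows Package Manager client (winget must be on PATH)."""
    subprocess.run(["winget", *args], check=True)

if __name__ == "__main__":
    # Look the game up in the Microsoft Store source and note the Id column.
    winget("search", "Summoners War", "--source", "msstore")
    # Then install it by that Id (placeholder, replace with the real one):
    # winget("install", "--id", "<PACKAGE_ID>", "--source", "msstore",
    #        "--accept-package-agreements", "--accept-source-agreements")
```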

    Method 2: Using a direct download from the official website

    -

    If you don't have access to the Microsoft Store app, you can also download Summoners War: Sky Arena directly from the official website. Here are the steps to do so:

    -
      -
1. Go to https://www.summonerswar.com/en/download in your web browser. This is the game's official website.
    2. -
    3. Click on the Download for PC button. This will start downloading a setup file for the game.
    4. -
    5. Locate the setup file on your PC and double-click on it to run it. You may need to grant permission or accept terms and conditions to proceed.
    6. -
    7. Follow the instructions on the setup wizard to install the game on your PC. You can choose where to install it and create a shortcut for it.
    8. -
    9. Once the game is installed, you can launch it from your desktop or Start menu.
    10. -
    -

    Method 3: Using a third-party platform like Steam or Epic Games

    -

    If you prefer to use a third-party platform like Steam or Epic Games, you can also download Summoners War: Sky Arena from there. Here are the steps to do so:

    -

    How to play summoners war sky arena on pc with bluestacks
    -Summoners war sky arena pc download free full version
    -Best monsters and runes for summoners war sky arena on pc
    -Summoners war sky arena official website and community
    -Download summoners war sky arena from the microsoft store
    -Summoners war sky arena tips and tricks for beginners
    -Summoners war sky arena gameplay and features overview
    -Summoners war sky arena review and ratings by pc gamers
    -Summoners war sky arena system requirements and compatibility
    -Summoners war sky arena cheats and hacks for pc
    -Summoners war sky arena update and patch notes for pc
    -Summoners war sky arena events and rewards for pc players
    -Summoners war sky arena guide and walkthrough for dungeons and pvp
    -Summoners war sky arena best emulator and settings for pc
    -Summoners war sky arena mod apk and obb download for pc
    -Summoners war sky arena wallpapers and themes for pc
    -Summoners war sky arena online and offline modes for pc
    -Summoners war sky arena support and customer service for pc
    -Summoners war sky arena forum and reddit discussions for pc
    -Summoners war sky arena wiki and database for pc
    -Summoners war sky arena codes and coupons for pc
    -Summoners war sky arena fan art and videos for pc
    -Summoners war sky arena merchandise and accessories for pc
    -Summoners war sky arena collaborations and partnerships for pc
    -Summoners war sky arena esports and tournaments for pc

    -
      -
    1. Create an account on the platform of your choice if you don't have one already. You can do this on their website or their launcher app.
    2. -
    3. Search for Summoners War: Sky Arena on the platform's store or library. You can also browse the categories or genres to find it.
    4. -
    5. Purchase or get the game for free depending on the platform's offer. You may need to add a payment method or redeem a code if applicable.
    6. -
    7. Wait for the game to download and install on your PC. You can check the progress on the platform's launcher app.
    8. -
    9. Once the game is installed, you can launch it from the platform's launcher app or from your Start menu.
    10. -
    -

    How to check your PC specs and compatibility before downloading Summoners War: Sky Arena

    -

    Before you download Summoners War: Sky Arena on PC, you should check your PC specs and compatibility to make sure that you can run the game smoothly. Here are the minimum and recommended system requirements for the game:

    - - - - - - - - -
Component     | Minimum                                                  | Recommended
OS            | Windows 10 version 18362.0 or higher                     | Windows 10 version 18362.0 or higher
CPU           | Dual core processor (Intel Core i3-2100 / AMD FX-6300)   | Quad core processor (Intel Core i5-3470 / AMD FX-8350)
RAM           | 4 GB                                                     | 8 GB
GPU           | NVIDIA GeForce GT 730 / AMD Radeon R7 240 (2 GB VRAM)    | NVIDIA GeForce GTX 960 / AMD Radeon R9 280 (4 GB VRAM)
Disk space    | 5 GB available space                                     | 5 GB available space
DirectX       | Version 11                                               | Version 11
Network       | Broadband internet connection                            | Broadband internet connection
    -

To find out your PC's specs, you can follow these steps (a small script that prints the same information is sketched after the list):

    -
      -
    1. Press the Windows key + R to open the Run dialog box.
    2. -
    3. Type dxdiag and press Enter. This will open the DirectX Diagnostic Tool.
    4. -
    5. On the System tab, you can see your OS, CPU, RAM, and DirectX version.
    6. -
    7. On the Display tab, you can see your GPU and VRAM.
    8. -
    9. On the Sound tab, you can see your audio device and driver.
    10. -
    11. On the Input tab, you can see your keyboard, mouse, and other input devices.
    12. -
    -
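If you would rather script this check than click through dxdiag, the short sketch below prints the OS version, CPU identifier, and installed RAM on Windows using only the Python standard library, and compares the RAM against the 4 GB minimum from the table above. The GPU still has to be read from dxdiag or Device Manager.

```python
import ctypes
import platform

def total_ram_gb() -> float:
    """Total physical memory via the Win32 GlobalMemoryStatusEx call (Windows only)."""
    class MEMORYSTATUSEX(ctypes.Structure):
        _fields_ = [
            ("dwLength", ctypes.c_ulong),
            ("dwMemoryLoad", ctypes.c_ulong),
            ("ullTotalPhys", ctypes.c_ulonglong),
            ("ullAvailPhys", ctypes.c_ulonglong),
            ("ullTotalPageFile", ctypes.c_ulonglong),
            ("ullAvailPageFile", ctypes.c_ulonglong),
            ("ullTotalVirtual", ctypes.c_ulonglong),
            ("ullAvailVirtual", ctypes.c_ulonglong),
            ("ullAvailExtendedVirtual", ctypes.c_ulonglong),
        ]
    status = MEMORYSTATUSEX()
    status.dwLength = ctypes.sizeof(MEMORYSTATUSEX)
    ctypes.windll.kernel32.GlobalMemoryStatusEx(ctypes.byref(status))
    return status.ullTotalPhys / 1024 ** 3

if __name__ == "__main__":
    print("OS :", platform.system(), platform.release(), platform.version())
    print("CPU:", platform.processor())
    ram = total_ram_gb()
    print("RAM: %.1f GB (meets the 4 GB minimum: %s)" % (ram, ram >= 4))
```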

    To compare your PC's specs to the game's requirements, you can use a website like https://www.systemrequirementslab.com/cyri. This website can scan your PC and tell you if you can run the game or not. You can also see how well your PC can handle the game's graphics and performance settings.

    -

    Conclusion

    -

    Summoners War: Sky Arena is a fun and addictive game that lets you collect and battle with over 1,000 different monsters. You can also enjoy various modes of gameplay, such as story missions, dungeons, raids, arena battles, guild wars, world boss fights, and special events. You can also join a guild and chat with other players from around the world.

    -

    If you want to play this game on PC, you have several options to download and install it. You can use the Microsoft Store app, a direct download from the official website, or a third-party platform like Steam or Epic Games. You can also check your PC specs and compatibility before downloading the game to make sure that you can run it smoothly.

    -

    So what are you waiting for? Download Summoners War: Sky Arena on PC today and start summoning your monsters!

    -

    FAQs

    -

    Q: Is Summoners War: Sky Arena free to play?

    -

    A: Yes, Summoners War: Sky Arena is free to play. However, it does offer in-app purchases that can enhance your gameplay experience. You can buy items such as crystals, mana stones, energy, scrolls, packs, and bundles with real money. You can also watch ads or complete surveys to earn free rewards.

    -

    Q: Can I play Summoners War: Sky Arena offline?

    -

    A: No, Summoners War: Sky Arena requires an internet connection to play. You need to connect to the game's servers to access your account, data, and content. You also need an internet connection to participate in online modes and events.

    -

    Q: Can I transfer my Summoners War: Sky Arena account from mobile to PC or vice versa?

    -

    A: Yes, you can transfer your Summoners War: Sky Arena account from mobile to PC or vice versa. You just need to link your account to a Com2uS ID or a social media account such as Facebook or Google. Then, you can log in with the same account on any device and continue your progress.

    -

    Q: How do I update Summoners War: Sky Arena on PC?

    -

A: The method of updating Summoners War: Sky Arena on PC depends on how you downloaded it. If you downloaded it from the Microsoft Store app, you can check for updates in the Downloads and updates section of the app. If you downloaded it from the official website or a third-party platform, you can check for updates in the game's launcher or on its website.

    -

    Q: How do I contact Summoners War: Sky Arena customer support?

    -

    A: If you have any issues or questions about Summoners War: Sky Arena, you can contact the customer support team by following these steps:

    -
      -
    1. Open the game and go to the Settings menu.
    2. -
    3. Select Support and then Inquiry.
    4. -
    5. Fill out the form with your details and issue description.
    6. -
    7. Submit the form and wait for a response from the customer support team.
    8. -

    -
    -
    \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/eFootball PES 2021 APK How to Download and Play the Best Soccer Game on Android.md b/spaces/congsaPfin/Manga-OCR/logs/eFootball PES 2021 APK How to Download and Play the Best Soccer Game on Android.md deleted file mode 100644 index e5a21a299674eafcf483366279d3e8694f078fcc..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/eFootball PES 2021 APK How to Download and Play the Best Soccer Game on Android.md +++ /dev/null @@ -1,93 +0,0 @@ - -

    Download eFootball PES 2021 APK: A Guide for Android Users

    -

    If you are a fan of soccer games, you might have heard of eFootball PES 2021, the latest installment of the popular series by Konami. This game offers a realistic and immersive soccer experience on your mobile device, with stunning graphics, official licenses, online multiplayer, and more. In this article, we will show you how to download eFootball PES 2021 APK for Android, and how to enjoy the game on your smartphone or tablet.

    -

    download efootball pes 2021 apk


    Download Ziphttps://urlca.com/2uOdhm



    -

    What is eFootball PES 2021?

    -

    eFootball PES 2021 is a soccer simulation game that lets you play with, against, or as your favorite players and teams from around the world. You can choose from hundreds of clubs and national teams, including FC Barcelona, Manchester United, FC Bayern München, Juventus, AC Milan, and more. You can also relive and recreate some of the most memorable moments from the careers of soccer legends like David Beckham, Francesco Totti, Diego Maradona, and Fernando Torres.

    -

    Features of eFootball PES 2021

    -

    Here are some of the features that make eFootball PES 2021 one of the best soccer games for Android:

    -

    Realistic gameplay and graphics

    -

    eFootball PES 2021 uses the same console gameplay engine that won the Best Sports Game award at E3 2019. It delivers smooth and responsive controls, realistic physics, and lifelike animations. The game also boasts impressive graphics, with detailed player models, stadiums, weather effects, and lighting.

    -

    How to download eFootball PES 2021 for Android devices
    -eFootball PES 2021 APK file download link and installation guide
    -Download eFootball PES 2021 and play with your friends online
    -eFootball PES 2021 latest version APK free download for Android
    -Best features of eFootball PES 2021 mobile game
    -eFootball PES 2021 review: Is it worth downloading?
    -Download eFootball PES 2021 and enjoy realistic football gameplay
    -eFootball PES 2021 APK mod download with unlimited coins and players
    -Download eFootball PES 2021 and relive iconic moments of football legends
    -eFootball PES 2021 APK download size and system requirements
    -Download eFootball PES 2021 and join the official tournaments and leagues
    -eFootball PES 2021 APK download without OBB file or data
    -Download eFootball PES 2021 and customize your team and players
    -eFootball PES 2021 tips and tricks: How to win every match
    -Download eFootball PES 2021 and get exclusive rewards and bonuses
    -eFootball PES 2021 APK download with commentary and soundtracks
    -Download eFootball PES 2021 and unlock all licensed clubs and players
    -eFootball PES 2021 cheats and hacks: How to get free coins and GP
    -Download eFootball PES 2021 and update your game with the latest patch
    -eFootball PES 2021 vs FIFA Mobile: Which one is better?
    -Download eFootball PES 2021 and play offline without internet connection
    -eFootball PES 2021 APK download for PC using emulator
    -Download eFootball PES 2021 and experience the new graphics and animations
    -eFootball PES 2021 problems and solutions: How to fix common issues
    -Download eFootball PES 2021 and access the in-game store and shop
    -eFootball PES 2021 APK download for iOS devices (iPhone and iPad)
    -Download eFootball PES 2021 and create your own custom league and tournament
    -eFootball PES 2021 ratings and rankings: Who are the best players in the game?
    -Download eFootball PES 2021 and challenge other players in online matches
    -eFootball PES 2021 APK download from FileHippo [^1^]

    -

    Official licenses and iconic moments

    -

    eFootball PES 2021 features official licenses from some of the top leagues and competitions in Europe and South America, such as LaLiga Santander, Serie A TIM, Ligue 1 Uber Eats, UEFA Champions League, UEFA Europa League, Copa Libertadores, and more. You can also enjoy the Iconic Moments series, which lets you play as some of the greatest players in history in their prime.

    -

    Online multiplayer and events

    -

    eFootball PES 2021 allows you to play online matches with your friends or other players from around the world. You can join the eFootball League, a division-based tournament where you can compete for glory and rewards. You can also participate in various events that offer special prizes and challenges.

    -

    How to download eFootball PES 2021 APK for Android?

    -

    If you want to download eFootball PES 2021 APK for Android, you need to follow these steps:

    -

    Requirements and precautions

    -

    Before you download eFootball PES 2021 APK for Android, you need to make sure that your device meets the following requirements:

    -
      -
    • Your device must have Android 7.0 or higher.
    • -
    • Your device must have at least 4 GB of free space.
    • -
    • Your device must have a stable internet connection.
    • -
    -

    You also need to be aware of some precautions:

    -
      -
    • You do not need to have any previous versions of PES installed on your device.
    • -
    • You need to download the APK file from a trusted source.
    • -
    • You need to enable the installation of apps from unknown sources on your device settings.
    • -
    -

    Steps to download and install eFootball PES 2021 APK

    -

Here are the steps to download and install eFootball PES 2021 APK; a sketch for verifying the downloaded file follows the list:

    -
      -
    1. Go to the official website of eFootball PES 2021 and click on the download button for Android.
    2. -
    3. Wait for the APK file to be downloaded on your device.
    4. -
    5. Locate the APK file on your device and tap on it to start the installation process.
    6. -
    7. Follow the instructions on the screen and grant the necessary permissions to the app.
    8. -
    9. Wait for the installation to be completed and launch the app from your home screen or app drawer.
    10. -
    -
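The precaution about trusted sources is easier to act on when the download page publishes a checksum: before step 5 you can hash the file you received and compare the two values. The sketch below assumes a hypothetical published SHA-256 value; neither the file name nor the checksum comes from this article.

```python
import hashlib

APK_PATH = "efootball_pes_2021.apk"                          # placeholder file name
EXPECTED_SHA256 = "<value published on the download page>"   # placeholder checksum

def sha256_of(path: str) -> str:
    """Hash the file in 1 MB chunks so a large APK never has to fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as handle:
        for chunk in iter(lambda: handle.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

if __name__ == "__main__":
    actual = sha256_of(APK_PATH)
    print("SHA-256 of download:", actual)
    print("Matches published value:", actual == EXPECTED_SHA256.lower())
```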

    How to enjoy eFootball PES 2021 on your Android device?

    -

    Once you have downloaded and installed eFootball PES 2021 APK on your Android device, you can start enjoying the game in various ways:

    -

    Create your dream team and customize your playstyle

    -

    eFootball PES 2021 lets you create your own team from scratch, using players from different clubs and national teams. You can also sign players from the market, scout new talents, and train them to improve their skills. You can customize your team's name, logo, kit, formation, tactics, and more. You can also choose your preferred camera angle, control scheme, and difficulty level.

    -

    Compete in various modes and tournaments

    -

    eFootball PES 2021 offers a variety of modes and tournaments for you to test your skills and have fun. You can play offline matches against the AI or local friends, or online matches against players from around the world. You can join the eFootball League, a division-based tournament where you can compete for glory and rewards. You can also participate in various events that offer special prizes and challenges. Some of the events include Matchday, where you can support your favorite team and earn points; Iconic Moment Series, where you can relive and recreate some of the most memorable moments from soccer history; and Featured Players, where you can get access to players with boosted stats and skills.

    -

    Update your game regularly and get rewards

    -

    eFootball PES 2021 is constantly updated with new content and features to keep you engaged and entertained. You can expect new players, teams, kits, stadiums, events, and more. You can also get rewards for logging in daily, completing missions, achieving milestones, and more. Some of the rewards include coins, GP, scouts, trainers, contracts, tickets, and more.

    -

    Conclusion

    -

    eFootball PES 2021 is a soccer simulation game that offers a realistic and immersive soccer experience on your mobile device. It features stunning graphics, official licenses, online multiplayer, and more. You can download eFootball PES 2021 APK for Android by following the steps in this article. You can also enjoy the game by creating your dream team, competing in various modes and tournaments, and updating your game regularly. If you are a soccer fan, you should not miss this game.

    -

    FAQs

    -
      -
    • Q: Is eFootball PES 2021 free to play?
    • -
    • A: Yes, eFootball PES 2021 is free to download and play. However, it contains some in-app purchases that can enhance your gaming experience.
    • -
    • Q: Is eFootball PES 2021 compatible with my device?
    • -
    • A: eFootball PES 2021 requires Android 7.0 or higher and at least 4 GB of free space. You can check the compatibility of your device on the official website of eFootball PES 2021.
    • -
    • Q: How can I contact the support team of eFootball PES 2021?
    • -
    • A: You can contact the support team of eFootball PES 2021 by going to Settings > Support > Contact > Inquiry Form. You can also visit the official website of eFootball PES 2021 for more information.
    • -
    • Q: How can I connect my game account to social media?
    • -
    • A: You can connect your game account to social media by going to Settings > Account Linking > Link Data. You can link your game account to Facebook or Google Play Games.
    • -
    • Q: How can I transfer my game data to another device?
    • -
    • A: You can transfer your game data to another device by using the Data Transfer feature. You need to link your game account to Facebook or Google Play Games first. Then, you need to go to Settings > Account Linking > Data Transfer on both devices and follow the instructions.
    • -

    -
    -
    \ No newline at end of file diff --git a/spaces/contluForse/HuggingGPT/assets/CoppercamLicenseCrack.md b/spaces/contluForse/HuggingGPT/assets/CoppercamLicenseCrack.md deleted file mode 100644 index a81a5c995425c271b80c8620ec9448f13e8b1e24..0000000000000000000000000000000000000000 --- a/spaces/contluForse/HuggingGPT/assets/CoppercamLicenseCrack.md +++ /dev/null @@ -1,6 +0,0 @@ -

    CoppercamLicenseCrack


    DOWNLOAD >>>>> https://ssurll.com/2uzwo3



    -
    -
    -
    -

    diff --git a/spaces/contluForse/HuggingGPT/assets/Crazy Little Thing Called Love Full Movie Download 117 - How a 14-Year-Old Girl Wins Her Crushs Heart.md b/spaces/contluForse/HuggingGPT/assets/Crazy Little Thing Called Love Full Movie Download 117 - How a 14-Year-Old Girl Wins Her Crushs Heart.md deleted file mode 100644 index 56cda55bc66ce2be2c174bc4dd7aac8fa0f2a93a..0000000000000000000000000000000000000000 --- a/spaces/contluForse/HuggingGPT/assets/Crazy Little Thing Called Love Full Movie Download 117 - How a 14-Year-Old Girl Wins Her Crushs Heart.md +++ /dev/null @@ -1,12 +0,0 @@ - -

    83. The ultimate destiny of the universe is in the fullness of God, which has already been attained by the risen Christ, the measure of the maturity of all things.[53] Here we can add yet another argument for rejecting every tyrannical and irresponsible domination of human beings over other creatures. The ultimate purpose of other creatures is not to be found in us. Rather, all creatures are moving forward with us and through us towards a common point of arrival, which is God, in that transcendent fullness where the risen Christ embraces and illumines all things. Human beings, endowed with intelligence and love, and drawn by the fullness of Christ, are called to lead all creatures back to their Creator.

    -

    crazy little thing called love full movie download 117


    Download Filehttps://ssurll.com/2uzvBo



    -

    223. Such sobriety, when lived freely and consciously, is liberating. It is not a lesser life or one lived with less intensity. On the contrary, it is a way of living life to the full. In reality, those who enjoy more and live better each moment are those who have given up dipping here and there, always on the look-out for what they do not have. They experience what it means to appreciate each person and each thing, learning familiarity with the simplest things and how to enjoy them. So they are able to shed unsatisfied needs, reducing their obsessiveness and weariness. Even living on little, they can live a lot, above all when they cultivate other pleasures and find satisfaction in fraternal encounters, in service, in developing their gifts, in music and art, in contact with nature, in prayer. Happiness means knowing how to limit some needs which only diminish us, and being open to the many different possibilities which life can offer.

    -

    232. Not everyone is called to engage directly in political life. Society is also enriched by a countless array of organizations which work to promote the common good and to defend the environment, whether natural or urban. Some, for example, show concern for a public place (a building, a fountain, an abandoned monument, a landscape, a square), and strive to protect, restore, improve or beautify it as something belonging to everyone. Around these community actions, relationships develop or are recovered and a new social fabric emerges. Thus, a community can break out of the indifference induced by consumerism. These actions cultivate a shared identity, with a story which can be remembered and handed on. In this way, the world, and the quality of life of the poorest, are cared for, with a sense of solidarity which is at the same time aware that we live in a common home which God has entrusted to us. These community actions, when they express self-giving love, can also become intense spiritual experiences.

    -

    117. When properly understood, cultural diversity is not a threat to Church unity. The Holy Spirit, sent by the Father and the Son, transforms our hearts and enables us to enter into the perfect communion of the blessed Trinity, where all things find their unity. He builds up the communion and harmony of the people of God. The same Spirit is that harmony, just as he is the bond of love between the Father and the Son.[93] It is he who brings forth a rich variety of gifts, while at the same time creating a unity which is never uniformity but a multifaceted and inviting harmony. Evangelization joyfully acknowledges these varied treasures which the Holy Spirit pours out upon the Church. We would not do justice to the logic of the incarnation if we thought of Christianity as monocultural and monotonous. While it is true that some cultures have been closely associated with the preaching of the Gospel and the development of Christian thought, the revealed message is not identified with any of them; its content is transcultural. Hence in the evangelization of new cultures, or cultures which have not received the Christian message, it is not essential to impose a specific cultural form, no matter how beautiful or ancient it may be, together with the Gospel. The message that we proclaim always has a certain cultural dress, but we in the Church can sometimes fall into a needless hallowing of our own culture, and thus show more fanaticism than true evangelizing zeal.

    -

    Yeah, because you can't find examples of many parallels. When you think about it, you hear that guitar, you hear his incredible keyboard skills, and you realize that, at any given moment on that record, he could do that. He could have filled up that record with virtuoso guitar playing, with virtuoso keyboard playing, and with virtuoso singing. But he'll do ten minutes of just drum machine on "Baby, I'm a Star." Then here's another thing to consider. His lyrics. It's not Leonard Cohen, but think about... he's talking about an "us." I would die for you. Let's go crazy. Take me with you. It's a generous record. He's happy to be alive. He's happy to be 24. He clearly loves people. He's not a sexual predator. He's not talking about "I will conquer you," with that braggadocio of young men. There's us, and we're having fun. That's pretty great. Especially when you consider that he's one guy from north Minneapolis, all alone. He was so alone that he created his own competition. He created The Time and Vanity 6, and it was still all him. He played all the instruments, wrote all their songs, did the whole thing, and then had them come in and do the vocals.

    -

    -

    I'll tell you what went to tape. The Linn LM-1 had little faders on it, mixers, and individual outputs on the back. We'd take the kick out, and the snare, and then the hi-hat usually by itself, usually claps by themselves, but then everything else would come out a stereo mix and go into his Roland Boss pedals, the kind you still see today. That would have flanger and chorus in it. You hear a lot of times the hi-hat is chorused, and it's very wide stereo. So are claps and things on many of these songs. So the chorus, the distortion pedal, the Heavy Metal pedal, the [DD digital] delay, and the flanger. He would click them on and off to dial in what he wanted for the drums. Those effects were printed. The effects on his voice and everything else, no. He really loved delays. Students today, I always see them going for reverb. I instruct them, "Wait, delays! Delays happen before reverb happens." We had several of the Lexicon Prime Time [Model 93 digital delays] with stereo in, stereo out. He loved those things. At that time, we had the Lexicon 240L. We had an EMT 245. At Sunset Sound we had the real echo chambers. But mostly it was delays, real echo chamber, and EMT reverb. That all just went to the mix, but not on the multitrack.

    -

    It changed everything for me. They were obsessed with the question of what music is. I had been obsessed with the question of what sound is, and how sound serves music. Now I started thinking about, "What are we doing? And what can be music?" Tommy Jordan was interested in the human/music impulse. Tommy wanted to be able to make music the way a 3-year-old or a 97 year old would make music, if they could. What is a 3-year-old trying to communicate? What does the 97-year-old think? What happens in between? T Bone Burnett [Tape Op #67] called Tommy one of the five most creative people in the music business. I have never before, and never since, worked with anyone that purely inventive, when it came to manipulating pitch, timbre, duration, loudness, and sound. Tommy loved sound effect records and nature sounds. He always had a little portable cassette recorder with him with a built-in microphone. If he raised up a garage door and it made a squeal, and that sounded good and pleasing, he put it in a sampler. You've got a dog trotting across a tile floor; he'd put that microphone close to the ground and hear his claws on the tile floor, going, "Tick tick, tickity tick." He'd loop that. Tommy had me play drums on the song "Gina," on their second album [Sacred Cow]. Gina was the name of my little dog. I said, "Tommy, I don't play drums." He said, "I'll show you how. Just play this." Because I was Gina's mother, the dog's mother, he said, "Be the mother's voice and express the love of this dog. Have it be steady and constant. That's what drums do." And that connection between human expression, expressed with music, was an epiphany. It influenced every record I've done since then. It took me from being someone who was just trying to keep up, to someone who recognized that I have a voice now. I know what to do when I get in the studio. Tommy's partner was Greg Kurstin. Greg, as I'm sure you know well, his career's on fire. He most recently produced Adele's album [25]. From those two guys I learned the art and the craft of record making in a whole new way.

    aaccfb2cb3
    -
    -
    \ No newline at end of file diff --git a/spaces/contluForse/HuggingGPT/assets/Download Race 2 Saif Ali Khan Entry Ringtone and Enjoy the Music.md b/spaces/contluForse/HuggingGPT/assets/Download Race 2 Saif Ali Khan Entry Ringtone and Enjoy the Music.md deleted file mode 100644 index e0bb4ff20153a4cc2eb3137f3740e27c9c9b246c..0000000000000000000000000000000000000000 --- a/spaces/contluForse/HuggingGPT/assets/Download Race 2 Saif Ali Khan Entry Ringtone and Enjoy the Music.md +++ /dev/null @@ -1,6 +0,0 @@ -

    race 2 saif ali khan entry ringtone download


    Download » https://ssurll.com/2uzxBq



    -
    - aaccfb2cb3
    -
    -
    -

    diff --git a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/uniformer/mmcv/fileio/handlers/pickle_handler.py b/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/uniformer/mmcv/fileio/handlers/pickle_handler.py deleted file mode 100644 index b37c79bed4ef9fd8913715e62dbe3fc5cafdc3aa..0000000000000000000000000000000000000000 --- a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/uniformer/mmcv/fileio/handlers/pickle_handler.py +++ /dev/null @@ -1,28 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import pickle - -from .base import BaseFileHandler - - -class PickleHandler(BaseFileHandler): - - str_like = False - - def load_from_fileobj(self, file, **kwargs): - return pickle.load(file, **kwargs) - - def load_from_path(self, filepath, **kwargs): - return super(PickleHandler, self).load_from_path( - filepath, mode='rb', **kwargs) - - def dump_to_str(self, obj, **kwargs): - kwargs.setdefault('protocol', 2) - return pickle.dumps(obj, **kwargs) - - def dump_to_fileobj(self, obj, file, **kwargs): - kwargs.setdefault('protocol', 2) - pickle.dump(obj, file, **kwargs) - - def dump_to_path(self, obj, filepath, **kwargs): - super(PickleHandler, self).dump_to_path( - obj, filepath, mode='wb', **kwargs) diff --git a/spaces/cozyanduofen/bingo/next.config.js b/spaces/cozyanduofen/bingo/next.config.js deleted file mode 100644 index 0e6ccd7fbc91d0459eaaff3e968ce0556789c605..0000000000000000000000000000000000000000 --- a/spaces/cozyanduofen/bingo/next.config.js +++ /dev/null @@ -1,38 +0,0 @@ -/** @type {import('next').NextConfig} */ -const nextConfig = { - // output: 'export', - // assetPrefix: '.', - webpack: (config, { isServer }) => { - if (!isServer) { - config.resolve = { - ...config.resolve, - fallback: { - 'bufferutil': false, - 'utf-8-validate': false, - http: false, - https: false, - stream: false, - // fixes proxy-agent dependencies - net: false, - dns: false, - tls: false, - assert: false, - // fixes next-i18next dependencies - path: false, - fs: false, - // fixes mapbox dependencies - events: false, - // fixes sentry dependencies - process: false - } - }; - } - config.module.exprContextCritical = false; - - return config; - }, -} - -module.exports = (...args) => { - return nextConfig -} diff --git a/spaces/cxeep/PaddleOCR/README.md b/spaces/cxeep/PaddleOCR/README.md deleted file mode 100644 index a45376a76b706be52d879c24cbd8befb223560bd..0000000000000000000000000000000000000000 --- a/spaces/cxeep/PaddleOCR/README.md +++ /dev/null @@ -1,34 +0,0 @@ ---- -title: PaddleOCR -emoji: ⚡ -colorFrom: pink -colorTo: green -sdk: gradio -app_file: app.py -pinned: false -duplicated_from: PaddlePaddle/PaddleOCR ---- - -# Configuration - -`title`: _string_ -Display title for the Space - -`emoji`: _string_ -Space emoji (emoji-only character allowed) - -`colorFrom`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`colorTo`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`sdk`: _string_ -Can be either `gradio` or `streamlit` - -`app_file`: _string_ -Path to your main application file (which contains either `gradio` or `streamlit` Python code). -Path is relative to the root of the repository. - -`pinned`: _boolean_ -Whether the Space stays on top of your list. 
diff --git a/spaces/cymic/Talking_Head_Anime_3/tha3/poser/poser.py b/spaces/cymic/Talking_Head_Anime_3/tha3/poser/poser.py deleted file mode 100644 index e0deeb1fa58f0a47083d44ec4d820ba86ce21da5..0000000000000000000000000000000000000000 --- a/spaces/cymic/Talking_Head_Anime_3/tha3/poser/poser.py +++ /dev/null @@ -1,158 +0,0 @@ -from abc import ABC, abstractmethod -from enum import Enum -from typing import Tuple, List, Optional - -import torch -from torch import Tensor - - -class PoseParameterCategory(Enum): - EYEBROW = 1 - EYE = 2 - IRIS_MORPH = 3 - IRIS_ROTATION = 4 - MOUTH = 5 - FACE_ROTATION = 6 - BODY_ROTATION = 7 - BREATHING = 8 - - -class PoseParameterGroup: - def __init__(self, - group_name: str, - parameter_index: int, - category: PoseParameterCategory, - arity: int = 1, - discrete: bool = False, - default_value: float = 0.0, - range: Optional[Tuple[float, float]] = None): - assert arity == 1 or arity == 2 - if range is None: - range = (0.0, 1.0) - if arity == 1: - parameter_names = [group_name] - else: - parameter_names = [group_name + "_left", group_name + "_right"] - assert len(parameter_names) == arity - - self.parameter_names = parameter_names - self.range = range - self.default_value = default_value - self.discrete = discrete - self.arity = arity - self.category = category - self.parameter_index = parameter_index - self.group_name = group_name - - def get_arity(self) -> int: - return self.arity - - def get_group_name(self) -> str: - return self.group_name - - def get_parameter_names(self) -> List[str]: - return self.parameter_names - - def is_discrete(self) -> bool: - return self.discrete - - def get_range(self) -> Tuple[float, float]: - return self.range - - def get_default_value(self): - return self.default_value - - def get_parameter_index(self): - return self.parameter_index - - def get_category(self) -> PoseParameterCategory: - return self.category - - -class PoseParameters: - def __init__(self, pose_parameter_groups: List[PoseParameterGroup]): - self.pose_parameter_groups = pose_parameter_groups - - def get_parameter_index(self, name: str) -> int: - index = 0 - for parameter_group in self.pose_parameter_groups: - for param_name in parameter_group.parameter_names: - if name == param_name: - return index - index += 1 - raise RuntimeError("Cannot find parameter with name %s" % name) - - def get_parameter_name(self, index: int) -> str: - assert index >= 0 and index < self.get_parameter_count() - - for group in self.pose_parameter_groups: - if index < group.get_arity(): - return group.get_parameter_names()[index] - index -= group.arity - - raise RuntimeError("Something is wrong here!!!") - - def get_pose_parameter_groups(self): - return self.pose_parameter_groups - - def get_parameter_count(self): - count = 0 - for group in self.pose_parameter_groups: - count += group.arity - return count - - class Builder: - def __init__(self): - self.index = 0 - self.pose_parameter_groups = [] - - def add_parameter_group(self, - group_name: str, - category: PoseParameterCategory, - arity: int = 1, - discrete: bool = False, - default_value: float = 0.0, - range: Optional[Tuple[float, float]] = None): - self.pose_parameter_groups.append( - PoseParameterGroup( - group_name, - self.index, - category, - arity, - discrete, - default_value, - range)) - self.index += arity - return self - - def build(self) -> 'PoseParameters': - return PoseParameters(self.pose_parameter_groups) - - -class Poser(ABC): - @abstractmethod - def get_image_size(self) -> int: - pass - - @abstractmethod - def 
get_output_length(self) -> int: - pass - - @abstractmethod - def get_pose_parameter_groups(self) -> List[PoseParameterGroup]: - pass - - @abstractmethod - def get_num_parameters(self) -> int: - pass - - @abstractmethod - def pose(self, image: Tensor, pose: Tensor, output_index: int = 0) -> Tensor: - pass - - @abstractmethod - def get_posing_outputs(self, image: Tensor, pose: Tensor) -> List[Tensor]: - pass - - def get_dtype(self) -> torch.dtype: - return torch.float diff --git a/spaces/dakaiye/dky_xuexi/crazy_functions/test_project/cpp/cppipc/pool_alloc.cpp b/spaces/dakaiye/dky_xuexi/crazy_functions/test_project/cpp/cppipc/pool_alloc.cpp deleted file mode 100644 index c94575903bdf2eef71ecbe66382375552446e510..0000000000000000000000000000000000000000 --- a/spaces/dakaiye/dky_xuexi/crazy_functions/test_project/cpp/cppipc/pool_alloc.cpp +++ /dev/null @@ -1,17 +0,0 @@ -#include "libipc/pool_alloc.h" - -#include "libipc/memory/resource.h" - -namespace ipc { -namespace mem { - -void* pool_alloc::alloc(std::size_t size) { - return async_pool_alloc::alloc(size); -} - -void pool_alloc::free(void* p, std::size_t size) { - async_pool_alloc::free(p, size); -} - -} // namespace mem -} // namespace ipc diff --git a/spaces/dawood17/SayBot_Enchancer/CodeFormer/basicsr/models/__init__.py b/spaces/dawood17/SayBot_Enchancer/CodeFormer/basicsr/models/__init__.py deleted file mode 100644 index 00bde45f003698a5b15d3517ae47b59ef1d86e0c..0000000000000000000000000000000000000000 --- a/spaces/dawood17/SayBot_Enchancer/CodeFormer/basicsr/models/__init__.py +++ /dev/null @@ -1,30 +0,0 @@ -import importlib -from copy import deepcopy -from os import path as osp - -from basicsr.utils import get_root_logger, scandir -from basicsr.utils.registry import MODEL_REGISTRY - -__all__ = ['build_model'] - -# automatically scan and import model modules for registry -# scan all the files under the 'models' folder and collect files ending with -# '_model.py' -model_folder = osp.dirname(osp.abspath(__file__)) -model_filenames = [osp.splitext(osp.basename(v))[0] for v in scandir(model_folder) if v.endswith('_model.py')] -# import all the model modules -_model_modules = [importlib.import_module(f'basicsr.models.{file_name}') for file_name in model_filenames] - - -def build_model(opt): - """Build model from options. - - Args: - opt (dict): Configuration. It must constain: - model_type (str): Model type. - """ - opt = deepcopy(opt) - model = MODEL_REGISTRY.get(opt['model_type'])(opt) - logger = get_root_logger() - logger.info(f'Model [{model.__class__.__name__}] is created.') - return model diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/PIL/GribStubImagePlugin.py b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/PIL/GribStubImagePlugin.py deleted file mode 100644 index 8a799f19caac706a880218af257f40e9a386b489..0000000000000000000000000000000000000000 --- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/PIL/GribStubImagePlugin.py +++ /dev/null @@ -1,73 +0,0 @@ -# -# The Python Imaging Library -# $Id$ -# -# GRIB stub adapter -# -# Copyright (c) 1996-2003 by Fredrik Lundh -# -# See the README file for information on usage and redistribution. -# - -from . import Image, ImageFile - -_handler = None - - -def register_handler(handler): - """ - Install application-specific GRIB image handler. - - :param handler: Handler object. 
- """ - global _handler - _handler = handler - - -# -------------------------------------------------------------------- -# Image adapter - - -def _accept(prefix): - return prefix[:4] == b"GRIB" and prefix[7] == 1 - - -class GribStubImageFile(ImageFile.StubImageFile): - format = "GRIB" - format_description = "GRIB" - - def _open(self): - offset = self.fp.tell() - - if not _accept(self.fp.read(8)): - msg = "Not a GRIB file" - raise SyntaxError(msg) - - self.fp.seek(offset) - - # make something up - self.mode = "F" - self._size = 1, 1 - - loader = self._load() - if loader: - loader.open(self) - - def _load(self): - return _handler - - -def _save(im, fp, filename): - if _handler is None or not hasattr(_handler, "save"): - msg = "GRIB save handler not installed" - raise OSError(msg) - _handler.save(im, fp, filename) - - -# -------------------------------------------------------------------- -# Registry - -Image.register_open(GribStubImageFile.format, GribStubImageFile, _accept) -Image.register_save(GribStubImageFile.format, _save) - -Image.register_extension(GribStubImageFile.format, ".grib") diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/fontTools/pens/transformPen.py b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/fontTools/pens/transformPen.py deleted file mode 100644 index 2e572f612e6a29d0a782a0b278deaed9f98f5127..0000000000000000000000000000000000000000 --- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/fontTools/pens/transformPen.py +++ /dev/null @@ -1,111 +0,0 @@ -from fontTools.pens.filterPen import FilterPen, FilterPointPen - - -__all__ = ["TransformPen", "TransformPointPen"] - - -class TransformPen(FilterPen): - - """Pen that transforms all coordinates using a Affine transformation, - and passes them to another pen. - """ - - def __init__(self, outPen, transformation): - """The 'outPen' argument is another pen object. It will receive the - transformed coordinates. The 'transformation' argument can either - be a six-tuple, or a fontTools.misc.transform.Transform object. - """ - super(TransformPen, self).__init__(outPen) - if not hasattr(transformation, "transformPoint"): - from fontTools.misc.transform import Transform - - transformation = Transform(*transformation) - self._transformation = transformation - self._transformPoint = transformation.transformPoint - self._stack = [] - - def moveTo(self, pt): - self._outPen.moveTo(self._transformPoint(pt)) - - def lineTo(self, pt): - self._outPen.lineTo(self._transformPoint(pt)) - - def curveTo(self, *points): - self._outPen.curveTo(*self._transformPoints(points)) - - def qCurveTo(self, *points): - if points[-1] is None: - points = self._transformPoints(points[:-1]) + [None] - else: - points = self._transformPoints(points) - self._outPen.qCurveTo(*points) - - def _transformPoints(self, points): - transformPoint = self._transformPoint - return [transformPoint(pt) for pt in points] - - def closePath(self): - self._outPen.closePath() - - def endPath(self): - self._outPen.endPath() - - def addComponent(self, glyphName, transformation): - transformation = self._transformation.transform(transformation) - self._outPen.addComponent(glyphName, transformation) - - -class TransformPointPen(FilterPointPen): - """PointPen that transforms all coordinates using a Affine transformation, - and passes them to another PointPen. 
- - >>> from fontTools.pens.recordingPen import RecordingPointPen - >>> rec = RecordingPointPen() - >>> pen = TransformPointPen(rec, (2, 0, 0, 2, -10, 5)) - >>> v = iter(rec.value) - >>> pen.beginPath(identifier="contour-0") - >>> next(v) - ('beginPath', (), {'identifier': 'contour-0'}) - >>> pen.addPoint((100, 100), "line") - >>> next(v) - ('addPoint', ((190, 205), 'line', False, None), {}) - >>> pen.endPath() - >>> next(v) - ('endPath', (), {}) - >>> pen.addComponent("a", (1, 0, 0, 1, -10, 5), identifier="component-0") - >>> next(v) - ('addComponent', ('a', ), {'identifier': 'component-0'}) - """ - - def __init__(self, outPointPen, transformation): - """The 'outPointPen' argument is another point pen object. - It will receive the transformed coordinates. - The 'transformation' argument can either be a six-tuple, or a - fontTools.misc.transform.Transform object. - """ - super().__init__(outPointPen) - if not hasattr(transformation, "transformPoint"): - from fontTools.misc.transform import Transform - - transformation = Transform(*transformation) - self._transformation = transformation - self._transformPoint = transformation.transformPoint - - def addPoint(self, pt, segmentType=None, smooth=False, name=None, **kwargs): - self._outPen.addPoint( - self._transformPoint(pt), segmentType, smooth, name, **kwargs - ) - - def addComponent(self, baseGlyphName, transformation, **kwargs): - transformation = self._transformation.transform(transformation) - self._outPen.addComponent(baseGlyphName, transformation, **kwargs) - - -if __name__ == "__main__": - from fontTools.pens.basePen import _TestPen - - pen = TransformPen(_TestPen(None), (2, 0, 0.5, 2, -10, 0)) - pen.moveTo((0, 0)) - pen.lineTo((0, 100)) - pen.curveTo((50, 75), (60, 50), (50, 25), (0, 0)) - pen.closePath() diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/gradio/components/code.py b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/gradio/components/code.py deleted file mode 100644 index 134877a4036db4c07fa4427c29694ebc680f65aa..0000000000000000000000000000000000000000 --- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/gradio/components/code.py +++ /dev/null @@ -1,157 +0,0 @@ -"""gr.Code() component""" - -from __future__ import annotations - -from typing import Literal - -from gradio_client.documentation import document, set_documentation_group -from gradio_client.serializing import StringSerializable - -from gradio.components.base import IOComponent, _Keywords -from gradio.events import Changeable, Inputable - -set_documentation_group("component") - - -@document("languages") -class Code(Changeable, Inputable, IOComponent, StringSerializable): - """ - Creates a Code editor for entering, editing or viewing code. - Preprocessing: passes a {str} of code into the function. 
- Postprocessing: expects the function to return a {str} of code or a single-elment {tuple}: (string filepath,) - """ - - languages = [ - "python", - "markdown", - "json", - "html", - "css", - "javascript", - "typescript", - "yaml", - "dockerfile", - "shell", - "r", - None, - ] - - def __init__( - self, - value: str | tuple[str] | None = None, - language: Literal[ - "python", - "markdown", - "json", - "html", - "css", - "javascript", - "typescript", - "yaml", - "dockerfile", - "shell", - "r", - ] - | None = None, - *, - lines: int = 5, - label: str | None = None, - interactive: bool | None = None, - show_label: bool | None = None, - container: bool = True, - scale: int | None = None, - min_width: int = 160, - visible: bool = True, - elem_id: str | None = None, - elem_classes: list[str] | str | None = None, - **kwargs, - ): - """ - Parameters: - value: Default value to show in the code editor. If callable, the function will be called whenever the app loads to set the initial value of the component. - language: The language to display the code as. Supported languages listed in `gr.Code.languages`. - label: component name in interface. - interactive: Whether user should be able to enter code or only view it. - show_label: if True, will display label. - container: If True, will place the component in a container - providing some extra padding around the border. - scale: relative width compared to adjacent Components in a Row. For example, if Component A has scale=2, and Component B has scale=1, A will be twice as wide as B. Should be an integer. - min_width: minimum pixel width, will wrap if not sufficient screen space to satisfy this value. If a certain scale value results in this Component being narrower than min_width, the min_width parameter will be respected first. - visible: If False, component will be hidden. - elem_id: An optional string that is assigned as the id of this component in the HTML DOM. Can be used for targeting CSS styles. - elem_classes: An optional list of strings that are assigned as the classes of this component in the HTML DOM. Can be used for targeting CSS styles. - """ - assert language in Code.languages, f"Language {language} not supported." 
- self.language = language - self.lines = lines - IOComponent.__init__( - self, - label=label, - interactive=interactive, - show_label=show_label, - container=container, - scale=scale, - min_width=min_width, - visible=visible, - elem_id=elem_id, - elem_classes=elem_classes, - value=value, - **kwargs, - ) - - def get_config(self): - return { - "value": self.value, - "language": self.language, - "lines": self.lines, - **IOComponent.get_config(self), - } - - def postprocess(self, y): - if y is None: - return None - elif isinstance(y, tuple): - with open(y[0]) as file_data: - return file_data.read() - else: - return y.strip() - - @staticmethod - def update( - value: str - | tuple[str] - | None - | Literal[_Keywords.NO_VALUE] = _Keywords.NO_VALUE, - label: str | None = None, - show_label: bool | None = None, - container: bool | None = None, - scale: int | None = None, - min_width: int | None = None, - visible: bool | None = None, - language: Literal[ - "python", - "markdown", - "json", - "html", - "css", - "javascript", - "typescript", - "yaml", - "dockerfile", - "shell", - "r", - ] - | None = None, - interactive: bool | None = None, - ): - return { - "label": label, - "show_label": show_label, - "container": container, - "scale": scale, - "min_width": min_width, - "visible": visible, - "value": value, - "language": language, - "interactive": interactive, - "__type__": "update", - } diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/gradio/templates/cdn/assets/index-f6ff5ad4.js b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/gradio/templates/cdn/assets/index-f6ff5ad4.js deleted file mode 100644 index c1622faa87eff63800d5e96efe5a7f15bf9a77e0..0000000000000000000000000000000000000000 --- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/gradio/templates/cdn/assets/index-f6ff5ad4.js +++ /dev/null @@ -1,2 +0,0 @@ -import{S as R,e as Y,s as q,f as D,g as _,h as L,j as b,n as Z,k as M,m as p,t as N,o as C,Y as V,K as I,x as A,C as T,I as F,P as U,Z as y,p as x,F as S,G as j,w as h,u as H,H as E,V as $,ae as ee,Q as le,R as te,r as G,v as K,E as ne}from"./index-9e76ffee.js";import{B as se}from"./Button-30a08c0b.js";import{B as ae}from"./BlockLabel-9545c6da.js";import{E as ie}from"./Empty-8e3485c0.js";function ce(s){let e,t;return{c(){e=D("svg"),t=D("path"),_(t,"fill","currentColor"),_(t,"d","M4 2H2v26a2 2 0 0 0 2 2h26v-2H4v-3h22v-8H4v-4h14V5H4Zm20 17v4H4v-4ZM16 7v4H4V7Z"),_(e,"xmlns","http://www.w3.org/2000/svg"),_(e,"xmlns:xlink","http://www.w3.org/1999/xlink"),_(e,"aria-hidden","true"),_(e,"role","img"),_(e,"class","iconify iconify--carbon"),_(e,"width","100%"),_(e,"height","100%"),_(e,"preserveAspectRatio","xMidYMid meet"),_(e,"viewBox","0 0 32 32")},m(l,n){L(l,e,n),b(e,t)},p:Z,i:Z,o:Z,d(l){l&&M(e)}}}class W extends R{constructor(e){super(),Y(this,e,null,ce,q,{})}}function P(s,e,t){const l=s.slice();return l[5]=e[t],l[7]=t,l}function Q(s){let e,t=F(s[0].confidences),l=[];for(let n=0;n{n("select",{index:o,value:g.label})};return s.$$set=o=>{"value"in o&&t(0,l=o.value),"color"in o&&t(1,a=o.color),"selectable"in o&&t(2,i=o.selectable)},[l,a,i,n,f]}class re extends R{constructor(e){super(),Y(this,e,oe,fe,q,{value:0,color:1,selectable:2})}}function O(s){let e,t;return e=new ae({props:{Icon:W,label:s[5],disable:s[6]===!1}}),{c(){S(e.$$.fragment)},m(l,n){j(e,l,n),t=!0},p(l,n){const 
a={};n&32&&(a.label=l[5]),n&64&&(a.disable=l[6]===!1),e.$set(a)},i(l){t||(h(e.$$.fragment,l),t=!0)},o(l){H(e.$$.fragment,l),t=!1},d(l){E(e,l)}}}function ue(s){let e,t;return e=new ie({props:{unpadded_box:!0,$$slots:{default:[de]},$$scope:{ctx:s}}}),{c(){S(e.$$.fragment)},m(l,n){j(e,l,n),t=!0},p(l,n){const a={};n&65536&&(a.$$scope={dirty:n,ctx:l}),e.$set(a)},i(l){t||(h(e.$$.fragment,l),t=!0)},o(l){H(e.$$.fragment,l),t=!1},d(l){E(e,l)}}}function _e(s){let e,t;return e=new re({props:{selectable:s[11],value:s[4],color:s[3]}}),e.$on("select",s[14]),{c(){S(e.$$.fragment)},m(l,n){j(e,l,n),t=!0},p(l,n){const a={};n&2048&&(a.selectable=l[11]),n&16&&(a.value=l[4]),n&8&&(a.color=l[3]),e.$set(a)},i(l){t||(h(e.$$.fragment,l),t=!0)},o(l){H(e.$$.fragment,l),t=!1},d(l){E(e,l)}}}function de(s){let e,t;return e=new W({}),{c(){S(e.$$.fragment)},m(l,n){j(e,l,n),t=!0},i(l){t||(h(e.$$.fragment,l),t=!0)},o(l){H(e.$$.fragment,l),t=!1},d(l){E(e,l)}}}function me(s){let e,t,l,n,a,i,f;const o=[s[9]];let g={};for(let c=0;c{u=null}),K());let v=n;n=B(c),n===v?m[n].p(c,d):(G(),H(m[v],1,1,()=>{m[v]=null}),K(),a=m[n],a?a.p(c,d):(a=m[n]=k[n](c),a.c()),h(a,1),a.m(i.parentNode,i))},i(c){f||(h(e.$$.fragment,c),h(u),h(a),f=!0)},o(c){H(e.$$.fragment,c),H(u),H(a),f=!1},d(c){c&&(M(t),M(l),M(i)),E(e,c),u&&u.d(c),m[n].d(c)}}}function be(s){let e,t;return e=new se({props:{test_id:"label",visible:s[2],elem_id:s[0],elem_classes:s[1],container:s[6],scale:s[7],min_width:s[8],padding:!1,$$slots:{default:[me]},$$scope:{ctx:s}}}),{c(){S(e.$$.fragment)},m(l,n){j(e,l,n),t=!0},p(l,[n]){const a={};n&4&&(a.visible=l[2]),n&1&&(a.elem_id=l[0]),n&2&&(a.elem_classes=l[1]),n&64&&(a.container=l[6]),n&128&&(a.scale=l[7]),n&256&&(a.min_width=l[8]),n&73336&&(a.$$scope={dirty:n,ctx:l}),e.$set(a)},i(l){t||(h(e.$$.fragment,l),t=!0)},o(l){H(e.$$.fragment,l),t=!1},d(l){E(e,l)}}}function ge(s,e,t){let l,n,{elem_id:a=""}=e,{elem_classes:i=[]}=e,{visible:f=!0}=e,{color:o=void 0}=e,{value:g={}}=e,{label:u="Label"}=e,{container:k=!0}=e,{scale:m=null}=e,{min_width:B=void 0}=e,{loading_status:c}=e,{show_label:d=!0}=e,{selectable:w=!1}=e;const v=T();function X(r){ne.call(this,s,r)}return s.$$set=r=>{"elem_id"in r&&t(0,a=r.elem_id),"elem_classes"in r&&t(1,i=r.elem_classes),"visible"in r&&t(2,f=r.visible),"color"in r&&t(3,o=r.color),"value"in r&&t(4,g=r.value),"label"in r&&t(5,u=r.label),"container"in r&&t(6,k=r.container),"scale"in r&&t(7,m=r.scale),"min_width"in r&&t(8,B=r.min_width),"loading_status"in r&&t(9,c=r.loading_status),"show_label"in r&&t(10,d=r.show_label),"selectable"in r&&t(11,w=r.selectable)},s.$$.update=()=>{s.$$.dirty&16&&t(13,{confidences:l,label:n}=g,l,(t(12,n),t(4,g))),s.$$.dirty&12288&&v("change")},[a,i,f,o,g,u,k,m,B,c,d,w,n,l,X]}class ve extends R{constructor(e){super(),Y(this,e,ge,be,q,{elem_id:0,elem_classes:1,visible:2,color:3,value:4,label:5,container:6,scale:7,min_width:8,loading_status:9,show_label:10,selectable:11})}}const Le=ve,Me=["static"];export{Le as Component,Me as modes}; -//# sourceMappingURL=index-f6ff5ad4.js.map diff --git a/spaces/declare-lab/tango/diffusers/utils/get_modified_files.py b/spaces/declare-lab/tango/diffusers/utils/get_modified_files.py deleted file mode 100644 index 650c61ccb21eff8407147563b103733b472546cd..0000000000000000000000000000000000000000 --- a/spaces/declare-lab/tango/diffusers/utils/get_modified_files.py +++ /dev/null @@ -1,34 +0,0 @@ -# coding=utf-8 -# Copyright 2023 The HuggingFace Inc. team. 
-# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -# this script reports modified .py files under the desired list of top-level sub-dirs passed as a list of arguments, e.g.: -# python ./utils/get_modified_files.py utils src tests examples -# -# it uses git to find the forking point and which files were modified - i.e. files not under git won't be considered -# since the output of this script is fed into Makefile commands it doesn't print a newline after the results - -import re -import subprocess -import sys - - -fork_point_sha = subprocess.check_output("git merge-base main HEAD".split()).decode("utf-8") -modified_files = subprocess.check_output(f"git diff --name-only {fork_point_sha}".split()).decode("utf-8").split() - -joined_dirs = "|".join(sys.argv[1:]) -regex = re.compile(rf"^({joined_dirs}).*?\.py$") - -relevant_modified_files = [x for x in modified_files if regex.match(x)] -print(" ".join(relevant_modified_files), end="") diff --git a/spaces/dfassaf/newbingChatAI/Dockerfile b/spaces/dfassaf/newbingChatAI/Dockerfile deleted file mode 100644 index 79bad8428d74a1dc4c6c28993899e6f39c788897..0000000000000000000000000000000000000000 --- a/spaces/dfassaf/newbingChatAI/Dockerfile +++ /dev/null @@ -1,34 +0,0 @@ -# Build Stage -# 使用 golang:alpine 作为构建阶段的基础镜像 -FROM golang:alpine AS builder - -# 添加 git,以便之后能从GitHub克隆项目 -RUN apk --no-cache add git - -# 从 GitHub 克隆 go-proxy-bingai 项目到 /workspace/app 目录下 -RUN git clone https://github.com/Harry-zklcdc/go-proxy-bingai.git /workspace/app - -# 设置工作目录为之前克隆的项目目录 -WORKDIR /workspace/app - -# 编译 go 项目。-ldflags="-s -w" 是为了减少编译后的二进制大小 -RUN go build -ldflags="-s -w" -tags netgo -trimpath -o go-proxy-bingai main.go - -# Runtime Stage -# 使用轻量级的 alpine 镜像作为运行时的基础镜像 -FROM alpine - -# 设置工作目录 -WORKDIR /workspace/app - -# 从构建阶段复制编译后的二进制文件到运行时镜像中 -COPY --from=builder /workspace/app/go-proxy-bingai . - -# 设置环境变量,此处为随机字符 -ENV Go_Proxy_BingAI_USER_TOKEN_1="aaheiduiwhuievhuhiqwhvrqwiovhqvhrqwuhqoivrhvioqwhvriqohvqiorvhiqov" - -# 暴露8080端口 -EXPOSE 8080 - -# 容器启动时运行的命令 -CMD ["/workspace/app/go-proxy-bingai"] diff --git a/spaces/diacanFperku/AutoGPT/Bad Piggies 1.3.0 !LINK! Crack [PC] Hack Tooll.md b/spaces/diacanFperku/AutoGPT/Bad Piggies 1.3.0 !LINK! Crack [PC] Hack Tooll.md deleted file mode 100644 index 1fbedcf763f2c02b142269bfca3826fce1a8d94e..0000000000000000000000000000000000000000 --- a/spaces/diacanFperku/AutoGPT/Bad Piggies 1.3.0 !LINK! Crack [PC] Hack Tooll.md +++ /dev/null @@ -1,8 +0,0 @@ -
    -

    Getting into a cyber cafe isn't really a difficult task for most people these days. While it's a great idea for hackers to cover their tracks when they're in a cyber cafe, that's not always a feasible option for them. They could be on a public Wi-Fi network, or they could be behind a router that doesn't allow them to control the network traffic. Either way, there's a good chance hackers can find a computer that they can access.

    -

    When you log into your eBay account, your password is a piece of information that's safe to store in your head. But if you're also storing your credit card info, your birth date, your email address, and possibly even your children's names on that piece of plastic, a hacker could easily get access to all those goodies too. And there are thousands of people out there who would love to get their hands on your birth date. Where do you keep it?

    -

    Bad Piggies 1.3.0 Crack [PC] Hack Tooll


    Download --->>> https://gohhs.com/2uFTCd



    -

    I kid you not when I say that hackers will sniff out your Wi-Fi network and monitor your traffic. They'll oftentimes lag and then make sure you're looking at something before they send you a virus. If you're not careful, they can open up a brand new window and send you a piece of malware. They can hide as a BitTorrent client and try to pass themselves off as a popular download. If you're not watching out for that BitTorrent client, you may unwittingly download a piece of malware. Plus, you may not even realize that it's running in the background.

    -

    Once a hacker has gathered all this data about you, we'll go back to our favorite part of the scenario. They'll use it to hack into your banking, credit card, email, and social media accounts. The free bonuses they're getting are more than they'll ever have to pay for, and they'll get paid every time you click on one of those bonus offers. If that online shopping app really has a deal on a headband for 3 extra bucks, why not?

    899543212b
    -
    -
    \ No newline at end of file diff --git a/spaces/diacanFperku/AutoGPT/Kasumi Rebirth V3.1 Full Version.md b/spaces/diacanFperku/AutoGPT/Kasumi Rebirth V3.1 Full Version.md deleted file mode 100644 index 6a00e75eac02e82dcb1e92664730d46de8f46a46..0000000000000000000000000000000000000000 --- a/spaces/diacanFperku/AutoGPT/Kasumi Rebirth V3.1 Full Version.md +++ /dev/null @@ -1,56 +0,0 @@ -

    Kasumi rebirth v3.1 full version


    Download File > https://gohhs.com/2uFVbC



    - -.0 - -bias ex.3.4 - -bias ex.4 - -.1.3 - -.3 - -.1 - -1 - -.3.1 - -.4.1 - -.4 - -.4.3 - -.1.3 - -.1.1 - -.1.4 - -.1.1 - -.4.4 - -.4.4 - -.4.5 - -.4.5 - -.3.2 - -.3.2 - -.4.6 - -.4.7 - -.4.8 - -.4.9 - -.5 4fefd39f24
    -
    -
    -

    diff --git a/spaces/diacanFperku/AutoGPT/NewsLeecher 3.9 Final(with Keygen).md b/spaces/diacanFperku/AutoGPT/NewsLeecher 3.9 Final(with Keygen).md deleted file mode 100644 index af403feaefa717ca9254a89754c9c9dfbba091fd..0000000000000000000000000000000000000000 --- a/spaces/diacanFperku/AutoGPT/NewsLeecher 3.9 Final(with Keygen).md +++ /dev/null @@ -1,16 +0,0 @@ -

    NewsLeecher 3.9 Final(with keygen)


    Download Ziphttps://gohhs.com/2uFVxG



    -
    -lnTEr-0N_cAPa-W0.NewsLeecher 3.9 Final(with Keygen) Download Pc. NOTE: The Leecher Keygen generator free here works with no problem and without a problem! And it's safe to use. aejujii - -Download and Install Leecher 3.9 Final(with Keygen). NOTE: The Leecher Keygen generator free here works with no problem and without a problem! And it's safe to use. aejujii - -How to Download Leecher 3.9 Final(with Keygen) for Windows? aejujii - -No longer seek virus infections when using Leecher 3.9 Final(with Keygen). The free software you'll find here is safe to use. aejujii - -NewsLeecher 3.9 Final(with Keygen) Download Pc. NEWSLEECHER 3.9 FINAL(WITH KEYGEN): DOWNLOAD: 071427268e. lnTEr-0N_cAPa-W0.NewsLeecher 3.9 Final(with Keygen) Download Pc. NOTE: The Leecher Keygen generator free here works with no problem and without a problem! And it's safe to use. aejujii - -Download and Install Leecher 4fefd39f24
    -
    -
    -

    diff --git a/spaces/diacanFperku/AutoGPT/Pc Tools Performance Toolkit 2.1.0 Keygen Generator VERIFIED.md b/spaces/diacanFperku/AutoGPT/Pc Tools Performance Toolkit 2.1.0 Keygen Generator VERIFIED.md deleted file mode 100644 index 613601916d4d7d884544f2b2addf01aa395bffb2..0000000000000000000000000000000000000000 --- a/spaces/diacanFperku/AutoGPT/Pc Tools Performance Toolkit 2.1.0 Keygen Generator VERIFIED.md +++ /dev/null @@ -1,6 +0,0 @@ -

    pc tools performance toolkit 2.1.0 keygen generator


    Download >>>>> https://gohhs.com/2uFVLX



    -
    -adware-removal-tool, Bitdefender Adware Removal Tool for Mac, 1.1.8918. aegisub, Aegisub ... bino, Bino, 1.6.6. biopassfido, BioPass FIDO2 Manager, 2.1.0. 1fdad05405
    -
    -
    -

    diff --git a/spaces/dquisi/StoryGenerator/README.md b/spaces/dquisi/StoryGenerator/README.md deleted file mode 100644 index eed4db5e68e7e36d26a8fb401f91af13d2fab7be..0000000000000000000000000000000000000000 --- a/spaces/dquisi/StoryGenerator/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: StoryGenerator -emoji: 🚀 -colorFrom: yellow -colorTo: indigo -sdk: gradio -sdk_version: 3.1.5 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/dragonSwing/annotate-anything/GroundingDINO/groundingdino/util/slio.py b/spaces/dragonSwing/annotate-anything/GroundingDINO/groundingdino/util/slio.py deleted file mode 100644 index 72c1f0f7b82cdc931d381feef64fe15815ba657e..0000000000000000000000000000000000000000 --- a/spaces/dragonSwing/annotate-anything/GroundingDINO/groundingdino/util/slio.py +++ /dev/null @@ -1,177 +0,0 @@ -# ========================================================== -# Modified from mmcv -# ========================================================== - -import json -import pickle -from abc import ABCMeta, abstractmethod -from pathlib import Path - -import yaml - -try: - from yaml import CLoader as Loader, CDumper as Dumper -except ImportError: - from yaml import Loader, Dumper - - -# =========================== -# Rigister handler -# =========================== - - -class BaseFileHandler(metaclass=ABCMeta): - @abstractmethod - def load_from_fileobj(self, file, **kwargs): - pass - - @abstractmethod - def dump_to_fileobj(self, obj, file, **kwargs): - pass - - @abstractmethod - def dump_to_str(self, obj, **kwargs): - pass - - def load_from_path(self, filepath, mode="r", **kwargs): - with open(filepath, mode) as f: - return self.load_from_fileobj(f, **kwargs) - - def dump_to_path(self, obj, filepath, mode="w", **kwargs): - with open(filepath, mode) as f: - self.dump_to_fileobj(obj, f, **kwargs) - - -class JsonHandler(BaseFileHandler): - def load_from_fileobj(self, file): - return json.load(file) - - def dump_to_fileobj(self, obj, file, **kwargs): - json.dump(obj, file, **kwargs) - - def dump_to_str(self, obj, **kwargs): - return json.dumps(obj, **kwargs) - - -class PickleHandler(BaseFileHandler): - def load_from_fileobj(self, file, **kwargs): - return pickle.load(file, **kwargs) - - def load_from_path(self, filepath, **kwargs): - return super(PickleHandler, self).load_from_path(filepath, mode="rb", **kwargs) - - def dump_to_str(self, obj, **kwargs): - kwargs.setdefault("protocol", 2) - return pickle.dumps(obj, **kwargs) - - def dump_to_fileobj(self, obj, file, **kwargs): - kwargs.setdefault("protocol", 2) - pickle.dump(obj, file, **kwargs) - - def dump_to_path(self, obj, filepath, **kwargs): - super(PickleHandler, self).dump_to_path(obj, filepath, mode="wb", **kwargs) - - -class YamlHandler(BaseFileHandler): - def load_from_fileobj(self, file, **kwargs): - kwargs.setdefault("Loader", Loader) - return yaml.load(file, **kwargs) - - def dump_to_fileobj(self, obj, file, **kwargs): - kwargs.setdefault("Dumper", Dumper) - yaml.dump(obj, file, **kwargs) - - def dump_to_str(self, obj, **kwargs): - kwargs.setdefault("Dumper", Dumper) - return yaml.dump(obj, **kwargs) - - -file_handlers = { - "json": JsonHandler(), - "yaml": YamlHandler(), - "yml": YamlHandler(), - "pickle": PickleHandler(), - "pkl": PickleHandler(), -} - -# =========================== -# load and dump -# =========================== - - -def is_str(x): - """Whether the input is an string instance. 
- - Note: This method is deprecated since python 2 is no longer supported. - """ - return isinstance(x, str) - - -def slload(file, file_format=None, **kwargs): - """Load data from json/yaml/pickle files. - - This method provides a unified api for loading data from serialized files. - - Args: - file (str or :obj:`Path` or file-like object): Filename or a file-like - object. - file_format (str, optional): If not specified, the file format will be - inferred from the file extension, otherwise use the specified one. - Currently supported formats include "json", "yaml/yml" and - "pickle/pkl". - - Returns: - The content from the file. - """ - if isinstance(file, Path): - file = str(file) - if file_format is None and is_str(file): - file_format = file.split(".")[-1] - if file_format not in file_handlers: - raise TypeError(f"Unsupported format: {file_format}") - - handler = file_handlers[file_format] - if is_str(file): - obj = handler.load_from_path(file, **kwargs) - elif hasattr(file, "read"): - obj = handler.load_from_fileobj(file, **kwargs) - else: - raise TypeError('"file" must be a filepath str or a file-object') - return obj - - -def sldump(obj, file=None, file_format=None, **kwargs): - """Dump data to json/yaml/pickle strings or files. - - This method provides a unified api for dumping data as strings or to files, - and also supports custom arguments for each file format. - - Args: - obj (any): The python object to be dumped. - file (str or :obj:`Path` or file-like object, optional): If not - specified, then the object is dump to a str, otherwise to a file - specified by the filename or file-like object. - file_format (str, optional): Same as :func:`load`. - - Returns: - bool: True for success, False otherwise. - """ - if isinstance(file, Path): - file = str(file) - if file_format is None: - if is_str(file): - file_format = file.split(".")[-1] - elif file is None: - raise ValueError("file_format must be specified since file is None") - if file_format not in file_handlers: - raise TypeError(f"Unsupported format: {file_format}") - - handler = file_handlers[file_format] - if file is None: - return handler.dump_to_str(obj, **kwargs) - elif is_str(file): - handler.dump_to_path(obj, file, **kwargs) - elif hasattr(file, "write"): - handler.dump_to_fileobj(obj, file, **kwargs) - else: - raise TypeError('"file" must be a filename str or a file-object') diff --git a/spaces/ds520/bingo/src/components/ui/alert-dialog.tsx b/spaces/ds520/bingo/src/components/ui/alert-dialog.tsx deleted file mode 100644 index 17fec4d16510328deacc1416569173c97761ef72..0000000000000000000000000000000000000000 --- a/spaces/ds520/bingo/src/components/ui/alert-dialog.tsx +++ /dev/null @@ -1,150 +0,0 @@ -'use client' - -import * as React from 'react' -import * as AlertDialogPrimitive from '@radix-ui/react-alert-dialog' - -import { cn } from '@/lib/utils' -import { buttonVariants } from '@/components/ui/button' - -const AlertDialog = AlertDialogPrimitive.Root - -const AlertDialogTrigger = AlertDialogPrimitive.Trigger - -const AlertDialogPortal = ({ - className, - children, - ...props -}: AlertDialogPrimitive.AlertDialogPortalProps) => ( - -
    - {children} -
    -
    -) -AlertDialogPortal.displayName = AlertDialogPrimitive.Portal.displayName - -const AlertDialogOverlay = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, children, ...props }, ref) => ( - -)) -AlertDialogOverlay.displayName = AlertDialogPrimitive.Overlay.displayName - -const AlertDialogContent = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, ...props }, ref) => ( - - - - -)) -AlertDialogContent.displayName = AlertDialogPrimitive.Content.displayName - -const AlertDialogHeader = ({ - className, - ...props -}: React.HTMLAttributes) => ( -
    -) -AlertDialogHeader.displayName = 'AlertDialogHeader' - -const AlertDialogFooter = ({ - className, - ...props -}: React.HTMLAttributes) => ( -
    -) -AlertDialogFooter.displayName = 'AlertDialogFooter' - -const AlertDialogTitle = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, ...props }, ref) => ( - -)) -AlertDialogTitle.displayName = AlertDialogPrimitive.Title.displayName - -const AlertDialogDescription = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, ...props }, ref) => ( - -)) -AlertDialogDescription.displayName = - AlertDialogPrimitive.Description.displayName - -const AlertDialogAction = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, ...props }, ref) => ( - -)) -AlertDialogAction.displayName = AlertDialogPrimitive.Action.displayName - -const AlertDialogCancel = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, ...props }, ref) => ( - -)) -AlertDialogCancel.displayName = AlertDialogPrimitive.Cancel.displayName - -export { - AlertDialog, - AlertDialogTrigger, - AlertDialogContent, - AlertDialogHeader, - AlertDialogFooter, - AlertDialogTitle, - AlertDialogDescription, - AlertDialogAction, - AlertDialogCancel -} diff --git a/spaces/effluxriad/YouTube-comments-generator/README.md b/spaces/effluxriad/YouTube-comments-generator/README.md deleted file mode 100644 index a647fbbc5df1d687e4763bfa371e28125b230b88..0000000000000000000000000000000000000000 --- a/spaces/effluxriad/YouTube-comments-generator/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: YouTube Comments Generator -emoji: 📈 -colorFrom: indigo -colorTo: gray -sdk: streamlit -sdk_version: 1.17.0 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/emc348/faces-through-time/utils/alignment.py b/spaces/emc348/faces-through-time/utils/alignment.py deleted file mode 100644 index 3d4a6977fc73d70ed12ccf7043289649f3a6f99c..0000000000000000000000000000000000000000 --- a/spaces/emc348/faces-through-time/utils/alignment.py +++ /dev/null @@ -1,114 +0,0 @@ -import numpy as np -import PIL -import PIL.Image -import scipy -import scipy.ndimage -import dlib - - -def get_landmark(filepath, predictor): - """get landmark with dlib - :return: np.array shape=(68, 2) - """ - detector = dlib.get_frontal_face_detector() - - img = dlib.load_rgb_image(filepath) - dets = detector(img, 1) - - for k, d in enumerate(dets): - shape = predictor(img, d) - - t = list(shape.parts()) - a = [] - for tt in t: - a.append([tt.x, tt.y]) - lm = np.array(a) - return lm - - -def align_face(filepath, predictor, output_size): - """ - :param filepath: str - :return: PIL Image - """ - - lm = get_landmark(filepath, predictor) - - lm_chin = lm[0: 17] # left-right - lm_eyebrow_left = lm[17: 22] # left-right - lm_eyebrow_right = lm[22: 27] # left-right - lm_nose = lm[27: 31] # top-down - lm_nostrils = lm[31: 36] # top-down - lm_eye_left = lm[36: 42] # left-clockwise - lm_eye_right = lm[42: 48] # left-clockwise - lm_mouth_outer = lm[48: 60] # left-clockwise - lm_mouth_inner = lm[60: 68] # left-clockwise - - # Calculate auxiliary vectors. - eye_left = np.mean(lm_eye_left, axis=0) - eye_right = np.mean(lm_eye_right, axis=0) - eye_avg = (eye_left + eye_right) * 0.5 - eye_to_eye = eye_right - eye_left - mouth_left = lm_mouth_outer[0] - mouth_right = lm_mouth_outer[6] - mouth_avg = (mouth_left + mouth_right) * 0.5 - eye_to_mouth = mouth_avg - eye_avg - - # Choose oriented crop rectangle. 
- x = eye_to_eye - np.flipud(eye_to_mouth) * [-1, 1] - x /= np.hypot(*x) - x *= max(np.hypot(*eye_to_eye) * 2.0, np.hypot(*eye_to_mouth) * 1.8) - y = np.flipud(x) * [-1, 1] - c = eye_avg + eye_to_mouth * 0.1 - quad = np.stack([c - x - y, c - x + y, c + x + y, c + x - y]) - qsize = np.hypot(*x) * 2 - - # read image - img = PIL.Image.open(filepath) - - transform_size = output_size - enable_padding = True - - # Shrink. - shrink = int(np.floor(qsize / output_size * 0.5)) - if shrink > 1: - rsize = (int(np.rint(float(img.size[0]) / shrink)), int(np.rint(float(img.size[1]) / shrink))) - img = img.resize(rsize, PIL.Image.ANTIALIAS) - quad /= shrink - qsize /= shrink - - # Crop. - border = max(int(np.rint(qsize * 0.1)), 3) - crop = (int(np.floor(min(quad[:, 0]))), int(np.floor(min(quad[:, 1]))), int(np.ceil(max(quad[:, 0]))), - int(np.ceil(max(quad[:, 1])))) - crop = (max(crop[0] - border, 0), max(crop[1] - border, 0), min(crop[2] + border, img.size[0]), - min(crop[3] + border, img.size[1])) - if crop[2] - crop[0] < img.size[0] or crop[3] - crop[1] < img.size[1]: - img = img.crop(crop) - quad -= crop[0:2] - - # Pad. - pad = (int(np.floor(min(quad[:, 0]))), int(np.floor(min(quad[:, 1]))), int(np.ceil(max(quad[:, 0]))), - int(np.ceil(max(quad[:, 1])))) - pad = (max(-pad[0] + border, 0), max(-pad[1] + border, 0), max(pad[2] - img.size[0] + border, 0), - max(pad[3] - img.size[1] + border, 0)) - if enable_padding and max(pad) > border - 4: - pad = np.maximum(pad, int(np.rint(qsize * 0.3))) - img = np.pad(np.float32(img), ((pad[1], pad[3]), (pad[0], pad[2]), (0, 0)), 'reflect') - h, w, _ = img.shape - y, x, _ = np.ogrid[:h, :w, :1] - mask = np.maximum(1.0 - np.minimum(np.float32(x) / pad[0], np.float32(w - 1 - x) / pad[2]), - 1.0 - np.minimum(np.float32(y) / pad[1], np.float32(h - 1 - y) / pad[3])) - blur = qsize * 0.02 - img += (scipy.ndimage.gaussian_filter(img, [blur, blur, 0]) - img) * np.clip(mask * 3.0 + 1.0, 0.0, 1.0) - img += (np.median(img, axis=(0, 1)) - img) * np.clip(mask, 0.0, 1.0) - img = PIL.Image.fromarray(np.uint8(np.clip(np.rint(img), 0, 255)), 'RGB') - quad += pad[:2] - - # Transform. - img = img.transform((transform_size, transform_size), PIL.Image.QUAD, (quad + 0.5).flatten(), PIL.Image.BILINEAR) - if output_size < transform_size: - img = img.resize((output_size, output_size), PIL.Image.ANTIALIAS) - - # Return aligned image. - return img diff --git a/spaces/enzostvs/stable-diffusion-tpu/components/footer.tsx b/spaces/enzostvs/stable-diffusion-tpu/components/footer.tsx deleted file mode 100644 index 4bd5fe02e903fbdbb2f3f521069b6bbc0c995fb6..0000000000000000000000000000000000000000 --- a/spaces/enzostvs/stable-diffusion-tpu/components/footer.tsx +++ /dev/null @@ -1,3 +0,0 @@ -export const Footer = () => ( -