diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/4rabet App for PC How to Get It and Why You Need It.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/4rabet App for PC How to Get It and Why You Need It.md
deleted file mode 100644
index 1e9f3860cda79d20b74e5828ae06fb7b58735e0e..0000000000000000000000000000000000000000
--- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/4rabet App for PC How to Get It and Why You Need It.md
+++ /dev/null
@@ -1,27 +0,0 @@
-
-
How to Download 4rabet App for PC and Enjoy Online Betting and Casino
-
4rabet is a popular online platform that offers sports betting and casino games for Indian players. It has a user-friendly interface, a wide range of markets and events, attractive bonuses and promotions, and convenient payment methods. However, if you want to enjoy 4rabet on your PC, you may run into a problem: the official website only offers apps for Android and iOS devices, not a separate app for PC. So how can you download the 4rabet app for PC and access all its features?
In this article, we will show you how to download the 4rabet app for PC using emulator software that lets you run mobile apps on your computer. We will also explain why you should use the 4rabet app for PC instead of the website version and what benefits you can get from it.
-
Why Use 4rabet App for PC?
-
Using 4rabet app for PC has several advantages over using the website version. Here are some of them:
-
-
You can get the latest version of 4rabet app faster than waiting for the website update. The developers of 4rabet often post new versions of the app on their website or Github, which are then shared by users on Reddit. These versions may have new features, bug fixes, or performance improvements that are not yet available on the website.
-
You can get feedback and advice from other users who have tried the same version of 4rabet app. You can read their comments, reviews, suggestions, and warnings about the download link and the app itself. You can also ask questions or report problems if you encounter any issues with the download or the installation.
-
You can get access to additional plugins or extensions that enhance the functionality of 4rabet app. For example, you can download a plugin that enables QuickSync support for faster encoding on Intel CPUs. You can also download a mod that improves the graphics of Need for Speed: Underground 2 using 4rabet app.
-
You can enjoy a better user experience and performance on your PC. The app is designed to run smoothly and efficiently on mobile devices, so it will work even better on your PC. You can also adjust the app's settings to suit your needs.
-
-
How to Download 4rabet App for PC?
-
Downloading 4rabet app for PC is easy and straightforward. Here are the steps to follow:
-
-
-
Go to r/4raBet and browse through the posts. Look for posts that have a download link or a Github link in the title or the content.
-
Click on the link and check the source. Make sure it is from a reputable website or a verified developer. Avoid links that are from unknown or suspicious sources.
-
Download the file and scan it with your antivirus program before opening it. Make sure it is free of malware or viruses.
-
Install or run the file according to the instructions provided by the developer or the user who posted the link.
-
Enjoy using 4rabet app on your PC!
-
-
Conclusion
-
In this article, we have shown you how to download 4rabet app for PC and why you should use it instead of the website version. Downloading 4rabet app for PC can give you access to the latest version of the app, feedback and advice from other users, additional plugins or extensions that enhance its functionality, and a better user experience and performance on your PC. However, you should always be careful and cautious when downloading anything from the internet. Make sure you check the source, scan the file, and follow the instructions properly.
ddb901b051
-
-
\ No newline at end of file
diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Crack AutoCAD Mechanical 2018 Crack LINK.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Crack AutoCAD Mechanical 2018 Crack LINK.md
deleted file mode 100644
index b7e8a7fc422a37e9c0c4af4021497fac47537980..0000000000000000000000000000000000000000
--- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Crack AutoCAD Mechanical 2018 Crack LINK.md
+++ /dev/null
@@ -1,131 +0,0 @@
-
-
How to Crack AutoCAD Mechanical 2018
-
If you are a mechanical engineer or a designer who needs a powerful and versatile software for creating and editing mechanical drawings, you might have heard of AutoCAD Mechanical 2018. This is one of the most popular and widely used applications in the field of mechanical design and engineering. But what if you don't have enough money to buy a license for this software? Is there a way to use it for free without compromising its quality and functionality? The answer is yes, you can crack AutoCAD Mechanical 2018 and enjoy its full features without paying a dime. In this article, we will show you how to do that in a few simple steps. But before we get into that, let's first understand what AutoCAD Mechanical 2018 is and why you might want to crack it.
AutoCAD Mechanical 2018 is a software that is designed specifically for mechanical engineering and design. It is part of the Autodesk family of products, which are known for their high-quality and innovative solutions for various industries. AutoCAD Mechanical 2018 includes all the features and functions of AutoCAD, plus a comprehensive library of standards-based parts and tools for automating common mechanical drawing tasks. With AutoCAD Mechanical 2018, you can:
-
-
Create accurate and detailed mechanical drawings with ease.
-
Use predefined parts from various standards, such as ANSI, ISO, DIN, JIS, etc.
-
Generate bills of materials (BOMs) and annotations automatically.
-
Edit and modify parts and drawings with intuitive commands and tools.
-
Collaborate and share your work with other engineers and designers using cloud services.
-
Integrate your work with other Autodesk products, such as Inventor, Fusion 360, etc.
-
-
AutoCAD Mechanical 2018 is compatible with Windows 7, Windows 8.1, and Windows 10 operating systems. It also supports both 32-bit and 64-bit architectures.
-
Features and benefits of AutoCAD Mechanical 2018
-
Some of the main features and benefits of AutoCAD Mechanical 2018 are:
-
-
It has a user-friendly interface that is similar to AutoCAD, so you can easily switch between them.
-
It has a powerful drawing engine that can handle complex geometries and large assemblies.
-
It has a smart dimensioning system that can automatically create accurate dimensions based on your drawing context.
-
It has a layer management system that can help you organize your drawings and control their visibility.
-
It has a content browser that can help you find and insert parts from various sources, such as local files, online libraries, etc.
-
It has a design calculation tool that can help you perform various calculations related to mechanical design, such as force, torque, stress, etc.
-
It has a documentation tool that can help you create professional-looking reports and presentations with your drawings.
-
It has a customization tool that can help you tailor the software to your specific needs and preferences.
-
-
System requirements for AutoCAD Mechanical 2018
-
The minimum system requirements for running AutoCAD Mechanical 2018 are:
-
-
| Component | Minimum requirement |
| --- | --- |
| Processor | 1 GHz or faster |
| Memory | 4 GB (32-bit) or 8 GB (64-bit) |
| Disk space | 6 GB |
| Display | 1360 x 768 resolution with True Color |
| Graphics card | Windows display adapter capable of DirectX® 9; DirectX® 11 compliant card recommended |
| Internet connection | Necessary for installation and activation |
-
-
Why do you need to crack AutoCAD Mechanical 2018?
-
If you are wondering why you need to crack AutoCAD Mechanical 2018, there are two main reasons: cost and convenience. Let's explain them in more detail.
-
The advantages of cracking AutoCAD Mechanical 2018
-
The first reason why you might want to crack AutoCAD Mechanical 2018 is cost. As you may know, this software is not cheap. According to the official website of Autodesk, the price of a single-user license for one year is $1,610. That means you have to pay this amount every year if you want to keep using the software. If you want to buy a perpetual license, which means you can use the software forever without paying annual fees, the price is even higher: $4,195. That's a lot of money for most people, especially if you are a student or a freelancer who doesn't have a stable income source. By cracking AutoCAD Mechanical 2018, you can save yourself from these expenses and use the software for free.
-
The risks of cracking AutoCAD Mechanical 2018
-
The second reason why you might want to crack AutoCAD Mechanical 2018 is convenience. As you may know, this software requires an internet connection for installation and activation. That means you have to connect your computer to the internet every time you want to install or activate the software. This can be inconvenient if you don't have access to a reliable internet connection or if you want to use the software offline. By cracking AutoCAD Mechanical 2018, you can bypass this requirement and use the software offline without any hassle.
-
However, before you decide to crack AutoCAD Mechanical 2018, you should also be aware of the risks involved. Cracking any software is illegal and unethical. You are violating the terms and conditions of Autodesk by doing so. You are also exposing your computer to potential viruses and malware that may come with the crack file. You are also losing access to some features and services that are only available for licensed users, such as updates, support, cloud storage, etc. You are also risking legal actions from Autodesk if they find out that you are using their software illegally. Therefore, we do not recommend or endorse cracking AutoCAD Mechanical 2018 or any other software. We are only providing this information for educational purposes only.
-
How to crack AutoCAD Mechanical 2018 for free
-AutoCAD Mechanical 2018 crack download link
-AutoCAD Mechanical 2018 activation code generator
-AutoCAD Mechanical 2018 license key crack
-AutoCAD Mechanical 2018 serial number crack
-AutoCAD Mechanical 2018 keygen crack
-AutoCAD Mechanical 2018 patch crack
-AutoCAD Mechanical 2018 full version crack
-AutoCAD Mechanical 2018 offline installer crack
-AutoCAD Mechanical 2018 xforce crack
-AutoCAD Mechanical 2018 torrent crack
-AutoCAD Mechanical 2018 direct download crack
-AutoCAD Mechanical 2018 portable crack
-AutoCAD Mechanical 2018 cracked by team os
-AutoCAD Mechanical 2018 cracked by cgpersia
-AutoCAD Mechanical 2018 cracked by core x
-AutoCAD Mechanical 2018 cracked by r2r
-AutoCAD Mechanical 2018 cracked by skidrow
-AutoCAD Mechanical 2018 cracked by codex
-AutoCAD Mechanical 2018 cracked by reloaded
-AutoCAD Mechanical 2018 cracked by plaza
-AutoCAD Mechanical 2018 cracked by hoodlum
-AutoCAD Mechanical 2018 cracked by razor1911
-AutoCAD Mechanical 2018 cracked by fitgirl
-AutoCAD Mechanical 2018 cracked by dodi
-AutoCAD Mechanical 2018 crack fix
-AutoCAD Mechanical 2018 crack only
-AutoCAD Mechanical 2018 crack reddit
-AutoCAD Mechanical 2018 crack youtube
-AutoCAD Mechanical 2018 crack forum
-AutoCAD Mechanical 2018 crack blogspot
-AutoCAD Mechanical 2018 crack quora
-AutoCAD Mechanical 2018 crack medium
-AutoCAD Mechanical 2018 crack github
-AutoCAD Mechanical 2018 crack stackoverflow
-AutoCAD Mechanical 2018 crack tutorial
-AutoCAD Mechanical 2018 crack guide
-AutoCAD Mechanical 2018 crack instructions
-AutoCAD Mechanical 2018 crack tips and tricks
-AutoCAD Mechanical 2018 crack review
-AutoCAD Mechanical 2018 crack comparison
-AutoCAD Mechanical 2018 crack alternatives
-AutoCAD Mechanical 2018 crack pros and cons
-AutoCAD Mechanical 2018 crack benefits and drawbacks
-AutoCAD Mechanical 2018 crack features and specifications
-AutoCAD Mechanical 2018 crack system requirements
-AutoCAD Mechanical 2018 crack installation steps
-AutoCAD Mechanical 2018 crack troubleshooting steps
-AutoCAD Mechanical 2018 crack support and helpdesk
-
How to crack AutoCAD Mechanical 2018 step by step
-
If you still want to proceed with cracking AutoCAD Mechanical 2018 despite the risks involved, here are the steps that you need to follow:
-
Step 1: Download the software and the crack file
-
The first step is to download the software and the crack file from reliable sources. You can find many websites that offer these files online, but be careful not to download any files that contain viruses or malware. One of the websites that we found that offers these files is https://iggtech.com/download-x-force-2018/. This website provides both the software installer (in ISO format) and the crack file (in ZIP format) for various Autodesk products, including AutoCAD Mechanical 2018. You can download these files by clicking on their respective links on this website.
-
Step 2: Install the software and disable the internet connection
-
The second step is to install the software on your computer. To do this, you need to mount or extract the ISO file that contains the software installer using an appropriate tool (such as WinRAR or Daemon Tools). Then run setup.exe file from within this folder. Follow the instructions on screen until you reach the product key page. On this page, you need to enter the product key for AutoCAD Mechanical 2018, which is 206J1. You can find the product key for other Autodesk products on the same website. After entering the product key, click Next and follow the rest of the instructions until the installation is complete. The next step is to disable your internet connection. This is important to prevent the software from contacting Autodesk servers and verifying your license. You can do this by unplugging your ethernet cable, turning off your Wi-Fi, or disabling your network adapter from the Control Panel.
Step 3: Run the crack file and generate the product key
-
The third step is to run the crack file that you downloaded in step 1. This file is called X-force 2018 and it is a keygen that can generate product keys for all Autodesk products. To run this file, you need to extract it from the ZIP archive using an appropriate tool (such as WinRAR or 7-Zip). Then right-click on it and choose Run as administrator. You should see a window like this:
-
-X-force 2018 window
-
On this window, you need to select AutoCAD Mechanical 2018 from the drop-down menu and click on Generate. This will create a product key that you will use to activate the software. Copy this product key and keep it somewhere safe.
-
Step 4: Activate the software with the product key
-
The fourth step is to activate the software with the product key that you generated in step 3. To do this, you need to launch AutoCAD Mechanical 2018 on your computer. You should see a window like this:
-
-AutoCAD Mechanical 2018 activation window
-
On this window, you need to click on Enter a Serial Number and then click on I Agree. You should see another window like this:
-
-AutoCAD Mechanical 2018 serial number window
-
On this window, you need to enter any serial number that consists of six groups of four digits each. For example, you can enter 666-69696969 or 111-11111111. Then you need to enter the product key that you copied in step 3. After entering these values, click Next and then click on Close.
-
Step 5: Enjoy the full version of AutoCAD Mechanical 2018
-
The final step is to enjoy the full version of AutoCAD Mechanical 2018 without any limitations or restrictions. You can now use all the features and functions of this software for your mechanical design and engineering projects. You can also update the software if there are any available updates from Autodesk.
-
Conclusion
-
In this article, we have shown you how to crack AutoCAD Mechanical 2018 and use it for free without paying any fees or subscriptions. We have also explained what AutoCAD Mechanical 2018 is and why you might want to crack it. However, we have also warned you about the risks and consequences of cracking any software, which are illegal and unethical. Therefore, we do not recommend or endorse cracking AutoCAD Mechanical 2018 or any other software. We are only providing this information for educational purposes only.
-
FAQs
-
-
What is AutoCAD Mechanical 2018?
-AutoCAD Mechanical 2018 is a software that is designed specifically for mechanical engineering and design. It includes all the features and functions of AutoCAD, plus a comprehensive library of standards-based parts and tools for automating common mechanical drawing tasks.
-
Why do I need to crack AutoCAD Mechanical 2018?
-You might want to crack AutoCAD Mechanical 2018 if you don't have enough money to buy a license for this software or if you want to use it offline without an internet connection.
-
How do I crack AutoCAD Mechanical 2018?
-You can crack AutoCAD Mechanical 2018 by following these steps: download the software and the crack file, install the software and disable the internet connection, run the crack file and generate the product key, activate the software with the product key, and enjoy the full version of AutoCAD Mechanical 2018.
-
What are the risks of cracking AutoCAD Mechanical 2018?
-The risks of cracking AutoCAD Mechanical 2018 are: violating the terms and conditions of Autodesk, exposing your computer to viruses and malware, losing access to some features and services, and risking legal actions from Autodesk.
-
What are the alternatives to cracking AutoCAD Mechanical 2018?
-The alternatives to cracking AutoCAD Mechanical 2018 are: buying a license for this software from Autodesk or an authorized reseller, using a free trial version of this software for a limited time, using a free or open-source software that has similar features and functions.
-
- 0a6ba089eb
-
-
\ No newline at end of file
diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download Imyfone Ibypasser Cracked.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download Imyfone Ibypasser Cracked.md
deleted file mode 100644
index 523cf2d6a3f80a21caeb8e04973298fbb6c61b6a..0000000000000000000000000000000000000000
--- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download Imyfone Ibypasser Cracked.md
+++ /dev/null
@@ -1,18 +0,0 @@
-
-
How to Download and Use iMyFone iBypasser Cracked Version
-
iMyFone iBypasser is a tool that can help you bypass the iCloud activation lock on your iPhone, iPad, or iPod touch. It can also remove the screen lock, Apple ID, and MDM lock from your iOS devices. However, iMyFone iBypasser is not free software, and you need to pay a license fee, which covers a single device.
If you want to use iMyFone iBypasser for free, you might be tempted to download and use a cracked version of it from the internet. However, this is not a good idea, as cracked versions of iMyFone iBypasser can contain malware, viruses, or spyware that can harm your computer or your iOS devices. Moreover, cracked versions of iMyFone iBypasser can also have compatibility issues, bugs, or errors that can affect your bypassing process.
-
Therefore, the best way to use iMyFone iBypasser for free is to download and use the official trial version of it from the official website. The trial version of iMyFone iBypasser allows you to check if your device is supported and preview the bypassing result before you pay. After that, you can either buy a license or uninstall it from your computer.
-
Here are the steps to download and use the trial version of iMyFone iBypasser:
Go to the official iMyFone website and download the iBypasser trial version.
-
Save the file to your computer and run it to start the installation process.
-
Follow the instructions on the screen and complete the installation.
-
Launch iMyFone iBypasser and connect your iOS device to your computer with a USB cable.
-
Follow the steps on the software to check if your device is supported and preview the bypassing result.
-
-
Note: Do not use any crack, patch, keygen, or serial number to activate iMyFone iBypasser, as they can damage your computer or your iOS devices. Always use the official version of iMyFone iBypasser from the official website.
ddb901b051
-
-
\ No newline at end of file
diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download Save Game Marvel Ultimate Alliance 2 Pc All Character Unlock With BEST Full Level.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download Save Game Marvel Ultimate Alliance 2 Pc All Character Unlock With BEST Full Level.md
deleted file mode 100644
index f6782ef3f67edb3143c5fd56d51978367b1b71d5..0000000000000000000000000000000000000000
--- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download Save Game Marvel Ultimate Alliance 2 Pc All Character Unlock With BEST Full Level.md
+++ /dev/null
@@ -1,39 +0,0 @@
-
-
How to Download Save Game Marvel Ultimate Alliance 2 Pc All Character Unlock With Full Level
-
If you are a fan of Marvel comics and video games, you might have played Marvel Ultimate Alliance 2, a role-playing game that lets you create your own team of superheroes and villains. But if you want to enjoy the game to the fullest, you might want to unlock all the characters and level them up to their maximum potential. That's where a save game file comes in handy.
-
A save game file is a data file that stores your progress and settings in a video game. By downloading and using a save game file, you can skip the tedious process of unlocking and leveling up characters, and jump right into the action. In this article, we will show you how to download save game Marvel Ultimate Alliance 2 pc all character unlock with full level, and how to use it on your computer.
-
Download Save Game Marvel Ultimate Alliance 2 Pc All Character Unlock With Full Level
The first step is to find and download a save game file that suits your needs. There are many websites that offer save game files for various video games, but not all of them are reliable or safe. You should always scan any file you download with an antivirus software before opening it.
-
One of the websites that we recommend for downloading save game files is SaveGameFiles.com. This website has a large collection of save game files for various platforms and games, including Marvel Ultimate Alliance 2. You can browse the files by category or search for them by name.
-
To download the save game file for Marvel Ultimate Alliance 2 on PC with all characters unlocked at full level, follow these steps:
Go to SaveGameFiles.com and search for Marvel Ultimate Alliance 2.
-
Scroll down and find the file named "Marvel Ultimate Alliance 2 - All Characters Unlocked + Max Level".
-
Click on the "Download" button next to the file name.
-
Wait for the file to be downloaded to your computer. The file size is about 1 MB.
-
Extract the file from the zip archive using a program like WinRAR or 7-Zip.
-
You should see a folder named "Marvel Ultimate Alliance 2" with two subfolders named "Data" and "Save".
-
-
Step 2: Backup Your Original Save Game File
-
Before you use the downloaded save game file, you should backup your original save game file in case something goes wrong or you want to revert to your previous progress. To backup your original save game file, follow these steps:
-
-
Go to the folder where your Marvel Ultimate Alliance 2 game is installed on your computer. The default location is C:\Program Files (x86)\Activision\Marvel - Ultimate Alliance 2.
-
Find and open the folder named "Data".
-
Copy the folder named "Save" and paste it somewhere safe on your computer, such as your desktop or an external drive.
-
Rename the copied folder to something like "Save Backup" or "Original Save".
-
-
Step 3: Replace Your Original Save Game File with the Downloaded One
-
Now that you have backed up your original save game file, you can replace it with the downloaded one. To do this, follow these steps:
-
-
Go back to the folder where you extracted the downloaded save game file.
-
Copy the folder named "Marvel Ultimate Alliance 2".
-
Go back to the folder where your Marvel Ultimate Alliance 2 game is installed on your computer.
-
Paste the copied folder and overwrite the existing one.
-
You should see a message asking if you want to replace the files in the destination. Click on "Yes" or "Replace All".
-
-
Step 4: Enjoy Your New Save Game File
-
Congratulations! You have successfully downloaded and installed a save game file for Marvel Ultimate Alliance 2 on PC with all characters unlocked at full level. Now you can launch the game and enjoy playing with your fully unlocked, fully leveled roster.
- cec2833e83
-
-
\ No newline at end of file
diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Edius 7 Crack File Download How to Save Money and Ruin Your Computer.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Edius 7 Crack File Download How to Save Money and Ruin Your Computer.md
deleted file mode 100644
index adc435954a1575a807911f1d9e95c9fe84f6cb58..0000000000000000000000000000000000000000
--- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Edius 7 Crack File Download How to Save Money and Ruin Your Computer.md
+++ /dev/null
@@ -1,23 +0,0 @@
-
-
How to Download and Install Edius 7 Crack File for Free
-
Edius 7 is a powerful video editing software that can handle multiple formats and resolutions. It offers a range of features and tools to create professional-looking videos with ease. However, Edius 7 is not free software and requires a license key to activate it. If you want to use Edius 7 without paying for it, you might be tempted to download and install a crack file from the internet. But is it safe and legal to do so? In this article, we will explain what a crack file is, how it works, and what the risks and consequences of using it are.
A crack file is a modified version of an original software file that bypasses or removes the security features that prevent unauthorized use. A crack file can be an executable file, a patch, a keygen, or a serial number generator. A crack file is usually created by hackers or crackers who want to break the protection of a software and distribute it for free or for profit.
-
How does a crack file work?
-
A crack file works by altering the code or data of the original software file in order to disable or fool the activation process. For example, a crack file might replace the original license key verification function with a fake one that always returns true, or it might change the expiration date of the trial version to never expire. A crack file can also inject malicious code into the software that can harm your computer or steal your personal information.
-
What are the risks and consequences of using a crack file?
-
Using a crack file is not only illegal but also dangerous. Here are some of the risks and consequences of using a crack file:
-
-
-
You might violate the intellectual property rights of the software developer and face legal action.
-
You might expose your computer to viruses, malware, spyware, ransomware, or other harmful programs that can damage your system or compromise your security.
-
You might lose your data or files due to corruption, deletion, encryption, or theft by the malicious code in the crack file.
-
You might experience poor performance, errors, crashes, or compatibility issues with the cracked software or other programs on your computer.
-
You might miss out on important updates, bug fixes, features, or support from the software developer.
-
-
How to download and install Edius 7 legally?
-
The best way to download and install Edius 7 is to purchase it from the official website or an authorized reseller. You will get a genuine license key that will activate your software and grant you access to all its features and benefits. You will also get regular updates, technical support, and customer service from the software developer. You will also avoid any legal or ethical issues that might arise from using a crack file.
-
To purchase Edius 7, visit https://www.edius.net/ and choose the edition that suits your needs. You can also download a free trial version for 30 days before you buy. Follow the instructions on the website to complete your order and download your software. Once you have downloaded Edius 7, run the installer and enter your license key when prompted. Enjoy editing your videos with Edius 7!
ddb901b051
-
-
\ No newline at end of file
diff --git a/spaces/1line/AutoGPT/autogpt/chat.py b/spaces/1line/AutoGPT/autogpt/chat.py
deleted file mode 100644
index 1f6bca96eb216c667656b50f131006b83c681065..0000000000000000000000000000000000000000
--- a/spaces/1line/AutoGPT/autogpt/chat.py
+++ /dev/null
@@ -1,175 +0,0 @@
-import time
-
-from openai.error import RateLimitError
-
-from autogpt import token_counter
-from autogpt.config import Config
-from autogpt.llm_utils import create_chat_completion
-from autogpt.logs import logger
-
-cfg = Config()
-
-
-def create_chat_message(role, content):
- """
- Create a chat message with the given role and content.
-
- Args:
- role (str): The role of the message sender, e.g., "system", "user", or "assistant".
- content (str): The content of the message.
-
- Returns:
- dict: A dictionary containing the role and content of the message.
- """
- return {"role": role, "content": content}
-
-
-def generate_context(prompt, relevant_memory, full_message_history, model):
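-    """Build the initial system context (prompt, current time, relevant memory) and
-    return (next_message_to_add_index, current_tokens_used, insertion_index,
-    current_context) for assembling the chat context."""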
- current_context = [
- create_chat_message("system", prompt),
- create_chat_message(
- "system", f"The current time and date is {time.strftime('%c')}"
- ),
- create_chat_message(
- "system",
- f"This reminds you of these events from your past:\n{relevant_memory}\n\n",
- ),
- ]
-
- # Add messages from the full message history until we reach the token limit
- next_message_to_add_index = len(full_message_history) - 1
- insertion_index = len(current_context)
- # Count the currently used tokens
- current_tokens_used = token_counter.count_message_tokens(current_context, model)
- return (
- next_message_to_add_index,
- current_tokens_used,
- insertion_index,
- current_context,
- )
-
-
-# TODO: Change debug from hardcode to argument
-def chat_with_ai(
- prompt, user_input, full_message_history, permanent_memory, token_limit
-):
- """Interact with the OpenAI API, sending the prompt, user input, message history,
- and permanent memory."""
- while True:
- try:
- """
- Interact with the OpenAI API, sending the prompt, user input,
- message history, and permanent memory.
-
- Args:
- prompt (str): The prompt explaining the rules to the AI.
- user_input (str): The input from the user.
- full_message_history (list): The list of all messages sent between the
- user and the AI.
- permanent_memory (Obj): The memory object containing the permanent
- memory.
- token_limit (int): The maximum number of tokens allowed in the API call.
-
- Returns:
- str: The AI's response.
- """
- model = cfg.fast_llm_model # TODO: Change model from hardcode to argument
- # Reserve 1000 tokens for the response
-
- logger.debug(f"Token limit: {token_limit}")
- send_token_limit = token_limit - 1000
-
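-        # Pull the memories most relevant to the last 9 messages (empty if there is no history yet)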
- relevant_memory = (
- ""
- if len(full_message_history) == 0
- else permanent_memory.get_relevant(str(full_message_history[-9:]), 10)
- )
-
- logger.debug(f"Memory Stats: {permanent_memory.get_stats()}")
-
- (
- next_message_to_add_index,
- current_tokens_used,
- insertion_index,
- current_context,
- ) = generate_context(prompt, relevant_memory, full_message_history, model)
-
- while current_tokens_used > 2500:
- # remove memories until we are under 2500 tokens
- relevant_memory = relevant_memory[:-1]
- (
- next_message_to_add_index,
- current_tokens_used,
- insertion_index,
- current_context,
- ) = generate_context(
- prompt, relevant_memory, full_message_history, model
- )
-
- current_tokens_used += token_counter.count_message_tokens(
- [create_chat_message("user", user_input)], model
- ) # Account for user input (appended later)
-
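-        # Walk the message history from newest to oldest, adding messages until the send budget is reached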
- while next_message_to_add_index >= 0:
- # print (f"CURRENT TOKENS USED: {current_tokens_used}")
- message_to_add = full_message_history[next_message_to_add_index]
-
- tokens_to_add = token_counter.count_message_tokens(
- [message_to_add], model
- )
- if current_tokens_used + tokens_to_add > send_token_limit:
- break
-
- # Add the most recent message to the start of the current context,
- # after the two system prompts.
- current_context.insert(
- insertion_index, full_message_history[next_message_to_add_index]
- )
-
- # Count the currently used tokens
- current_tokens_used += tokens_to_add
-
- # Move to the next most recent message in the full message history
- next_message_to_add_index -= 1
-
- # Append user input, the length of this is accounted for above
- current_context.extend([create_chat_message("user", user_input)])
-
- # Calculate remaining tokens
- tokens_remaining = token_limit - current_tokens_used
- # assert tokens_remaining >= 0, "Tokens remaining is negative.
- # This should never happen, please submit a bug report at
- # https://www.github.com/Torantulino/Auto-GPT"
-
- # Debug print the current context
- logger.debug(f"Token limit: {token_limit}")
- logger.debug(f"Send Token Count: {current_tokens_used}")
- logger.debug(f"Tokens remaining for response: {tokens_remaining}")
- logger.debug("------------ CONTEXT SENT TO AI ---------------")
- for message in current_context:
- # Skip printing the prompt
- if message["role"] == "system" and message["content"] == prompt:
- continue
- logger.debug(f"{message['role'].capitalize()}: {message['content']}")
- logger.debug("")
- logger.debug("----------- END OF CONTEXT ----------------")
-
- # TODO: use a model defined elsewhere, so that model can contain
- # temperature and other settings we care about
- assistant_reply = create_chat_completion(
- model=model,
- messages=current_context,
- max_tokens=tokens_remaining,
- )
-
- # Update full message history
- full_message_history.append(create_chat_message("user", user_input))
- full_message_history.append(
- create_chat_message("assistant", assistant_reply)
- )
-
- return assistant_reply
- except RateLimitError:
- # TODO: When we switch to langchain, this is built in
- print("Error: ", "API Rate Limit Reached. Waiting 10 seconds...")
- time.sleep(10)
diff --git a/spaces/1line/AutoGPT/tests/unit/test_browse_scrape_links.py b/spaces/1line/AutoGPT/tests/unit/test_browse_scrape_links.py
deleted file mode 100644
index 0a3340e7397a997da96b8ab9828954230e1a3c20..0000000000000000000000000000000000000000
--- a/spaces/1line/AutoGPT/tests/unit/test_browse_scrape_links.py
+++ /dev/null
@@ -1,118 +0,0 @@
-# Generated by CodiumAI
-
-# Dependencies:
-# pip install pytest-mock
-import pytest
-
-from autogpt.commands.web_requests import scrape_links
-
-"""
-Code Analysis
-
-Objective:
-The objective of the 'scrape_links' function is to scrape hyperlinks from a
-given URL and return them in a formatted way.
-
-Inputs:
-- url: a string representing the URL to be scraped.
-
-Flow:
-1. Send a GET request to the given URL using the requests library and the user agent header from the config file.
-2. Check if the response contains an HTTP error. If it does, return "error".
-3. Parse the HTML content of the response using the BeautifulSoup library.
-4. Remove any script and style tags from the parsed HTML.
-5. Extract all hyperlinks from the parsed HTML using the 'extract_hyperlinks' function.
-6. Format the extracted hyperlinks using the 'format_hyperlinks' function.
-7. Return the formatted hyperlinks.
-
-Outputs:
-- A list of formatted hyperlinks.
-
-Additional aspects:
-- The function uses the 'requests' and 'BeautifulSoup' libraries to send HTTP
-requests and parse HTML content, respectively.
-- The 'extract_hyperlinks' function is called to extract hyperlinks from the parsed HTML.
-- The 'format_hyperlinks' function is called to format the extracted hyperlinks.
-- The function checks for HTTP errors and returns "error" if any are found.
-"""
-
-
-class TestScrapeLinks:
- # Tests that the function returns a list of formatted hyperlinks when
- # provided with a valid url that returns a webpage with hyperlinks.
- def test_valid_url_with_hyperlinks(self):
- url = "https://www.google.com"
- result = scrape_links(url)
- assert len(result) > 0
- assert isinstance(result, list)
- assert isinstance(result[0], str)
-
- # Tests that the function returns correctly formatted hyperlinks when given a valid url.
- def test_valid_url(self, mocker):
- # Mock the requests.get() function to return a response with sample HTML containing hyperlinks
- mock_response = mocker.Mock()
- mock_response.status_code = 200
-        mock_response.text = (
-            '<html><body><a href="https://www.google.com">Google</a></body></html>'
-        )
- mocker.patch("requests.Session.get", return_value=mock_response)
-
- # Call the function with a valid URL
- result = scrape_links("https://www.example.com")
-
- # Assert that the function returns correctly formatted hyperlinks
- assert result == ["Google (https://www.google.com)"]
-
- # Tests that the function returns "error" when given an invalid url.
- def test_invalid_url(self, mocker):
- # Mock the requests.get() function to return an HTTP error response
- mock_response = mocker.Mock()
- mock_response.status_code = 404
- mocker.patch("requests.Session.get", return_value=mock_response)
-
- # Call the function with an invalid URL
- result = scrape_links("https://www.invalidurl.com")
-
- # Assert that the function returns "error"
- assert "Error:" in result
-
- # Tests that the function returns an empty list when the html contains no hyperlinks.
- def test_no_hyperlinks(self, mocker):
- # Mock the requests.get() function to return a response with sample HTML containing no hyperlinks
- mock_response = mocker.Mock()
- mock_response.status_code = 200
-        mock_response.text = "<html><body><p>No hyperlinks here</p></body></html>"
- mocker.patch("requests.Session.get", return_value=mock_response)
-
- # Call the function with a URL containing no hyperlinks
- result = scrape_links("https://www.example.com")
-
- # Assert that the function returns an empty list
- assert result == []
-
- # Tests that scrape_links() correctly extracts and formats hyperlinks from
- # a sample HTML containing a few hyperlinks.
- def test_scrape_links_with_few_hyperlinks(self, mocker):
- # Mock the requests.get() function to return a response with a sample HTML containing hyperlinks
- mock_response = mocker.Mock()
- mock_response.status_code = 200
-        mock_response.text = """
-        <html><body>
-        <a href="https://www.google.com">Google</a>
-        <a href="https://github.com">GitHub</a>
-        <a href="https://www.codium.ai">CodiumAI</a>
-        </body></html>
-        """
- mocker.patch("requests.Session.get", return_value=mock_response)
-
- # Call the function being tested
- result = scrape_links("https://www.example.com")
-
- # Assert that the function returns a list of formatted hyperlinks
- assert isinstance(result, list)
- assert len(result) == 3
- assert result[0] == "Google (https://www.google.com)"
- assert result[1] == "GitHub (https://github.com)"
- assert result[2] == "CodiumAI (https://www.codium.ai)"
diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Arceus X How to Run PC Scripts on Your iOS or Android Phone.md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Arceus X How to Run PC Scripts on Your iOS or Android Phone.md
deleted file mode 100644
index 79a5aab8ed30fa73381cdc4d92aead0e166ecadb..0000000000000000000000000000000000000000
--- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Arceus X How to Run PC Scripts on Your iOS or Android Phone.md
+++ /dev/null
@@ -1,145 +0,0 @@
-
-
Arceus X: How to Download and Play the Ultimate Roblox Mod Menu on iOS
-
If you are a fan of Roblox, you might have heard of Arceus X, a mod menu that allows you to exploit your favorite games with features such as Android LuaU Execution, Infinite Jump, Super Speed, Btools, Script Hub, and more. Arceus X is a first and one of the most widely used Roblox Mod Menu/exploit specially developed for Android. But what if you want to play it on your iOS device? Is it possible to download and install Arceus X on iOS? The answer is yes, and in this article, we will show you how to do it step by step.
-
What is Arceus X?
-
Arceus X is a first Android Roblox Mod Menu/Exploit to improve the gameplay. It allows you to use features such as Android LuaU Execution, Infinite Jump, Super Speed, Btools, Script Hub, More!. Arceus X APK is developed using Node.js, C++, JAVA. It’s an Android application that has floating Menu to execute scripts while you are in the game.
Some of the features that make Arceus X stand out from other Roblox mod menus are:
-
-
Android LuaU Execution: You can run any Lua script on your Android device without any limitations.
-
Infinite Jump: You can jump as high as you want in any game.
-
Super Speed: You can move faster than normal in any game.
-
Btools: You can delete or modify any object in any game.
-
Script Hub: You can access a collection of scripts for various games from the mod menu.
-
More!: You can also use features such as Fly, Noclip, ESP, Aimbot, God Mode, and more.
-
-
Requirements for Arceus X
-
To download and play Arceus X on your iOS device, you will need:
-
-
An iOS device with iOS 10 or later.
-
An Android device or an emulator to get the Arceus X APK file.
-
A file manager app on your iOS device to transfer the APK file.
-
An iOS emulator app on your iOS device to run the APK file.
-
A Roblox account to play the games.
-
-
How to Download Arceus X on iOS
-
Now that you know what Arceus X is and what you need to play it on your iOS device, let's get started with the download process. Here are the steps you need to follow:
-
Step 1: Get the Arceus X APK file
-
The first step is to get the Arceus X APK file from a reliable source. You can either use an Android device or an emulator on your PC to do this. Here are some options for getting the APK file:
-
-
You can download it from the official website of Arceus X. Just click on the download button and complete the verification process. The APK file will be downloaded automatically.
-
You can watch a tutorial video on YouTube that shows you how to download and install Arceus X on your Android device. Just follow the instructions in the video and get the APK file.
You can join the Discord server of Arceus X and ask for the APK file from the developers or other users. You might need to verify your identity and follow some rules to get access to the file.
-
-
Once you have the APK file, you need to transfer it to your iOS device. You can use a USB cable, Bluetooth, Wi-Fi, or any other method that works for you. Just make sure you have a file manager app on your iOS device to locate the APK file.
-
Step 2: Install an iOS emulator
-
The next step is to install an iOS emulator app on your iOS device that can run Android apps. An emulator is a software that mimics the behavior of another device or platform. There are many iOS emulators available on the App Store, but not all of them can run Arceus X smoothly. Here are some of the best iOS emulators that we recommend for Arceus X:
-
-
iAndroid: This is one of the most popular and reliable iOS emulators that can run Android apps without any hassle. It has a simple interface and supports most of the Android features. You can download it from the App Store for free.
-
Cider: This is another iOS emulator that can run Android apps with ease. It has a fast performance and supports many Android games. You can download it from the official website for free.
-
Appetize.io: This is an online iOS emulator that can run Android apps on your browser. You don't need to install anything on your device, just upload the APK file and start playing. It has a high compatibility and supports many Android features. You can use it for free for 100 minutes per month, or upgrade to a paid plan for more time.
-
-
Once you have installed an iOS emulator of your choice, you need to launch it and grant it the necessary permissions to access your device's storage, camera, microphone, etc.
-
Step 3: Run the Arceus X APK file on the emulator
-
The final step is to run the Arceus X APK file on the emulator and start playing. Here are the steps you need to follow:
-
arceus x v3 download tutorial
-arceus x apk official
-arceus x roblox mod menu
-arceus x v3.1.0 public beta
-arceus x android roblox exploit
-arceus x ios 16.0.4 install
-arceus x script executor for mobile
-arceus x apk without linkvertise
-arceus x roblox hack android
-arceus x v3 update download
-arceus x mod menu apk
-arceus x roblox cheat ios
-arceus x verification process
-arceus x apk free download
-arceus x roblox exploit ios
-arceus x v3 mod menu tutorial
-arceus x apk latest version
-arceus x roblox script hub
-arceus x verification bypass
-arceus x apk no ads
-arceus x roblox infinite jump
-arceus x v3 install guide
-arceus x apk direct download
-arceus x roblox super speed
-arceus x verification completed
-arceus x apk no verification
-arceus x roblox btools hack
-arceus x v3 download link
-arceus x apk easy download
-arceus x roblox luau execution
-arceus x verification failed fix
-arceus x apk no root
-arceus x roblox android modding
-arceus x v3 features overview
-arceus x apk fast download
-arceus x roblox exploit features
-arceus x verification steps explained
-arceus x apk safe download
-arceus x roblox pc scripts support
-arceus x v3 release date ios
-arceus x apk working download
-arceus x roblox exploit review
-arceus x verification code generator
-arceus x apk virus free download
-arceus x roblox exploit comparison
-
-
Open the file manager app on your iOS device and locate the Arceus X APK file that you transferred earlier.
-
Tap on the APK file and select the option to open it with the emulator app that you installed.
-
The emulator will launch and install the Arceus X app on its virtual environment.
-
Once the installation is complete, you will see the Arceus X icon on the emulator's home screen.
-
Tap on the icon and log in with your Roblox account credentials.
-
You will see a floating mod menu on your screen with various options to exploit your favorite games.
-
-
Step 4: Enjoy the game
-
Congratulations! You have successfully downloaded and installed Arceus X on your iOS device. Now you can enjoy playing Roblox with unlimited features and fun. You can access the mod menu anytime by tapping on it and selecting the options you want to use. You can also use the script hub to find and execute scripts for different games. Just be careful not to abuse the mod menu or get reported by other players, as you might get banned by Roblox.
-
Tips and Tricks for Arceus X
-
To make the most out of Arceus X, here are some tips and tricks that you should know:
-
How to use the script hub
-
The script hub is a feature that allows you to access a collection of scripts for various games from the mod menu. You can use these scripts to enhance your gameplay or perform certain actions that are not possible otherwise. Here are some steps to use the script hub:
-
-
Tap on the mod menu and select the script hub option.
-
You will see a list of games that have scripts available for them.
-
Select the game that you want to play and tap on it.
-
You will see a list of scripts that you can use for that game.
-
Select the script that you want to use and tap on it.
-
The script will be executed automatically and you will see its effects in the game.
-
-
How to customize the mod menu
-
The mod menu is a feature that allows you to customize various aspects of Arceus X, such as its appearance, position, size, transparency, etc. You can also enable or disable certain features or change their settings according to your preference. Here are some steps to customize the mod menu:
-
-
Tap on the mod menu and select the settings option.
-
You will see a list of options that you can change, such as color, size, position, transparency, etc.
-
Select the option that you want to change and adjust it according to your liking.
-
You can also enable or disable certain features or change their settings by tapping on them.
-
Once you are done, tap on the save button to apply the changes.
-
-
How to avoid getting banned
-
While Arceus X is a fun and powerful mod menu, it is also a risky one. If you use it too much or too blatantly, you might get detected and banned by Roblox. To avoid this, here are some tips that you should follow:
-
-
Use the mod menu sparingly and discreetly. Don't use it in every game or every round. Don't use it in front of other players or moderators. Don't use it to ruin the game for others.
-
Use the anti-ban feature. This feature is designed to prevent Roblox from detecting your mod menu and banning you. It does this by changing your device ID, IP address, and other information that Roblox uses to identify you. You can enable this feature from the mod menu settings.
-
Use a VPN service. A VPN service is a tool that encrypts your internet traffic and hides your IP address and location. This can help you avoid getting banned by Roblox, as they won't be able to trace your activity or location. You can use any VPN service that works for you, but make sure it is reliable and secure.
-
-
Conclusion
-
In this article, we have shown you how to download and play Arceus X on your iOS device. Arceus X is a first Android Roblox Mod Menu/Exploit that allows you to exploit your favorite games with features such as Android LuaU Execution, Infinite Jump, Super Speed, Btools, Script Hub, More!. To play it on your iOS device, you need to get the Arceus X APK file from a reliable source, install an iOS emulator app on your device, run the APK file on the emulator, and enjoy the game. We have also given you some tips and tricks for using Arceus X, such as how to use the script hub, how to customize the mod menu, and how to avoid getting banned. We hope you found this article helpful and informative. If you have any questions or feedback, feel free to leave a comment below.
-
FAQs
-
Here are some of the frequently asked questions about Arceus X:
-
-
Is Arceus X safe to use?
-
Arceus X is safe to use as long as you download it from a trusted source and follow the instructions carefully. However, there is always a risk of getting banned by Roblox if you use it too much or too blatantly. To minimize this risk, use the anti-ban feature and a VPN service.
-
Is Arceus X free to use?
-
Yes, Arceus X is free to use and does not require any payment or subscription. However, you might need to complete some verification steps or watch some ads before downloading it.
-
Does Arceus X work on all games?
-
No, Arceus X does not work on all games. Some games have anti-cheat systems or scripts that prevent Arceus X from working properly. You can check the script hub for the list of games that have scripts available for them.
-
Can I use Arceus X on other devices?
-
Yes, you can use Arceus X on other devices besides iOS. You can use it on Android devices directly without any emulator. You can also use it on PC devices with an Android emulator such as BlueStacks or Nox Player.
-
Where can I get more information about Arceus X?
-
You can get more information about Arceus X from its official website, its YouTube channel, or its Discord server. You can also contact the developers or other users for support or feedback.
- 197e85843d
-
-
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/Download Blockman Go v2.40.1 APK with Unlimited Money and Gems.md b/spaces/1phancelerku/anime-remove-background/Download Blockman Go v2.40.1 APK with Unlimited Money and Gems.md
deleted file mode 100644
index 58e1ca6f6f11f4cffb6eefa34e7c29ddf38adce8..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/Download Blockman Go v2.40.1 APK with Unlimited Money and Gems.md
+++ /dev/null
@@ -1,120 +0,0 @@
-
-
Blockman Go v2.40.1 APK: A Fun and Creative Sandbox Game
-
If you are looking for a game that offers you a variety of fun and creative minigames, as well as a platform to chat and make friends with other players, then you should check out Blockman Go v2.40.1 APK. This is the latest version of the popular sandbox game that has millions of fans around the world.
Blockman Go is a free app that lets you play various block-style minigames with different themes and genres. You can join a game with a single tap, or create your own games with your own rules and settings. You can also chat and make friends with other players in the game, and join clans and parties to play together.
-
A free app with various minigames
-
Blockman Go offers you a wide range of minigames to choose from, such as Bed Wars, Egg Wars, Sky Block, Murder Mystery, Survival Games, Capture Flag, Snowball Battle, Bow Spleef, TNT Run, and many more. Each minigame has its own gameplay, objectives, and challenges that will keep you entertained and engaged.
-
A platform to chat and make friends
-
Blockman Go is not just a game, but also a social platform where you can chat and make friends with other players from all over the world. You can use the chat function to communicate and cooperate with your teammates, or to have fun conversations with other players. You can also join clans and parties to play together, or send gifts and messages to your friends.
-
A way to customize your avatar and show your style
-
Blockman Go allows you to customize your avatar with various items and accessories that you can buy with gold or Gcubes (the premium currency of the game). You can change your hair, eyes, clothes, hats, glasses, masks, wings, tails, pets, weapons, vehicles, and more. You can also create your own skins and upload them to the game. With Blockman Go, you can show your unique style and personality to the world.
-
What's New in Blockman Go v2.40.1 APK?
-
Blockman Go v2.40.1 APK is the latest version of the game that was released on June 20th 2023. This version brings some new minigames and features, as well as some improvements and bug fixes.
-
blockman go 2.40.1 apk download
-blockman go 2.40.1 mod apk
-blockman go 2.40.1 apk free download
-blockman go 2.40.1 apk latest version
-blockman go 2.40.1 apk pure
-blockman go 2.40.1 apk for android
-blockman go 2.40.1 apk hack
-blockman go 2.40.1 apk unlimited money
-blockman go 2.40.1 apk old version
-blockman go 2.40.1 apk offline
-blockman go 2.40.1 apk update
-blockman go 2.40.1 apk no ads
-blockman go 2.40.1 apk premium
-blockman go 2.40.1 apk full version
-blockman go 2.40.1 apk cracked
-blockman go 2.40.1 apk mod menu
-blockman go 2.40.1 apk obb
-blockman go 2.40.1 apk revdl
-blockman go 2.40.1 apk rexdl
-blockman go 2.40.1 apk mirror
-blockman go 2.40.1 apk uptodown
-blockman go 2.40.1 apk apkpure.com[^1^]
-blockman go 2.40.1 apk android oyun club
-blockman go 2.40.1 apk happymod
-blockman go 2.40.1 apk modded
-blockman go 2.40.1 apk original
-blockman go 2.40.1 apk file download
-blockman go 2.40.1 apk install
-blockman go 2.40.1 apk direct link
-blockman go 2.40.1 apk google play
-blockman go 2.40.1 apk data download
-blockman go 2.40.1 apk unlocked all features
-blockman go 2.40.1 apk pro version
-blockman go 2.40.1 apk mega mod
-blockman go 2.40.1 apk cheat codes
-blockman go 2.40.1 apk unlimited cubes
-blockman go 2.40.1 apk unlimited gems
-blockman go 2,4,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,
-
New minigames and features
-
Some of the new minigames and features that are added in this version are:
-
-
Party Street: A new minigame where you can collect graffitis from all over the city and spray them to your heart's content. You can experience this super cool street style in the Party Street and hop into a random party with other players.
-
Blockman Go Studio: A new feature where you can create your own videos and share them with other players. You can use various tools and effects to make your videos more interesting and creative. You can also watch and like other players' videos and get rewards.
-
Blockman Go Music: A new feature where you can listen to music and podcasts from different genres and categories. You can also create your own playlists and share them with your friends. You can also discover new songs and artists from the recommendations and rankings.
-
-
Improved game experience and performance
-
Some of the improvements and optimizations that are made in this version are:
-
-
Reduced loading time and lag: The game has been optimized to reduce the loading time and lag when entering or switching between minigames. The game also has a smoother performance and a better compatibility with different devices.
-
Enhanced graphics and sound effects: The game has been enhanced to improve the graphics and sound effects of the minigames. The game also has a more realistic and immersive atmosphere and a better user interface.
-
Updated content and rewards: The game has been updated to add more content and rewards for the players. The game also has a more balanced and fair gameplay and a better feedback system.
-
-
Bug fixes and optimizations
-
Some of the bug fixes and optimizations that are done in this version are:
-
-
Fixed some crashes and errors: Crashes and errors that occurred in certain minigames and features have been resolved, making the game more stable.
-
Fixed some glitches and exploits: Glitches and exploits that affected gameplay or the fairness of the minigames have been removed.
-
Fixed some typos and translations: Typos and mistranslations in some texts and dialogs have been corrected for clearer, more accurate language.
-
-
How to Download and Install Blockman Go v2.40.1 APK?
-
If you want to download and install Blockman Go v2.40.1 APK on your device, you can follow these simple steps:
-
Download the APK file from a trusted source
-
The first step is to download the APK file from a trusted source, such as [APKPure] or [Uptodown]. You can use the links below to download the file directly:
The file size is about 150 MB, so make sure you have enough space on your device before downloading it.
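If you want to double-check the file before installing it, a short script can confirm the size and compare the SHA-256 hash against whatever checksum the download page publishes. This is only a sketch: the file name and the expected checksum below are placeholders, not real values.
```
import hashlib
import os

apk_path = "blockman-go-2.40.1.apk"  # placeholder: use your downloaded file's name
expected_sha256 = "paste-the-published-checksum-here"  # placeholder

# The download should be roughly the advertised ~150 MB
size_mb = os.path.getsize(apk_path) / (1024 * 1024)
print(f"File size: {size_mb:.1f} MB")

# Hash the file in chunks and compare with the published checksum
digest = hashlib.sha256()
with open(apk_path, "rb") as f:
    for chunk in iter(lambda: f.read(8192), b""):
        digest.update(chunk)
print("Checksum matches:", digest.hexdigest() == expected_sha256)
```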
-
Enable unknown sources on your device settings
-
The second step is to enable unknown sources on your device settings, so that you can install the APK file without any problems. You can do this by following these steps:
-
-
Go to your device settings and look for the security or privacy option.
-
Find the option that says unknown sources or allow installation from unknown sources.
-
Toggle it on or check it to enable it.
-
You may see a warning message about installing apps from unknown sources; confirm it to continue.
-
-
This will allow you to install apps that are not from the official app store, such as Blockman Go v2.40.1 APK.
Install the APK file and enjoy the game
-
The third and final step is to install the APK file and enjoy the game. You can do this by following these steps:
-
-
Locate the APK file that you downloaded on your device storage or file manager.
-
Tap on the file to open it and start the installation process.
-
You may get a prompt asking you to confirm the installation or grant some permissions. Just follow the instructions and allow what is necessary.
-
Wait for the installation to finish and then tap on the open or done button.
-
-
Congratulations, you have successfully installed Blockman Go v2.40.1 APK on your device. You can now launch the game and enjoy the new features and minigames.
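If you would rather sideload the file from a computer, adb can install it over USB. The sketch below assumes the Android platform-tools are installed, USB debugging is enabled on your phone, and the APK file name matches the one you downloaded.
```
import subprocess

apk_path = "blockman-go-2.40.1.apk"  # placeholder file name

# -r reinstalls the app if an older version is already present
result = subprocess.run(["adb", "install", "-r", apk_path],
                        capture_output=True, text=True)
print(result.stdout or result.stderr)
```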
-
Tips and Tricks to Play Blockman Go Better
-
If you want to play Blockman Go better and have more fun, you can use some of these tips and tricks:
-
Choose the right minigame for your preference and skill level
-
Blockman Go has a lot of minigames to choose from, but not all of them may suit your preference or skill level. You can browse through the categories and genres of the minigames and find the ones that you like and are good at. You can also check the ratings, reviews, and descriptions of the minigames to get an idea of what they are about and how to play them.
-
Use the chat function to communicate and cooperate with other players
-
Blockman Go is a social game where you can chat and make friends with other players. Use the chat function to coordinate with your teammates or just to have fun conversations, and add emojis, stickers, voice messages, and GIFs to express yourself. You can also join clans and parties to play together, or send gifts and messages to your friends.
-
Earn gold by playing minigames and use it to buy items and accessories
-
Playing minigames earns you gold, which you can spend on items and accessories for your avatar. You can also earn Gcubes, the game's premium currency, by completing tasks, watching ads, or buying them with real money. Both currencies let you buy items and accessories that make your avatar more stylish and unique, and you can still create your own skins and upload them to the game.
-
Conclusion
-
Blockman Go v2.40.1 APK is a fun and creative sandbox game that offers you a variety of block style minigames with different themes and genres. You can also chat and make friends with other players in the game, and customize your avatar with various items and accessories. You can download and install Blockman Go v2.40.1 APK on your device by following the simple steps above. You can also use some tips and tricks to play Blockman Go better and have more fun.
-
FAQs
-
Here are some frequently asked questions about Blockman Go v2.40.1 APK:
-
-
Q: Is Blockman Go v2.40.1 APK safe to download and install?
A: Yes, Blockman Go v2.40.1 APK is safe to download and install, as long as you download it from a trusted source, such as [APKPure] or [Uptodown]. You should also enable unknown sources on your device settings before installing it.
-
Q: What are the requirements to play Blockman Go v2.40.1 APK?
A: Blockman Go v2.40.1 APK requires Android 4.4 or higher, as well as a stable internet connection. The game also requires about 150 MB of free space on your device.
-
Q: How can I update Blockman Go v2.40.1 APK?
A: You can update Blockman Go v2.40.1 APK by downloading the latest version from a trusted source, such as [APKPure] or [Uptodown], and installing it over the existing one. You can also check for updates within the game settings.
-
Q: How can I contact Blockman Go support team?
A: You can contact the Blockman Go support team by sending an email to blockymods@sandboxol.com, or by visiting their official website at https://www.blockmango.net/.
-
Q: How can I get more gold and Gcubes in Blockman Go?
A: You can get more gold and Gcubes in Blockman Go by playing minigames and completing tasks, watching ads and videos, inviting friends and joining events, or buying them with real money.
-
-
-
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/Enjoy Messenger on Your Desktop with the Official Windows App.md b/spaces/1phancelerku/anime-remove-background/Enjoy Messenger on Your Desktop with the Official Windows App.md
deleted file mode 100644
index 2ad786f80d25fa47da9a15d549cf1b2b13005fd3..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/Enjoy Messenger on Your Desktop with the Official Windows App.md
+++ /dev/null
@@ -1,183 +0,0 @@
-
-
How to Download Messenger for Windows
-
Do you want to stay connected with your friends and family on Messenger, but don't want to use your phone or browser? If so, you might be interested in downloading Messenger for Windows, a desktop app that lets you use Messenger on your PC or Mac. In this article, we will show you how to download, install, and use Messenger for Windows, as well as how to troubleshoot some common issues. Let's get started!
-
What is Messenger for Windows?
-
Messenger for Windows is a desktop app that lets you use Messenger on your Windows or Mac computer. It is similar to the mobile app, but with some additional features that make it more convenient and enjoyable to use on a larger screen. With Messenger for Windows, you can:
Text, audio, and video chat with anyone on Messenger, Facebook, or Instagram.
-
Make group calls with up to 50 people.
-
Share photos, videos, GIFs, stickers, and more.
-
Watch videos together with Watch Together.
-
Play games with your friends.
-
Create rooms where you can hang out with anyone you want.
-
Access your chats across devices.
-
Use dark mode, themes, and other customization options.
-
-
Messenger for Windows is free to download and use. All you need is a Facebook account or a phone number.
-
Why Download Messenger for Windows?
-
There are many benefits of using Messenger for Windows instead of your phone or browser. Here are some of them:
-
-
You can enjoy a larger and clearer view of your chats and calls.
-
You can use your keyboard and mouse to type faster and navigate easier.
-
You can multitask without switching between tabs or apps.
-
You can get notifications on your desktop without opening your browser.
-
You can save battery life on your phone.
-
-
If you want to experience these benefits, read on to learn how to download and install Messenger for Windows.
-
How to Download and Install Messenger for Windows
-
There are three ways to download and install Messenger for Windows: from Messenger.com, from Microsoft Store, or from Mac App Store. We will explain each method below.
-
Download from Messenger.com
-
This is the easiest way to get the app. Here are the steps:
-
Select the app with the blue logo and the name Messenger for macOS.
-
Click on Get.
-
The app will start downloading and installing on your device.
-
Launch the app and log in with your Facebook account or phone number.
-
-
Congratulations, you have successfully downloaded and installed Messenger for Windows from Mac App Store!
-
How to Use Messenger for Windows
-
Now that you have downloaded and installed Messenger for Windows, you might be wondering how to use it effectively. Here are some tips and tricks for using the app:
-
How to Log in and Out
-
To log in to Messenger for Windows, you need to enter your Facebook account or phone number and password. If you don't have a Facebook account, you can create one by clicking on Create New Account. You can also choose to stay logged in by checking the box next to Keep me signed in.
-
To log out of Messenger for Windows, you need to click on your profile picture in the top left corner of the app. Then, click on Log Out. You can also switch accounts by clicking on Switch Account.
-
How to Chat and Call
-
To chat with someone on Messenger for Windows, you need to click on their name in the left sidebar of the app. You can also search for someone by typing their name or phone number in the search bar at the top of the app. You can then type your message in the text box at the bottom of the chat window. You can also send photos, videos, GIFs, stickers, emojis, and more by clicking on the icons next to the text box.
-
To make a voice or video call with someone on Messenger for Windows, you need to click on their name in the left sidebar of the app. Then, click on the phone or camera icon at the top right corner of the chat window. You can also join a group call by clicking on the group name in the left sidebar of the app. Then, click on Join Call. You can also create a room where you can invite anyone you want to join by clicking on Create a Room.
-
How to Manage Notifications and Settings
-
To manage your notifications and settings on Messenger for Windows, you need to click on your profile picture in the top left corner of the app. Then, click on Preferences. You can then customize your preferences and privacy options, such as:
-
-
Mute notifications for specific chats or all chats.
-
Show or hide message previews.
-
Show or hide active status.
-
Show or hide chat heads.
-
Change theme, color, emoji, or nickname for specific chats.
-
Block or report someone.
-
Delete or archive a chat.
-
Edit your profile information.
-
Switch between light mode and dark mode.
-
-
Troubleshooting Messenger for Windows
-
Sometimes, you might encounter some issues while using Messenger for Windows. Here are some common issues and solutions for using the app:
-
How to Update the App
-
To update Messenger for Windows, you need to follow these steps:
The latest version of the app will automatically download based on the desktop device you are using. If it doesn't start automatically, click on Restart.
-
Once the download is complete, click on the installer file to run it.
-
Follow the instructions on the screen to complete the installation.
-
Launch the app and log in with your Facebook account or phone number.
-
-
You can also check for updates manually by clicking on your profile picture in the top left corner of the app. Then, click on About Messenger. If there is an update available, you will see a notification and a button to download it.
-
How to Uninstall the App
-
To uninstall Messenger for Windows, you need to follow these steps:
-
-
Close the app if it is running.
-
Go to Control Panel on your Windows device or Finder on your Mac device.
-
Select Programs and Features on Windows or Applications on Mac.
-
Find and select Messenger for Windows.
-
Click on Uninstall on Windows or drag the app to the Trash on Mac.
-
Follow the instructions on the screen to complete the uninstallation.
-
-
Note that uninstalling the app will not delete your chats or account. You can still access them on your phone or browser.
-
How to Contact Support
-
If you have any questions or issues that are not covered in this article, you can contact the Messenger support team or community for help. Here are some ways to do that:
-
-
Go to Messenger Help Center. You can find answers to common questions, report a problem, or give feedback.
-
Go to Messenger Community Forum. You can join discussions with other users, ask questions, or share tips and tricks.
-
Contact Messenger on Facebook or Twitter. You can send a message or tweet to their official accounts and get a response from their support team.
-
-
Conclusion
-
Messenger for Windows is a great way to stay connected with your friends and family on Messenger, without using your phone or browser. It offers many features and benefits that make it more convenient and enjoyable to use on a larger screen. In this article, we showed you how to download, install, and use Messenger for Windows, as well as how to troubleshoot some common issues. We hope you found this article helpful and informative. If you did, please share it with your friends and family who might also be interested in downloading Messenger for Windows. Thank you for reading!
-
FAQs
-
Here are some frequently asked questions and answers about Messenger for Windows:
-
Q: Is Messenger for Windows safe?
-
A: Yes, Messenger for Windows is safe to use. It uses encryption to protect your messages and calls from hackers and third parties. It also lets you control your privacy settings and block or report anyone who bothers you.
-
Q: Is Messenger for Windows compatible with my device?
-
A: Messenger for Windows is compatible with Windows 10 devices and Mac devices running macOS 10.10 or higher. It is not compatible with older versions of Windows or Mac, or other operating systems such as Linux.
-
Q: How much space does Messenger for Windows take up?
-
A: Messenger for Windows takes up about 200 MB of space on your device. This may vary depending on your device model and settings.
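If you want to confirm there is enough room before installing, a quick check with Python's standard library works on both Windows and macOS. The drive path below is an assumption; point it at whatever drive the app will be installed on.
```
import shutil

# Use "/" on macOS; "C:\\" is assumed here for a typical Windows setup
total, used, free = shutil.disk_usage("C:\\")
print(f"Free space: {free / (1024 ** 2):.0f} MB")
print("Enough room for Messenger (~200 MB):", free > 200 * 1024 ** 2)
```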
-
Q: How do I watch videos together with my friends on Messenger for Windows?
-
A: To watch videos together with your friends on Messenger for Windows, you need to use the Watch Together feature. Here are the steps:
-
-
Start a video call with one or more friends.
-
Click on the TV icon at the bottom of the call window.
-
Select a video from the suggested list or search for one.
-
Click on Watch Together.
-
You and your friends can now watch the video together and chat at the same time.
-
-
Q: How do I play games with my friends on Messenger for Windows?
A: To play games with your friends on Messenger for Windows, you need to use the Games feature. Here are the steps:
-
-
Click on the game controller icon in the left sidebar of the app.
-
Select a game from the list or search for one.
-
Click on Play.
-
You can play the game solo or challenge your friends to beat your score.
-
You can also chat with your friends while playing the game.
-
-
\ No newline at end of file
diff --git a/spaces/44ov41za8i/FreeVC/speaker_encoder/audio.py b/spaces/44ov41za8i/FreeVC/speaker_encoder/audio.py
deleted file mode 100644
index 2fcb77ad1d3a85f523e24f84691886736a5686cb..0000000000000000000000000000000000000000
--- a/spaces/44ov41za8i/FreeVC/speaker_encoder/audio.py
+++ /dev/null
@@ -1,107 +0,0 @@
-from scipy.ndimage.morphology import binary_dilation
-from speaker_encoder.params_data import *
-from pathlib import Path
-from typing import Optional, Union
-import numpy as np
-import webrtcvad
-import librosa
-import struct
-
-int16_max = (2 ** 15) - 1
-
-
-def preprocess_wav(fpath_or_wav: Union[str, Path, np.ndarray],
- source_sr: Optional[int] = None):
- """
- Applies the preprocessing operations used in training the Speaker Encoder to a waveform
- either on disk or in memory. The waveform will be resampled to match the data hyperparameters.
-
- :param fpath_or_wav: either a filepath to an audio file (many extensions are supported, not
- just .wav), or the waveform as a numpy array of floats.
- :param source_sr: if passing an audio waveform, the sampling rate of the waveform before
- preprocessing. After preprocessing, the waveform's sampling rate will match the data
- hyperparameters. If passing a filepath, the sampling rate will be automatically detected and
- this argument will be ignored.
- """
- # Load the wav from disk if needed
- if isinstance(fpath_or_wav, str) or isinstance(fpath_or_wav, Path):
- wav, source_sr = librosa.load(fpath_or_wav, sr=None)
- else:
- wav = fpath_or_wav
-
- # Resample the wav if needed
- if source_sr is not None and source_sr != sampling_rate:
- wav = librosa.resample(wav, source_sr, sampling_rate)
-
- # Apply the preprocessing: normalize volume and shorten long silences
- wav = normalize_volume(wav, audio_norm_target_dBFS, increase_only=True)
- wav = trim_long_silences(wav)
-
- return wav
-
-
-def wav_to_mel_spectrogram(wav):
- """
- Derives a mel spectrogram ready to be used by the encoder from a preprocessed audio waveform.
- Note: this is not a log-mel spectrogram.
- """
- frames = librosa.feature.melspectrogram(
- y=wav,
- sr=sampling_rate,
- n_fft=int(sampling_rate * mel_window_length / 1000),
- hop_length=int(sampling_rate * mel_window_step / 1000),
- n_mels=mel_n_channels
- )
- return frames.astype(np.float32).T
-
-
-def trim_long_silences(wav):
- """
- Ensures that segments without voice in the waveform remain no longer than a
- threshold determined by the VAD parameters in params.py.
-
- :param wav: the raw waveform as a numpy array of floats
- :return: the same waveform with silences trimmed away (length <= original wav length)
- """
- # Compute the voice detection window size
- samples_per_window = (vad_window_length * sampling_rate) // 1000
-
- # Trim the end of the audio to have a multiple of the window size
- wav = wav[:len(wav) - (len(wav) % samples_per_window)]
-
- # Convert the float waveform to 16-bit mono PCM
- pcm_wave = struct.pack("%dh" % len(wav), *(np.round(wav * int16_max)).astype(np.int16))
-
- # Perform voice activation detection
- voice_flags = []
- vad = webrtcvad.Vad(mode=3)
- for window_start in range(0, len(wav), samples_per_window):
- window_end = window_start + samples_per_window
- voice_flags.append(vad.is_speech(pcm_wave[window_start * 2:window_end * 2],
- sample_rate=sampling_rate))
- voice_flags = np.array(voice_flags)
-
- # Smooth the voice detection with a moving average
- def moving_average(array, width):
- array_padded = np.concatenate((np.zeros((width - 1) // 2), array, np.zeros(width // 2)))
- ret = np.cumsum(array_padded, dtype=float)
- ret[width:] = ret[width:] - ret[:-width]
- return ret[width - 1:] / width
-
- audio_mask = moving_average(voice_flags, vad_moving_average_width)
- audio_mask = np.round(audio_mask).astype(np.bool)
-
- # Dilate the voiced regions
- audio_mask = binary_dilation(audio_mask, np.ones(vad_max_silence_length + 1))
- audio_mask = np.repeat(audio_mask, samples_per_window)
-
- return wav[audio_mask == True]
-
-
-def normalize_volume(wav, target_dBFS, increase_only=False, decrease_only=False):
- if increase_only and decrease_only:
- raise ValueError("Both increase only and decrease only are set")
- dBFS_change = target_dBFS - 10 * np.log10(np.mean(wav ** 2))
- if (dBFS_change < 0 and increase_only) or (dBFS_change > 0 and decrease_only):
- return wav
- return wav * (10 ** (dBFS_change / 20))
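For context on how the deleted helpers above fit together, here is a minimal usage sketch. It assumes the speaker_encoder package (and the hyperparameters it imports from params_data) is on the path; the audio file path is a placeholder.
```
from speaker_encoder.audio import preprocess_wav, wav_to_mel_spectrogram

# Any format librosa can decode should work; the path is a placeholder
wav = preprocess_wav("example_utterance.wav")   # load, resample, normalize, trim silences
mel = wav_to_mel_spectrogram(wav)               # float32 array of shape (n_frames, mel_n_channels)
print(mel.shape)
```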
diff --git a/spaces/A666sxr/Genshin_TTS/text/mandarin.py b/spaces/A666sxr/Genshin_TTS/text/mandarin.py
deleted file mode 100644
index a9ce0c4b223cd7fbb00e8332d2dd53de4c7cea09..0000000000000000000000000000000000000000
--- a/spaces/A666sxr/Genshin_TTS/text/mandarin.py
+++ /dev/null
@@ -1,328 +0,0 @@
-import os
-import sys
-import re
-from pypinyin import lazy_pinyin, BOPOMOFO
-import jieba
-import cn2an
-
-
-# List of (Latin alphabet, bopomofo) pairs:
-_latin_to_bopomofo = [(re.compile('%s' % x[0], re.IGNORECASE), x[1]) for x in [
- ('a', 'ㄟˉ'),
- ('b', 'ㄅㄧˋ'),
- ('c', 'ㄙㄧˉ'),
- ('d', 'ㄉㄧˋ'),
- ('e', 'ㄧˋ'),
- ('f', 'ㄝˊㄈㄨˋ'),
- ('g', 'ㄐㄧˋ'),
- ('h', 'ㄝˇㄑㄩˋ'),
- ('i', 'ㄞˋ'),
- ('j', 'ㄐㄟˋ'),
- ('k', 'ㄎㄟˋ'),
- ('l', 'ㄝˊㄛˋ'),
- ('m', 'ㄝˊㄇㄨˋ'),
- ('n', 'ㄣˉ'),
- ('o', 'ㄡˉ'),
- ('p', 'ㄆㄧˉ'),
- ('q', 'ㄎㄧㄡˉ'),
- ('r', 'ㄚˋ'),
- ('s', 'ㄝˊㄙˋ'),
- ('t', 'ㄊㄧˋ'),
- ('u', 'ㄧㄡˉ'),
- ('v', 'ㄨㄧˉ'),
- ('w', 'ㄉㄚˋㄅㄨˋㄌㄧㄡˋ'),
- ('x', 'ㄝˉㄎㄨˋㄙˋ'),
- ('y', 'ㄨㄞˋ'),
- ('z', 'ㄗㄟˋ')
-]]
-
-# List of (bopomofo, romaji) pairs:
-_bopomofo_to_romaji = [(re.compile('%s' % x[0]), x[1]) for x in [
- ('ㄅㄛ', 'p⁼wo'),
- ('ㄆㄛ', 'pʰwo'),
- ('ㄇㄛ', 'mwo'),
- ('ㄈㄛ', 'fwo'),
- ('ㄅ', 'p⁼'),
- ('ㄆ', 'pʰ'),
- ('ㄇ', 'm'),
- ('ㄈ', 'f'),
- ('ㄉ', 't⁼'),
- ('ㄊ', 'tʰ'),
- ('ㄋ', 'n'),
- ('ㄌ', 'l'),
- ('ㄍ', 'k⁼'),
- ('ㄎ', 'kʰ'),
- ('ㄏ', 'h'),
- ('ㄐ', 'ʧ⁼'),
- ('ㄑ', 'ʧʰ'),
- ('ㄒ', 'ʃ'),
- ('ㄓ', 'ʦ`⁼'),
- ('ㄔ', 'ʦ`ʰ'),
- ('ㄕ', 's`'),
- ('ㄖ', 'ɹ`'),
- ('ㄗ', 'ʦ⁼'),
- ('ㄘ', 'ʦʰ'),
- ('ㄙ', 's'),
- ('ㄚ', 'a'),
- ('ㄛ', 'o'),
- ('ㄜ', 'ə'),
- ('ㄝ', 'e'),
- ('ㄞ', 'ai'),
- ('ㄟ', 'ei'),
- ('ㄠ', 'au'),
- ('ㄡ', 'ou'),
- ('ㄧㄢ', 'yeNN'),
- ('ㄢ', 'aNN'),
- ('ㄧㄣ', 'iNN'),
- ('ㄣ', 'əNN'),
- ('ㄤ', 'aNg'),
- ('ㄧㄥ', 'iNg'),
- ('ㄨㄥ', 'uNg'),
- ('ㄩㄥ', 'yuNg'),
- ('ㄥ', 'əNg'),
- ('ㄦ', 'əɻ'),
- ('ㄧ', 'i'),
- ('ㄨ', 'u'),
- ('ㄩ', 'ɥ'),
- ('ˉ', '→'),
- ('ˊ', '↑'),
- ('ˇ', '↓↑'),
- ('ˋ', '↓'),
- ('˙', ''),
- (',', ','),
- ('。', '.'),
- ('!', '!'),
- ('?', '?'),
- ('—', '-')
-]]
-
-# List of (romaji, ipa) pairs:
-_romaji_to_ipa = [(re.compile('%s' % x[0], re.IGNORECASE), x[1]) for x in [
- ('ʃy', 'ʃ'),
- ('ʧʰy', 'ʧʰ'),
- ('ʧ⁼y', 'ʧ⁼'),
- ('NN', 'n'),
- ('Ng', 'ŋ'),
- ('y', 'j'),
- ('h', 'x')
-]]
-
-# List of (bopomofo, ipa) pairs:
-_bopomofo_to_ipa = [(re.compile('%s' % x[0]), x[1]) for x in [
- ('ㄅㄛ', 'p⁼wo'),
- ('ㄆㄛ', 'pʰwo'),
- ('ㄇㄛ', 'mwo'),
- ('ㄈㄛ', 'fwo'),
- ('ㄅ', 'p⁼'),
- ('ㄆ', 'pʰ'),
- ('ㄇ', 'm'),
- ('ㄈ', 'f'),
- ('ㄉ', 't⁼'),
- ('ㄊ', 'tʰ'),
- ('ㄋ', 'n'),
- ('ㄌ', 'l'),
- ('ㄍ', 'k⁼'),
- ('ㄎ', 'kʰ'),
- ('ㄏ', 'x'),
- ('ㄐ', 'tʃ⁼'),
- ('ㄑ', 'tʃʰ'),
- ('ㄒ', 'ʃ'),
- ('ㄓ', 'ts`⁼'),
- ('ㄔ', 'ts`ʰ'),
- ('ㄕ', 's`'),
- ('ㄖ', 'ɹ`'),
- ('ㄗ', 'ts⁼'),
- ('ㄘ', 'tsʰ'),
- ('ㄙ', 's'),
- ('ㄚ', 'a'),
- ('ㄛ', 'o'),
- ('ㄜ', 'ə'),
- ('ㄝ', 'ɛ'),
- ('ㄞ', 'aɪ'),
- ('ㄟ', 'eɪ'),
- ('ㄠ', 'ɑʊ'),
- ('ㄡ', 'oʊ'),
- ('ㄧㄢ', 'jɛn'),
- ('ㄩㄢ', 'ɥæn'),
- ('ㄢ', 'an'),
- ('ㄧㄣ', 'in'),
- ('ㄩㄣ', 'ɥn'),
- ('ㄣ', 'ən'),
- ('ㄤ', 'ɑŋ'),
- ('ㄧㄥ', 'iŋ'),
- ('ㄨㄥ', 'ʊŋ'),
- ('ㄩㄥ', 'jʊŋ'),
- ('ㄥ', 'əŋ'),
- ('ㄦ', 'əɻ'),
- ('ㄧ', 'i'),
- ('ㄨ', 'u'),
- ('ㄩ', 'ɥ'),
- ('ˉ', '→'),
- ('ˊ', '↑'),
- ('ˇ', '↓↑'),
- ('ˋ', '↓'),
- ('˙', ''),
- (',', ','),
- ('。', '.'),
- ('!', '!'),
- ('?', '?'),
- ('—', '-')
-]]
-
-# List of (bopomofo, ipa2) pairs:
-_bopomofo_to_ipa2 = [(re.compile('%s' % x[0]), x[1]) for x in [
- ('ㄅㄛ', 'pwo'),
- ('ㄆㄛ', 'pʰwo'),
- ('ㄇㄛ', 'mwo'),
- ('ㄈㄛ', 'fwo'),
- ('ㄅ', 'p'),
- ('ㄆ', 'pʰ'),
- ('ㄇ', 'm'),
- ('ㄈ', 'f'),
- ('ㄉ', 't'),
- ('ㄊ', 'tʰ'),
- ('ㄋ', 'n'),
- ('ㄌ', 'l'),
- ('ㄍ', 'k'),
- ('ㄎ', 'kʰ'),
- ('ㄏ', 'h'),
- ('ㄐ', 'tɕ'),
- ('ㄑ', 'tɕʰ'),
- ('ㄒ', 'ɕ'),
- ('ㄓ', 'tʂ'),
- ('ㄔ', 'tʂʰ'),
- ('ㄕ', 'ʂ'),
- ('ㄖ', 'ɻ'),
- ('ㄗ', 'ts'),
- ('ㄘ', 'tsʰ'),
- ('ㄙ', 's'),
- ('ㄚ', 'a'),
- ('ㄛ', 'o'),
- ('ㄜ', 'ɤ'),
- ('ㄝ', 'ɛ'),
- ('ㄞ', 'aɪ'),
- ('ㄟ', 'eɪ'),
- ('ㄠ', 'ɑʊ'),
- ('ㄡ', 'oʊ'),
- ('ㄧㄢ', 'jɛn'),
- ('ㄩㄢ', 'yæn'),
- ('ㄢ', 'an'),
- ('ㄧㄣ', 'in'),
- ('ㄩㄣ', 'yn'),
- ('ㄣ', 'ən'),
- ('ㄤ', 'ɑŋ'),
- ('ㄧㄥ', 'iŋ'),
- ('ㄨㄥ', 'ʊŋ'),
- ('ㄩㄥ', 'jʊŋ'),
- ('ㄥ', 'ɤŋ'),
- ('ㄦ', 'əɻ'),
- ('ㄧ', 'i'),
- ('ㄨ', 'u'),
- ('ㄩ', 'y'),
- ('ˉ', '˥'),
- ('ˊ', '˧˥'),
- ('ˇ', '˨˩˦'),
- ('ˋ', '˥˩'),
- ('˙', ''),
- (',', ','),
- ('。', '.'),
- ('!', '!'),
- ('?', '?'),
- ('—', '-')
-]]
-
-
-def number_to_chinese(text):
- numbers = re.findall(r'\d+(?:\.?\d+)?', text)
- for number in numbers:
- text = text.replace(number, cn2an.an2cn(number), 1)
- return text
-
-
-def chinese_to_bopomofo(text, taiwanese=False):
- text = text.replace('、', ',').replace(';', ',').replace(':', ',')
- words = jieba.lcut(text, cut_all=False)
- text = ''
- for word in words:
- bopomofos = lazy_pinyin(word, BOPOMOFO)
- if not re.search('[\u4e00-\u9fff]', word):
- text += word
- continue
- for i in range(len(bopomofos)):
- bopomofos[i] = re.sub(r'([\u3105-\u3129])$', r'\1ˉ', bopomofos[i])
- if text != '':
- text += ' '
- if taiwanese:
- text += '#'+'#'.join(bopomofos)
- else:
- text += ''.join(bopomofos)
- return text
-
-
-def latin_to_bopomofo(text):
- for regex, replacement in _latin_to_bopomofo:
- text = re.sub(regex, replacement, text)
- return text
-
-
-def bopomofo_to_romaji(text):
- for regex, replacement in _bopomofo_to_romaji:
- text = re.sub(regex, replacement, text)
- return text
-
-
-def bopomofo_to_ipa(text):
- for regex, replacement in _bopomofo_to_ipa:
- text = re.sub(regex, replacement, text)
- return text
-
-
-def bopomofo_to_ipa2(text):
- for regex, replacement in _bopomofo_to_ipa2:
- text = re.sub(regex, replacement, text)
- return text
-
-
-def chinese_to_romaji(text):
- text = number_to_chinese(text)
- text = chinese_to_bopomofo(text)
- text = latin_to_bopomofo(text)
- text = bopomofo_to_romaji(text)
- text = re.sub('i([aoe])', r'y\1', text)
- text = re.sub('u([aoəe])', r'w\1', text)
- text = re.sub('([ʦsɹ]`[⁼ʰ]?)([→↓↑ ]+|$)',
- r'\1ɹ`\2', text).replace('ɻ', 'ɹ`')
- text = re.sub('([ʦs][⁼ʰ]?)([→↓↑ ]+|$)', r'\1ɹ\2', text)
- return text
-
-
-def chinese_to_lazy_ipa(text):
- text = chinese_to_romaji(text)
- for regex, replacement in _romaji_to_ipa:
- text = re.sub(regex, replacement, text)
- return text
-
-
-def chinese_to_ipa(text):
- text = number_to_chinese(text)
- text = chinese_to_bopomofo(text)
- text = latin_to_bopomofo(text)
- text = bopomofo_to_ipa(text)
- text = re.sub('i([aoe])', r'j\1', text)
- text = re.sub('u([aoəe])', r'w\1', text)
- text = re.sub('([sɹ]`[⁼ʰ]?)([→↓↑ ]+|$)',
- r'\1ɹ`\2', text).replace('ɻ', 'ɹ`')
- text = re.sub('([s][⁼ʰ]?)([→↓↑ ]+|$)', r'\1ɹ\2', text)
- return text
-
-
-def chinese_to_ipa2(text, taiwanese=False):
- text = number_to_chinese(text)
- text = chinese_to_bopomofo(text, taiwanese)
- text = latin_to_bopomofo(text)
- text = bopomofo_to_ipa2(text)
- text = re.sub(r'i([aoe])', r'j\1', text)
- text = re.sub(r'u([aoəe])', r'w\1', text)
- text = re.sub(r'([ʂɹ]ʰ?)([˩˨˧˦˥ ]+|$)', r'\1ʅ\2', text)
- text = re.sub(r'(sʰ?)([˩˨˧˦˥ ]+|$)', r'\1ɿ\2', text)
- return text
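For context, a minimal sketch of how the deleted module above is typically used. It assumes pypinyin, jieba, and cn2an are installed; the sample strings are arbitrary.
```
from text.mandarin import chinese_to_ipa, chinese_to_ipa2

# Digits are first expanded to Chinese numerals, then the text is segmented,
# converted to bopomofo, and finally mapped to IPA with tone marks.
print(chinese_to_ipa("你好123"))
print(chinese_to_ipa2("你好", taiwanese=False))
```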
diff --git a/spaces/AI-Dashboards/HEDIS.Assessment.PHQ9.GADD7.SDoH/README.md b/spaces/AI-Dashboards/HEDIS.Assessment.PHQ9.GADD7.SDoH/README.md
deleted file mode 100644
index 658908b1ae58eca835fb7f73086332f3c2173fd0..0000000000000000000000000000000000000000
--- a/spaces/AI-Dashboards/HEDIS.Assessment.PHQ9.GADD7.SDoH/README.md
+++ /dev/null
@@ -1,10 +0,0 @@
----
-title: AI.Dashboard.PHQ9.GAD7.SDOH
-emoji: 🏢
-colorFrom: red
-colorTo: gray
-sdk: static
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
\ No newline at end of file
diff --git a/spaces/AIWaves/Debate/src/agents/Agent/__init__.py b/spaces/AIWaves/Debate/src/agents/Agent/__init__.py
deleted file mode 100644
index 5919811a5cec1b9d44051cdb1e9ac26a21ee3064..0000000000000000000000000000000000000000
--- a/spaces/AIWaves/Debate/src/agents/Agent/__init__.py
+++ /dev/null
@@ -1 +0,0 @@
-from .Agent import Agent
\ No newline at end of file
diff --git a/spaces/AIZero2HeroBootcamp/3DHuman/app.py b/spaces/AIZero2HeroBootcamp/3DHuman/app.py
deleted file mode 100644
index 06fd1947c7e9be88f0e449f073d510ed754a739b..0000000000000000000000000000000000000000
--- a/spaces/AIZero2HeroBootcamp/3DHuman/app.py
+++ /dev/null
@@ -1,27 +0,0 @@
-import time
-import gradio as gr
-import os
-
-
-def load_mesh(mesh_file_name):
- time.sleep(2)
- return mesh_file_name
-
-description="3D Virtual Food 🥐🥑🥒🥓🥔🥕🥖🥗🥘🥙🥚🥛🥜🥝🥞🥟🥠🥡🥢🥣🥤🥥🥦🥧🥨🥩🥪🥫🥬🥭🥮🥯"
-
-inputs = gr.Model3D()
-outputs = gr.Model3D(clear_color=[0.8, 0.2, 0.2, 1.0])
-
-demo = gr.Interface(
- fn=load_mesh,
- inputs=inputs,
- outputs=outputs,
- examples=[
- [os.path.join(os.path.dirname(__file__), "FinalBaseMesh.obj")],
- [os.path.join(os.path.dirname(__file__), "BEAR_BLK.OBJ")]
- ],
- description=description,
- cache_examples=True,
-)
-
-demo.launch()
\ No newline at end of file
diff --git a/spaces/AIatUIUC/CodeLATS/lats/lats_main.py b/spaces/AIatUIUC/CodeLATS/lats/lats_main.py
deleted file mode 100644
index 0cb4c12f36e4556e8a04614daadcf0193a1054d0..0000000000000000000000000000000000000000
--- a/spaces/AIatUIUC/CodeLATS/lats/lats_main.py
+++ /dev/null
@@ -1,78 +0,0 @@
-import os
-import argparse
-
-from lats import run_lats
-
-
-def get_args():
- parser = argparse.ArgumentParser()
- parser.add_argument("--run_name", type=str, help="The name of the run")
- parser.add_argument("--root_dir", type=str,
- help="The root logging directory", default="root")
- parser.add_argument("--dataset_path", type=str,
- help="The path to the benchmark dataset", default="root")
- parser.add_argument("--strategy", type=str,
- help="Strategy: `simple`, `reflexion`")
- parser.add_argument("--language", type=str, help="Strategy: `py` or `rs`")
- parser.add_argument(
- "--model", type=str, help="OpenAI models only for now. For best results, use GPT-4")
- parser.add_argument("--pass_at_k", type=int,
- help="Pass@k metric", default=1)
- parser.add_argument("--max_iters", type=int,
- help="The maximum number of self-improvement iterations", default=10)
- parser.add_argument("--expansion_factor", type=int,
- help="The expansion factor for the reflexion UCS and A* strategy", default=3)
- parser.add_argument("--verbose", action='store_true',
- help="To print live logs")
- parser.add_argument("--instruction", type=str,
- help="text string", default="")
- parser.add_argument("--n_samples", type=int,
- help="The number of nodes added during expansion", default=3)
- parser.add_argument("--depth", type=int,
- help="Tree depth", default=5)
-
- # TODO: implement this
- # parser.add_argument("--is_resume", action='store_true', help="To resume run")
- # parser.add_argument("--resume_dir", type=str, help="If resume, the logging directory", default="")
- args = parser.parse_args()
- return args
-
-
-def strategy_factory(strategy: str):
- def kwargs_wrapper_gen(func, delete_keys=[]):
- def kwargs_wrapper(**kwargs):
- for key in delete_keys:
- del kwargs[key]
- return func(**kwargs)
- return kwargs_wrapper
-
- return kwargs_wrapper_gen(run_lats, delete_keys=[])
-
-
-def lats_main(args):
-
- # check if the strategy is valid
- run_strategy = strategy_factory(args.strategy)
-
- # start the run
- # evaluate with pass@k
- x = run_strategy(
- model_name=args.model,
- language=args.language,
- max_iters=args.max_iters,
- verbose=args.verbose,
- instruction=args.instruction,
- n_samples=args.n_samples,
- depth=args.depth
- )
-
- return x
-
-
-
-def main(args):
- lats_main(args)
-
-if __name__ == "__main__":
- args = get_args()
- main(args)
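For reference, the deleted entry point above can also be driven programmatically instead of through the CLI. This is only a sketch: the field names mirror the flags defined in get_args, and the values are illustrative.
```
from argparse import Namespace
from lats_main import lats_main

# Only the fields that lats_main/run_lats actually read are filled in here
args = Namespace(
    strategy="simple",      # per get_args: `simple` or `reflexion`
    model="gpt-4",
    language="py",
    max_iters=3,
    verbose=True,
    instruction="Write a function that reverses a string.",
    n_samples=3,
    depth=5,
)
result = lats_main(args)
```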
diff --git a/spaces/AchyuthGamer/OpenGPT/g4f/Provider/deprecated/AiService.py b/spaces/AchyuthGamer/OpenGPT/g4f/Provider/deprecated/AiService.py
deleted file mode 100644
index 9b41e3c82261585d4eb2114665cc2b88354ee45b..0000000000000000000000000000000000000000
--- a/spaces/AchyuthGamer/OpenGPT/g4f/Provider/deprecated/AiService.py
+++ /dev/null
@@ -1,36 +0,0 @@
-from __future__ import annotations
-
-import requests
-
-from ...typing import Any, CreateResult
-from ..base_provider import BaseProvider
-
-
-class AiService(BaseProvider):
- url = "https://aiservice.vercel.app/"
- working = False
- supports_gpt_35_turbo = True
-
- @staticmethod
- def create_completion(
- model: str,
- messages: list[dict[str, str]],
- stream: bool,
- **kwargs: Any,
- ) -> CreateResult:
- base = "\n".join(f"{message['role']}: {message['content']}" for message in messages)
- base += "\nassistant: "
-
- headers = {
- "accept": "*/*",
- "content-type": "text/plain;charset=UTF-8",
- "sec-fetch-dest": "empty",
- "sec-fetch-mode": "cors",
- "sec-fetch-site": "same-origin",
- "Referer": "https://aiservice.vercel.app/chat",
- }
- data = {"input": base}
- url = "https://aiservice.vercel.app/api/chat/answer"
- response = requests.post(url, headers=headers, json=data)
- response.raise_for_status()
- yield response.json()["data"]
diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/circularprogress/Factory.d.ts b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/circularprogress/Factory.d.ts
deleted file mode 100644
index b938fd0546c80efdd2f9a971d900de644992259e..0000000000000000000000000000000000000000
--- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/circularprogress/Factory.d.ts
+++ /dev/null
@@ -1,13 +0,0 @@
-import CircularProgress from './CircularProgress';
-
-export default function (
- config?: CircularProgress.IConfig
-): CircularProgress;
-
-export default function (
- x?: number, y?: number,
- radius?: number,
- barColor?: string | number,
- value?: number,
- config?: CircularProgress.IConfig
-): CircularProgress;
\ No newline at end of file
diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/dialog/Dialog.js b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/dialog/Dialog.js
deleted file mode 100644
index d0731dd19b21505a95c46ce39cf63cdee77f2175..0000000000000000000000000000000000000000
--- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/dialog/Dialog.js
+++ /dev/null
@@ -1,306 +0,0 @@
-import Sizer from '../sizer/Sizer.js';
-import OverlapSizer from '../overlapsizer/OverlapSizer.js';
-import Buttons from '../buttons/Buttons.js';
-import FixWidthButtons from '../fixwidthbuttons/FixWidthButtons.js';
-import GridButtons from '../gridbuttons/GridButtons.js';
-import Methods from './methods/Methods.js';
-
-const GetValue = Phaser.Utils.Objects.GetValue;
-
-class Dialog extends Sizer {
- constructor(scene, config) {
- if (config === undefined) {
- config = {};
- }
- // Create sizer
- config.orientation = 1; // Top to bottom
- super(scene, config);
- this.type = 'rexDialog';
- this.eventEmitter = GetValue(config, 'eventEmitter', this);
-
- // Add elements
- var background = GetValue(config, 'background', undefined);
- var title = GetValue(config, 'title', undefined);
- var toolbar = GetValue(config, 'toolbar', undefined);
- var toolbarBackground = GetValue(config, 'toolbarBackground', undefined);
- var leftToolbar = GetValue(config, 'leftToolbar', undefined);
- var leftToolbarBackground = GetValue(config, 'leftToolbarBackground', undefined);
- var content = GetValue(config, 'content', undefined);
- var description = GetValue(config, 'description', undefined);
- var choicesSizer;
- var choices = GetValue(config, 'choices', undefined);
- var choicesBackground = GetValue(config, 'choicesBackground', undefined);
- var actionsSizer;
- var actions = GetValue(config, 'actions', undefined);
- var actionsBackground = GetValue(config, 'actionsBackground', undefined);
- var clickConfig = GetValue(config, 'click', undefined);
-
- if (background) {
- this.addBackground(background);
- }
-
- var toolbarSizer;
- if (toolbar) {
- toolbarSizer = new Buttons(scene, {
- groupName: 'toolbar',
- background: toolbarBackground,
- buttons: toolbar,
- orientation: 0, // Left-right
- space: { item: GetValue(config, 'space.toolbarItem', 0) },
- click: clickConfig,
- eventEmitter: this.eventEmitter,
- });
- }
-
- var leftToolbarSizer;
- if (leftToolbar) {
- leftToolbarSizer = new Buttons(scene, {
- groupName: 'leftToolbar',
- background: leftToolbarBackground,
- buttons: leftToolbar,
- orientation: 0, // Left-right
- space: { item: GetValue(config, 'space.leftToolbarItem', 0) },
- click: clickConfig,
- eventEmitter: this.eventEmitter,
- });
- }
-
- // title or toolbar or leftToolbar
- if (title || toolbar || leftToolbar) {
- var titleExpandWidth = !!title && GetValue(config, 'expand.title', true);
- var titleAlign = GetValue(config, 'align.title', 'center');
- var useOverlapSizer =
- // Has title, title is not expand-width, title align to center
- (title && !titleExpandWidth && (titleAlign === 'center')) ||
- // No title
- (!title && (toolbar || leftToolbar));
- var useSizer = !useOverlapSizer;
-
- var titleSizer;
- if (useSizer) {
- titleSizer = new Sizer(scene, { orientation: 0 });
- } else {
- titleSizer = new OverlapSizer(scene);
- }
-
- var titleChildExpand = (useSizer) ? true : { height: true };
-
- // Add leftToolbar
- if (leftToolbarSizer) {
- titleSizer.add(
- leftToolbarSizer,
- { align: 'left', expand: titleChildExpand }
- );
- }
-
- // Add title
- if (title) {
- // Add space if not expand, align to right
- if (useSizer && !titleExpandWidth && (titleAlign === 'right')) {
- titleSizer.addSpace();
- }
-
- var padding = {
- left: GetValue(config, 'space.titleLeft', 0),
- right: GetValue(config, 'space.titleRight', 0)
- }
- var proportion = (titleExpandWidth) ? 1 : 0;
- titleSizer.add(
- title,
- { align: titleAlign, proportion: proportion, expand: titleChildExpand, padding: padding }
- );
-
- // Add space if not expand, align to left
- if (useSizer && !titleExpandWidth && (titleAlign === 'left')) {
- titleSizer.addSpace();
- }
- }
-
- // Add toolbar
- if (toolbarSizer) {
- // Add space if not title
- if (useSizer && !title) {
- titleSizer.addSpace();
- }
- titleSizer.add(
- toolbarSizer,
- { align: 'right', expand: titleChildExpand }
- );
- }
-
- // Add sizer to dialog
- var titleSpace = GetValue(config, 'space.title', 0);
- var padding;
- if (content || description || choices || actions) {
- padding = { bottom: titleSpace };
- }
- var proportion = GetValue(config, 'proportion.title', 0);
- this.add(
- titleSizer,
- { padding: padding, proportion: proportion, expand: true }
- );
- }
-
- if (content) {
- var align = GetValue(config, 'align.content', 'center');
- var contentSpace = GetValue(config, 'space.content', 0);
- var padding = {
- left: GetValue(config, 'space.contentLeft', 0),
- right: GetValue(config, 'space.contentRight', 0),
- bottom: ((description || choices || actions) ? contentSpace : 0)
- }
- var proportion = GetValue(config, 'proportion.content', 0);
- var expand = GetValue(config, 'expand.content', true);
- this.add(
- content,
- { align: align, padding: padding, proportion: proportion, expand: expand }
- );
- }
-
- if (description) {
- var align = GetValue(config, 'align.description', 'center');
- var descriptionSpace = GetValue(config, 'space.description', 0);
- var padding = {
- left: GetValue(config, 'space.descriptionLeft', 0),
- right: GetValue(config, 'space.descriptionRight', 0),
- bottom: ((choices || actions) ? descriptionSpace : 0)
- }
- var proportion = GetValue(config, 'proportion.description', 0);
- var expand = GetValue(config, 'expand.description', true);
- this.add(
- description,
- { align: align, padding: padding, proportion: proportion, expand: expand }
- );
- }
-
- if (choices) {
- var choicesType = GetValue(config, 'choicesType', '').split('-');
- var ButtonsClass = Contains(choicesType, 'wrap') ? FixWidthButtons :
- Contains(choicesType, 'grid') ? GridButtons :
- Buttons;
- var buttonsType = Contains(choicesType, 'radio') ? 'radio' :
- Contains(choicesType, 'checkboxes') ? 'checkboxes' : undefined;
-
- var space = {
- left: GetValue(config, 'space.choicesBackgroundLeft', 0),
- right: GetValue(config, 'space.choicesBackgroundRight', 0),
- top: GetValue(config, 'space.choicesBackgroundTop', 0),
- bottom: GetValue(config, 'space.choicesBackgroundBottom', 0),
- };
- var itemSpace = GetValue(config, 'space.choice', 0);
- if (ButtonsClass === Buttons) {
- space.item = itemSpace;
- } else if (ButtonsClass === FixWidthButtons) {
- space.item = itemSpace;
- space.line = GetValue(config, 'space.choiceLine', itemSpace);
- } else { // GridButtons
- space.column = GetValue(config, 'space.choiceColumn', itemSpace);
- space.row = GetValue(config, 'space.choiceRow', itemSpace);
- }
-
- var choicesConfig = {
- width: GetValue(config, 'choicesWidth', undefined),
- height: GetValue(config, 'choicesHeight', undefined),
- groupName: 'choices',
- buttonsType: buttonsType,
- background: choicesBackground,
- buttons: choices,
- space: space,
- click: clickConfig,
- eventEmitter: this.eventEmitter,
- setValueCallback: GetValue(config, 'choicesSetValueCallback', undefined),
- setValueCallbackScope: GetValue(config, 'choicesSetValueCallbackScope', undefined)
- };
-
- if (ButtonsClass === Buttons) {
- choicesConfig.orientation = Contains(choicesType, 'x') ? 0 : 1;
- }
-
- choicesSizer = new ButtonsClass(scene, choicesConfig);
- var choicesSpace = GetValue(config, 'space.choices', 0);
- var padding = {
- left: GetValue(config, 'space.choicesLeft', 0),
- right: GetValue(config, 'space.choicesRight', 0),
- bottom: ((actions) ? choicesSpace : 0)
- }
- var align = GetValue(config, 'align.choices', 'center');
- var proportion = GetValue(config, 'proportion.choices', 0);
- var expand = GetValue(config, 'expand.choices', true);
- this.add(
- choicesSizer,
- { align: align, padding: padding, proportion: proportion, expand: expand }
- );
-
- this.buttonsType = buttonsType;
- }
-
- if (actions) {
- actionsSizer = new Buttons(scene, {
- groupName: 'actions',
- background: actionsBackground,
- buttons: actions,
- orientation: 0, // Left-right
- space: { item: GetValue(config, 'space.action', 0) },
- expand: GetValue(config, 'expand.actions', false),
- align: GetValue(config, 'align.actions', 'center'),
- click: clickConfig,
- eventEmitter: this.eventEmitter,
- })
- var padding = {
- left: GetValue(config, 'space.actionsLeft', 0),
- right: GetValue(config, 'space.actionsRight', 0)
- }
- var proportion = GetValue(config, 'proportion.action', 0);
- this.add(
- actionsSizer,
- { align: 'center', padding: padding, proportion: proportion, expand: true }
- );
- }
-
- EmitButtonEvent(this, 'click');
- EmitButtonEvent(this, 'over');
- EmitButtonEvent(this, 'out');
- EmitButtonEvent(this, 'enable');
- EmitButtonEvent(this, 'disable');
-
- this.addChildrenMap('background', background);
- this.addChildrenMap('title', title);
- this.addChildrenMap('toolbar', toolbar);
- this.addChildrenMap('leftToolbar', leftToolbar);
- this.addChildrenMap('content', content);
- this.addChildrenMap('description', description);
- this.addChildrenMap('choices', (choicesSizer) ? choicesSizer.buttons : undefined);
- this.addChildrenMap('actions', (actionsSizer) ? actionsSizer.buttons : undefined);
- this.addChildrenMap('choicesSizer', choicesSizer);
- this.addChildrenMap('actionsSizer', actionsSizer);
- this.addChildrenMap('toolbarSizer', toolbarSizer);
- this.addChildrenMap('leftToolbarSizer', leftToolbarSizer);
- }
-}
-
-var Contains = function (arr, item) {
- return arr.indexOf(item) !== -1;
-}
-
-var ButtonsGroupEventNameMap = {
- actions: 'action',
- choices: 'choice',
- toolbar: 'toolbar',
- leftToolbar: 'leftToolbar'
-}
-
-var EmitButtonEvent = function (dialog, postEventName) {
- dialog.on(`button.${postEventName}`, function (button, groupName, index, pointer, event) {
- if (!ButtonsGroupEventNameMap.hasOwnProperty(groupName)) {
- return
- }
- dialog.emit(`${ButtonsGroupEventNameMap[groupName]}.${postEventName}`, button, index, pointer, event);
- })
-}
-
-Object.assign(
- Dialog.prototype,
- Methods
-);
-
-export default Dialog;
\ No newline at end of file
diff --git a/spaces/Ahmedmewloud/Depplearnig/Makefile b/spaces/Ahmedmewloud/Depplearnig/Makefile
deleted file mode 100644
index f080a464de5241653a9ea1335062dcccb4d681c4..0000000000000000000000000000000000000000
--- a/spaces/Ahmedmewloud/Depplearnig/Makefile
+++ /dev/null
@@ -1,28 +0,0 @@
-install:
- pip install --upgrade pip &&\
- pip install -r requirements.txt
-
-test:
- python -m pytest -vvv --cov=hello --cov=greeting \
- --cov=smath --cov=web tests
- python -m pytest --nbval notebook.ipynb #tests our jupyter notebook
- #python -m pytest -v tests/test_web.py #if you just want to test web
-
-debug:
- python -m pytest -vv --pdb #Debugger is invoked
-
-one-test:
- python -m pytest -vv tests/test_greeting.py::test_my_name4
-
-debugthree:
- #not working the way I expect
- python -m pytest -vv --pdb --maxfail=4 # drop to PDB for first three failures
-
-format:
- black *.py
-
-lint:
- pylint --disable=R,C *.py
-
-all: install lint test format
-
diff --git a/spaces/AiPalsDev/Translate_It/README.md b/spaces/AiPalsDev/Translate_It/README.md
deleted file mode 100644
index 92585f9f09bde3105258f48263b447cbe4fd45d1..0000000000000000000000000000000000000000
--- a/spaces/AiPalsDev/Translate_It/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Translate It
-emoji: 🔥
-colorFrom: purple
-colorTo: pink
-sdk: gradio
-sdk_version: 3.24.1
-app_file: app.py
-pinned: false
-license: apache-2.0
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Ajaymaurya1008/meme-identifier/README.md b/spaces/Ajaymaurya1008/meme-identifier/README.md
deleted file mode 100644
index 22071664bff479d4614c3922869d55f165005263..0000000000000000000000000000000000000000
--- a/spaces/Ajaymaurya1008/meme-identifier/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Hrishikesh332 Autotrain Meme Classification 42897109437
-emoji: 😻
-colorFrom: pink
-colorTo: pink
-sdk: streamlit
-sdk_version: 1.17.0
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Alpaca233/SadTalker/src/face3d/options/inference_options.py b/spaces/Alpaca233/SadTalker/src/face3d/options/inference_options.py
deleted file mode 100644
index c453965959ab4cfb31acbc424f994db68c3d4df5..0000000000000000000000000000000000000000
--- a/spaces/Alpaca233/SadTalker/src/face3d/options/inference_options.py
+++ /dev/null
@@ -1,23 +0,0 @@
-from face3d.options.base_options import BaseOptions
-
-
-class InferenceOptions(BaseOptions):
- """This class includes test options.
-
- It also includes shared options defined in BaseOptions.
- """
-
- def initialize(self, parser):
- parser = BaseOptions.initialize(self, parser) # define shared options
- parser.add_argument('--phase', type=str, default='test', help='train, val, test, etc')
- parser.add_argument('--dataset_mode', type=str, default=None, help='chooses how datasets are loaded. [None | flist]')
-
- parser.add_argument('--input_dir', type=str, help='the folder of the input files')
- parser.add_argument('--keypoint_dir', type=str, help='the folder of the keypoint files')
- parser.add_argument('--output_dir', type=str, default='mp4', help='the output dir to save the extracted coefficients')
- parser.add_argument('--save_split_files', action='store_true', help='save split files or not')
- parser.add_argument('--inference_batch_size', type=int, default=8)
-
- # Dropout and Batchnorm has different behavior during training and test.
- self.isTrain = False
- return parser
diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/examples/research_projects/README.md b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/examples/research_projects/README.md
deleted file mode 100644
index ef50d423e68ff5c641e4419bd30f84787aebf839..0000000000000000000000000000000000000000
--- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/examples/research_projects/README.md
+++ /dev/null
@@ -1,14 +0,0 @@
-# Research projects
-
-This folder contains various research projects using 🧨 Diffusers.
-They are not really maintained by the core maintainers of this library and often require a specific version of Diffusers that is indicated in the requirements file of each folder.
-Updating them to the most recent version of the library will require some work.
-
-To use any of them, just run the command
-
-```
-pip install -r requirements.txt
-```
-inside the folder of your choice.
-
-If you need help with any of those, please open an issue where you directly ping the author(s), as indicated at the top of the README of each folder.
diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/scripts/conversion_ldm_uncond.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/scripts/conversion_ldm_uncond.py
deleted file mode 100644
index d2ebb3934b6696fd427c9bf09eb051cf7befe7f4..0000000000000000000000000000000000000000
--- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/scripts/conversion_ldm_uncond.py
+++ /dev/null
@@ -1,56 +0,0 @@
-import argparse
-
-from omegaconf import OmegaConf
-import torch
-
-from diffusers import DDIMScheduler, LDMPipeline, UNetLDMModel, VQModel
-
-
-def convert_ldm_original(checkpoint_path, config_path, output_path):
- config = OmegaConf.load(config_path)
- state_dict = torch.load(checkpoint_path, map_location="cpu")["model"]
- keys = list(state_dict.keys())
-
- # extract state_dict for VQVAE
- first_stage_dict = {}
- first_stage_key = "first_stage_model."
- for key in keys:
- if key.startswith(first_stage_key):
- first_stage_dict[key.replace(first_stage_key, "")] = state_dict[key]
-
- # extract state_dict for UNetLDM
- unet_state_dict = {}
- unet_key = "model.diffusion_model."
- for key in keys:
- if key.startswith(unet_key):
- unet_state_dict[key.replace(unet_key, "")] = state_dict[key]
-
- vqvae_init_args = config.model.params.first_stage_config.params
- unet_init_args = config.model.params.unet_config.params
-
- vqvae = VQModel(**vqvae_init_args).eval()
- vqvae.load_state_dict(first_stage_dict)
-
- unet = UNetLDMModel(**unet_init_args).eval()
- unet.load_state_dict(unet_state_dict)
-
- noise_scheduler = DDIMScheduler(
- timesteps=config.model.params.timesteps,
- beta_schedule="scaled_linear",
- beta_start=config.model.params.linear_start,
- beta_end=config.model.params.linear_end,
- clip_sample=False,
- )
-
- pipeline = LDMPipeline(vqvae, unet, noise_scheduler)
- pipeline.save_pretrained(output_path)
-
-
-if __name__ == "__main__":
- parser = argparse.ArgumentParser()
- parser.add_argument("--checkpoint_path", type=str, required=True)
- parser.add_argument("--config_path", type=str, required=True)
- parser.add_argument("--output_path", type=str, required=True)
- args = parser.parse_args()
-
- convert_ldm_original(args.checkpoint_path, args.config_path, args.output_path)
diff --git a/spaces/Andy1621/uniformer_image_detection/configs/res2net/cascade_mask_rcnn_r2_101_fpn_20e_coco.py b/spaces/Andy1621/uniformer_image_detection/configs/res2net/cascade_mask_rcnn_r2_101_fpn_20e_coco.py
deleted file mode 100644
index 50df4e2db500d575eaddd7538b49cc808e30b50e..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformer_image_detection/configs/res2net/cascade_mask_rcnn_r2_101_fpn_20e_coco.py
+++ /dev/null
@@ -1,4 +0,0 @@
-_base_ = '../cascade_rcnn/cascade_mask_rcnn_r50_fpn_20e_coco.py'
-model = dict(
- pretrained='open-mmlab://res2net101_v1d_26w_4s',
- backbone=dict(type='Res2Net', depth=101, scales=4, base_width=26))
diff --git a/spaces/Andy1621/uniformer_image_detection/configs/tridentnet/tridentnet_r50_caffe_1x_coco.py b/spaces/Andy1621/uniformer_image_detection/configs/tridentnet/tridentnet_r50_caffe_1x_coco.py
deleted file mode 100644
index a6a668c4e33611e2b69009741558d83558cc9b4f..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformer_image_detection/configs/tridentnet/tridentnet_r50_caffe_1x_coco.py
+++ /dev/null
@@ -1,53 +0,0 @@
-_base_ = [
- '../_base_/models/faster_rcnn_r50_caffe_c4.py',
- '../_base_/datasets/coco_detection.py',
- '../_base_/schedules/schedule_1x.py', '../_base_/default_runtime.py'
-]
-
-model = dict(
- type='TridentFasterRCNN',
- pretrained='open-mmlab://detectron2/resnet50_caffe',
- backbone=dict(
- type='TridentResNet',
- trident_dilations=(1, 2, 3),
- num_branch=3,
- test_branch_idx=1),
- roi_head=dict(type='TridentRoIHead', num_branch=3, test_branch_idx=1),
- train_cfg=dict(
- rpn_proposal=dict(max_per_img=500),
- rcnn=dict(
- sampler=dict(num=128, pos_fraction=0.5,
- add_gt_as_proposals=False))))
-
-# use caffe img_norm
-img_norm_cfg = dict(
- mean=[103.530, 116.280, 123.675], std=[1.0, 1.0, 1.0], to_rgb=False)
-train_pipeline = [
- dict(type='LoadImageFromFile'),
- dict(type='LoadAnnotations', with_bbox=True),
- dict(type='Resize', img_scale=(1333, 800), keep_ratio=True),
- dict(type='RandomFlip', flip_ratio=0.5),
- dict(type='Normalize', **img_norm_cfg),
- dict(type='Pad', size_divisor=32),
- dict(type='DefaultFormatBundle'),
- dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels'])
-]
-test_pipeline = [
- dict(type='LoadImageFromFile'),
- dict(
- type='MultiScaleFlipAug',
- img_scale=(1333, 800),
- flip=False,
- transforms=[
- dict(type='Resize', keep_ratio=True),
- dict(type='RandomFlip'),
- dict(type='Normalize', **img_norm_cfg),
- dict(type='Pad', size_divisor=32),
- dict(type='ImageToTensor', keys=['img']),
- dict(type='Collect', keys=['img'])
- ])
-]
-data = dict(
- train=dict(pipeline=train_pipeline),
- val=dict(pipeline=test_pipeline),
- test=dict(pipeline=test_pipeline))
diff --git a/spaces/Andy1621/uniformer_image_segmentation/README.md b/spaces/Andy1621/uniformer_image_segmentation/README.md
deleted file mode 100644
index e7fc71b41bc1cfe47578010d4116bc4e297fce2b..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformer_image_segmentation/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Uniformer_image_segmentation
-emoji: ⚡
-colorFrom: blue
-colorTo: pink
-sdk: gradio
-sdk_version: 3.0.4
-app_file: app.py
-pinned: false
-license: mit
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference
diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_internal/utils/egg_link.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_internal/utils/egg_link.py
deleted file mode 100644
index eb57ed1519f82adb79a3d2377e1f286df9d8ef6b..0000000000000000000000000000000000000000
--- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_internal/utils/egg_link.py
+++ /dev/null
@@ -1,72 +0,0 @@
-import os
-import re
-import sys
-from typing import List, Optional
-
-from pip._internal.locations import site_packages, user_site
-from pip._internal.utils.virtualenv import (
- running_under_virtualenv,
- virtualenv_no_global,
-)
-
-__all__ = [
- "egg_link_path_from_sys_path",
- "egg_link_path_from_location",
-]
-
-
-def _egg_link_name(raw_name: str) -> str:
- """
- Convert a Name metadata value to a .egg-link name, by applying
- the same substitution as pkg_resources's safe_name function.
- Note: we cannot use canonicalize_name because it has a different logic.
- """
- return re.sub("[^A-Za-z0-9.]+", "-", raw_name) + ".egg-link"
-
-
-def egg_link_path_from_sys_path(raw_name: str) -> Optional[str]:
- """
- Look for a .egg-link file for project name, by walking sys.path.
- """
- egg_link_name = _egg_link_name(raw_name)
- for path_item in sys.path:
- egg_link = os.path.join(path_item, egg_link_name)
- if os.path.isfile(egg_link):
- return egg_link
- return None
-
-
-def egg_link_path_from_location(raw_name: str) -> Optional[str]:
- """
- Return the path for the .egg-link file if it exists, otherwise, None.
-
- There's 3 scenarios:
- 1) not in a virtualenv
- try to find in site.USER_SITE, then site_packages
- 2) in a no-global virtualenv
- try to find in site_packages
- 3) in a yes-global virtualenv
- try to find in site_packages, then site.USER_SITE
- (don't look in global location)
-
- For #1 and #3, there could be odd cases, where there's an egg-link in 2
- locations.
-
- This method will just return the first one found.
- """
- sites: List[str] = []
- if running_under_virtualenv():
- sites.append(site_packages)
- if not virtualenv_no_global() and user_site:
- sites.append(user_site)
- else:
- if user_site:
- sites.append(user_site)
- sites.append(site_packages)
-
- egg_link_name = _egg_link_name(raw_name)
- for site in sites:
- egglink = os.path.join(site, egg_link_name)
- if os.path.isfile(egglink):
- return egglink
- return None
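-
-# A minimal usage sketch, assuming a hypothetical project named "My.Project"
-# installed in editable mode; the resulting path is illustrative. Runs of
-# characters other than letters, digits and dots in the name become "-":
-#
-#     from pip._internal.utils.egg_link import egg_link_path_from_location
-#
-#     link = egg_link_path_from_location("My.Project")
-#     if link is not None:
-#         print(link)  # e.g. ".../site-packages/My.Project.egg-link"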
diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/rich/_null_file.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/rich/_null_file.py
deleted file mode 100644
index b659673ef3c1d5431e6699898ae4d073b4be764b..0000000000000000000000000000000000000000
--- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/rich/_null_file.py
+++ /dev/null
@@ -1,69 +0,0 @@
-from types import TracebackType
-from typing import IO, Iterable, Iterator, List, Optional, Type
-
-
-class NullFile(IO[str]):
- def close(self) -> None:
- pass
-
- def isatty(self) -> bool:
- return False
-
- def read(self, __n: int = 1) -> str:
- return ""
-
- def readable(self) -> bool:
- return False
-
- def readline(self, __limit: int = 1) -> str:
- return ""
-
- def readlines(self, __hint: int = 1) -> List[str]:
- return []
-
- def seek(self, __offset: int, __whence: int = 1) -> int:
- return 0
-
- def seekable(self) -> bool:
- return False
-
- def tell(self) -> int:
- return 0
-
- def truncate(self, __size: Optional[int] = 1) -> int:
- return 0
-
- def writable(self) -> bool:
- return False
-
- def writelines(self, __lines: Iterable[str]) -> None:
- pass
-
- def __next__(self) -> str:
- return ""
-
- def __iter__(self) -> Iterator[str]:
- return iter([""])
-
- def __enter__(self) -> IO[str]:
- pass
-
- def __exit__(
- self,
- __t: Optional[Type[BaseException]],
- __value: Optional[BaseException],
- __traceback: Optional[TracebackType],
- ) -> None:
- pass
-
- def write(self, text: str) -> int:
- return 0
-
- def flush(self) -> None:
- pass
-
- def fileno(self) -> int:
- return -1
-
-
-NULL_FILE = NullFile()
diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/command/build_ext.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/command/build_ext.py
deleted file mode 100644
index cbfe3ec1c28529aade613b000d5b051807287deb..0000000000000000000000000000000000000000
--- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/command/build_ext.py
+++ /dev/null
@@ -1,383 +0,0 @@
-import os
-import sys
-import itertools
-from importlib.machinery import EXTENSION_SUFFIXES
-from importlib.util import cache_from_source as _compiled_file_name
-from typing import Dict, Iterator, List, Tuple
-
-from distutils.command.build_ext import build_ext as _du_build_ext
-from distutils.ccompiler import new_compiler
-from distutils.sysconfig import customize_compiler, get_config_var
-from distutils import log
-
-from setuptools.errors import BaseError
-from setuptools.extension import Extension, Library
-
-try:
- # Attempt to use Cython for building extensions, if available
- from Cython.Distutils.build_ext import build_ext as _build_ext
- # Additionally, assert that the compiler module will load
- # also. Ref #1229.
- __import__('Cython.Compiler.Main')
-except ImportError:
- _build_ext = _du_build_ext
-
-# make sure _config_vars is initialized
-get_config_var("LDSHARED")
-from distutils.sysconfig import _config_vars as _CONFIG_VARS # noqa
-
-
-def _customize_compiler_for_shlib(compiler):
- if sys.platform == "darwin":
- # building .dylib requires additional compiler flags on OSX; here we
- # temporarily substitute the pyconfig.h variables so that distutils'
- # 'customize_compiler' uses them before we build the shared libraries.
- tmp = _CONFIG_VARS.copy()
- try:
- # XXX Help! I don't have any idea whether these are right...
- _CONFIG_VARS['LDSHARED'] = (
- "gcc -Wl,-x -dynamiclib -undefined dynamic_lookup")
- _CONFIG_VARS['CCSHARED'] = " -dynamiclib"
- _CONFIG_VARS['SO'] = ".dylib"
- customize_compiler(compiler)
- finally:
- _CONFIG_VARS.clear()
- _CONFIG_VARS.update(tmp)
- else:
- customize_compiler(compiler)
-
-
-have_rtld = False
-use_stubs = False
-libtype = 'shared'
-
-if sys.platform == "darwin":
- use_stubs = True
-elif os.name != 'nt':
- try:
- import dl
- use_stubs = have_rtld = hasattr(dl, 'RTLD_NOW')
- except ImportError:
- pass
-
-
-def if_dl(s):
- return s if have_rtld else ''
-
-
-def get_abi3_suffix():
- """Return the file extension for an abi3-compliant Extension()"""
- for suffix in EXTENSION_SUFFIXES:
- if '.abi3' in suffix: # Unix
- return suffix
- elif suffix == '.pyd': # Windows
- return suffix
-
-
-class build_ext(_build_ext):
- editable_mode: bool = False
- inplace: bool = False
-
- def run(self):
- """Build extensions in build directory, then copy if --inplace"""
- old_inplace, self.inplace = self.inplace, 0
- _build_ext.run(self)
- self.inplace = old_inplace
- if old_inplace:
- self.copy_extensions_to_source()
-
- def _get_inplace_equivalent(self, build_py, ext: Extension) -> Tuple[str, str]:
- fullname = self.get_ext_fullname(ext.name)
- filename = self.get_ext_filename(fullname)
- modpath = fullname.split('.')
- package = '.'.join(modpath[:-1])
- package_dir = build_py.get_package_dir(package)
- inplace_file = os.path.join(package_dir, os.path.basename(filename))
- regular_file = os.path.join(self.build_lib, filename)
- return (inplace_file, regular_file)
-
- def copy_extensions_to_source(self):
- build_py = self.get_finalized_command('build_py')
- for ext in self.extensions:
- inplace_file, regular_file = self._get_inplace_equivalent(build_py, ext)
-
- # Always copy, even if source is older than destination, to ensure
- # that the right extensions for the current Python/platform are
- # used.
- if os.path.exists(regular_file) or not ext.optional:
- self.copy_file(regular_file, inplace_file, level=self.verbose)
-
- if ext._needs_stub:
- inplace_stub = self._get_equivalent_stub(ext, inplace_file)
- self._write_stub_file(inplace_stub, ext, compile=True)
- # Always compile stub and remove the original (leave the cache behind)
- # (this behaviour was observed in previous iterations of the code)
-
- def _get_equivalent_stub(self, ext: Extension, output_file: str) -> str:
- dir_ = os.path.dirname(output_file)
- _, _, name = ext.name.rpartition(".")
- return f"{os.path.join(dir_, name)}.py"
-
- def _get_output_mapping(self) -> Iterator[Tuple[str, str]]:
- if not self.inplace:
- return
-
- build_py = self.get_finalized_command('build_py')
- opt = self.get_finalized_command('install_lib').optimize or ""
-
- for ext in self.extensions:
- inplace_file, regular_file = self._get_inplace_equivalent(build_py, ext)
- yield (regular_file, inplace_file)
-
- if ext._needs_stub:
- # This version of `build_ext` always builds artifacts in another dir,
- # when "inplace=True" is given it just copies them back.
- # This is done in the `copy_extensions_to_source` function, which
- # always compile stub files via `_compile_and_remove_stub`.
- # At the end of the process, a `.pyc` stub file is created without the
- # corresponding `.py`.
-
- inplace_stub = self._get_equivalent_stub(ext, inplace_file)
- regular_stub = self._get_equivalent_stub(ext, regular_file)
- inplace_cache = _compiled_file_name(inplace_stub, optimization=opt)
- output_cache = _compiled_file_name(regular_stub, optimization=opt)
- yield (output_cache, inplace_cache)
-
- def get_ext_filename(self, fullname):
- so_ext = os.getenv('SETUPTOOLS_EXT_SUFFIX')
- if so_ext:
- filename = os.path.join(*fullname.split('.')) + so_ext
- else:
- filename = _build_ext.get_ext_filename(self, fullname)
- so_ext = get_config_var('EXT_SUFFIX')
-
- if fullname in self.ext_map:
- ext = self.ext_map[fullname]
- use_abi3 = getattr(ext, 'py_limited_api') and get_abi3_suffix()
- if use_abi3:
- filename = filename[:-len(so_ext)]
- so_ext = get_abi3_suffix()
- filename = filename + so_ext
- if isinstance(ext, Library):
- fn, ext = os.path.splitext(filename)
- return self.shlib_compiler.library_filename(fn, libtype)
- elif use_stubs and ext._links_to_dynamic:
- d, fn = os.path.split(filename)
- return os.path.join(d, 'dl-' + fn)
- return filename
-
- def initialize_options(self):
- _build_ext.initialize_options(self)
- self.shlib_compiler = None
- self.shlibs = []
- self.ext_map = {}
- self.editable_mode = False
-
- def finalize_options(self):
- _build_ext.finalize_options(self)
- self.extensions = self.extensions or []
- self.check_extensions_list(self.extensions)
- self.shlibs = [ext for ext in self.extensions
- if isinstance(ext, Library)]
- if self.shlibs:
- self.setup_shlib_compiler()
- for ext in self.extensions:
- ext._full_name = self.get_ext_fullname(ext.name)
- for ext in self.extensions:
- fullname = ext._full_name
- self.ext_map[fullname] = ext
-
- # distutils 3.1 will also ask for module names
- # XXX what to do with conflicts?
- self.ext_map[fullname.split('.')[-1]] = ext
-
- ltd = self.shlibs and self.links_to_dynamic(ext) or False
- ns = ltd and use_stubs and not isinstance(ext, Library)
- ext._links_to_dynamic = ltd
- ext._needs_stub = ns
- filename = ext._file_name = self.get_ext_filename(fullname)
- libdir = os.path.dirname(os.path.join(self.build_lib, filename))
- if ltd and libdir not in ext.library_dirs:
- ext.library_dirs.append(libdir)
- if ltd and use_stubs and os.curdir not in ext.runtime_library_dirs:
- ext.runtime_library_dirs.append(os.curdir)
-
- if self.editable_mode:
- self.inplace = True
-
- def setup_shlib_compiler(self):
- compiler = self.shlib_compiler = new_compiler(
- compiler=self.compiler, dry_run=self.dry_run, force=self.force
- )
- _customize_compiler_for_shlib(compiler)
-
- if self.include_dirs is not None:
- compiler.set_include_dirs(self.include_dirs)
- if self.define is not None:
- # 'define' option is a list of (name,value) tuples
- for (name, value) in self.define:
- compiler.define_macro(name, value)
- if self.undef is not None:
- for macro in self.undef:
- compiler.undefine_macro(macro)
- if self.libraries is not None:
- compiler.set_libraries(self.libraries)
- if self.library_dirs is not None:
- compiler.set_library_dirs(self.library_dirs)
- if self.rpath is not None:
- compiler.set_runtime_library_dirs(self.rpath)
- if self.link_objects is not None:
- compiler.set_link_objects(self.link_objects)
-
- # hack so distutils' build_extension() builds a library instead
- compiler.link_shared_object = link_shared_object.__get__(compiler)
-
- def get_export_symbols(self, ext):
- if isinstance(ext, Library):
- return ext.export_symbols
- return _build_ext.get_export_symbols(self, ext)
-
- def build_extension(self, ext):
- ext._convert_pyx_sources_to_lang()
- _compiler = self.compiler
- try:
- if isinstance(ext, Library):
- self.compiler = self.shlib_compiler
- _build_ext.build_extension(self, ext)
- if ext._needs_stub:
- build_lib = self.get_finalized_command('build_py').build_lib
- self.write_stub(build_lib, ext)
- finally:
- self.compiler = _compiler
-
- def links_to_dynamic(self, ext):
- """Return true if 'ext' links to a dynamic lib in the same package"""
- # XXX this should check to ensure the lib is actually being built
- # XXX as dynamic, and not just using a locally-found version or a
- # XXX static-compiled version
- libnames = dict.fromkeys([lib._full_name for lib in self.shlibs])
- pkg = '.'.join(ext._full_name.split('.')[:-1] + [''])
- return any(pkg + libname in libnames for libname in ext.libraries)
-
- def get_outputs(self) -> List[str]:
- if self.inplace:
- return list(self.get_output_mapping().keys())
- return sorted(_build_ext.get_outputs(self) + self.__get_stubs_outputs())
-
- def get_output_mapping(self) -> Dict[str, str]:
- """See :class:`setuptools.commands.build.SubCommand`"""
- mapping = self._get_output_mapping()
- return dict(sorted(mapping, key=lambda x: x[0]))
-
- def __get_stubs_outputs(self):
- # assemble the base name for each extension that needs a stub
- ns_ext_bases = (
- os.path.join(self.build_lib, *ext._full_name.split('.'))
- for ext in self.extensions
- if ext._needs_stub
- )
- # pair each base with the extension
- pairs = itertools.product(ns_ext_bases, self.__get_output_extensions())
- return list(base + fnext for base, fnext in pairs)
-
- def __get_output_extensions(self):
- yield '.py'
- yield '.pyc'
- if self.get_finalized_command('build_py').optimize:
- yield '.pyo'
-
- def write_stub(self, output_dir, ext, compile=False):
- stub_file = os.path.join(output_dir, *ext._full_name.split('.')) + '.py'
- self._write_stub_file(stub_file, ext, compile)
-
- def _write_stub_file(self, stub_file: str, ext: Extension, compile=False):
- log.info("writing stub loader for %s to %s", ext._full_name, stub_file)
- if compile and os.path.exists(stub_file):
- raise BaseError(stub_file + " already exists! Please delete.")
- if not self.dry_run:
- f = open(stub_file, 'w')
- f.write(
- '\n'.join([
- "def __bootstrap__():",
- " global __bootstrap__, __file__, __loader__",
- " import sys, os, pkg_resources, importlib.util" +
- if_dl(", dl"),
- " __file__ = pkg_resources.resource_filename"
- "(__name__,%r)"
- % os.path.basename(ext._file_name),
- " del __bootstrap__",
- " if '__loader__' in globals():",
- " del __loader__",
- if_dl(" old_flags = sys.getdlopenflags()"),
- " old_dir = os.getcwd()",
- " try:",
- " os.chdir(os.path.dirname(__file__))",
- if_dl(" sys.setdlopenflags(dl.RTLD_NOW)"),
- " spec = importlib.util.spec_from_file_location(",
- " __name__, __file__)",
- " mod = importlib.util.module_from_spec(spec)",
- " spec.loader.exec_module(mod)",
- " finally:",
- if_dl(" sys.setdlopenflags(old_flags)"),
- " os.chdir(old_dir)",
- "__bootstrap__()",
- "" # terminal \n
- ])
- )
- f.close()
- if compile:
- self._compile_and_remove_stub(stub_file)
-
- def _compile_and_remove_stub(self, stub_file: str):
- from distutils.util import byte_compile
-
- byte_compile([stub_file], optimize=0,
- force=True, dry_run=self.dry_run)
- optimize = self.get_finalized_command('install_lib').optimize
- if optimize > 0:
- byte_compile([stub_file], optimize=optimize,
- force=True, dry_run=self.dry_run)
- if os.path.exists(stub_file) and not self.dry_run:
- os.unlink(stub_file)
-
-
-if use_stubs or os.name == 'nt':
- # Build shared libraries
- #
- def link_shared_object(
- self, objects, output_libname, output_dir=None, libraries=None,
- library_dirs=None, runtime_library_dirs=None, export_symbols=None,
- debug=0, extra_preargs=None, extra_postargs=None, build_temp=None,
- target_lang=None):
- self.link(
- self.SHARED_LIBRARY, objects, output_libname,
- output_dir, libraries, library_dirs, runtime_library_dirs,
- export_symbols, debug, extra_preargs, extra_postargs,
- build_temp, target_lang
- )
-else:
- # Build static libraries everywhere else
- libtype = 'static'
-
- def link_shared_object(
- self, objects, output_libname, output_dir=None, libraries=None,
- library_dirs=None, runtime_library_dirs=None, export_symbols=None,
- debug=0, extra_preargs=None, extra_postargs=None, build_temp=None,
- target_lang=None):
- # XXX we need to either disallow these attrs on Library instances,
- # or warn/abort here if set, or something...
- # libraries=None, library_dirs=None, runtime_library_dirs=None,
- # export_symbols=None, extra_preargs=None, extra_postargs=None,
- # build_temp=None
-
- assert output_dir is None # distutils build_ext doesn't pass this
- output_dir, filename = os.path.split(output_libname)
- basename, ext = os.path.splitext(filename)
- if self.library_filename("x").startswith('lib'):
- # strip 'lib' prefix; this is kludgy if some platform uses
- # a different prefix
- basename = basename[3:]
-
- self.create_static_lib(
- objects, basename, output_dir, debug, target_lang
- )
diff --git a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/configs/common/optim.py b/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/configs/common/optim.py
deleted file mode 100644
index d39d3aaa546c17e831d21d1758b69e8c1609415e..0000000000000000000000000000000000000000
--- a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/configs/common/optim.py
+++ /dev/null
@@ -1,15 +0,0 @@
-import torch
-
-from detectron2.config import LazyCall as L
-from detectron2.solver.build import get_default_optimizer_params
-
-SGD = L(torch.optim.SGD)(
- params=L(get_default_optimizer_params)(
- # params.model is meant to be set to the model object, before instantiating
- # the optimizer.
- weight_decay_norm=0.0
- ),
- lr=0.02,
- momentum=0.9,
- weight_decay=1e-4,
-)
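-
-# A minimal sketch of how a LazyCall optimizer config like `SGD` above might be
-# materialized in a training script. `build_my_model` is a hypothetical
-# placeholder; `instantiate` is detectron2's LazyConfig resolver.
-#
-#     from detectron2.config import instantiate
-#
-#     model = build_my_model()        # however the script constructs the model
-#     SGD.params.model = model        # fill in the deferred `model` argument
-#     optimizer = instantiate(SGD)    # -> a torch.optim.SGD over model params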
diff --git a/spaces/Benson/text-generation/Examples/Captulo 5 Matemticas Clase 12 Pdf.md b/spaces/Benson/text-generation/Examples/Captulo 5 Matemticas Clase 12 Pdf.md
deleted file mode 100644
index c6b41e6828662c0444dab8e2edd1fbb9fc6f5734..0000000000000000000000000000000000000000
--- a/spaces/Benson/text-generation/Examples/Captulo 5 Matemticas Clase 12 Pdf.md
+++ /dev/null
@@ -1,183 +0,0 @@
-
-
Chapter 5 Maths Class 12 PDF Download
-
Are you looking for a reliable and easy way to prepare for your CBSE Class 12 Maths exam? Do you want to access the best study material for Chapter 5, Continuity and Differentiability? If so, you have come to the right place. In this article, we will tell you how to download the Chapter 5 Maths Class 12 PDF and why it is useful for your exam preparation. We will also provide the syllabus, important questions, and solutions for Chapter 5 of Class 12 Maths. So read on and get ready to ace your exam.
Chapter 5, Continuity and Differentiability, is one of the most important chapters in the CBSE Class 12 Maths syllabus. It deals with the concepts of continuity and differentiability of functions, their algebraic properties, derivatives of composite, implicit, inverse trigonometric, exponential, and logarithmic functions, logarithmic differentiation, derivatives of functions in parametric form, second-order derivatives, the mean value theorem, and Rolle's theorem. This chapter carries a weight of about 8 marks in the board exam and is also useful for competitive exams such as JEE and NEET.
-
To master this chapter, you need to understand the theory, practise the exercises, and solve previous years' questions. However, it can be difficult to carry all your books and notes everywhere. That is why downloading the Chapter 5 Maths Class 12 PDF is a smart idea. It lets you access the chapter anytime, anywhere, on your device.
There are many reasons why you should download the Chapter 5 Maths Class 12 PDF. Some of them are:
-
Benefits of the Chapter 5 Maths Class 12 PDF
-
-
It is free and easy to download from reliable sources such as the NCERT website or Vedantu.
-
It works on any device, such as a laptop, tablet, or smartphone.
-
-
It helps you revise the chapter quickly and effectively.
-
It provides the latest, up-to-date content as per the CBSE syllabus.
-
It enhances your learning experience with interactive features such as diagrams, graphs, examples, and exercises.
-
-
How to Download the Chapter 5 Maths Class 12 PDF?
-
To download the Chapter 5 Maths Class 12 PDF, you can follow these simple steps:
-
-
Go to the NCERT website or the Vedantu website.
-
Select the class, subject, and book name.
-
Click on the chapter name and open it in a new tab.
-
Click the download button or the save-as option.
-
Choose the location where you want to save the file.
-
Open the file and start studying.
-
-
Chapter 5 Maths Class 12 Syllabus
-
Before you start studying Chapter 5, Continuity and Differentiability, you should know the CBSE Class 12 Maths syllabus. The CBSE Class 12 Maths syllabus is divided into six units, namely Relations and Functions, Algebra, Calculus, Vectors and Three-Dimensional Geometry, Linear Programming, and Probability. The board exam carries a total of 100 marks, of which 80 are for the theory paper and 20 are for internal assessment. The duration of the theory paper is three hours.
-
Overview of the Chapter 5 Maths Class 12 Syllabus
-
-
Unit-wise Distribution of Marks
-
The following table shows the unit-wise distribution of marks for the CBSE Class 12 Maths syllabus:
-
-
-
-
| Unit | Marks |
| --- | --- |
| Relations and Functions | 8 |
| Algebra | 10 |
| Calculus | 35 |
| Vectors and Three-Dimensional Geometry | 14 |
| Linear Programming | 5 |
| Probability | 8 |
| Total | 80 |
-
Topics and Subtopics Covered
-
The following table shows the topics and subtopics covered in Chapter 5, Continuity and Differentiability:
-
-
-
| Topic | Subtopic |
| --- | --- |
| Continuity | Continuity at a point and on an interval. |
| Continuity | Algebra of continuous functions. |
| Continuity | Intermediate value theorem. |
| Differentiability | Differentiability at a point and on an interval. |
| Differentiability | Algebra of differentiable functions. |
| Differentiability | Derivatives of composite functions. |
| Differentiability | Derivatives of implicit functions. |
| Differentiability | Derivatives of inverse trigonometric functions. |
| Differentiability | Derivatives of exponential and logarithmic functions. |
| Differentiability | Logarithmic differentiation. |
| Differentiability | Derivatives of functions in parametric form. |
| Differentiability | Second-order derivatives. |
| Mean value theorems | Mean value theorem. |
| Mean value theorems | Rolle's theorem. |
-
-
-
Chapter 5 Maths Class 12 Important Questions
-
-
What are the important questions for Chapter 5 Maths Class 12?
-
Important questions for Chapter 5 Maths Class 12 are the questions that test your understanding of the concepts, formulas, and methods of the chapter. They can be of different types, such as short answer, long answer, multiple choice, fill in the blanks, true or false, match the following, and so on. They can also vary in difficulty level, from easy to moderate to hard.
-
Types of Important Questions for Chapter 5 Maths Class 12
-
Some of the types of important questions for Chapter 5 Maths Class 12 are:
-
-
Questions based on the definition and examples of continuity and differentiability of a function at a point and on an interval.
-
Questions based on the algebra of continuous and differentiable functions, such as finding the sum, difference, product, quotient, or composition of two or more functions.
-
Questions based on finding the derivatives of various types of functions, such as composite, implicit, inverse trigonometric, exponential, and logarithmic functions.
-
Questions based on applying logarithmic differentiation to find the derivatives of functions involving powers, products, or quotients (a worked sample follows this list).
-
Questions based on finding the derivatives of functions in parametric form, such as curves or equations involving two or more variables.
-
Questions based on finding the second-order derivatives of functions and their applications.
-
Questions based on verifying or applying the mean value theorem or Rolle's theorem to a given function or equation.
-
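For instance, a typical question of the logarithmic-differentiation type (a sample, not taken from any particular paper) is: differentiate y = (sin x)^x with respect to x. Taking logarithms,
$$ \ln y = x \ln(\sin x) \;\Rightarrow\; \frac{1}{y}\frac{dy}{dx} = \ln(\sin x) + x\cot x \;\Rightarrow\; \frac{dy}{dx} = (\sin x)^{x}\bigl(\ln(\sin x) + x\cot x\bigr). $$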
-
Sources of Important Questions for Chapter 5 Maths Class 12
-
Some of the sources of important questions for Chapter 5 Maths Class 12 are:
-
-
The NCERT textbook and exemplar book for Class 12 Maths.
-
Previous years' question papers and sample papers for the CBSE Class 12 Maths board exam.
-
-
Reference books and guides for CBSE Class 12 Maths, such as R.D. Sharma, R.S. Aggarwal, etc.
-
-
Chapter 5 Maths Class 12 Solutions
-
Another way to prepare well for your CBSE Class 12 Maths exam is to refer to the solutions for Chapter 5, Continuity and Differentiability. These are step-by-step explanations of, and answers to, the questions and exercises given in the NCERT textbook and other sources. Reading these solutions will help you understand the concepts, methods, and formulas of the chapter better. They will also help you check your answers, clear your doubts, and improve your accuracy.
-
What are the solutions for Chapter 5 Maths Class 12?
-
Solutions for Chapter 5 Maths Class 12 are detailed and accurate solutions to the questions and exercises given in the NCERT textbook and other sources for Chapter 5, Continuity and Differentiability. They are written by expert teachers and subject-matter experts with years of experience in teaching CBSE Class 12 Maths. They follow the latest CBSE syllabus and marking scheme and adhere to the CBSE guidelines.
-
Features of the Solutions for Chapter 5 Maths Class 12
-
Some of the features of the solutions for Chapter 5 Maths Class 12 are:
-
-
They cover all the topics and subtopics of the chapter in a systematic and logical manner.
-
They provide clear and concise explanations, with relevant examples and diagrams wherever necessary.
-
They use simple, easy-to-understand language that is suitable for CBSE Class 12 students.
-
They show all the steps and calculations involved in solving a problem, with proper reasoning and justification.
-
They highlight the important points, formulas, and tips to remember while solving a problem.
-
They also provide alternative methods or shortcuts for solving a problem wherever possible.
-
-
Sources of Solutions for Chapter 5 Maths Class 12
-
-
-
The NCERT solutions for Class 12 Maths Chapter 5, Continuity and Differentiability, available on the NCERT website or Vedantu.
-
The RD Sharma solutions for Class 12 Maths Chapter 5, Continuity and Differentiability, available on the Vedantu website or other online platforms.
-
The RS Aggarwal solutions for Class 12 Maths Chapter 5, Continuity and Differentiability, available on the Vedantu website or other online platforms.
-
Video lectures and live classes by expert teachers and tutors on YouTube, Vedantu, Toppr, etc.
-
-
Conclusion
-
In this article, we have given you all the information you need to download the Chapter 5 Maths Class 12 PDF and prepare for your CBSE Class 12 Maths exam. We have also provided the syllabus, important questions, and solutions for Chapter 5, Continuity and Differentiability. We hope this article has helped you understand the chapter better and boosted your confidence. We wish you all the best for your exam.
-
Frequently Asked Questions
-
Here are some of the most frequently asked questions about Chapter 5 Maths Class 12:
-
-
What is the difference between continuity and differentiability of a function?
-
A function is continuous at a point if the limit of the function at that point is equal to the value of the function at that point. A function is differentiable at a point if the derivative of the function at that point exists and is finite. A function can be continuous but not differentiable at a point, but if a function is differentiable at a point, then it is also continuous at that point.
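As an illustration, f(x) = |x| is continuous everywhere but not differentiable at 0, since the one-sided difference quotients disagree:
$$ \lim_{h \to 0^{-}} \frac{|0+h| - |0|}{h} = -1 \neq 1 = \lim_{h \to 0^{+}} \frac{|0+h| - |0|}{h}. $$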
-
What are the conditions for Rolle's theorem and the mean value theorem to be applicable?
-
For both theorems, the function f must be continuous on the closed interval [a, b] and differentiable on the open interval (a, b). Rolle's theorem additionally requires f(a) = f(b), and then guarantees some c in (a, b) with f'(c) = 0; the mean value theorem guarantees some c in (a, b) with f'(c) = (f(b) − f(a))/(b − a).
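As a quick worked check, take f(x) = x² on [1, 3], which is continuous and differentiable everywhere; the mean value theorem then gives
$$ f'(c) = \frac{f(3) - f(1)}{3 - 1} = \frac{9 - 1}{2} = 4 \;\Rightarrow\; 2c = 4 \;\Rightarrow\; c = 2 \in (1, 3). $$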
How do you find the derivatives of inverse trigonometric functions?
-
The derivatives of inverse trigonometric functions can be found using the method of implicit differentiation. For example, to find the derivative of y = sin⁻¹(x), we can write x = sin(y) and differentiate both sides with respect to x. We get 1 = cos(y) dy/dx, which implies dy/dx = 1/cos(y). Since cos(y) = √(1 − x²), we get dy/dx = 1/√(1 − x²). Similarly, we can find the derivatives of the other inverse trigonometric functions.
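In display form, the same steps read:
$$ y = \sin^{-1}x \;\Rightarrow\; x = \sin y \;\Rightarrow\; 1 = \cos y \,\frac{dy}{dx} \;\Rightarrow\; \frac{dy}{dx} = \frac{1}{\cos y} = \frac{1}{\sqrt{1 - x^{2}}}, \quad -1 < x < 1. $$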
-
How do you use logarithmic differentiation to find the derivatives of functions involving powers, products, or quotients?
-
Logarithmic differentiation is a technique that uses the properties of logarithms to simplify the differentiation of functions involving powers, products, or quotients. For example, to find the derivative of y = xˣ, we can take the natural logarithm of both sides and get ln(y) = x ln(x). We can then differentiate both sides with respect to x and get (1/y) dy/dx = ln(x) + 1. Multiplying both sides by y, we get dy/dx = y (ln(x) + 1). Since y = xˣ, we get dy/dx = xˣ (ln(x) + 1). Similarly, we can use logarithmic differentiation to find the derivatives of other functions involving powers, products, or quotients.
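In display form, the same example reads:
$$ y = x^{x} \;\Rightarrow\; \ln y = x \ln x \;\Rightarrow\; \frac{1}{y}\frac{dy}{dx} = \ln x + 1 \;\Rightarrow\; \frac{dy}{dx} = x^{x}(\ln x + 1), \quad x > 0. $$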
-
How do you find the derivatives of functions in parametric form?
-
A function in parametric form is a function expressed in terms of one or more parameters. For example, a curve can be represented by x = f(t) and y = g(t), where t is a parameter. To find the derivative of y with respect to x, we can use the chain rule and get dy/dx = (dy/dt)/(dx/dt). To find the second derivative of y with respect to x, we can apply the quotient rule to dy/dx = (dy/dt)/(dx/dt) and divide by dx/dt, which gives d²y/dx² = [(d²y/dt²)(dx/dt) − (dy/dt)(d²x/dt²)] / (dx/dt)³. Similarly, we can find the derivatives of other functions in parametric form.
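For instance, for the parametric curve x = t², y = 2t,
$$ \frac{dy}{dx} = \frac{dy/dt}{dx/dt} = \frac{2}{2t} = \frac{1}{t}, \qquad \frac{d^{2}y}{dx^{2}} = \frac{d}{dt}\!\left(\frac{1}{t}\right) \Big/ \frac{dx}{dt} = \frac{-1/t^{2}}{2t} = -\frac{1}{2t^{3}}. $$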
64aa2da5cf
-
-
\ No newline at end of file
diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/botocore/auth.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/botocore/auth.py
deleted file mode 100644
index da9b838e46c67658dfceea2465d92bc08ebf0a23..0000000000000000000000000000000000000000
--- a/spaces/Big-Web/MMSD/env/Lib/site-packages/botocore/auth.py
+++ /dev/null
@@ -1,990 +0,0 @@
-# Copyright (c) 2012-2013 Mitch Garnaat http://garnaat.org/
-# Copyright 2012-2014 Amazon.com, Inc. or its affiliates. All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"). You
-# may not use this file except in compliance with the License. A copy of
-# the License is located at
-#
-# http://aws.amazon.com/apache2.0/
-#
-# or in the "license" file accompanying this file. This file is
-# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
-# ANY KIND, either express or implied. See the License for the specific
-# language governing permissions and limitations under the License.
-import base64
-import calendar
-import datetime
-import functools
-import hmac
-import json
-import logging
-import time
-from collections.abc import Mapping
-from email.utils import formatdate
-from hashlib import sha1, sha256
-from operator import itemgetter
-
-from botocore.compat import (
- HAS_CRT,
- HTTPHeaders,
- encodebytes,
- ensure_unicode,
- parse_qs,
- quote,
- unquote,
- urlsplit,
- urlunsplit,
-)
-from botocore.exceptions import NoAuthTokenError, NoCredentialsError
-from botocore.utils import (
- is_valid_ipv6_endpoint_url,
- normalize_url_path,
- percent_encode_sequence,
-)
-
-# Imports for backwards compatibility
-from botocore.compat import MD5_AVAILABLE # noqa
-
-
-logger = logging.getLogger(__name__)
-
-
-EMPTY_SHA256_HASH = (
- 'e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855'
-)
-# This is the buffer size used when calculating sha256 checksums.
-# Experimenting with various buffer sizes showed that this value generally
-# gave the best result (in terms of performance).
-PAYLOAD_BUFFER = 1024 * 1024
-ISO8601 = '%Y-%m-%dT%H:%M:%SZ'
-SIGV4_TIMESTAMP = '%Y%m%dT%H%M%SZ'
-SIGNED_HEADERS_BLACKLIST = [
- 'expect',
- 'user-agent',
- 'x-amzn-trace-id',
-]
-UNSIGNED_PAYLOAD = 'UNSIGNED-PAYLOAD'
-STREAMING_UNSIGNED_PAYLOAD_TRAILER = 'STREAMING-UNSIGNED-PAYLOAD-TRAILER'
-
-
-def _host_from_url(url):
- # Given URL, derive value for host header. Ensure that value:
- # 1) is lowercase
- # 2) excludes port, if it was the default port
- # 3) excludes userinfo
- url_parts = urlsplit(url)
- host = url_parts.hostname # urlsplit's hostname is always lowercase
- if is_valid_ipv6_endpoint_url(url):
- host = f'[{host}]'
- default_ports = {
- 'http': 80,
- 'https': 443,
- }
- if url_parts.port is not None:
- if url_parts.port != default_ports.get(url_parts.scheme):
- host = '%s:%d' % (host, url_parts.port)
- return host
-
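-# Illustrative examples for _host_from_url (hypothetical URLs):
-#   _host_from_url("https://User:pw@Example.COM:443/key") -> "example.com"
-#       (hostname lowercased, userinfo dropped, default HTTPS port omitted)
-#   _host_from_url("http://example.com:8080/path") -> "example.com:8080"
-#       (non-default port kept)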
-
-def _get_body_as_dict(request):
- # For query services, request.data is form-encoded and is already a
- # dict, but for other services such as rest-json it could be a json
- # string or bytes. In those cases we attempt to load the data as a
- # dict.
- data = request.data
- if isinstance(data, bytes):
- data = json.loads(data.decode('utf-8'))
- elif isinstance(data, str):
- data = json.loads(data)
- return data
-
-
-class BaseSigner:
- REQUIRES_REGION = False
- REQUIRES_TOKEN = False
-
- def add_auth(self, request):
- raise NotImplementedError("add_auth")
-
-
-class TokenSigner(BaseSigner):
- REQUIRES_TOKEN = True
- """
- Signers that expect an authorization token to perform the authorization
- """
-
- def __init__(self, auth_token):
- self.auth_token = auth_token
-
-
-class SigV2Auth(BaseSigner):
- """
- Sign a request with Signature V2.
- """
-
- def __init__(self, credentials):
- self.credentials = credentials
-
- def calc_signature(self, request, params):
- logger.debug("Calculating signature using v2 auth.")
- split = urlsplit(request.url)
- path = split.path
- if len(path) == 0:
- path = '/'
- string_to_sign = f"{request.method}\n{split.netloc}\n{path}\n"
- lhmac = hmac.new(
- self.credentials.secret_key.encode("utf-8"), digestmod=sha256
- )
- pairs = []
- for key in sorted(params):
- # Any previous signature should not be a part of this
- # one, so we skip that particular key. This prevents
- # issues during retries.
- if key == 'Signature':
- continue
- value = str(params[key])
- quoted_key = quote(key.encode('utf-8'), safe='')
- quoted_value = quote(value.encode('utf-8'), safe='-_~')
- pairs.append(f'{quoted_key}={quoted_value}')
- qs = '&'.join(pairs)
- string_to_sign += qs
- logger.debug('String to sign: %s', string_to_sign)
- lhmac.update(string_to_sign.encode('utf-8'))
- b64 = base64.b64encode(lhmac.digest()).strip().decode('utf-8')
- return (qs, b64)
-
- def add_auth(self, request):
- # The auth handler is the last thing called in the
- # preparation phase of a prepared request.
- # Because of this we have to parse the query params
- # from the request body so we can update them with
- # the sigv2 auth params.
- if self.credentials is None:
- raise NoCredentialsError()
- if request.data:
- # POST
- params = request.data
- else:
- # GET
- params = request.params
- params['AWSAccessKeyId'] = self.credentials.access_key
- params['SignatureVersion'] = '2'
- params['SignatureMethod'] = 'HmacSHA256'
- params['Timestamp'] = time.strftime(ISO8601, time.gmtime())
- if self.credentials.token:
- params['SecurityToken'] = self.credentials.token
- qs, signature = self.calc_signature(request, params)
- params['Signature'] = signature
- return request
-
-
-class SigV3Auth(BaseSigner):
- def __init__(self, credentials):
- self.credentials = credentials
-
- def add_auth(self, request):
- if self.credentials is None:
- raise NoCredentialsError()
- if 'Date' in request.headers:
- del request.headers['Date']
- request.headers['Date'] = formatdate(usegmt=True)
- if self.credentials.token:
- if 'X-Amz-Security-Token' in request.headers:
- del request.headers['X-Amz-Security-Token']
- request.headers['X-Amz-Security-Token'] = self.credentials.token
- new_hmac = hmac.new(
- self.credentials.secret_key.encode('utf-8'), digestmod=sha256
- )
- new_hmac.update(request.headers['Date'].encode('utf-8'))
- encoded_signature = encodebytes(new_hmac.digest()).strip()
- signature = (
- f"AWS3-HTTPS AWSAccessKeyId={self.credentials.access_key},"
- f"Algorithm=HmacSHA256,Signature={encoded_signature.decode('utf-8')}"
- )
- if 'X-Amzn-Authorization' in request.headers:
- del request.headers['X-Amzn-Authorization']
- request.headers['X-Amzn-Authorization'] = signature
-
-
-class SigV4Auth(BaseSigner):
- """
- Sign a request with Signature V4.
- """
-
- REQUIRES_REGION = True
-
- def __init__(self, credentials, service_name, region_name):
- self.credentials = credentials
-        # We initialize these values here so the unit tests can have
-        # valid values. But these will get overridden in ``add_auth``
- # later for real requests.
- self._region_name = region_name
- self._service_name = service_name
-
- def _sign(self, key, msg, hex=False):
- if hex:
- sig = hmac.new(key, msg.encode('utf-8'), sha256).hexdigest()
- else:
- sig = hmac.new(key, msg.encode('utf-8'), sha256).digest()
- return sig
-
- def headers_to_sign(self, request):
- """
- Select the headers from the request that need to be included
- in the StringToSign.
- """
- header_map = HTTPHeaders()
- for name, value in request.headers.items():
- lname = name.lower()
- if lname not in SIGNED_HEADERS_BLACKLIST:
- header_map[lname] = value
- if 'host' not in header_map:
- # TODO: We should set the host ourselves, instead of relying on our
- # HTTP client to set it for us.
- header_map['host'] = _host_from_url(request.url)
- return header_map
-
- def canonical_query_string(self, request):
- # The query string can come from two parts. One is the
- # params attribute of the request. The other is from the request
- # url (in which case we have to re-split the url into its components
- # and parse out the query string component).
- if request.params:
- return self._canonical_query_string_params(request.params)
- else:
- return self._canonical_query_string_url(urlsplit(request.url))
-
- def _canonical_query_string_params(self, params):
- # [(key, value), (key2, value2)]
- key_val_pairs = []
- if isinstance(params, Mapping):
- params = params.items()
- for key, value in params:
- key_val_pairs.append(
- (quote(key, safe='-_.~'), quote(str(value), safe='-_.~'))
- )
- sorted_key_vals = []
- # Sort by the URI-encoded key names, and in the case of
- # repeated keys, then sort by the value.
- for key, value in sorted(key_val_pairs):
- sorted_key_vals.append(f'{key}={value}')
- canonical_query_string = '&'.join(sorted_key_vals)
- return canonical_query_string
-
- def _canonical_query_string_url(self, parts):
- canonical_query_string = ''
- if parts.query:
- # [(key, value), (key2, value2)]
- key_val_pairs = []
- for pair in parts.query.split('&'):
- key, _, value = pair.partition('=')
- key_val_pairs.append((key, value))
- sorted_key_vals = []
- # Sort by the URI-encoded key names, and in the case of
- # repeated keys, then sort by the value.
- for key, value in sorted(key_val_pairs):
- sorted_key_vals.append(f'{key}={value}')
- canonical_query_string = '&'.join(sorted_key_vals)
- return canonical_query_string
-
- def canonical_headers(self, headers_to_sign):
- """
- Return the headers that need to be included in the StringToSign
- in their canonical form by converting all header keys to lower
- case, sorting them in alphabetical order and then joining
- them into a string, separated by newlines.
- """
- headers = []
- sorted_header_names = sorted(set(headers_to_sign))
- for key in sorted_header_names:
- value = ','.join(
- self._header_value(v) for v in headers_to_sign.get_all(key)
- )
- headers.append(f'{key}:{ensure_unicode(value)}')
- return '\n'.join(headers)
-
- def _header_value(self, value):
- # From the sigv4 docs:
- # Lowercase(HeaderName) + ':' + Trimall(HeaderValue)
- #
- # The Trimall function removes excess white space before and after
- # values, and converts sequential spaces to a single space.
- return ' '.join(value.split())
-
- def signed_headers(self, headers_to_sign):
- headers = sorted(n.lower().strip() for n in set(headers_to_sign))
- return ';'.join(headers)
-
- def _is_streaming_checksum_payload(self, request):
- checksum_context = request.context.get('checksum', {})
- algorithm = checksum_context.get('request_algorithm')
- return isinstance(algorithm, dict) and algorithm.get('in') == 'trailer'
-
- def payload(self, request):
- if self._is_streaming_checksum_payload(request):
- return STREAMING_UNSIGNED_PAYLOAD_TRAILER
- elif not self._should_sha256_sign_payload(request):
- # When payload signing is disabled, we use this static string in
- # place of the payload checksum.
- return UNSIGNED_PAYLOAD
- request_body = request.body
- if request_body and hasattr(request_body, 'seek'):
- position = request_body.tell()
- read_chunksize = functools.partial(
- request_body.read, PAYLOAD_BUFFER
- )
- checksum = sha256()
- for chunk in iter(read_chunksize, b''):
- checksum.update(chunk)
- hex_checksum = checksum.hexdigest()
- request_body.seek(position)
- return hex_checksum
- elif request_body:
- # The request serialization has ensured that
- # request.body is a bytes() type.
- return sha256(request_body).hexdigest()
- else:
- return EMPTY_SHA256_HASH
-
- def _should_sha256_sign_payload(self, request):
- # Payloads will always be signed over insecure connections.
- if not request.url.startswith('https'):
- return True
-
- # Certain operations may have payload signing disabled by default.
- # Since we don't have access to the operation model, we pass in this
- # bit of metadata through the request context.
- return request.context.get('payload_signing_enabled', True)
-
- def canonical_request(self, request):
- cr = [request.method.upper()]
- path = self._normalize_url_path(urlsplit(request.url).path)
- cr.append(path)
- cr.append(self.canonical_query_string(request))
- headers_to_sign = self.headers_to_sign(request)
- cr.append(self.canonical_headers(headers_to_sign) + '\n')
- cr.append(self.signed_headers(headers_to_sign))
- if 'X-Amz-Content-SHA256' in request.headers:
- body_checksum = request.headers['X-Amz-Content-SHA256']
- else:
- body_checksum = self.payload(request)
- cr.append(body_checksum)
- return '\n'.join(cr)
-
- def _normalize_url_path(self, path):
- normalized_path = quote(normalize_url_path(path), safe='/~')
- return normalized_path
-
- def scope(self, request):
- scope = [self.credentials.access_key]
- scope.append(request.context['timestamp'][0:8])
- scope.append(self._region_name)
- scope.append(self._service_name)
- scope.append('aws4_request')
- return '/'.join(scope)
-
- def credential_scope(self, request):
- scope = []
- scope.append(request.context['timestamp'][0:8])
- scope.append(self._region_name)
- scope.append(self._service_name)
- scope.append('aws4_request')
- return '/'.join(scope)
-
- def string_to_sign(self, request, canonical_request):
- """
- Return the canonical StringToSign as well as a dict
- containing the original version of all headers that
- were included in the StringToSign.
- """
- sts = ['AWS4-HMAC-SHA256']
- sts.append(request.context['timestamp'])
- sts.append(self.credential_scope(request))
- sts.append(sha256(canonical_request.encode('utf-8')).hexdigest())
- return '\n'.join(sts)
-
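-    # For reference, the method below mirrors the AWS SigV4 signing-key
-    # derivation chain; the date/region/service values shown are illustrative
-    # placeholders, not real credentials:
-    #   k_date    = HMAC("AWS4" + secret_key, "20230101")
-    #   k_region  = HMAC(k_date, "us-east-1")
-    #   k_service = HMAC(k_region, "s3")
-    #   k_signing = HMAC(k_service, "aws4_request")
-    #   signature = hex(HMAC(k_signing, string_to_sign))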
- def signature(self, string_to_sign, request):
- key = self.credentials.secret_key
- k_date = self._sign(
- (f"AWS4{key}").encode(), request.context["timestamp"][0:8]
- )
- k_region = self._sign(k_date, self._region_name)
- k_service = self._sign(k_region, self._service_name)
- k_signing = self._sign(k_service, 'aws4_request')
- return self._sign(k_signing, string_to_sign, hex=True)
-
- def add_auth(self, request):
- if self.credentials is None:
- raise NoCredentialsError()
- datetime_now = datetime.datetime.utcnow()
- request.context['timestamp'] = datetime_now.strftime(SIGV4_TIMESTAMP)
- # This could be a retry. Make sure the previous
- # authorization header is removed first.
- self._modify_request_before_signing(request)
- canonical_request = self.canonical_request(request)
- logger.debug("Calculating signature using v4 auth.")
- logger.debug('CanonicalRequest:\n%s', canonical_request)
- string_to_sign = self.string_to_sign(request, canonical_request)
- logger.debug('StringToSign:\n%s', string_to_sign)
- signature = self.signature(string_to_sign, request)
- logger.debug('Signature:\n%s', signature)
-
- self._inject_signature_to_request(request, signature)
-
- def _inject_signature_to_request(self, request, signature):
- auth_str = ['AWS4-HMAC-SHA256 Credential=%s' % self.scope(request)]
- headers_to_sign = self.headers_to_sign(request)
- auth_str.append(
- f"SignedHeaders={self.signed_headers(headers_to_sign)}"
- )
- auth_str.append('Signature=%s' % signature)
- request.headers['Authorization'] = ', '.join(auth_str)
- return request
-
- def _modify_request_before_signing(self, request):
- if 'Authorization' in request.headers:
- del request.headers['Authorization']
- self._set_necessary_date_headers(request)
- if self.credentials.token:
- if 'X-Amz-Security-Token' in request.headers:
- del request.headers['X-Amz-Security-Token']
- request.headers['X-Amz-Security-Token'] = self.credentials.token
-
- if not request.context.get('payload_signing_enabled', True):
- if 'X-Amz-Content-SHA256' in request.headers:
- del request.headers['X-Amz-Content-SHA256']
- request.headers['X-Amz-Content-SHA256'] = UNSIGNED_PAYLOAD
-
- def _set_necessary_date_headers(self, request):
- # The spec allows for either the Date _or_ the X-Amz-Date value to be
- # used so we check both. If there's a Date header, we use the date
- # header. Otherwise we use the X-Amz-Date header.
- if 'Date' in request.headers:
- del request.headers['Date']
- datetime_timestamp = datetime.datetime.strptime(
- request.context['timestamp'], SIGV4_TIMESTAMP
- )
- request.headers['Date'] = formatdate(
- int(calendar.timegm(datetime_timestamp.timetuple()))
- )
- if 'X-Amz-Date' in request.headers:
- del request.headers['X-Amz-Date']
- else:
- if 'X-Amz-Date' in request.headers:
- del request.headers['X-Amz-Date']
- request.headers['X-Amz-Date'] = request.context['timestamp']
-
-
-class S3SigV4Auth(SigV4Auth):
- def _modify_request_before_signing(self, request):
- super()._modify_request_before_signing(request)
- if 'X-Amz-Content-SHA256' in request.headers:
- del request.headers['X-Amz-Content-SHA256']
-
- request.headers['X-Amz-Content-SHA256'] = self.payload(request)
-
- def _should_sha256_sign_payload(self, request):
- # S3 allows optional body signing, so to minimize the performance
- # impact, we opt to not SHA256 sign the body on streaming uploads,
- # provided that we're on https.
- client_config = request.context.get('client_config')
- s3_config = getattr(client_config, 's3', None)
-
- # The config could be None if it isn't set, or if the customer sets it
- # to None.
- if s3_config is None:
- s3_config = {}
-
- # The explicit configuration takes precedence over any implicit
- # configuration.
- sign_payload = s3_config.get('payload_signing_enabled', None)
- if sign_payload is not None:
- return sign_payload
-
- # We require that both a checksum be present and https be enabled
- # to implicitly disable body signing. The combination of TLS and
- # a checksum is sufficiently secure and durable for us to be
- # confident in the request without body signing.
- checksum_header = 'Content-MD5'
- checksum_context = request.context.get('checksum', {})
- algorithm = checksum_context.get('request_algorithm')
- if isinstance(algorithm, dict) and algorithm.get('in') == 'header':
- checksum_header = algorithm['name']
- if (
- not request.url.startswith("https")
- or checksum_header not in request.headers
- ):
- return True
-
- # If the input is streaming we disable body signing by default.
- if request.context.get('has_streaming_input', False):
- return False
-
- # If the S3-specific checks had no results, delegate to the generic
- # checks.
- return super()._should_sha256_sign_payload(request)
-
- def _normalize_url_path(self, path):
- # For S3, we do not normalize the path.
- return path
-
-
-class SigV4QueryAuth(SigV4Auth):
- DEFAULT_EXPIRES = 3600
-
- def __init__(
- self, credentials, service_name, region_name, expires=DEFAULT_EXPIRES
- ):
- super().__init__(credentials, service_name, region_name)
- self._expires = expires
-
- def _modify_request_before_signing(self, request):
- # We automatically set this header, so if it's the auto-set value we
- # want to get rid of it since it doesn't make sense for presigned urls.
- content_type = request.headers.get('content-type')
- blacklisted_content_type = (
- 'application/x-www-form-urlencoded; charset=utf-8'
- )
- if content_type == blacklisted_content_type:
- del request.headers['content-type']
-
- # Note that we're not including X-Amz-Signature.
- # From the docs: "The Canonical Query String must include all the query
- # parameters from the preceding table except for X-Amz-Signature.
- signed_headers = self.signed_headers(self.headers_to_sign(request))
-
- auth_params = {
- 'X-Amz-Algorithm': 'AWS4-HMAC-SHA256',
- 'X-Amz-Credential': self.scope(request),
- 'X-Amz-Date': request.context['timestamp'],
- 'X-Amz-Expires': self._expires,
- 'X-Amz-SignedHeaders': signed_headers,
- }
- if self.credentials.token is not None:
- auth_params['X-Amz-Security-Token'] = self.credentials.token
- # Now parse the original query string to a dict, inject our new query
- # params, and serialize back to a query string.
- url_parts = urlsplit(request.url)
- # parse_qs makes each value a list, but in our case we know we won't
- # have repeated keys so we know we have single element lists which we
- # can convert back to scalar values.
- query_string_parts = parse_qs(url_parts.query, keep_blank_values=True)
- query_dict = {k: v[0] for k, v in query_string_parts.items()}
-
- if request.params:
- query_dict.update(request.params)
- request.params = {}
- # The spec is particular about this. It *has* to be:
-        # https://<endpoint>?<operation params>&<auth params>
- # You can't mix the two types of params together, i.e just keep doing
- # new_query_params.update(op_params)
- # new_query_params.update(auth_params)
- # percent_encode_sequence(new_query_params)
- operation_params = ''
- if request.data:
- # We also need to move the body params into the query string. To
- # do this, we first have to convert it to a dict.
- query_dict.update(_get_body_as_dict(request))
- request.data = ''
- if query_dict:
- operation_params = percent_encode_sequence(query_dict) + '&'
- new_query_string = (
- f"{operation_params}{percent_encode_sequence(auth_params)}"
- )
- # url_parts is a tuple (and therefore immutable) so we need to create
- # a new url_parts with the new query string.
-        # <part> - <index>
- # scheme - 0
- # netloc - 1
- # path - 2
- # query - 3 <-- we're replacing this.
- # fragment - 4
- p = url_parts
- new_url_parts = (p[0], p[1], p[2], new_query_string, p[4])
- request.url = urlunsplit(new_url_parts)
-
- def _inject_signature_to_request(self, request, signature):
- # Rather than calculating an "Authorization" header, for the query
-        # param auth, we just append an 'X-Amz-Signature' param to the end
- # of the query string.
- request.url += '&X-Amz-Signature=%s' % signature
-
-
-class S3SigV4QueryAuth(SigV4QueryAuth):
- """S3 SigV4 auth using query parameters.
-
- This signer will sign a request using query parameters and signature
- version 4, i.e a "presigned url" signer.
-
- Based off of:
-
- http://docs.aws.amazon.com/AmazonS3/latest/API/sigv4-query-string-auth.html
-
- """
-
- def _normalize_url_path(self, path):
- # For S3, we do not normalize the path.
- return path
-
- def payload(self, request):
- # From the doc link above:
- # "You don't include a payload hash in the Canonical Request, because
- # when you create a presigned URL, you don't know anything about the
- # payload. Instead, you use a constant string "UNSIGNED-PAYLOAD".
- return UNSIGNED_PAYLOAD
-
-
-class S3SigV4PostAuth(SigV4Auth):
- """
- Presigns a s3 post
-
- Implementation doc here:
- http://docs.aws.amazon.com/AmazonS3/latest/API/sigv4-UsingHTTPPOST.html
- """
-
- def add_auth(self, request):
- datetime_now = datetime.datetime.utcnow()
- request.context['timestamp'] = datetime_now.strftime(SIGV4_TIMESTAMP)
-
- fields = {}
- if request.context.get('s3-presign-post-fields', None) is not None:
- fields = request.context['s3-presign-post-fields']
-
- policy = {}
- conditions = []
- if request.context.get('s3-presign-post-policy', None) is not None:
- policy = request.context['s3-presign-post-policy']
- if policy.get('conditions', None) is not None:
- conditions = policy['conditions']
-
- policy['conditions'] = conditions
-
- fields['x-amz-algorithm'] = 'AWS4-HMAC-SHA256'
- fields['x-amz-credential'] = self.scope(request)
- fields['x-amz-date'] = request.context['timestamp']
-
- conditions.append({'x-amz-algorithm': 'AWS4-HMAC-SHA256'})
- conditions.append({'x-amz-credential': self.scope(request)})
- conditions.append({'x-amz-date': request.context['timestamp']})
-
- if self.credentials.token is not None:
- fields['x-amz-security-token'] = self.credentials.token
- conditions.append({'x-amz-security-token': self.credentials.token})
-
- # Dump the base64 encoded policy into the fields dictionary.
- fields['policy'] = base64.b64encode(
- json.dumps(policy).encode('utf-8')
- ).decode('utf-8')
-
- fields['x-amz-signature'] = self.signature(fields['policy'], request)
-
- request.context['s3-presign-post-fields'] = fields
- request.context['s3-presign-post-policy'] = policy
-
-
-class HmacV1Auth(BaseSigner):
-
- # List of Query String Arguments of Interest
- QSAOfInterest = [
- 'accelerate',
- 'acl',
- 'cors',
- 'defaultObjectAcl',
- 'location',
- 'logging',
- 'partNumber',
- 'policy',
- 'requestPayment',
- 'torrent',
- 'versioning',
- 'versionId',
- 'versions',
- 'website',
- 'uploads',
- 'uploadId',
- 'response-content-type',
- 'response-content-language',
- 'response-expires',
- 'response-cache-control',
- 'response-content-disposition',
- 'response-content-encoding',
- 'delete',
- 'lifecycle',
- 'tagging',
- 'restore',
- 'storageClass',
- 'notification',
- 'replication',
- 'requestPayment',
- 'analytics',
- 'metrics',
- 'inventory',
- 'select',
- 'select-type',
- 'object-lock',
- ]
-
- def __init__(self, credentials, service_name=None, region_name=None):
- self.credentials = credentials
-
- def sign_string(self, string_to_sign):
- new_hmac = hmac.new(
- self.credentials.secret_key.encode('utf-8'), digestmod=sha1
- )
- new_hmac.update(string_to_sign.encode('utf-8'))
- return encodebytes(new_hmac.digest()).strip().decode('utf-8')
-
- def canonical_standard_headers(self, headers):
- interesting_headers = ['content-md5', 'content-type', 'date']
- hoi = []
- if 'Date' in headers:
- del headers['Date']
- headers['Date'] = self._get_date()
- for ih in interesting_headers:
- found = False
- for key in headers:
- lk = key.lower()
- if headers[key] is not None and lk == ih:
- hoi.append(headers[key].strip())
- found = True
- if not found:
- hoi.append('')
- return '\n'.join(hoi)
-
- def canonical_custom_headers(self, headers):
- hoi = []
- custom_headers = {}
- for key in headers:
- lk = key.lower()
- if headers[key] is not None:
- if lk.startswith('x-amz-'):
- custom_headers[lk] = ','.join(
- v.strip() for v in headers.get_all(key)
- )
- sorted_header_keys = sorted(custom_headers.keys())
- for key in sorted_header_keys:
- hoi.append(f"{key}:{custom_headers[key]}")
- return '\n'.join(hoi)
-
- def unquote_v(self, nv):
- """
- TODO: Do we need this?
- """
- if len(nv) == 1:
- return nv
- else:
- return (nv[0], unquote(nv[1]))
-
- def canonical_resource(self, split, auth_path=None):
- # don't include anything after the first ? in the resource...
- # unless it is one of the QSA of interest, defined above
- # NOTE:
- # The path in the canonical resource should always be the
- # full path including the bucket name, even for virtual-hosting
- # style addressing. The ``auth_path`` keeps track of the full
- # path for the canonical resource and would be passed in if
- # the client was using virtual-hosting style.
- if auth_path is not None:
- buf = auth_path
- else:
- buf = split.path
- if split.query:
- qsa = split.query.split('&')
- qsa = [a.split('=', 1) for a in qsa]
- qsa = [
- self.unquote_v(a) for a in qsa if a[0] in self.QSAOfInterest
- ]
- if len(qsa) > 0:
- qsa.sort(key=itemgetter(0))
- qsa = ['='.join(a) for a in qsa]
- buf += '?'
- buf += '&'.join(qsa)
- return buf
-
- def canonical_string(
- self, method, split, headers, expires=None, auth_path=None
- ):
- cs = method.upper() + '\n'
- cs += self.canonical_standard_headers(headers) + '\n'
- custom_headers = self.canonical_custom_headers(headers)
- if custom_headers:
- cs += custom_headers + '\n'
- cs += self.canonical_resource(split, auth_path=auth_path)
- return cs
-
- def get_signature(
- self, method, split, headers, expires=None, auth_path=None
- ):
- if self.credentials.token:
- del headers['x-amz-security-token']
- headers['x-amz-security-token'] = self.credentials.token
- string_to_sign = self.canonical_string(
- method, split, headers, auth_path=auth_path
- )
- logger.debug('StringToSign:\n%s', string_to_sign)
- return self.sign_string(string_to_sign)
-
- def add_auth(self, request):
- if self.credentials is None:
- raise NoCredentialsError
- logger.debug("Calculating signature using hmacv1 auth.")
- split = urlsplit(request.url)
- logger.debug('HTTP request method: %s', request.method)
- signature = self.get_signature(
- request.method, split, request.headers, auth_path=request.auth_path
- )
- self._inject_signature(request, signature)
-
- def _get_date(self):
- return formatdate(usegmt=True)
-
- def _inject_signature(self, request, signature):
- if 'Authorization' in request.headers:
- # We have to do this because request.headers is not
- # normal dictionary. It has the (unintuitive) behavior
- # of aggregating repeated setattr calls for the same
- # key value. For example:
- # headers['foo'] = 'a'; headers['foo'] = 'b'
- # list(headers) will print ['foo', 'foo'].
- del request.headers['Authorization']
-
- auth_header = f"AWS {self.credentials.access_key}:{signature}"
- request.headers['Authorization'] = auth_header
-
-
-class HmacV1QueryAuth(HmacV1Auth):
- """
- Generates a presigned request for s3.
-
- Spec from this document:
-
- http://docs.aws.amazon.com/AmazonS3/latest/dev/RESTAuthentication.html
- #RESTAuthenticationQueryStringAuth
-
- """
-
- DEFAULT_EXPIRES = 3600
-
- def __init__(self, credentials, expires=DEFAULT_EXPIRES):
- self.credentials = credentials
- self._expires = expires
-
- def _get_date(self):
- return str(int(time.time() + int(self._expires)))
-
- def _inject_signature(self, request, signature):
- query_dict = {}
- query_dict['AWSAccessKeyId'] = self.credentials.access_key
- query_dict['Signature'] = signature
-
- for header_key in request.headers:
- lk = header_key.lower()
- # For query string requests, Expires is used instead of the
- # Date header.
- if header_key == 'Date':
- query_dict['Expires'] = request.headers['Date']
- # We only want to include relevant headers in the query string.
- # These can be anything that starts with x-amz, is Content-MD5,
- # or is Content-Type.
- elif lk.startswith('x-amz-') or lk in (
- 'content-md5',
- 'content-type',
- ):
- query_dict[lk] = request.headers[lk]
- # Combine all of the identified headers into an encoded
- # query string
- new_query_string = percent_encode_sequence(query_dict)
-
- # Create a new url with the presigned url.
- p = urlsplit(request.url)
- if p[3]:
- # If there was a pre-existing query string, we should
- # add that back before injecting the new query string.
- new_query_string = f'{p[3]}&{new_query_string}'
- new_url_parts = (p[0], p[1], p[2], new_query_string, p[4])
- request.url = urlunsplit(new_url_parts)
-
-
-class HmacV1PostAuth(HmacV1Auth):
- """
- Generates a presigned post for s3.
-
- Spec from this document:
-
- http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingHTTPPOST.html
- """
-
- def add_auth(self, request):
- fields = {}
- if request.context.get('s3-presign-post-fields', None) is not None:
- fields = request.context['s3-presign-post-fields']
-
- policy = {}
- conditions = []
- if request.context.get('s3-presign-post-policy', None) is not None:
- policy = request.context['s3-presign-post-policy']
- if policy.get('conditions', None) is not None:
- conditions = policy['conditions']
-
- policy['conditions'] = conditions
-
- fields['AWSAccessKeyId'] = self.credentials.access_key
-
- if self.credentials.token is not None:
- fields['x-amz-security-token'] = self.credentials.token
- conditions.append({'x-amz-security-token': self.credentials.token})
-
- # Dump the base64 encoded policy into the fields dictionary.
- fields['policy'] = base64.b64encode(
- json.dumps(policy).encode('utf-8')
- ).decode('utf-8')
-
- fields['signature'] = self.sign_string(fields['policy'])
-
- request.context['s3-presign-post-fields'] = fields
- request.context['s3-presign-post-policy'] = policy
-
-
-class BearerAuth(TokenSigner):
- """
- Performs bearer token authorization by placing the bearer token in the
- Authorization header as specified by Section 2.1 of RFC 6750.
-
- https://datatracker.ietf.org/doc/html/rfc6750#section-2.1
- """
-
- def add_auth(self, request):
- if self.auth_token is None:
- raise NoAuthTokenError()
-
- auth_header = f'Bearer {self.auth_token.token}'
- if 'Authorization' in request.headers:
- del request.headers['Authorization']
- request.headers['Authorization'] = auth_header
-
-
-AUTH_TYPE_MAPS = {
- 'v2': SigV2Auth,
- 'v3': SigV3Auth,
- 'v3https': SigV3Auth,
- 's3': HmacV1Auth,
- 's3-query': HmacV1QueryAuth,
- 's3-presign-post': HmacV1PostAuth,
- 's3v4-presign-post': S3SigV4PostAuth,
- 'bearer': BearerAuth,
-}
-
-# Define v4 signers depending on if CRT is present
-if HAS_CRT:
- from botocore.crt.auth import CRT_AUTH_TYPE_MAPS
-
- AUTH_TYPE_MAPS.update(CRT_AUTH_TYPE_MAPS)
-else:
- AUTH_TYPE_MAPS.update(
- {
- 'v4': SigV4Auth,
- 'v4-query': SigV4QueryAuth,
- 's3v4': S3SigV4Auth,
- 's3v4-query': S3SigV4QueryAuth,
- }
- )
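For context on what `HmacV1Auth.canonical_string` and `sign_string` produce above: the signer joins the HTTP method, a few standard headers, any `x-amz-*` headers and the canonical resource with newlines, HMAC-SHA1 signs that string with the secret key, and base64-encodes the digest. A minimal standalone sketch of just that signing step, using made-up example credentials and a hypothetical GET request (not botocore's API), might look like this:

```python
import base64
import hashlib
import hmac

# Made-up example credentials and request pieces; illustrative only.
access_key = "AKIAIOSFODNN7EXAMPLE"
secret_key = "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"

# Canonical string: METHOD, content-md5, content-type, date, canonical resource.
string_to_sign = "\n".join([
    "GET",
    "",  # content-md5 (empty for this GET)
    "",  # content-type (empty for this GET)
    "Tue, 27 Mar 2007 19:36:42 +0000",
    "/examplebucket/photos/puppy.jpg",
])

digest = hmac.new(secret_key.encode("utf-8"),
                  string_to_sign.encode("utf-8"),
                  hashlib.sha1).digest()
signature = base64.b64encode(digest).decode("utf-8")
print(f"Authorization: AWS {access_key}:{signature}")
```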
diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/rich/palette.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/rich/palette.py
deleted file mode 100644
index fa0c4dd40381addf5b42fae4228b6d8fef03abd9..0000000000000000000000000000000000000000
--- a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/rich/palette.py
+++ /dev/null
@@ -1,100 +0,0 @@
-from math import sqrt
-from functools import lru_cache
-from typing import Sequence, Tuple, TYPE_CHECKING
-
-from .color_triplet import ColorTriplet
-
-if TYPE_CHECKING:
- from pip._vendor.rich.table import Table
-
-
-class Palette:
- """A palette of available colors."""
-
- def __init__(self, colors: Sequence[Tuple[int, int, int]]):
- self._colors = colors
-
- def __getitem__(self, number: int) -> ColorTriplet:
- return ColorTriplet(*self._colors[number])
-
- def __rich__(self) -> "Table":
- from pip._vendor.rich.color import Color
- from pip._vendor.rich.style import Style
- from pip._vendor.rich.text import Text
- from pip._vendor.rich.table import Table
-
- table = Table(
- "index",
- "RGB",
- "Color",
- title="Palette",
- caption=f"{len(self._colors)} colors",
- highlight=True,
- caption_justify="right",
- )
- for index, color in enumerate(self._colors):
- table.add_row(
- str(index),
- repr(color),
- Text(" " * 16, style=Style(bgcolor=Color.from_rgb(*color))),
- )
- return table
-
- # This is somewhat inefficient and needs caching
- @lru_cache(maxsize=1024)
- def match(self, color: Tuple[int, int, int]) -> int:
- """Find a color from a palette that most closely matches a given color.
-
- Args:
-            color (Tuple[int, int, int]): RGB components in the range 0 to 255.
-
- Returns:
-            int: Index of the closest matching color.
- """
- red1, green1, blue1 = color
- _sqrt = sqrt
- get_color = self._colors.__getitem__
-
- def get_color_distance(index: int) -> float:
- """Get the distance to a color."""
- red2, green2, blue2 = get_color(index)
- red_mean = (red1 + red2) // 2
- red = red1 - red2
- green = green1 - green2
- blue = blue1 - blue2
- return _sqrt(
- (((512 + red_mean) * red * red) >> 8)
- + 4 * green * green
- + (((767 - red_mean) * blue * blue) >> 8)
- )
-
- min_index = min(range(len(self._colors)), key=get_color_distance)
- return min_index
-
-
-if __name__ == "__main__": # pragma: no cover
- import colorsys
- from typing import Iterable
- from pip._vendor.rich.color import Color
- from pip._vendor.rich.console import Console, ConsoleOptions
- from pip._vendor.rich.segment import Segment
- from pip._vendor.rich.style import Style
-
- class ColorBox:
- def __rich_console__(
- self, console: Console, options: ConsoleOptions
- ) -> Iterable[Segment]:
- height = console.size.height - 3
- for y in range(0, height):
- for x in range(options.max_width):
- h = x / options.max_width
- l = y / (height + 1)
- r1, g1, b1 = colorsys.hls_to_rgb(h, l, 1.0)
- r2, g2, b2 = colorsys.hls_to_rgb(h, l + (1 / height / 2), 1.0)
- bgcolor = Color.from_rgb(r1 * 255, g1 * 255, b1 * 255)
- color = Color.from_rgb(r2 * 255, g2 * 255, b2 * 255)
- yield Segment("▄", Style(color=color, bgcolor=bgcolor))
- yield Segment.line()
-
- console = Console()
- console.print(ColorBox())
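`Palette.match` above scores candidates with a "redmean"-weighted RGB distance rather than a plain Euclidean one, which tracks perceived color difference somewhat better. A small usage sketch, assuming the vendored module is importable at the path shown (outside pip's vendored copy the import would normally be `from rich.palette import Palette`):

```python
from pip._vendor.rich.palette import Palette

# A tiny three-color palette; match() returns the index of the closest entry.
palette = Palette([(0, 0, 0), (255, 0, 0), (255, 255, 255)])
index = palette.match((250, 10, 10))  # nearest to the red entry, so index 1
print(index, palette[index])
```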
diff --git a/spaces/Bostoncake/ChatAssistant/README.md b/spaces/Bostoncake/ChatAssistant/README.md
deleted file mode 100644
index 7eeec313696830e08b4a26a4e6dac3c78eb13edb..0000000000000000000000000000000000000000
--- a/spaces/Bostoncake/ChatAssistant/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: ChatAssistant
-emoji: 💩
-colorFrom: red
-colorTo: pink
-sdk: gradio
-sdk_version: 3.22.1
-app_file: app.py
-pinned: false
-license: apache-2.0
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/detectron2/export/caffe2_export.py b/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/detectron2/export/caffe2_export.py
deleted file mode 100644
index ccac809d7bf49ab144b5f0a34f57e00c3534ad60..0000000000000000000000000000000000000000
--- a/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/detectron2/export/caffe2_export.py
+++ /dev/null
@@ -1,204 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
-
-import copy
-import io
-import logging
-import numpy as np
-from typing import List
-import onnx
-import torch
-from caffe2.proto import caffe2_pb2
-from caffe2.python import core
-from caffe2.python.onnx.backend import Caffe2Backend
-from tabulate import tabulate
-from termcolor import colored
-from torch.onnx import OperatorExportTypes
-
-from .shared import (
- ScopedWS,
- construct_init_net_from_params,
- fuse_alias_placeholder,
- fuse_copy_between_cpu_and_gpu,
- get_params_from_init_net,
- group_norm_replace_aten_with_caffe2,
- infer_device_type,
- remove_dead_end_ops,
- remove_reshape_for_fc,
- save_graph,
-)
-
-logger = logging.getLogger(__name__)
-
-
-def export_onnx_model(model, inputs):
- """
- Trace and export a model to onnx format.
-
- Args:
- model (nn.Module):
- inputs (tuple[args]): the model will be called by `model(*inputs)`
-
- Returns:
- an onnx model
- """
- assert isinstance(model, torch.nn.Module)
-
- # make sure all modules are in eval mode, onnx may change the training state
- # of the module if the states are not consistent
- def _check_eval(module):
- assert not module.training
-
- model.apply(_check_eval)
-
- # Export the model to ONNX
- with torch.no_grad():
- with io.BytesIO() as f:
- torch.onnx.export(
- model,
- inputs,
- f,
- operator_export_type=OperatorExportTypes.ONNX_ATEN_FALLBACK,
- # verbose=True, # NOTE: uncomment this for debugging
- # export_params=True,
- )
- onnx_model = onnx.load_from_string(f.getvalue())
-
- # Apply ONNX's Optimization
- all_passes = onnx.optimizer.get_available_passes()
- passes = ["fuse_bn_into_conv"]
- assert all(p in all_passes for p in passes)
- onnx_model = onnx.optimizer.optimize(onnx_model, passes)
- return onnx_model
-
-
-def _op_stats(net_def):
- type_count = {}
- for t in [op.type for op in net_def.op]:
- type_count[t] = type_count.get(t, 0) + 1
- type_count_list = sorted(type_count.items(), key=lambda kv: kv[0]) # alphabet
- type_count_list = sorted(type_count_list, key=lambda kv: -kv[1]) # count
- return "\n".join("{:>4}x {}".format(count, name) for name, count in type_count_list)
-
-
-def _assign_device_option(
- predict_net: caffe2_pb2.NetDef, init_net: caffe2_pb2.NetDef, tensor_inputs: List[torch.Tensor]
-):
- """
- ONNX exported network doesn't have concept of device, assign necessary
- device option for each op in order to make it runable on GPU runtime.
- """
-
- def _get_device_type(torch_tensor):
- assert torch_tensor.device.type in ["cpu", "cuda"]
- assert torch_tensor.device.index == 0
- return torch_tensor.device.type
-
- def _assign_op_device_option(net_proto, net_ssa, blob_device_types):
- for op, ssa_i in zip(net_proto.op, net_ssa):
- if op.type in ["CopyCPUToGPU", "CopyGPUToCPU"]:
- op.device_option.CopyFrom(core.DeviceOption(caffe2_pb2.CUDA, 0))
- else:
- devices = [blob_device_types[b] for b in ssa_i[0] + ssa_i[1]]
- assert all(d == devices[0] for d in devices)
- if devices[0] == "cuda":
- op.device_option.CopyFrom(core.DeviceOption(caffe2_pb2.CUDA, 0))
-
- # update ops in predict_net
- predict_net_input_device_types = {
- (name, 0): _get_device_type(tensor)
- for name, tensor in zip(predict_net.external_input, tensor_inputs)
- }
- predict_net_device_types = infer_device_type(
- predict_net, known_status=predict_net_input_device_types, device_name_style="pytorch"
- )
- predict_net_ssa, _ = core.get_ssa(predict_net)
- _assign_op_device_option(predict_net, predict_net_ssa, predict_net_device_types)
-
- # update ops in init_net
- init_net_ssa, versions = core.get_ssa(init_net)
- init_net_output_device_types = {
- (name, versions[name]): predict_net_device_types[(name, 0)]
- for name in init_net.external_output
- }
- init_net_device_types = infer_device_type(
- init_net, known_status=init_net_output_device_types, device_name_style="pytorch"
- )
- _assign_op_device_option(init_net, init_net_ssa, init_net_device_types)
-
-
-def export_caffe2_detection_model(model: torch.nn.Module, tensor_inputs: List[torch.Tensor]):
- """
- Export a caffe2-compatible Detectron2 model to caffe2 format via ONNX.
-
- Arg:
- model: a caffe2-compatible version of detectron2 model, defined in caffe2_modeling.py
- tensor_inputs: a list of tensors that caffe2 model takes as input.
- """
- model = copy.deepcopy(model)
- assert isinstance(model, torch.nn.Module)
- assert hasattr(model, "encode_additional_info")
-
- # Export via ONNX
- logger.info("Exporting a {} model via ONNX ...".format(type(model).__name__))
- onnx_model = export_onnx_model(model, (tensor_inputs,))
- # Convert ONNX model to Caffe2 protobuf
- init_net, predict_net = Caffe2Backend.onnx_graph_to_caffe2_net(onnx_model)
- ops_table = [[op.type, op.input, op.output] for op in predict_net.op]
- table = tabulate(ops_table, headers=["type", "input", "output"], tablefmt="pipe")
- logger.info(
- "ONNX export Done. Exported predict_net (before optimizations):\n" + colored(table, "cyan")
- )
-
- # Apply protobuf optimization
- fuse_alias_placeholder(predict_net, init_net)
- if any(t.device.type != "cpu" for t in tensor_inputs):
- fuse_copy_between_cpu_and_gpu(predict_net)
- remove_dead_end_ops(init_net)
- _assign_device_option(predict_net, init_net, tensor_inputs)
- params, device_options = get_params_from_init_net(init_net)
- predict_net, params = remove_reshape_for_fc(predict_net, params)
- init_net = construct_init_net_from_params(params, device_options)
- group_norm_replace_aten_with_caffe2(predict_net)
-
- # Record necessary information for running the pb model in Detectron2 system.
- model.encode_additional_info(predict_net, init_net)
-
- logger.info("Operators used in predict_net: \n{}".format(_op_stats(predict_net)))
- logger.info("Operators used in init_net: \n{}".format(_op_stats(init_net)))
-
- return predict_net, init_net
-
-
-def run_and_save_graph(predict_net, init_net, tensor_inputs, graph_save_path):
- """
- Run the caffe2 model on given inputs, recording the shape and draw the graph.
-
- predict_net/init_net: caffe2 model.
- tensor_inputs: a list of tensors that caffe2 model takes as input.
- graph_save_path: path for saving graph of exported model.
- """
-
- logger.info("Saving graph of ONNX exported model to {} ...".format(graph_save_path))
- save_graph(predict_net, graph_save_path, op_only=False)
-
- # Run the exported Caffe2 net
- logger.info("Running ONNX exported model ...")
- with ScopedWS("__ws_tmp__", True) as ws:
- ws.RunNetOnce(init_net)
- initialized_blobs = set(ws.Blobs())
- uninitialized = [inp for inp in predict_net.external_input if inp not in initialized_blobs]
- for name, blob in zip(uninitialized, tensor_inputs):
- ws.FeedBlob(name, blob)
-
- try:
- ws.RunNetOnce(predict_net)
- except RuntimeError as e:
- logger.warning("Encountered RuntimeError: \n{}".format(str(e)))
-
- ws_blobs = {b: ws.FetchBlob(b) for b in ws.Blobs()}
- blob_sizes = {b: ws_blobs[b].shape for b in ws_blobs if isinstance(ws_blobs[b], np.ndarray)}
-
- logger.info("Saving graph with blob shapes to {} ...".format(graph_save_path))
- save_graph(predict_net, graph_save_path, op_only=False, blob_sizes=blob_sizes)
-
- return ws_blobs
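`export_onnx_model` above traces an eval-mode module with `torch.onnx.export` into an in-memory buffer before handing the bytes to the Caffe2 backend. A stripped-down sketch of just that tracing step on a toy module (not a detectron2 model) could be:

```python
import io

import torch
import torch.nn as nn

# Toy stand-in for a traceable model; detectron2's caffe2-compatible models are far larger.
model = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU()).eval()
dummy_input = torch.randn(1, 3, 64, 64)

with torch.no_grad(), io.BytesIO() as f:
    # Export to an in-memory buffer instead of a file on disk.
    torch.onnx.export(model, (dummy_input,), f)
    onnx_bytes = f.getvalue()

print(f"Exported {len(onnx_bytes)} bytes of ONNX")
```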
diff --git a/spaces/CVPR/LIVE/pybind11/tests/test_stl.py b/spaces/CVPR/LIVE/pybind11/tests/test_stl.py
deleted file mode 100644
index 141b3e8492c7400e4d0980dd9bc6347f5229f80a..0000000000000000000000000000000000000000
--- a/spaces/CVPR/LIVE/pybind11/tests/test_stl.py
+++ /dev/null
@@ -1,252 +0,0 @@
-# -*- coding: utf-8 -*-
-import pytest
-
-from pybind11_tests import stl as m
-from pybind11_tests import UserType
-from pybind11_tests import ConstructorStats
-
-
-def test_vector(doc):
- """std::vector <-> list"""
- lst = m.cast_vector()
- assert lst == [1]
- lst.append(2)
- assert m.load_vector(lst)
- assert m.load_vector(tuple(lst))
-
- assert m.cast_bool_vector() == [True, False]
- assert m.load_bool_vector([True, False])
-
- assert doc(m.cast_vector) == "cast_vector() -> List[int]"
- assert doc(m.load_vector) == "load_vector(arg0: List[int]) -> bool"
-
- # Test regression caused by 936: pointers to stl containers weren't castable
- assert m.cast_ptr_vector() == ["lvalue", "lvalue"]
-
-
-def test_deque(doc):
- """std::deque <-> list"""
- lst = m.cast_deque()
- assert lst == [1]
- lst.append(2)
- assert m.load_deque(lst)
- assert m.load_deque(tuple(lst))
-
-
-def test_array(doc):
- """std::array <-> list"""
- lst = m.cast_array()
- assert lst == [1, 2]
- assert m.load_array(lst)
-
- assert doc(m.cast_array) == "cast_array() -> List[int[2]]"
- assert doc(m.load_array) == "load_array(arg0: List[int[2]]) -> bool"
-
-
-def test_valarray(doc):
- """std::valarray <-> list"""
- lst = m.cast_valarray()
- assert lst == [1, 4, 9]
- assert m.load_valarray(lst)
-
- assert doc(m.cast_valarray) == "cast_valarray() -> List[int]"
- assert doc(m.load_valarray) == "load_valarray(arg0: List[int]) -> bool"
-
-
-def test_map(doc):
- """std::map <-> dict"""
- d = m.cast_map()
- assert d == {"key": "value"}
- assert "key" in d
- d["key2"] = "value2"
- assert "key2" in d
- assert m.load_map(d)
-
- assert doc(m.cast_map) == "cast_map() -> Dict[str, str]"
- assert doc(m.load_map) == "load_map(arg0: Dict[str, str]) -> bool"
-
-
-def test_set(doc):
- """std::set <-> set"""
- s = m.cast_set()
- assert s == {"key1", "key2"}
- s.add("key3")
- assert m.load_set(s)
-
- assert doc(m.cast_set) == "cast_set() -> Set[str]"
- assert doc(m.load_set) == "load_set(arg0: Set[str]) -> bool"
-
-
-def test_recursive_casting():
- """Tests that stl casters preserve lvalue/rvalue context for container values"""
- assert m.cast_rv_vector() == ["rvalue", "rvalue"]
- assert m.cast_lv_vector() == ["lvalue", "lvalue"]
- assert m.cast_rv_array() == ["rvalue", "rvalue", "rvalue"]
- assert m.cast_lv_array() == ["lvalue", "lvalue"]
- assert m.cast_rv_map() == {"a": "rvalue"}
- assert m.cast_lv_map() == {"a": "lvalue", "b": "lvalue"}
- assert m.cast_rv_nested() == [[[{"b": "rvalue", "c": "rvalue"}], [{"a": "rvalue"}]]]
- assert m.cast_lv_nested() == {
- "a": [[["lvalue", "lvalue"]], [["lvalue", "lvalue"]]],
- "b": [[["lvalue", "lvalue"], ["lvalue", "lvalue"]]]
- }
-
- # Issue #853 test case:
- z = m.cast_unique_ptr_vector()
- assert z[0].value == 7 and z[1].value == 42
-
-
-def test_move_out_container():
- """Properties use the `reference_internal` policy by default. If the underlying function
- returns an rvalue, the policy is automatically changed to `move` to avoid referencing
- a temporary. In case the return value is a container of user-defined types, the policy
- also needs to be applied to the elements, not just the container."""
- c = m.MoveOutContainer()
- moved_out_list = c.move_list
- assert [x.value for x in moved_out_list] == [0, 1, 2]
-
-
-@pytest.mark.skipif(not hasattr(m, "has_optional"), reason='no ')
-def test_optional():
- assert m.double_or_zero(None) == 0
- assert m.double_or_zero(42) == 84
- pytest.raises(TypeError, m.double_or_zero, 'foo')
-
- assert m.half_or_none(0) is None
- assert m.half_or_none(42) == 21
- pytest.raises(TypeError, m.half_or_none, 'foo')
-
- assert m.test_nullopt() == 42
- assert m.test_nullopt(None) == 42
- assert m.test_nullopt(42) == 42
- assert m.test_nullopt(43) == 43
-
- assert m.test_no_assign() == 42
- assert m.test_no_assign(None) == 42
- assert m.test_no_assign(m.NoAssign(43)) == 43
- pytest.raises(TypeError, m.test_no_assign, 43)
-
- assert m.nodefer_none_optional(None)
-
- holder = m.OptionalHolder()
- mvalue = holder.member
- assert mvalue.initialized
- assert holder.member_initialized()
-
-
-@pytest.mark.skipif(not hasattr(m, "has_exp_optional"), reason='no ')
-def test_exp_optional():
- assert m.double_or_zero_exp(None) == 0
- assert m.double_or_zero_exp(42) == 84
- pytest.raises(TypeError, m.double_or_zero_exp, 'foo')
-
- assert m.half_or_none_exp(0) is None
- assert m.half_or_none_exp(42) == 21
- pytest.raises(TypeError, m.half_or_none_exp, 'foo')
-
- assert m.test_nullopt_exp() == 42
- assert m.test_nullopt_exp(None) == 42
- assert m.test_nullopt_exp(42) == 42
- assert m.test_nullopt_exp(43) == 43
-
- assert m.test_no_assign_exp() == 42
- assert m.test_no_assign_exp(None) == 42
- assert m.test_no_assign_exp(m.NoAssign(43)) == 43
- pytest.raises(TypeError, m.test_no_assign_exp, 43)
-
- holder = m.OptionalExpHolder()
- mvalue = holder.member
- assert mvalue.initialized
- assert holder.member_initialized()
-
-
-@pytest.mark.skipif(not hasattr(m, "load_variant"), reason='no ')
-def test_variant(doc):
- assert m.load_variant(1) == "int"
- assert m.load_variant("1") == "std::string"
- assert m.load_variant(1.0) == "double"
- assert m.load_variant(None) == "std::nullptr_t"
-
- assert m.load_variant_2pass(1) == "int"
- assert m.load_variant_2pass(1.0) == "double"
-
- assert m.cast_variant() == (5, "Hello")
-
- assert doc(m.load_variant) == "load_variant(arg0: Union[int, str, float, None]) -> str"
-
-
-def test_vec_of_reference_wrapper():
- """#171: Can't return reference wrappers (or STL structures containing them)"""
- assert str(m.return_vec_of_reference_wrapper(UserType(4))) == \
- "[UserType(1), UserType(2), UserType(3), UserType(4)]"
-
-
-def test_stl_pass_by_pointer(msg):
- """Passing nullptr or None to an STL container pointer is not expected to work"""
- with pytest.raises(TypeError) as excinfo:
- m.stl_pass_by_pointer() # default value is `nullptr`
- assert msg(excinfo.value) == """
- stl_pass_by_pointer(): incompatible function arguments. The following argument types are supported:
- 1. (v: List[int] = None) -> List[int]
-
- Invoked with:
- """ # noqa: E501 line too long
-
- with pytest.raises(TypeError) as excinfo:
- m.stl_pass_by_pointer(None)
- assert msg(excinfo.value) == """
- stl_pass_by_pointer(): incompatible function arguments. The following argument types are supported:
- 1. (v: List[int] = None) -> List[int]
-
- Invoked with: None
- """ # noqa: E501 line too long
-
- assert m.stl_pass_by_pointer([1, 2, 3]) == [1, 2, 3]
-
-
-def test_missing_header_message():
-    """Trying convert `list` to a `std::vector`, or vice versa, without including <pybind11/stl.h>
- should result in a helpful suggestion in the error message"""
- import pybind11_cross_module_tests as cm
-
-    expected_message = ("Did you forget to `#include <pybind11/stl.h>`? Or <pybind11/complex.h>,\n"
-                        "<pybind11/functional.h>, <pybind11/chrono.h>, etc. Some automatic\n"
-                        "conversions are optional and require extra headers to be included\n"
-                        "when compiling your pybind11 module.")
-
- with pytest.raises(TypeError) as excinfo:
- cm.missing_header_arg([1.0, 2.0, 3.0])
- assert expected_message in str(excinfo.value)
-
- with pytest.raises(TypeError) as excinfo:
- cm.missing_header_return()
- assert expected_message in str(excinfo.value)
-
-
-def test_function_with_string_and_vector_string_arg():
- """Check if a string is NOT implicitly converted to a list, which was the
- behavior before fix of issue #1258"""
- assert m.func_with_string_or_vector_string_arg_overload(('A', 'B', )) == 2
- assert m.func_with_string_or_vector_string_arg_overload(['A', 'B']) == 2
- assert m.func_with_string_or_vector_string_arg_overload('A') == 3
-
-
-def test_stl_ownership():
- cstats = ConstructorStats.get(m.Placeholder)
- assert cstats.alive() == 0
- r = m.test_stl_ownership()
- assert len(r) == 1
- del r
- assert cstats.alive() == 0
-
-
-def test_array_cast_sequence():
- assert m.array_cast_sequence((1, 2, 3)) == [1, 2, 3]
-
-
-def test_issue_1561():
- """ check fix for issue #1561 """
- bar = m.Issue1561Outer()
- bar.list = [m.Issue1561Inner('bar')]
- bar.list
- assert bar.list[0].data == 'bar'
diff --git a/spaces/CVPR/LIVE/thrust/thrust/detail/functional/placeholder.h b/spaces/CVPR/LIVE/thrust/thrust/detail/functional/placeholder.h
deleted file mode 100644
index d0832cfecb1c70dd28d78c44349f0ee5ad78c0fa..0000000000000000000000000000000000000000
--- a/spaces/CVPR/LIVE/thrust/thrust/detail/functional/placeholder.h
+++ /dev/null
@@ -1,39 +0,0 @@
-/*
- * Copyright 2008-2013 NVIDIA Corporation
- *
- * Licensed under the Apache License, Version 2.0 (the "License");
- * you may not use this file except in compliance with the License.
- * You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-#pragma once
-
-#include <thrust/detail/config.h>
-#include <thrust/detail/functional/actor.h>
-#include <thrust/detail/functional/argument.h>
-
-namespace thrust
-{
-namespace detail
-{
-namespace functional
-{
-
-template<unsigned int i>
- struct placeholder
-{
-  typedef actor<argument<i> > type;
-};
-
-} // end functional
-} // end detail
-} // end thrust
-
diff --git a/spaces/CVPR/LIVE/thrust/thrust/scatter.h b/spaces/CVPR/LIVE/thrust/thrust/scatter.h
deleted file mode 100644
index baaf1e63b1e28fbe8b071ca0fb6666145bfe7c1f..0000000000000000000000000000000000000000
--- a/spaces/CVPR/LIVE/thrust/thrust/scatter.h
+++ /dev/null
@@ -1,423 +0,0 @@
-/*
- * Copyright 2008-2013 NVIDIA Corporation
- *
- * Licensed under the Apache License, Version 2.0 (the "License");
- * you may not use this file except in compliance with the License.
- * You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-
-/*! \file scatter.h
- * \brief Irregular copying to a destination range
- */
-
-#pragma once
-
-#include <thrust/detail/config.h>
-#include <thrust/detail/execution_policy.h>
-
-namespace thrust
-{
-
-
-/*! \addtogroup scattering
- * \ingroup copying
- * \{
- */
-
-
-/*! \p scatter copies elements from a source range into an output array
- * according to a map. For each iterator \c i in the range [\p first, \p last),
- * the value \c *i is assigned to output[*(map + (i - first))]. The
- * output iterator must permit random access. If the same index
- * appears more than once in the range [map, map + (last - first)),
- * the result is undefined.
- *
- * The algorithm's execution is parallelized as determined by \p exec.
- *
- * \param exec The execution policy to use for parallelization.
- * \param first Beginning of the sequence of values to scatter.
- * \param last End of the sequence of values to scatter.
- * \param map Beginning of the sequence of output indices.
- * \param result Destination of the source elements.
- *
- * \tparam DerivedPolicy The name of the derived execution policy.
- * \tparam InputIterator1 must be a model of Input Iterator and \c InputIterator1's \c value_type must be convertible to \c RandomAccessIterator's \c value_type.
- * \tparam InputIterator2 must be a model of Input Iterator and \c InputIterator2's \c value_type must be convertible to \c RandomAccessIterator's \c difference_type.
- * \tparam RandomAccessIterator must be a model of Random Access iterator.
- *
- * \pre The iterator `result + i` shall not refer to any element referenced by any iterator `j` in the range `[first,last)` for all iterators `i` in the range `[map,map + (last - first))`.
- *
- * \pre The iterator `result + i` shall not refer to any element referenced by any iterator `j` in the range `[map,map + (last - first))` for all iterators `i` in the range `[map,map + (last - first))`.
- *
- * \pre The expression `result[*i]` shall be valid for all iterators in the range `[map,map + (last - first))`.
- *
- * The following code snippet demonstrates how to use \p scatter to
- * reorder a range using the \p thrust::device execution policy for parallelization:
- *
- * \code
- * #include <thrust/scatter.h>
- * #include <thrust/device_vector.h>
- * #include <thrust/execution_policy.h>
- * ...
- * // mark even indices with a 1; odd indices with a 0
- * int values[10] = {1, 0, 1, 0, 1, 0, 1, 0, 1, 0};
- * thrust::device_vector<int> d_values(values, values + 10);
- *
- * // scatter all even indices into the first half of the
- * // range, and odd indices vice versa
- * int map[10] = {0, 5, 1, 6, 2, 7, 3, 8, 4, 9};
- * thrust::device_vector<int> d_map(map, map + 10);
- *
- * thrust::device_vector<int> d_output(10);
- * thrust::scatter(thrust::device,
- * d_values.begin(), d_values.end(),
- * d_map.begin(), d_output.begin());
- * // d_output is now {1, 1, 1, 1, 1, 0, 0, 0, 0, 0}
- * \endcode
- *
- * \note \p scatter is the inverse of thrust::gather.
- */
-template<typename DerivedPolicy, typename InputIterator1, typename InputIterator2, typename RandomAccessIterator>
-__host__ __device__
-  void scatter(const thrust::detail::execution_policy_base<DerivedPolicy> &exec,
- InputIterator1 first,
- InputIterator1 last,
- InputIterator2 map,
- RandomAccessIterator result);
-
-
-/*! \p scatter copies elements from a source range into an output array
- * according to a map. For each iterator \c i in the range [\p first, \p last),
- * the value \c *i is assigned to output[*(map + (i - first))]. The
- * output iterator must permit random access. If the same index
- * appears more than once in the range [map, map + (last - first)),
- * the result is undefined.
- *
- * \param first Beginning of the sequence of values to scatter.
- * \param last End of the sequence of values to scatter.
- * \param map Beginning of the sequence of output indices.
- * \param result Destination of the source elements.
- *
- * \tparam InputIterator1 must be a model of Input Iterator and \c InputIterator1's \c value_type must be convertible to \c RandomAccessIterator's \c value_type.
- * \tparam InputIterator2 must be a model of Input Iterator and \c InputIterator2's \c value_type must be convertible to \c RandomAccessIterator's \c difference_type.
- * \tparam RandomAccessIterator must be a model of Random Access iterator.
- *
- * \pre The iterator `result + i` shall not refer to any element referenced by any iterator `j` in the range `[first,last)` for all iterators `i` in the range `[map,map + (last - first))`.
- *
- * \pre The iterator `result + i` shall not refer to any element referenced by any iterator `j` in the range `[map,map + (last - first))` for all iterators `i` in the range `[map,map + (last - first))`.
- *
- * \pre The expression `result[*i]` shall be valid for all iterators in the range `[map,map + (last - first))`.
- *
- * The following code snippet demonstrates how to use \p scatter to
- * reorder a range.
- *
- * \code
- * #include <thrust/scatter.h>
- * #include <thrust/device_vector.h>
- * ...
- * // mark even indices with a 1; odd indices with a 0
- * int values[10] = {1, 0, 1, 0, 1, 0, 1, 0, 1, 0};
- * thrust::device_vector<int> d_values(values, values + 10);
- *
- * // scatter all even indices into the first half of the
- * // range, and odd indices vice versa
- * int map[10] = {0, 5, 1, 6, 2, 7, 3, 8, 4, 9};
- * thrust::device_vector<int> d_map(map, map + 10);
- *
- * thrust::device_vector<int> d_output(10);
- * thrust::scatter(d_values.begin(), d_values.end(),
- * d_map.begin(), d_output.begin());
- * // d_output is now {1, 1, 1, 1, 1, 0, 0, 0, 0, 0}
- * \endcode
- *
- * \note \p scatter is the inverse of thrust::gather.
- */
-template<typename InputIterator1, typename InputIterator2, typename RandomAccessIterator>
- void scatter(InputIterator1 first,
- InputIterator1 last,
- InputIterator2 map,
- RandomAccessIterator result);
-
-
-/*! \p scatter_if conditionally copies elements from a source range into an
- * output array according to a map. For each iterator \c i in the
- * range [first, last) such that *(stencil + (i - first)) is
- * true, the value \c *i is assigned to output[*(map + (i - first))].
- * The output iterator must permit random access. If the same index
- * appears more than once in the range [map, map + (last - first))
- * the result is undefined.
- *
- * The algorithm's execution is parallelized as determined by \p exec.
- *
- * \param exec The execution policy to use for parallelization.
- * \param first Beginning of the sequence of values to scatter.
- * \param last End of the sequence of values to scatter.
- * \param map Beginning of the sequence of output indices.
- * \param stencil Beginning of the sequence of predicate values.
- * \param output Beginning of the destination range.
- *
- * \tparam DerivedPolicy The name of the derived execution policy.
- * \tparam InputIterator1 must be a model of Input Iterator and \c InputIterator1's \c value_type must be convertible to \c RandomAccessIterator's \c value_type.
- * \tparam InputIterator2 must be a model of Input Iterator and \c InputIterator2's \c value_type must be convertible to \c RandomAccessIterator's \c difference_type.
- * \tparam InputIterator3 must be a model of Input Iterator and \c InputIterator3's \c value_type must be convertible to \c bool.
- * \tparam RandomAccessIterator must be a model of Random Access iterator.
- *
- * \pre The iterator `result + i` shall not refer to any element referenced by any iterator `j` in the range `[first,last)` for all iterators `i` in the range `[map,map + (last - first))`.
- *
- * \pre The iterator `result + i` shall not refer to any element referenced by any iterator `j` in the range `[map,map + (last - first))` for all iterators `i` in the range `[map,map + (last - first))`.
- *
- * \pre The iterator `result + i` shall not refer to any element referenced by any iterator `j` in the range `[stencil,stencil + (last - first))` for all iterators `i` in the range `[map,map + (last - first))`.
- *
- * \pre The expression `result[*i]` shall be valid for all iterators `i` in the range `[map,map + (last - first))` for which the following condition holds: `*(stencil + i) != false`.
- *
- * \code
- * #include <thrust/scatter.h>
- * #include <thrust/execution_policy.h>
- * ...
- * int V[8] = {10, 20, 30, 40, 50, 60, 70, 80};
- * int M[8] = {0, 5, 1, 6, 2, 7, 3, 4};
- * int S[8] = {1, 0, 1, 0, 1, 0, 1, 0};
- * int D[8] = {0, 0, 0, 0, 0, 0, 0, 0};
- *
- * thrust::scatter_if(thrust::host, V, V + 8, M, S, D);
- *
- * // D contains [10, 30, 50, 70, 0, 0, 0, 0];
- * \endcode
- *
- * \note \p scatter_if is the inverse of thrust::gather_if.
- */
-template<typename DerivedPolicy, typename InputIterator1, typename InputIterator2, typename InputIterator3, typename RandomAccessIterator>
-__host__ __device__
-  void scatter_if(const thrust::detail::execution_policy_base<DerivedPolicy> &exec,
- InputIterator1 first,
- InputIterator1 last,
- InputIterator2 map,
- InputIterator3 stencil,
- RandomAccessIterator output);
-
-
-/*! \p scatter_if conditionally copies elements from a source range into an
- * output array according to a map. For each iterator \c i in the
- * range [first, last) such that *(stencil + (i - first)) is
- * true, the value \c *i is assigned to output[*(map + (i - first))].
- * The output iterator must permit random access. If the same index
- * appears more than once in the range [map, map + (last - first))
- * the result is undefined.
- *
- * \param first Beginning of the sequence of values to scatter.
- * \param last End of the sequence of values to scatter.
- * \param map Beginning of the sequence of output indices.
- * \param stencil Beginning of the sequence of predicate values.
- * \param output Beginning of the destination range.
- *
- * \tparam InputIterator1 must be a model of Input Iterator and \c InputIterator1's \c value_type must be convertible to \c RandomAccessIterator's \c value_type.
- * \tparam InputIterator2 must be a model of Input Iterator and \c InputIterator2's \c value_type must be convertible to \c RandomAccessIterator's \c difference_type.
- * \tparam InputIterator3 must be a model of Input Iterator and \c InputIterator3's \c value_type must be convertible to \c bool.
- * \tparam RandomAccessIterator must be a model of Random Access iterator.
- *
- * \pre The iterator `result + i` shall not refer to any element referenced by any iterator `j` in the range `[first,last)` for all iterators `i` in the range `[map,map + (last - first))`.
- *
- * \pre The iterator `result + i` shall not refer to any element referenced by any iterator `j` in the range `[map,map + (last - first))` for all iterators `i` in the range `[map,map + (last - first))`.
- *
- * \pre The iterator `result + i` shall not refer to any element referenced by any iterator `j` in the range `[stencil,stencil + (last - first))` for all iterators `i` in the range `[map,map + (last - first))`.
- *
- * \pre The expression `result[*i]` shall be valid for all iterators `i` in the range `[map,map + (last - first))` for which the following condition holds: `*(stencil + i) != false`.
- *
- * \code
- * #include <thrust/scatter.h>
- * ...
- * int V[8] = {10, 20, 30, 40, 50, 60, 70, 80};
- * int M[8] = {0, 5, 1, 6, 2, 7, 3, 4};
- * int S[8] = {1, 0, 1, 0, 1, 0, 1, 0};
- * int D[8] = {0, 0, 0, 0, 0, 0, 0, 0};
- *
- * thrust::scatter_if(V, V + 8, M, S, D);
- *
- * // D contains [10, 30, 50, 70, 0, 0, 0, 0];
- * \endcode
- *
- * \note \p scatter_if is the inverse of thrust::gather_if.
- */
-template<typename InputIterator1, typename InputIterator2, typename InputIterator3, typename RandomAccessIterator>
- void scatter_if(InputIterator1 first,
- InputIterator1 last,
- InputIterator2 map,
- InputIterator3 stencil,
- RandomAccessIterator output);
-
-
-/*! \p scatter_if conditionally copies elements from a source range into an
- * output array according to a map. For each iterator \c i in the
- * range [first, last) such that pred(*(stencil + (i - first))) is
- * \c true, the value \c *i is assigned to output[*(map + (i - first))].
- * The output iterator must permit random access. If the same index
- * appears more than once in the range [map, map + (last - first))
- * the result is undefined.
- *
- * The algorithm's execution is parallelized as determined by \p exec.
- *
- * \param exec The execution policy to use for parallelization.
- * \param first Beginning of the sequence of values to scatter.
- * \param last End of the sequence of values to scatter.
- * \param map Beginning of the sequence of output indices.
- * \param stencil Beginning of the sequence of predicate values.
- * \param output Beginning of the destination range.
- * \param pred Predicate to apply to the stencil values.
- *
- * \tparam DerivedPolicy The name of the derived execution policy.
- * \tparam InputIterator1 must be a model of Input Iterator and \c InputIterator1's \c value_type must be convertible to \c RandomAccessIterator's \c value_type.
- * \tparam InputIterator2 must be a model of Input Iterator and \c InputIterator2's \c value_type must be convertible to \c RandomAccessIterator's \c difference_type.
- * \tparam InputIterator3 must be a model of Input Iterator and \c InputIterator3's \c value_type must be convertible to \c Predicate's \c argument_type.
- * \tparam RandomAccessIterator must be a model of Random Access iterator.
- * \tparam Predicate must be a model of Predicate.
- *
- * \pre The iterator `result + i` shall not refer to any element referenced by any iterator `j` in the range `[first,last)` for all iterators `i` in the range `[map,map + (last - first))`.
- *
- * \pre The iterator `result + i` shall not refer to any element referenced by any iterator `j` in the range `[map,map + (last - first))` for all iterators `i` in the range `[map,map + (last - first))`.
- *
- * \pre The iterator `result + i` shall not refer to any element referenced by any iterator `j` in the range `[stencil,stencil + (last - first))` for all iterators `i` in the range `[map,map + (last - first))`.
- *
- * \pre The expression `result[*i]` shall be valid for all iterators `i` in the range `[map,map + (last - first))` for which the following condition holds: `pred(*(stencil + i)) != false`.
- *
- * \code
- * #include <thrust/scatter.h>
- * #include <thrust/execution_policy.h>
- *
- * struct is_even
- * {
- * __host__ __device__
- * bool operator()(int x)
- * {
- * return (x % 2) == 0;
- * }
- * };
- *
- * ...
- *
- * int V[8] = {10, 20, 30, 40, 50, 60, 70, 80};
- * int M[8] = {0, 5, 1, 6, 2, 7, 3, 4};
- * int S[8] = {2, 1, 2, 1, 2, 1, 2, 1};
- * int D[8] = {0, 0, 0, 0, 0, 0, 0, 0};
- *
- * is_even pred;
- * thrust::scatter_if(thrust::host, V, V + 8, M, S, D, pred);
- *
- * // D contains [10, 30, 50, 70, 0, 0, 0, 0];
- * \endcode
- *
- * \note \p scatter_if is the inverse of thrust::gather_if.
- */
-template<typename DerivedPolicy, typename InputIterator1, typename InputIterator2, typename InputIterator3, typename RandomAccessIterator, typename Predicate>
-__host__ __device__
-  void scatter_if(const thrust::detail::execution_policy_base<DerivedPolicy> &exec,
- InputIterator1 first,
- InputIterator1 last,
- InputIterator2 map,
- InputIterator3 stencil,
- RandomAccessIterator output,
- Predicate pred);
-
-
-/*! \p scatter_if conditionally copies elements from a source range into an
- * output array according to a map. For each iterator \c i in the
- * range [first, last) such that pred(*(stencil + (i - first))) is
- * \c true, the value \c *i is assigned to output[*(map + (i - first))].
- * The output iterator must permit random access. If the same index
- * appears more than once in the range [map, map + (last - first))
- * the result is undefined.
- *
- * \param first Beginning of the sequence of values to scatter.
- * \param last End of the sequence of values to scatter.
- * \param map Beginning of the sequence of output indices.
- * \param stencil Beginning of the sequence of predicate values.
- * \param output Beginning of the destination range.
- * \param pred Predicate to apply to the stencil values.
- *
- * \tparam InputIterator1 must be a model of Input Iterator and \c InputIterator1's \c value_type must be convertible to \c RandomAccessIterator's \c value_type.
- * \tparam InputIterator2 must be a model of Input Iterator and \c InputIterator2's \c value_type must be convertible to \c RandomAccessIterator's \c difference_type.
- * \tparam InputIterator3 must be a model of Input Iterator and \c InputIterator3's \c value_type must be convertible to \c Predicate's \c argument_type.
- * \tparam RandomAccessIterator must be a model of Random Access iterator.
- * \tparam Predicate must be a model of Predicate.
- *
- * \pre The iterator `result + i` shall not refer to any element referenced by any iterator `j` in the range `[first,last)` for all iterators `i` in the range `[map,map + (last - first))`.
- *
- * \pre The iterator `result + i` shall not refer to any element referenced by any iterator `j` in the range `[map,map + (last - first))` for all iterators `i` in the range `[map,map + (last - first))`.
- *
- * \pre The iterator `result + i` shall not refer to any element referenced by any iterator `j` in the range `[stencil,stencil + (last - first))` for all iterators `i` in the range `[map,map + (last - first))`.
- *
- * \pre The expression `result[*i]` shall be valid for all iterators `i` in the range `[map,map + (last - first))` for which the following condition holds: `pred(*(stencil + i)) != false`.
- *
- * \code
- * #include <thrust/scatter.h>
- *
- * struct is_even
- * {
- * __host__ __device__
- * bool operator()(int x)
- * {
- * return (x % 2) == 0;
- * }
- * };
- *
- * ...
- *
- * int V[8] = {10, 20, 30, 40, 50, 60, 70, 80};
- * int M[8] = {0, 5, 1, 6, 2, 7, 3, 4};
- * int S[8] = {2, 1, 2, 1, 2, 1, 2, 1};
- * int D[8] = {0, 0, 0, 0, 0, 0, 0, 0};
- *
- * is_even pred;
- * thrust::scatter_if(V, V + 8, M, S, D, pred);
- *
- * // D contains [10, 30, 50, 70, 0, 0, 0, 0];
- * \endcode
- *
- * \note \p scatter_if is the inverse of thrust::gather_if.
- */
-template<typename InputIterator1, typename InputIterator2, typename InputIterator3, typename RandomAccessIterator, typename Predicate>
- void scatter_if(InputIterator1 first,
- InputIterator1 last,
- InputIterator2 map,
- InputIterator3 stencil,
- RandomAccessIterator output,
- Predicate pred);
-
-
-/*! \} // end scattering
- */
-
-
-} // end namespace thrust
-
-#include <thrust/detail/scatter.inl>
-
diff --git a/spaces/CVPR/LIVE/thrust/thrust/system/detail/generic/advance.h b/spaces/CVPR/LIVE/thrust/thrust/system/detail/generic/advance.h
deleted file mode 100644
index f9cab587b374b9349ee7bfff8128a42462ad17ab..0000000000000000000000000000000000000000
--- a/spaces/CVPR/LIVE/thrust/thrust/system/detail/generic/advance.h
+++ /dev/null
@@ -1,41 +0,0 @@
-/*
- * Copyright 2008-2013 NVIDIA Corporation
- *
- * Licensed under the Apache License, Version 2.0 (the "License");
- * you may not use this file except in compliance with the License.
- * You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-
-#pragma once
-
-#include <thrust/detail/config.h>
-
-namespace thrust
-{
-namespace system
-{
-namespace detail
-{
-namespace generic
-{
-
-template<typename InputIterator, typename Distance>
-__host__ __device__
-void advance(InputIterator& i, Distance n);
-
-} // end namespace generic
-} // end namespace detail
-} // end namespace system
-} // end namespace thrust
-
-#include <thrust/system/detail/generic/advance.inl>
-
diff --git a/spaces/Cpp4App/Cpp4App/CDM/detect_text/text_detection.py b/spaces/Cpp4App/Cpp4App/CDM/detect_text/text_detection.py
deleted file mode 100644
index 3d7a92c993a5ae3544dd20b6b17e02b37bc9aaf9..0000000000000000000000000000000000000000
--- a/spaces/Cpp4App/Cpp4App/CDM/detect_text/text_detection.py
+++ /dev/null
@@ -1,289 +0,0 @@
-import CDM.detect_text.ocr as ocr
-from CDM.detect_text.Text import Text
-import numpy as np
-import cv2
-import json
-import time
-import os
-from os.path import join as pjoin
-# from paddleocr import PaddleOCR
-import pytesseract
-
-# paddle_model = PaddleOCR(use_angle_cls=True, lang="en") #'ch' for chinese and english, 'en' for english
-
-
-def save_detection_json(file_path, texts, img_shape):
- f_out = open(file_path, 'w')
- output = {'img_shape': img_shape, 'texts': []}
- for text in texts:
- c = {'id': text.id, 'content': text.content}
- loc = text.location
- c['column_min'], c['row_min'], c['column_max'], c['row_max'] = loc['left'], loc['top'], loc['right'], loc['bottom']
- c['width'] = text.width
- c['height'] = text.height
- output['texts'].append(c)
- json.dump(output, f_out, indent=4)
-
-
-def visualize_texts(org_img, texts, shown_resize_height=None, show=False, write_path=None):
- img = org_img.copy()
- for text in texts:
- text.visualize_element(img, line=2)
-
- img_resize = img
- if shown_resize_height is not None:
- img_resize = cv2.resize(img, (int(shown_resize_height * (img.shape[1]/img.shape[0])), shown_resize_height))
-
- if show:
- cv2.imshow('texts', img_resize)
- cv2.waitKey(0)
- cv2.destroyWindow('texts')
- if write_path is not None:
- cv2.imwrite(write_path, img)
-
-
-def text_sentences_recognition(texts):
- '''
- Merge separate words detected by Google ocr into a sentence
- '''
- changed = True
- while changed:
- changed = False
- temp_set = []
- for text_a in texts:
- merged = False
- for text_b in temp_set:
- if text_a.is_on_same_line(text_b, 'h', bias_justify=0.2 * min(text_a.height, text_b.height), bias_gap=2 * max(text_a.word_width, text_b.word_width)):
- text_b.merge_text(text_a)
- merged = True
- changed = True
- break
- if not merged:
- temp_set.append(text_a)
- texts = temp_set.copy()
-
- for i, text in enumerate(texts):
- text.id = i
- return texts
-
-
-def merge_intersected_texts(texts):
- '''
- Merge intersected texts (sentences or words)
- '''
- changed = True
- while changed:
- changed = False
- temp_set = []
- for text_a in texts:
- merged = False
- for text_b in temp_set:
- if text_a.is_intersected(text_b, bias=2):
- text_b.merge_text(text_a)
- merged = True
- changed = True
- break
- if not merged:
- temp_set.append(text_a)
- texts = temp_set.copy()
- return texts
-
-
-def text_cvt_orc_format(ocr_result):
- texts = []
- if ocr_result is not None:
- for i, result in enumerate(ocr_result):
- error = False
- x_coordinates = []
- y_coordinates = []
- text_location = result['boundingPoly']['vertices']
- content = result['description']
- for loc in text_location:
- if 'x' not in loc or 'y' not in loc:
- error = True
- break
- x_coordinates.append(loc['x'])
- y_coordinates.append(loc['y'])
- if error: continue
- location = {'left': min(x_coordinates), 'top': min(y_coordinates),
- 'right': max(x_coordinates), 'bottom': max(y_coordinates)}
- texts.append(Text(i, content, location))
- return texts
-
-
-def text_cvt_orc_format_paddle(paddle_result):
- texts = []
- for i, line in enumerate(paddle_result):
- points = np.array(line[0])
- # points = points * 5
- location = {'left': int(min(points[:, 0])), 'top': int(min(points[:, 1])), 'right': int(max(points[:, 0])),
- 'bottom': int(max(points[:, 1]))}
- content = line[1][0]
- texts.append(Text(i, content, location))
- return texts
-
-
-def text_cvt_orc_format_tesseract(tesseract_result):
- # texts = []
- # i_real = 0
- # for i, line in enumerate(tesseract_result['text']):
- # content = line.strip()
- # location = {
- # 'left': int(tesseract_result['left'][i]),
- # 'top': int(tesseract_result['top'][i]),
- # 'right': int(tesseract_result['left'][i]) + int(tesseract_result['width'][i]),
- # 'bottom': int(tesseract_result['top'][i]) + int(tesseract_result['height'][i])
- # }
- # if len(content) > 0:
- # texts.append(Text(i_real, content, location))
- # i_real = i_real + 1
-
- # Extract line boxes
- texts = []
- i_real = 0
- line_boxes = []
- n_boxes = len(tesseract_result['level'])
- for i in range(n_boxes):
- if tesseract_result['level'][i] == 4 and len(tesseract_result['text'][i].strip()) > 0:
- # (x, y, w, h) = (tesseract_result['left'][i], tesseract_result['top'][i], tesseract_result['width'][i], tesseract_result['height'][i])
- content = tesseract_result['text'][i].strip()
- location = {
- 'left': int(tesseract_result['left'][i]),
- 'top': int(tesseract_result['top'][i]),
- 'right': int(tesseract_result['left'][i]) + int(tesseract_result['width'][i]),
- 'bottom': int(tesseract_result['top'][i]) + int(tesseract_result['height'][i])
- }
- texts.append(Text(i_real, content, location))
- i_real = i_real + 1
- # print("ocr result: ", texts)
-
- return texts
-
-def text_cvt_orc_format_tesseract_by_line(data):
-
- # line_data = []
- line_num = None
- line_text = []
- line_box = [0, 0, 0, 0]
- texts = []
- i_real = 0
-
- for i in range(len(data['level'])):
- # check if the level is word
- if data['level'][i] == 5:
- if line_num != data['line_num'][i]:
- if line_num is not None: # append the previous line data to line_data
- content = ' '.join(line_text)
- location = {
- 'left': line_box[0],
- 'top': line_box[1],
- 'right': line_box[2],
- 'bottom': line_box[3]
- }
- texts.append(Text(i_real, content, location))
- i_real = i_real + 1
-
- # start a new line
- line_num = data['line_num'][i]
- line_text = [data['text'][i]]
- line_box = [
- data['left'][i],
- data['top'][i],
- data['left'][i] + data['width'][i],
- data['top'][i] + data['height'][i],
- ]
- else: # add a word to the current line
- line_text.append(data['text'][i])
- line_box[2] = max(line_box[2], data['left'][i] + data['width'][i])
- line_box[3] = max(line_box[3], data['top'][i] + data['height'][i])
-
- # append the last line data to line_data
- if line_text:
- content = ' '.join(line_text)
- location = {
- 'left': line_box[0],
- 'top': line_box[1],
- 'right': line_box[2],
- 'bottom': line_box[3]
- }
- texts.append(Text(i_real, content, location))
- i_real = i_real + 1
-
- return texts
-
-
-def text_filter_noise(texts):
- valid_texts = []
- for text in texts:
- if len(text.content) <= 1 and text.content.lower() not in ['a', ',', '.', '!', '?', '$', '%', ':', '&', '+']:
- continue
- valid_texts.append(text)
- return valid_texts
-
-
-def text_detection(input_file='../data/input/30800.jpg', output_file='../data/output', show=False, method='google', paddle_model=None):
- '''
-    :param method: OCR backend to use: 'google', 'paddle' or 'pytesseract'
-    :param paddle_model: the preloaded PaddleOCR model, used only when method is 'paddle'
- '''
- start = time.process_time()
- name = input_file.split('/')[-1][:-4]
- ocr_root = pjoin(output_file, 'ocr')
- img = cv2.imread(input_file)
- if img is None:
- print("imread nothing!")
-
- # resize the img to speed up the ocr
- # img = cv2.resize(img, (int(img.shape[1]/5), int(img.shape[0]/5)))
- # cv2.imshow("img", img)
- # cv2.waitKey(0)
-
- if method == 'google':
- print('*** Detect Text through Google OCR ***')
- ocr_result = ocr.ocr_detection_google(input_file)
- texts = text_cvt_orc_format(ocr_result)
- texts = merge_intersected_texts(texts)
- texts = text_filter_noise(texts)
- texts = text_sentences_recognition(texts)
- ocr_time_cost = time.process_time() - start
- elif method == 'paddle':
- # The import of the paddle ocr can be separate to the beginning of the program if you decide to use this method
- # from paddleocr import PaddleOCR
- print('*** Detect Text through Paddle OCR ***')
- # if paddle_model is None:
- # paddle_model = PaddleOCR(use_angle_cls=True, lang="en") #'ch' for chinese and english, 'en' for english
- # None
- result = paddle_model.ocr(input_file, cls=True)
- ocr_time_cost = time.process_time() - start
- texts = text_cvt_orc_format_paddle(result)
-
- elif method == 'pytesseract':
-
- img_rgb = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
-
- # Perform OCR using Tesseract
- result = pytesseract.image_to_data(img_rgb, output_type=pytesseract.Output.DICT)
- print("ocr result: ", result)
-
- ocr_time_cost = time.process_time() - start
-
- # Convert the Tesseract result to the desired format
- texts = text_cvt_orc_format_tesseract_by_line(result)
- print("texts: ", texts)
- else:
- raise ValueError('Method has to be "google" or "paddle" or "pytesseract"')
-
- visualize_texts(img, texts, shown_resize_height=800, show=show, write_path=pjoin(ocr_root, name+'.png'))
- save_detection_json(pjoin(ocr_root, name+'.json'), texts, img.shape)
- # ocr_time_cost = time.process_time() - start
- print("[Text Detection Completed in %.3f s] Input: %s Output: %s" % (ocr_time_cost, input_file, pjoin(ocr_root, name+'.json')))
-
- # print("!!! detected content !!!")
- # for text in texts:
- # print(text.content)
-
- return ocr_time_cost
-
-
-# text_detection()
-
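The `text_detection` entry point above expects an `ocr/` sub-folder under the output directory, since it writes both a visualisation PNG and a JSON dump there. A hypothetical invocation with the local Tesseract backend (paths are placeholders) might be:

```python
import os

from CDM.detect_text.text_detection import text_detection

input_img = "data/input/30800.jpg"   # placeholder path
output_dir = "data/output"           # placeholder path
os.makedirs(os.path.join(output_dir, "ocr"), exist_ok=True)

# Returns the OCR time cost; detected texts are written to data/output/ocr/30800.json
elapsed = text_detection(input_file=input_img, output_file=output_dir, method="pytesseract")
print(f"OCR finished in {elapsed:.3f}s")
```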
diff --git a/spaces/DHEIVER/Segmento_de_Angio_Coronariana_v5/preprocess.py b/spaces/DHEIVER/Segmento_de_Angio_Coronariana_v5/preprocess.py
deleted file mode 100644
index 4020532614a2fa0c501e59585d2f2b52a79f0184..0000000000000000000000000000000000000000
--- a/spaces/DHEIVER/Segmento_de_Angio_Coronariana_v5/preprocess.py
+++ /dev/null
@@ -1,13 +0,0 @@
-import cv2
-import numpy as np
-
-def unsharp_masking(img, kernel_size=5, threshold=2.0):
- if kernel_size % 2 == 0:
- kernel_size += 1 # Ensure the kernel size is odd
- gaussian = cv2.GaussianBlur(img, (kernel_size, kernel_size), 2.0)
- unsharp_mask = cv2.addWeighted(img, threshold, gaussian, -1.0, 0)
- # Clip the pixel values to the valid range [0, 255]
- unsharp_mask = np.clip(unsharp_mask, 0, 255)
- # Normalize the image to bring pixel values back to [0, 255]
- cv2.normalize(unsharp_mask, unsharp_mask, 0, 255, cv2.NORM_MINMAX)
- return unsharp_mask
diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/varLib/stat.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/varLib/stat.py
deleted file mode 100644
index 46c9498dc720e7c23b278ae31b65dbf55f2ad8be..0000000000000000000000000000000000000000
--- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/varLib/stat.py
+++ /dev/null
@@ -1,142 +0,0 @@
-"""Extra methods for DesignSpaceDocument to generate its STAT table data."""
-
-from __future__ import annotations
-
-from typing import Dict, List, Union
-
-import fontTools.otlLib.builder
-from fontTools.designspaceLib import (
- AxisLabelDescriptor,
- DesignSpaceDocument,
- DesignSpaceDocumentError,
- LocationLabelDescriptor,
-)
-from fontTools.designspaceLib.types import Region, getVFUserRegion, locationInRegion
-from fontTools.ttLib import TTFont
-
-
-def buildVFStatTable(ttFont: TTFont, doc: DesignSpaceDocument, vfName: str) -> None:
- """Build the STAT table for the variable font identified by its name in
- the given document.
-
- Knowing which variable font we're building STAT data for is needed to subset
- the STAT locations to only include what the variable font actually ships.
-
- .. versionadded:: 5.0
-
- .. seealso::
- - :func:`getStatAxes()`
- - :func:`getStatLocations()`
- - :func:`fontTools.otlLib.builder.buildStatTable()`
- """
- for vf in doc.getVariableFonts():
- if vf.name == vfName:
- break
- else:
- raise DesignSpaceDocumentError(
- f"Cannot find the variable font by name {vfName}"
- )
-
- region = getVFUserRegion(doc, vf)
-
- return fontTools.otlLib.builder.buildStatTable(
- ttFont,
- getStatAxes(doc, region),
- getStatLocations(doc, region),
- doc.elidedFallbackName if doc.elidedFallbackName is not None else 2,
- )
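-
-# Hypothetical usage sketch (file and font names are assumed placeholders, not from this module):
-#   doc = DesignSpaceDocument.fromfile("MyFamily.designspace")
-#   font = TTFont("MyFamily-VF.ttf")
-#   buildVFStatTable(font, doc, vfName="MyFamilyVF")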
-
-
-def getStatAxes(doc: DesignSpaceDocument, userRegion: Region) -> List[Dict]:
- """Return a list of axis dicts suitable for use as the ``axes``
- argument to :func:`fontTools.otlLib.builder.buildStatTable()`.
-
- .. versionadded:: 5.0
- """
- # First, get the axis labels with explicit ordering
- # then append the others in the order they appear.
- maxOrdering = max(
- (axis.axisOrdering for axis in doc.axes if axis.axisOrdering is not None),
- default=-1,
- )
- axisOrderings = []
- for axis in doc.axes:
- if axis.axisOrdering is not None:
- axisOrderings.append(axis.axisOrdering)
- else:
- maxOrdering += 1
- axisOrderings.append(maxOrdering)
- return [
- dict(
- tag=axis.tag,
- name={"en": axis.name, **axis.labelNames},
- ordering=ordering,
- values=[
- _axisLabelToStatLocation(label)
- for label in axis.axisLabels
- if locationInRegion({axis.name: label.userValue}, userRegion)
- ],
- )
- for axis, ordering in zip(doc.axes, axisOrderings)
- ]
-
-
-def getStatLocations(doc: DesignSpaceDocument, userRegion: Region) -> List[Dict]:
- """Return a list of location dicts suitable for use as the ``locations``
- argument to :func:`fontTools.otlLib.builder.buildStatTable()`.
-
- .. versionadded:: 5.0
- """
- axesByName = {axis.name: axis for axis in doc.axes}
- return [
- dict(
- name={"en": label.name, **label.labelNames},
- # Location in the designspace is keyed by axis name
- # Location in buildStatTable by axis tag
- location={
- axesByName[name].tag: value
- for name, value in label.getFullUserLocation(doc).items()
- },
- flags=_labelToFlags(label),
- )
- for label in doc.locationLabels
- if locationInRegion(label.getFullUserLocation(doc), userRegion)
- ]
-
-
-def _labelToFlags(label: Union[AxisLabelDescriptor, LocationLabelDescriptor]) -> int:
- flags = 0
- if label.olderSibling:
- flags |= 1
- if label.elidable:
- flags |= 2
- return flags
-
-
-def _axisLabelToStatLocation(
- label: AxisLabelDescriptor,
-) -> Dict:
- label_format = label.getFormat()
- name = {"en": label.name, **label.labelNames}
- flags = _labelToFlags(label)
- if label_format == 1:
- return dict(name=name, value=label.userValue, flags=flags)
- if label_format == 3:
- return dict(
- name=name,
- value=label.userValue,
- linkedValue=label.linkedUserValue,
- flags=flags,
- )
- if label_format == 2:
- res = dict(
- name=name,
- nominalValue=label.userValue,
- flags=flags,
- )
- if label.userMinimum is not None:
- res["rangeMinValue"] = label.userMinimum
- if label.userMaximum is not None:
- res["rangeMaxValue"] = label.userMaximum
- return res
- raise NotImplementedError("Unknown STAT label format")
diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/httpcore/_async/connection_pool.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/httpcore/_async/connection_pool.py
deleted file mode 100644
index ddc0510e60e7b744b177394dba49f7541c81b803..0000000000000000000000000000000000000000
--- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/httpcore/_async/connection_pool.py
+++ /dev/null
@@ -1,356 +0,0 @@
-import ssl
-import sys
-from types import TracebackType
-from typing import AsyncIterable, AsyncIterator, Iterable, List, Optional, Type
-
-from .._backends.auto import AutoBackend
-from .._backends.base import SOCKET_OPTION, AsyncNetworkBackend
-from .._exceptions import ConnectionNotAvailable, UnsupportedProtocol
-from .._models import Origin, Request, Response
-from .._synchronization import AsyncEvent, AsyncLock, AsyncShieldCancellation
-from .connection import AsyncHTTPConnection
-from .interfaces import AsyncConnectionInterface, AsyncRequestInterface
-
-
-class RequestStatus:
- def __init__(self, request: Request):
- self.request = request
- self.connection: Optional[AsyncConnectionInterface] = None
- self._connection_acquired = AsyncEvent()
-
- def set_connection(self, connection: AsyncConnectionInterface) -> None:
- assert self.connection is None
- self.connection = connection
- self._connection_acquired.set()
-
- def unset_connection(self) -> None:
- assert self.connection is not None
- self.connection = None
- self._connection_acquired = AsyncEvent()
-
- async def wait_for_connection(
- self, timeout: Optional[float] = None
- ) -> AsyncConnectionInterface:
- if self.connection is None:
- await self._connection_acquired.wait(timeout=timeout)
- assert self.connection is not None
- return self.connection
-
-
-class AsyncConnectionPool(AsyncRequestInterface):
- """
- A connection pool for making HTTP requests.
- """
-
- def __init__(
- self,
- ssl_context: Optional[ssl.SSLContext] = None,
- max_connections: Optional[int] = 10,
- max_keepalive_connections: Optional[int] = None,
- keepalive_expiry: Optional[float] = None,
- http1: bool = True,
- http2: bool = False,
- retries: int = 0,
- local_address: Optional[str] = None,
- uds: Optional[str] = None,
- network_backend: Optional[AsyncNetworkBackend] = None,
- socket_options: Optional[Iterable[SOCKET_OPTION]] = None,
- ) -> None:
- """
- A connection pool for making HTTP requests.
-
- Parameters:
- ssl_context: An SSL context to use for verifying connections.
- If not specified, the default `httpcore.default_ssl_context()`
- will be used.
- max_connections: The maximum number of concurrent HTTP connections that
- the pool should allow. Any attempt to send a request on a pool that
- would exceed this amount will block until a connection is available.
- max_keepalive_connections: The maximum number of idle HTTP connections
- that will be maintained in the pool.
- keepalive_expiry: The duration in seconds that an idle HTTP connection
- may be maintained for before being expired from the pool.
- http1: A boolean indicating if HTTP/1.1 requests should be supported
- by the connection pool. Defaults to True.
- http2: A boolean indicating if HTTP/2 requests should be supported by
- the connection pool. Defaults to False.
- retries: The maximum number of retries when trying to establish a
- connection.
- local_address: Local address to connect from. Can also be used to connect
- using a particular address family. Using `local_address="0.0.0.0"`
- will connect using an `AF_INET` address (IPv4), while using
- `local_address="::"` will connect using an `AF_INET6` address (IPv6).
- uds: Path to a Unix Domain Socket to use instead of TCP sockets.
- network_backend: A backend instance to use for handling network I/O.
- socket_options: Socket options that have to be included
- in the TCP socket when the connection was established.
- """
- self._ssl_context = ssl_context
-
- self._max_connections = (
- sys.maxsize if max_connections is None else max_connections
- )
- self._max_keepalive_connections = (
- sys.maxsize
- if max_keepalive_connections is None
- else max_keepalive_connections
- )
- self._max_keepalive_connections = min(
- self._max_connections, self._max_keepalive_connections
- )
-
- self._keepalive_expiry = keepalive_expiry
- self._http1 = http1
- self._http2 = http2
- self._retries = retries
- self._local_address = local_address
- self._uds = uds
-
- self._pool: List[AsyncConnectionInterface] = []
- self._requests: List[RequestStatus] = []
- self._pool_lock = AsyncLock()
- self._network_backend = (
- AutoBackend() if network_backend is None else network_backend
- )
- self._socket_options = socket_options
-
- def create_connection(self, origin: Origin) -> AsyncConnectionInterface:
- return AsyncHTTPConnection(
- origin=origin,
- ssl_context=self._ssl_context,
- keepalive_expiry=self._keepalive_expiry,
- http1=self._http1,
- http2=self._http2,
- retries=self._retries,
- local_address=self._local_address,
- uds=self._uds,
- network_backend=self._network_backend,
- socket_options=self._socket_options,
- )
-
- @property
- def connections(self) -> List[AsyncConnectionInterface]:
- """
- Return a list of the connections currently in the pool.
-
- For example:
-
- ```python
- >>> pool.connections
- [
- <AsyncHTTPConnection ['https://example.com:443', HTTP/1.1, ACTIVE, Request Count: 6]>,
- <AsyncHTTPConnection ['https://example.com:443', HTTP/1.1, IDLE, Request Count: 9]>,
- <AsyncHTTPConnection ['http://example.com:80', HTTP/1.1, IDLE, Request Count: 1]>,
- ]
- ```
- """
- return list(self._pool)
-
- async def _attempt_to_acquire_connection(self, status: RequestStatus) -> bool:
- """
- Attempt to provide a connection that can handle the given origin.
- """
- origin = status.request.url.origin
-
- # If there are queued requests in front of us, then don't acquire a
- # connection. We handle requests strictly in order.
- waiting = [s for s in self._requests if s.connection is None]
- if waiting and waiting[0] is not status:
- return False
-
- # Reuse an existing connection if one is currently available.
- for idx, connection in enumerate(self._pool):
- if connection.can_handle_request(origin) and connection.is_available():
- self._pool.pop(idx)
- self._pool.insert(0, connection)
- status.set_connection(connection)
- return True
-
- # If the pool is currently full, attempt to close one idle connection.
- if len(self._pool) >= self._max_connections:
- for idx, connection in reversed(list(enumerate(self._pool))):
- if connection.is_idle():
- await connection.aclose()
- self._pool.pop(idx)
- break
-
- # If the pool is still full, then we cannot acquire a connection.
- if len(self._pool) >= self._max_connections:
- return False
-
- # Otherwise create a new connection.
- connection = self.create_connection(origin)
- self._pool.insert(0, connection)
- status.set_connection(connection)
- return True
-
- async def _close_expired_connections(self) -> None:
- """
- Clean up the connection pool by closing off any connections that have expired.
- """
- # Close any connections that have expired their keep-alive time.
- for idx, connection in reversed(list(enumerate(self._pool))):
- if connection.has_expired():
- await connection.aclose()
- self._pool.pop(idx)
-
- # If the pool size exceeds the maximum number of allowed keep-alive connections,
- # then close off idle connections as required.
- pool_size = len(self._pool)
- for idx, connection in reversed(list(enumerate(self._pool))):
- if connection.is_idle() and pool_size > self._max_keepalive_connections:
- await connection.aclose()
- self._pool.pop(idx)
- pool_size -= 1
-
- async def handle_async_request(self, request: Request) -> Response:
- """
- Send an HTTP request, and return an HTTP response.
-
- This is the core implementation that is called into by `.request()` or `.stream()`.
- """
- scheme = request.url.scheme.decode()
- if scheme == "":
- raise UnsupportedProtocol(
- "Request URL is missing an 'http://' or 'https://' protocol."
- )
- if scheme not in ("http", "https", "ws", "wss"):
- raise UnsupportedProtocol(
- f"Request URL has an unsupported protocol '{scheme}://'."
- )
-
- status = RequestStatus(request)
-
- async with self._pool_lock:
- self._requests.append(status)
- await self._close_expired_connections()
- await self._attempt_to_acquire_connection(status)
-
- while True:
- timeouts = request.extensions.get("timeout", {})
- timeout = timeouts.get("pool", None)
- try:
- connection = await status.wait_for_connection(timeout=timeout)
- except BaseException as exc:
- # If we timeout here, or if the task is cancelled, then make
- # sure to remove the request from the queue before bubbling
- # up the exception.
- async with self._pool_lock:
- # Ensure only remove when task exists.
- if status in self._requests:
- self._requests.remove(status)
- raise exc
-
- try:
- response = await connection.handle_async_request(request)
- except ConnectionNotAvailable:
- # The ConnectionNotAvailable exception is a special case, that
- # indicates we need to retry the request on a new connection.
- #
- # The most common case where this can occur is when multiple
- # requests are queued waiting for a single connection, which
- # might end up as an HTTP/2 connection, but which actually ends
- # up as HTTP/1.1.
- async with self._pool_lock:
- # Maintain our position in the request queue, but reset the
- # status so that the request becomes queued again.
- status.unset_connection()
- await self._attempt_to_acquire_connection(status)
- except BaseException as exc:
- with AsyncShieldCancellation():
- await self.response_closed(status)
- raise exc
- else:
- break
-
- # When we return the response, we wrap the stream in a special class
- # that handles notifying the connection pool once the response
- # has been released.
- assert isinstance(response.stream, AsyncIterable)
- return Response(
- status=response.status,
- headers=response.headers,
- content=ConnectionPoolByteStream(response.stream, self, status),
- extensions=response.extensions,
- )
-
- async def response_closed(self, status: RequestStatus) -> None:
- """
- This method acts as a callback once the request/response cycle is complete.
-
- It is called into from the `ConnectionPoolByteStream.aclose()` method.
- """
- assert status.connection is not None
- connection = status.connection
-
- async with self._pool_lock:
- # Update the state of the connection pool.
- if status in self._requests:
- self._requests.remove(status)
-
- if connection.is_closed() and connection in self._pool:
- self._pool.remove(connection)
-
- # Since we've had a response closed, it's possible we'll now be able
- # to service one or more requests that are currently pending.
- for status in self._requests:
- if status.connection is None:
- acquired = await self._attempt_to_acquire_connection(status)
- # If we could not acquire a connection for a queued request
- # then we don't need to check anymore requests that are
- # queued later behind it.
- if not acquired:
- break
-
- # Housekeeping.
- await self._close_expired_connections()
-
- async def aclose(self) -> None:
- """
- Close any connections in the pool.
- """
- async with self._pool_lock:
- for connection in self._pool:
- await connection.aclose()
- self._pool = []
- self._requests = []
-
- async def __aenter__(self) -> "AsyncConnectionPool":
- return self
-
- async def __aexit__(
- self,
- exc_type: Optional[Type[BaseException]] = None,
- exc_value: Optional[BaseException] = None,
- traceback: Optional[TracebackType] = None,
- ) -> None:
- await self.aclose()
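-
- # Typical usage sketch (relies on the request() helper inherited from AsyncRequestInterface;
- # the URL is an assumed example):
- #
- #   async with AsyncConnectionPool() as pool:
- #       response = await pool.request("GET", "https://www.example.com/")
- #       print(response.status)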
-
-
-class ConnectionPoolByteStream:
- """
- A wrapper around the response byte stream, that additionally handles
- notifying the connection pool when the response has been closed.
- """
-
- def __init__(
- self,
- stream: AsyncIterable[bytes],
- pool: AsyncConnectionPool,
- status: RequestStatus,
- ) -> None:
- self._stream = stream
- self._pool = pool
- self._status = status
-
- async def __aiter__(self) -> AsyncIterator[bytes]:
- async for part in self._stream:
- yield part
-
- async def aclose(self) -> None:
- try:
- if hasattr(self._stream, "aclose"):
- await self._stream.aclose()
- finally:
- with AsyncShieldCancellation():
- await self._pool.response_closed(self._status)
diff --git a/spaces/DaFujaTyping/hf-Chat-ui/PRIVACY.md b/spaces/DaFujaTyping/hf-Chat-ui/PRIVACY.md
deleted file mode 100644
index 462692780d6c4617948b39f20ad1a8a32f4f3af9..0000000000000000000000000000000000000000
--- a/spaces/DaFujaTyping/hf-Chat-ui/PRIVACY.md
+++ /dev/null
@@ -1,35 +0,0 @@
-## Privacy
-
-> Last updated: May 2nd, 2023
-
-In this `v0.1` of HuggingChat, users are not authenticated in any way, i.e. this app doesn't have access to your HF user account even if you're logged in to huggingface.co. The app is only using an anonymous session cookie. ❗️ Warning ❗️ this means if you switch browsers or clear cookies, you will currently lose your conversations.
-
-By default, your conversations are shared with the model's authors (for the `v0.1` model, to Open Assistant) to improve their training data and model over time. Model authors are the custodians of the data collected by their model, even if it's hosted on our platform.
-
-If you disable data sharing in your settings, your conversations will not be used for any downstream usage (including for research or model training purposes), and they will only be stored to let you access past conversations. You can click on the Delete icon to delete any past conversation at any moment.
-
-🗓 Please also consult huggingface.co's main privacy policy at https://huggingface.co/privacy. To exercise any of your legal privacy rights, please send an email to privacy@huggingface.co.
-
-## About available LLMs
-
-The goal of this app is to showcase that it is now (April 2023) possible to build an open source alternative to ChatGPT. 💪
-
-For now, it's running OpenAssistant's [latest LLaMA based model](https://huggingface.co/OpenAssistant/oasst-sft-6-llama-30b-xor) (which is one of the current best open source chat models), but the plan in the longer-term is to expose all good-quality chat models from the Hub.
-
-We are not affiliated with Open Assistant, but if you want to contribute to the training data for the next generation of open models, please consider contributing to https://open-assistant.io/ ❤️
-
-## Technical details
-
-This app is running in a [Space](https://huggingface.co/docs/hub/spaces-overview), which entails that the code for this UI is open source: https://huggingface.co/spaces/huggingchat/chat-ui/tree/main.
-The inference backend is running [text-generation-inference](https://github.com/huggingface/text-generation-inference) on HuggingFace's Inference API infrastructure.
-
-It is therefore possible to deploy a copy of this app to a Space and customize it (swap model, add some UI elements, or store user messages according to your own Terms and conditions).
-
-We welcome any feedback on this app: please participate in the public discussion at https://huggingface.co/spaces/huggingchat/chat-ui/discussions
-
-
-
-## Coming soon
-
-- LLM watermarking
-- User setting to share conversations with model authors (done ✅)
diff --git a/spaces/Dantra1/CeliaSensei/mel_processing.py b/spaces/Dantra1/CeliaSensei/mel_processing.py
deleted file mode 100644
index 3e252e76320522a8a4195a60665168f22769aec2..0000000000000000000000000000000000000000
--- a/spaces/Dantra1/CeliaSensei/mel_processing.py
+++ /dev/null
@@ -1,101 +0,0 @@
-import torch
-import torch.utils.data
-from librosa.filters import mel as librosa_mel_fn
-
-MAX_WAV_VALUE = 32768.0
-
-
-def dynamic_range_compression_torch(x, C=1, clip_val=1e-5):
- """
- PARAMS
- ------
- C: compression factor
- """
- return torch.log(torch.clamp(x, min=clip_val) * C)
-
-
-def dynamic_range_decompression_torch(x, C=1):
- """
- PARAMS
- ------
- C: compression factor used to compress
- """
- return torch.exp(x) / C
-
-
-def spectral_normalize_torch(magnitudes):
- output = dynamic_range_compression_torch(magnitudes)
- return output
-
-
-def spectral_de_normalize_torch(magnitudes):
- output = dynamic_range_decompression_torch(magnitudes)
- return output
-
-
-mel_basis = {}
-hann_window = {}
-
-
-def spectrogram_torch(y, n_fft, sampling_rate, hop_size, win_size, center=False):
- if torch.min(y) < -1.:
- print('min value is ', torch.min(y))
- if torch.max(y) > 1.:
- print('max value is ', torch.max(y))
-
- global hann_window
- dtype_device = str(y.dtype) + '_' + str(y.device)
- wnsize_dtype_device = str(win_size) + '_' + dtype_device
- if wnsize_dtype_device not in hann_window:
- hann_window[wnsize_dtype_device] = torch.hann_window(win_size).to(dtype=y.dtype, device=y.device)
-
- y = torch.nn.functional.pad(y.unsqueeze(1), (int((n_fft-hop_size)/2), int((n_fft-hop_size)/2)), mode='reflect')
- y = y.squeeze(1)
-
- spec = torch.stft(y, n_fft, hop_length=hop_size, win_length=win_size, window=hann_window[wnsize_dtype_device],
- center=center, pad_mode='reflect', normalized=False, onesided=True, return_complex=False)
-
- spec = torch.sqrt(spec.pow(2).sum(-1) + 1e-6)
- return spec
-
-
-def spec_to_mel_torch(spec, n_fft, num_mels, sampling_rate, fmin, fmax):
- global mel_basis
- dtype_device = str(spec.dtype) + '_' + str(spec.device)
- fmax_dtype_device = str(fmax) + '_' + dtype_device
- if fmax_dtype_device not in mel_basis:
- mel = librosa_mel_fn(sampling_rate, n_fft, num_mels, fmin, fmax)
- mel_basis[fmax_dtype_device] = torch.from_numpy(mel).to(dtype=spec.dtype, device=spec.device)
- spec = torch.matmul(mel_basis[fmax_dtype_device], spec)
- spec = spectral_normalize_torch(spec)
- return spec
-
-
-def mel_spectrogram_torch(y, n_fft, num_mels, sampling_rate, hop_size, win_size, fmin, fmax, center=False):
- if torch.min(y) < -1.:
- print('min value is ', torch.min(y))
- if torch.max(y) > 1.:
- print('max value is ', torch.max(y))
-
- global mel_basis, hann_window
- dtype_device = str(y.dtype) + '_' + str(y.device)
- fmax_dtype_device = str(fmax) + '_' + dtype_device
- wnsize_dtype_device = str(win_size) + '_' + dtype_device
- if fmax_dtype_device not in mel_basis:
- mel = librosa_mel_fn(sampling_rate, n_fft, num_mels, fmin, fmax)
- mel_basis[fmax_dtype_device] = torch.from_numpy(mel).to(dtype=y.dtype, device=y.device)
- if wnsize_dtype_device not in hann_window:
- hann_window[wnsize_dtype_device] = torch.hann_window(win_size).to(dtype=y.dtype, device=y.device)
-
- y = torch.nn.functional.pad(y.unsqueeze(1), (int((n_fft-hop_size)/2), int((n_fft-hop_size)/2)), mode='reflect')
- y = y.squeeze(1)
-
- spec = torch.stft(y, n_fft, hop_length=hop_size, win_length=win_size, window=hann_window[wnsize_dtype_device],
- center=center, pad_mode='reflect', normalized=False, onesided=True, return_complex=False)
-
- spec = torch.sqrt(spec.pow(2).sum(-1) + 1e-6)
-
- spec = torch.matmul(mel_basis[fmax_dtype_device], spec)
- spec = spectral_normalize_torch(spec)
-
- return spec
diff --git a/spaces/DragGan/DragGan-Inversion/torch_utils/ops/upfirdn2d.cpp b/spaces/DragGan/DragGan-Inversion/torch_utils/ops/upfirdn2d.cpp
deleted file mode 100644
index 44fa337d8d4c34dfa010a59cd27d86857db671aa..0000000000000000000000000000000000000000
--- a/spaces/DragGan/DragGan-Inversion/torch_utils/ops/upfirdn2d.cpp
+++ /dev/null
@@ -1,107 +0,0 @@
-// Copyright (c) 2021, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
-//
-// NVIDIA CORPORATION and its licensors retain all intellectual property
-// and proprietary rights in and to this software, related documentation
-// and any modifications thereto. Any use, reproduction, disclosure or
-// distribution of this software and related documentation without an express
-// license agreement from NVIDIA CORPORATION is strictly prohibited.
-
-#include <torch/extension.h>
-#include <ATen/cuda/CUDAContext.h>
-#include <c10/cuda/CUDAGuard.h>
-#include "upfirdn2d.h"
-
-//------------------------------------------------------------------------
-
-static torch::Tensor upfirdn2d(torch::Tensor x, torch::Tensor f, int upx, int upy, int downx, int downy, int padx0, int padx1, int pady0, int pady1, bool flip, float gain)
-{
- // Validate arguments.
- TORCH_CHECK(x.is_cuda(), "x must reside on CUDA device");
- TORCH_CHECK(f.device() == x.device(), "f must reside on the same device as x");
- TORCH_CHECK(f.dtype() == torch::kFloat, "f must be float32");
- TORCH_CHECK(x.numel() <= INT_MAX, "x is too large");
- TORCH_CHECK(f.numel() <= INT_MAX, "f is too large");
- TORCH_CHECK(x.numel() > 0, "x has zero size");
- TORCH_CHECK(f.numel() > 0, "f has zero size");
- TORCH_CHECK(x.dim() == 4, "x must be rank 4");
- TORCH_CHECK(f.dim() == 2, "f must be rank 2");
- TORCH_CHECK((x.size(0)-1)*x.stride(0) + (x.size(1)-1)*x.stride(1) + (x.size(2)-1)*x.stride(2) + (x.size(3)-1)*x.stride(3) <= INT_MAX, "x memory footprint is too large");
- TORCH_CHECK(f.size(0) >= 1 && f.size(1) >= 1, "f must be at least 1x1");
- TORCH_CHECK(upx >= 1 && upy >= 1, "upsampling factor must be at least 1");
- TORCH_CHECK(downx >= 1 && downy >= 1, "downsampling factor must be at least 1");
-
- // Create output tensor.
- const at::cuda::OptionalCUDAGuard device_guard(device_of(x));
- int outW = ((int)x.size(3) * upx + padx0 + padx1 - (int)f.size(1) + downx) / downx;
- int outH = ((int)x.size(2) * upy + pady0 + pady1 - (int)f.size(0) + downy) / downy;
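- // outW/outH follow the standard upfirdn size relation: (in*up + pad0 + pad1 - filterTaps + down) / down.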
- TORCH_CHECK(outW >= 1 && outH >= 1, "output must be at least 1x1");
- torch::Tensor y = torch::empty({x.size(0), x.size(1), outH, outW}, x.options(), x.suggest_memory_format());
- TORCH_CHECK(y.numel() <= INT_MAX, "output is too large");
- TORCH_CHECK((y.size(0)-1)*y.stride(0) + (y.size(1)-1)*y.stride(1) + (y.size(2)-1)*y.stride(2) + (y.size(3)-1)*y.stride(3) <= INT_MAX, "output memory footprint is too large");
-
- // Initialize CUDA kernel parameters.
- upfirdn2d_kernel_params p;
- p.x = x.data_ptr();
- p.f = f.data_ptr();
- p.y = y.data_ptr();
- p.up = make_int2(upx, upy);
- p.down = make_int2(downx, downy);
- p.pad0 = make_int2(padx0, pady0);
- p.flip = (flip) ? 1 : 0;
- p.gain = gain;
- p.inSize = make_int4((int)x.size(3), (int)x.size(2), (int)x.size(1), (int)x.size(0));
- p.inStride = make_int4((int)x.stride(3), (int)x.stride(2), (int)x.stride(1), (int)x.stride(0));
- p.filterSize = make_int2((int)f.size(1), (int)f.size(0));
- p.filterStride = make_int2((int)f.stride(1), (int)f.stride(0));
- p.outSize = make_int4((int)y.size(3), (int)y.size(2), (int)y.size(1), (int)y.size(0));
- p.outStride = make_int4((int)y.stride(3), (int)y.stride(2), (int)y.stride(1), (int)y.stride(0));
- p.sizeMajor = (p.inStride.z == 1) ? p.inSize.w : p.inSize.w * p.inSize.z;
- p.sizeMinor = (p.inStride.z == 1) ? p.inSize.z : 1;
-
- // Choose CUDA kernel.
- upfirdn2d_kernel_spec spec;
- AT_DISPATCH_FLOATING_TYPES_AND_HALF(x.scalar_type(), "upfirdn2d_cuda", [&]
- {
- spec = choose_upfirdn2d_kernel(p);
- });
-
- // Set looping options.
- p.loopMajor = (p.sizeMajor - 1) / 16384 + 1;
- p.loopMinor = spec.loopMinor;
- p.loopX = spec.loopX;
- p.launchMinor = (p.sizeMinor - 1) / p.loopMinor + 1;
- p.launchMajor = (p.sizeMajor - 1) / p.loopMajor + 1;
-
- // Compute grid size.
- dim3 blockSize, gridSize;
- if (spec.tileOutW < 0) // large
- {
- blockSize = dim3(4, 32, 1);
- gridSize = dim3(
- ((p.outSize.y - 1) / blockSize.x + 1) * p.launchMinor,
- (p.outSize.x - 1) / (blockSize.y * p.loopX) + 1,
- p.launchMajor);
- }
- else // small
- {
- blockSize = dim3(256, 1, 1);
- gridSize = dim3(
- ((p.outSize.y - 1) / spec.tileOutH + 1) * p.launchMinor,
- (p.outSize.x - 1) / (spec.tileOutW * p.loopX) + 1,
- p.launchMajor);
- }
-
- // Launch CUDA kernel.
- void* args[] = {&p};
- AT_CUDA_CHECK(cudaLaunchKernel(spec.kernel, gridSize, blockSize, args, 0, at::cuda::getCurrentCUDAStream()));
- return y;
-}
-
-//------------------------------------------------------------------------
-
-PYBIND11_MODULE(TORCH_EXTENSION_NAME, m)
-{
- m.def("upfirdn2d", &upfirdn2d);
-}
-
-//------------------------------------------------------------------------
diff --git a/spaces/EronSamez/RVC_HFmeu/lib/globals/globals.py b/spaces/EronSamez/RVC_HFmeu/lib/globals/globals.py
deleted file mode 100644
index d0da59d56e8c2e482bcda5eeae7cf797b830560e..0000000000000000000000000000000000000000
--- a/spaces/EronSamez/RVC_HFmeu/lib/globals/globals.py
+++ /dev/null
@@ -1,5 +0,0 @@
-DoFormant: bool = False
-Quefrency: float = 8.0
-Timbre: float = 1.2
-
-NotesOrHertz: bool = False
\ No newline at end of file
diff --git a/spaces/EsoCode/text-generation-webui/css/main.js b/spaces/EsoCode/text-generation-webui/css/main.js
deleted file mode 100644
index 32820ebe15ddb80ca5fbcd2c4f88cc7c244cf3c5..0000000000000000000000000000000000000000
--- a/spaces/EsoCode/text-generation-webui/css/main.js
+++ /dev/null
@@ -1,18 +0,0 @@
-document.getElementById("main").parentNode.childNodes[0].classList.add("header_bar");
-document.getElementById("main").parentNode.style = "padding: 0; margin: 0";
-document.getElementById("main").parentNode.parentNode.parentNode.style = "padding: 0";
-
-// Get references to the elements
-let main = document.getElementById('main');
-let main_parent = main.parentNode;
-let extensions = document.getElementById('extensions');
-
-// Add a click listener to the parent of the main element
-main_parent.addEventListener('click', function(e) {
- // Check if the main element is visible
- if (main.offsetHeight > 0 && main.offsetWidth > 0) {
- extensions.style.display = 'flex';
- } else {
- extensions.style.display = 'none';
- }
-});
diff --git a/spaces/EuroPython2022/BayesCap/src/networks_SRGAN.py b/spaces/EuroPython2022/BayesCap/src/networks_SRGAN.py
deleted file mode 100644
index cd8a30dd8deecde53f527fb81c91b78409abc390..0000000000000000000000000000000000000000
--- a/spaces/EuroPython2022/BayesCap/src/networks_SRGAN.py
+++ /dev/null
@@ -1,347 +0,0 @@
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-import torchvision.models as models
-from torch import Tensor
-
-# __all__ = [
-# "ResidualConvBlock",
-# "Discriminator", "Generator",
-# ]
-
-
-class ResidualConvBlock(nn.Module):
- """Implements residual conv function.
-
- Args:
- channels (int): Number of channels in the input image.
- """
-
- def __init__(self, channels: int) -> None:
- super(ResidualConvBlock, self).__init__()
- self.rcb = nn.Sequential(
- nn.Conv2d(channels, channels, (3, 3), (1, 1), (1, 1), bias=False),
- nn.BatchNorm2d(channels),
- nn.PReLU(),
- nn.Conv2d(channels, channels, (3, 3), (1, 1), (1, 1), bias=False),
- nn.BatchNorm2d(channels),
- )
-
- def forward(self, x: Tensor) -> Tensor:
- identity = x
-
- out = self.rcb(x)
- out = torch.add(out, identity)
-
- return out
-
-
-class Discriminator(nn.Module):
- def __init__(self) -> None:
- super(Discriminator, self).__init__()
- self.features = nn.Sequential(
- # input size. (3) x 96 x 96
- nn.Conv2d(3, 64, (3, 3), (1, 1), (1, 1), bias=False),
- nn.LeakyReLU(0.2, True),
- # state size. (64) x 48 x 48
- nn.Conv2d(64, 64, (3, 3), (2, 2), (1, 1), bias=False),
- nn.BatchNorm2d(64),
- nn.LeakyReLU(0.2, True),
- nn.Conv2d(64, 128, (3, 3), (1, 1), (1, 1), bias=False),
- nn.BatchNorm2d(128),
- nn.LeakyReLU(0.2, True),
- # state size. (128) x 24 x 24
- nn.Conv2d(128, 128, (3, 3), (2, 2), (1, 1), bias=False),
- nn.BatchNorm2d(128),
- nn.LeakyReLU(0.2, True),
- nn.Conv2d(128, 256, (3, 3), (1, 1), (1, 1), bias=False),
- nn.BatchNorm2d(256),
- nn.LeakyReLU(0.2, True),
- # state size. (256) x 12 x 12
- nn.Conv2d(256, 256, (3, 3), (2, 2), (1, 1), bias=False),
- nn.BatchNorm2d(256),
- nn.LeakyReLU(0.2, True),
- nn.Conv2d(256, 512, (3, 3), (1, 1), (1, 1), bias=False),
- nn.BatchNorm2d(512),
- nn.LeakyReLU(0.2, True),
- # state size. (512) x 6 x 6
- nn.Conv2d(512, 512, (3, 3), (2, 2), (1, 1), bias=False),
- nn.BatchNorm2d(512),
- nn.LeakyReLU(0.2, True),
- )
-
- self.classifier = nn.Sequential(
- nn.Linear(512 * 6 * 6, 1024),
- nn.LeakyReLU(0.2, True),
- nn.Linear(1024, 1),
- )
-
- def forward(self, x: Tensor) -> Tensor:
- out = self.features(x)
- out = torch.flatten(out, 1)
- out = self.classifier(out)
-
- return out
-
-
-class Generator(nn.Module):
- def __init__(self) -> None:
- super(Generator, self).__init__()
- # First conv layer.
- self.conv_block1 = nn.Sequential(
- nn.Conv2d(3, 64, (9, 9), (1, 1), (4, 4)),
- nn.PReLU(),
- )
-
- # Features trunk blocks.
- trunk = []
- for _ in range(16):
- trunk.append(ResidualConvBlock(64))
- self.trunk = nn.Sequential(*trunk)
-
- # Second conv layer.
- self.conv_block2 = nn.Sequential(
- nn.Conv2d(64, 64, (3, 3), (1, 1), (1, 1), bias=False),
- nn.BatchNorm2d(64),
- )
-
- # Upscale conv block.
- self.upsampling = nn.Sequential(
- nn.Conv2d(64, 256, (3, 3), (1, 1), (1, 1)),
- nn.PixelShuffle(2),
- nn.PReLU(),
- nn.Conv2d(64, 256, (3, 3), (1, 1), (1, 1)),
- nn.PixelShuffle(2),
- nn.PReLU(),
- )
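- # The two PixelShuffle(2) stages above give an overall 4x spatial upsampling.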
-
- # Output layer.
- self.conv_block3 = nn.Conv2d(64, 3, (9, 9), (1, 1), (4, 4))
-
- # Initialize neural network weights.
- self._initialize_weights()
-
- def forward(self, x: Tensor, dop=None) -> Tensor:
- if not dop:
- return self._forward_impl(x)
- else:
- return self._forward_w_dop_impl(x, dop)
-
- # Support torch.script function.
- def _forward_impl(self, x: Tensor) -> Tensor:
- out1 = self.conv_block1(x)
- out = self.trunk(out1)
- out2 = self.conv_block2(out)
- out = torch.add(out1, out2)
- out = self.upsampling(out)
- out = self.conv_block3(out)
-
- return out
-
- def _forward_w_dop_impl(self, x: Tensor, dop) -> Tensor:
- out1 = self.conv_block1(x)
- out = self.trunk(out1)
- out2 = F.dropout2d(self.conv_block2(out), p=dop)
- out = torch.add(out1, out2)
- out = self.upsampling(out)
- out = self.conv_block3(out)
-
- return out
-
- def _initialize_weights(self) -> None:
- for module in self.modules():
- if isinstance(module, nn.Conv2d):
- nn.init.kaiming_normal_(module.weight)
- if module.bias is not None:
- nn.init.constant_(module.bias, 0)
- elif isinstance(module, nn.BatchNorm2d):
- nn.init.constant_(module.weight, 1)
-
-
-#### BayesCap
-class BayesCap(nn.Module):
- def __init__(self, in_channels=3, out_channels=3) -> None:
- super(BayesCap, self).__init__()
- # First conv layer.
- self.conv_block1 = nn.Sequential(
- nn.Conv2d(
- in_channels, 64,
- kernel_size=9, stride=1, padding=4
- ),
- nn.PReLU(),
- )
-
- # Features trunk blocks.
- trunk = []
- for _ in range(16):
- trunk.append(ResidualConvBlock(64))
- self.trunk = nn.Sequential(*trunk)
-
- # Second conv layer.
- self.conv_block2 = nn.Sequential(
- nn.Conv2d(
- 64, 64,
- kernel_size=3, stride=1, padding=1, bias=False
- ),
- nn.BatchNorm2d(64),
- )
-
- # Output layer.
- self.conv_block3_mu = nn.Conv2d(
- 64, out_channels=out_channels,
- kernel_size=9, stride=1, padding=4
- )
- self.conv_block3_alpha = nn.Sequential(
- nn.Conv2d(
- 64, 64,
- kernel_size=9, stride=1, padding=4
- ),
- nn.PReLU(),
- nn.Conv2d(
- 64, 64,
- kernel_size=9, stride=1, padding=4
- ),
- nn.PReLU(),
- nn.Conv2d(
- 64, 1,
- kernel_size=9, stride=1, padding=4
- ),
- nn.ReLU(),
- )
- self.conv_block3_beta = nn.Sequential(
- nn.Conv2d(
- 64, 64,
- kernel_size=9, stride=1, padding=4
- ),
- nn.PReLU(),
- nn.Conv2d(
- 64, 64,
- kernel_size=9, stride=1, padding=4
- ),
- nn.PReLU(),
- nn.Conv2d(
- 64, 1,
- kernel_size=9, stride=1, padding=4
- ),
- nn.ReLU(),
- )
-
- # Initialize neural network weights.
- self._initialize_weights()
-
- def forward(self, x: Tensor) -> Tensor:
- return self._forward_impl(x)
-
- # Support torch.script function.
- def _forward_impl(self, x: Tensor) -> Tensor:
- out1 = self.conv_block1(x)
- out = self.trunk(out1)
- out2 = self.conv_block2(out)
- out = out1 + out2
- out_mu = self.conv_block3_mu(out)
- out_alpha = self.conv_block3_alpha(out)
- out_beta = self.conv_block3_beta(out)
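- # out_mu is the reconstructed image; out_alpha and out_beta are the per-pixel uncertainty heads
- # (kept non-negative by the ReLU layers above).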
- return out_mu, out_alpha, out_beta
-
- def _initialize_weights(self) -> None:
- for module in self.modules():
- if isinstance(module, nn.Conv2d):
- nn.init.kaiming_normal_(module.weight)
- if module.bias is not None:
- nn.init.constant_(module.bias, 0)
- elif isinstance(module, nn.BatchNorm2d):
- nn.init.constant_(module.weight, 1)
-
-
-class BayesCap_noID(nn.Module):
- def __init__(self, in_channels=3, out_channels=3) -> None:
- super(BayesCap_noID, self).__init__()
- # First conv layer.
- self.conv_block1 = nn.Sequential(
- nn.Conv2d(
- in_channels, 64,
- kernel_size=9, stride=1, padding=4
- ),
- nn.PReLU(),
- )
-
- # Features trunk blocks.
- trunk = []
- for _ in range(16):
- trunk.append(ResidualConvBlock(64))
- self.trunk = nn.Sequential(*trunk)
-
- # Second conv layer.
- self.conv_block2 = nn.Sequential(
- nn.Conv2d(
- 64, 64,
- kernel_size=3, stride=1, padding=1, bias=False
- ),
- nn.BatchNorm2d(64),
- )
-
- # Output layer.
- # self.conv_block3_mu = nn.Conv2d(
- # 64, out_channels=out_channels,
- # kernel_size=9, stride=1, padding=4
- # )
- self.conv_block3_alpha = nn.Sequential(
- nn.Conv2d(
- 64, 64,
- kernel_size=9, stride=1, padding=4
- ),
- nn.PReLU(),
- nn.Conv2d(
- 64, 64,
- kernel_size=9, stride=1, padding=4
- ),
- nn.PReLU(),
- nn.Conv2d(
- 64, 1,
- kernel_size=9, stride=1, padding=4
- ),
- nn.ReLU(),
- )
- self.conv_block3_beta = nn.Sequential(
- nn.Conv2d(
- 64, 64,
- kernel_size=9, stride=1, padding=4
- ),
- nn.PReLU(),
- nn.Conv2d(
- 64, 64,
- kernel_size=9, stride=1, padding=4
- ),
- nn.PReLU(),
- nn.Conv2d(
- 64, 1,
- kernel_size=9, stride=1, padding=4
- ),
- nn.ReLU(),
- )
-
- # Initialize neural network weights.
- self._initialize_weights()
-
- def forward(self, x: Tensor) -> Tensor:
- return self._forward_impl(x)
-
- # Support torch.script function.
- def _forward_impl(self, x: Tensor) -> Tensor:
- out1 = self.conv_block1(x)
- out = self.trunk(out1)
- out2 = self.conv_block2(out)
- out = out1 + out2
- # out_mu = self.conv_block3_mu(out)
- out_alpha = self.conv_block3_alpha(out)
- out_beta = self.conv_block3_beta(out)
- return out_alpha, out_beta
-
- def _initialize_weights(self) -> None:
- for module in self.modules():
- if isinstance(module, nn.Conv2d):
- nn.init.kaiming_normal_(module.weight)
- if module.bias is not None:
- nn.init.constant_(module.bias, 0)
- elif isinstance(module, nn.BatchNorm2d):
- nn.init.constant_(module.weight, 1)
\ No newline at end of file
diff --git a/spaces/FacundoSander/PdfQA/static/style.css b/spaces/FacundoSander/PdfQA/static/style.css
deleted file mode 100644
index d0269ce416827e9821a7a989a89ae3e8a8b38f81..0000000000000000000000000000000000000000
--- a/spaces/FacundoSander/PdfQA/static/style.css
+++ /dev/null
@@ -1,179 +0,0 @@
-body {
- font-family: 'Roboto', sans-serif;
-}
-
-.main-page {
- display: flex;
- flex-direction: column;
- align-items: center;
- justify-content: center;
- position: absolute;
- top: 0;
- left: 0;
- width: 100%;
- height: 100%;
- background: linear-gradient(45deg, #3a6186, #89253e);
- z-index: 1000;
- opacity: 1;
- visibility: visible;
- transition: opacity 0.5s ease-in-out, visibility 0.5s ease-in-out;
-}
-
-.main-content {
- max-width: 80%;
-}
-
-.hidden {
- opacity: 0;
- visibility: hidden;
-}
-
-.btn-outline-primary {
- border-color: #ffffff;
- color: #ffffff;
- transition: background-color 0.3s, color 0.3s;
-}
-
-.btn-outline-primary:hover {
- background-color: #ffffff;
- color: #89253e;
-}
-
-.chat-container {
- height: 100vh;
- display: flex;
- flex-direction: column;
- background-color: #f8f9fa;
-}
-
-#messages {
- flex-grow: 1;
- overflow-y: auto;
- padding: 1rem;
-}
-
-.user-message {
- text-align: right;
- margin-bottom: 1rem;
- background-color: #007bff;
- padding: 10px;
- border-radius: 5px;
- color: white;
-}
-
-.response-message {
- text-align: left;
- margin-bottom: 1rem;
- background-color: #e9ecef;
- padding: 10px;
- border-radius: 5px;
-}
-
-.input-group {
- box-shadow: 0 2px 4px rgba(0, 0, 0, 0.1);
- border-radius: 5px;
-}
-
-#toggle-sidebar {
- transition: all 0.3s ease;
-}
-
-#toggle-sidebar:hover {
- background-color: #343a40;
-}
-
-.mb-3 {
- box-shadow: 0 2px 4px rgba(0, 0, 0, 0.1);
- border-radius: 5px;
- background-color: white;
- padding: 10px;
- margin-bottom: 15px;
-}
-
-
-@keyframes spin {
- 0% {
- transform: rotate(0deg);
- }
- 100% {
- transform: rotate(360deg);
- }
-}
-
-.typing-indicator {
- display: inline-block;
- width: 1rem;
- height: 1rem;
- border: 2px solid #0d6efd;
- border-top-color: transparent;
- border-radius: 50%;
- animation: spin 1s linear infinite;
-}
-
-
-.dark-theme {
- background-color: #343a40;
- color: #f8f9fa;
-}
-
-.dark-theme .response-message {
- background-color: #495057;
- color: #f8f9fa;
-}
-
-.dark-theme .user-message {
- background-color: #007bff;
- color: #f8f9fa;
-}
-
-.dark-theme .input-group {
- background-color: #495057;
-}
-
-.dark-theme .form-control,
-.dark-theme .form-select {
- background-color: #495057;
- color: #f8f9fa;
-}
-
-.dark-theme .form-label {
- color: #f8f9fa;
-}
-
-.dark-theme #toggle-sidebar {
- background-color: #adb5bd;
-}
-
-/* Light theme */
-.light-theme {
- background-color: #f8f9fa;
- color: #343a40;
-}
-
-.light-theme .response-message {
- background-color: #e9ecef;
- color: #343a40;
-}
-
-.light-theme .user-message {
- background-color: #007bff;
- color: #f8f9fa;
-}
-
-.light-theme .input-group {
- background-color: #f8f9fa;
-}
-
-.light-theme .form-control,
-.light-theme .form-select {
- background-color: #f8f9fa;
- color: #343a40;
-}
-
-.light-theme .form-label {
- color: #343a40;
-}
-
-.light-theme #toggle-sidebar {
- background-color: #343a40;
-}
diff --git a/spaces/Faridmaruf/RVCV2MODEL/app.py b/spaces/Faridmaruf/RVCV2MODEL/app.py
deleted file mode 100644
index 8323578e050c19032d933082dc5fa3b138008565..0000000000000000000000000000000000000000
--- a/spaces/Faridmaruf/RVCV2MODEL/app.py
+++ /dev/null
@@ -1,680 +0,0 @@
-import os
-import glob
-import json
-import traceback
-import logging
-import gradio as gr
-import numpy as np
-import librosa
-import torch
-import asyncio
-import edge_tts
-import yt_dlp
-import ffmpeg
-import subprocess
-import sys
-import io
-import wave
-from datetime import datetime
-from fairseq import checkpoint_utils
-from lib.infer_pack.models import (
- SynthesizerTrnMs256NSFsid,
- SynthesizerTrnMs256NSFsid_nono,
- SynthesizerTrnMs768NSFsid,
- SynthesizerTrnMs768NSFsid_nono,
-)
-from vc_infer_pipeline import VC
-from config import Config
-config = Config()
-logging.getLogger("numba").setLevel(logging.WARNING)
-spaces = os.getenv("SYSTEM") == "spaces"
-force_support = None
-if config.unsupported is False:
- if config.device == "mps" or config.device == "cpu":
- force_support = False
-else:
- force_support = True
-
-audio_mode = []
-f0method_mode = []
-f0method_info = ""
-
-if force_support is False or spaces is True:
- if spaces is True:
- audio_mode = ["Upload audio", "TTS Audio"]
- else:
- audio_mode = ["Input path", "Upload audio", "TTS Audio"]
- f0method_mode = ["pm", "harvest"]
- f0method_info = "PM is fast, Harvest is good but extremely slow, RMVPE is an alternative to Harvest (might be better). (Default: PM)"
-else:
- audio_mode = ["Input path", "Upload audio", "Youtube", "TTS Audio"]
- f0method_mode = ["pm", "harvest", "crepe"]
- f0method_info = "PM is fast, Harvest is good but extremely slow, RMVPE is an alternative to Harvest (might be better), and Crepe is good but requires a GPU (Default: PM)"
-
-if os.path.isfile("rmvpe.pt"):
- f0method_mode.insert(2, "rmvpe")
-
-def create_vc_fn(model_name, tgt_sr, net_g, vc, if_f0, version, file_index):
- def vc_fn(
- vc_audio_mode,
- vc_input,
- vc_upload,
- tts_text,
- tts_voice,
- f0_up_key,
- f0_method,
- index_rate,
- filter_radius,
- resample_sr,
- rms_mix_rate,
- protect,
- ):
- try:
- logs = []
- print(f"Converting using {model_name}...")
- logs.append(f"Converting using {model_name}...")
- yield "\n".join(logs), None
- if vc_audio_mode in ["Input path", "Youtube"] and vc_input != "":
- audio, sr = librosa.load(vc_input, sr=16000, mono=True)
- elif vc_audio_mode == "Upload audio":
- if vc_upload is None:
- return "You need to upload an audio", None
- sampling_rate, audio = vc_upload
- duration = audio.shape[0] / sampling_rate
- if duration > 20 and spaces:
- return "Please upload an audio file that is less than 20 seconds. If you need to generate a longer audio file, please use Colab.", None
- audio = (audio / np.iinfo(audio.dtype).max).astype(np.float32)
- if len(audio.shape) > 1:
- audio = librosa.to_mono(audio.transpose(1, 0))
- if sampling_rate != 16000:
- audio = librosa.resample(audio, orig_sr=sampling_rate, target_sr=16000)
- elif vc_audio_mode == "TTS Audio":
- if len(tts_text) > 100 and spaces:
- return "Text is too long", None
- if tts_text is None or tts_voice is None:
- return "You need to enter text and select a voice", None
- asyncio.run(edge_tts.Communicate(tts_text, "-".join(tts_voice.split('-')[:-1])).save("tts.mp3"))
- audio, sr = librosa.load("tts.mp3", sr=16000, mono=True)
- vc_input = "tts.mp3"
- times = [0, 0, 0]
- f0_up_key = int(f0_up_key)
- audio_opt = vc.pipeline(
- hubert_model,
- net_g,
- 0,
- audio,
- vc_input,
- times,
- f0_up_key,
- f0_method,
- file_index,
- # file_big_npy,
- index_rate,
- if_f0,
- filter_radius,
- tgt_sr,
- resample_sr,
- rms_mix_rate,
- version,
- protect,
- f0_file=None,
- )
- info = f"[{datetime.now().strftime('%Y-%m-%d %H:%M')}]: npy: {times[0]}, f0: {times[1]}s, infer: {times[2]}s"
- print(f"{model_name} | {info}")
- logs.append(f"Successfully Convert {model_name}\n{info}")
- yield "\n".join(logs), (tgt_sr, audio_opt)
- except:
- info = traceback.format_exc()
- print(info)
- yield info, None
- return vc_fn
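-
-# create_vc_fn returns a closure bound to one loaded model, so each model tab in the UI gets its own conversion function.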
-
-def load_model():
- categories = []
- if os.path.isfile("weights/folder_info.json"):
- with open("weights/folder_info.json", "r", encoding="utf-8") as f:
- folder_info = json.load(f)
- for category_name, category_info in folder_info.items():
- if not category_info['enable']:
- continue
- category_title = category_info['title']
- category_folder = category_info['folder_path']
- description = category_info['description']
- models = []
- with open(f"weights/{category_folder}/model_info.json", "r", encoding="utf-8") as f:
- models_info = json.load(f)
- for character_name, info in models_info.items():
- if not info['enable']:
- continue
- model_title = info['title']
- model_name = info['model_path']
- model_author = info.get("author", None)
- model_cover = f"weights/{category_folder}/{character_name}/{info['cover']}"
- model_index = f"weights/{category_folder}/{character_name}/{info['feature_retrieval_library']}"
- cpt = torch.load(f"weights/{category_folder}/{character_name}/{model_name}", map_location="cpu")
- tgt_sr = cpt["config"][-1]
- cpt["config"][-3] = cpt["weight"]["emb_g.weight"].shape[0] # n_spk
- if_f0 = cpt.get("f0", 1)
- version = cpt.get("version", "v1")
- if version == "v1":
- if if_f0 == 1:
- net_g = SynthesizerTrnMs256NSFsid(*cpt["config"], is_half=config.is_half)
- else:
- net_g = SynthesizerTrnMs256NSFsid_nono(*cpt["config"])
- model_version = "V1"
- elif version == "v2":
- if if_f0 == 1:
- net_g = SynthesizerTrnMs768NSFsid(*cpt["config"], is_half=config.is_half)
- else:
- net_g = SynthesizerTrnMs768NSFsid_nono(*cpt["config"])
- model_version = "V2"
- del net_g.enc_q
- print(net_g.load_state_dict(cpt["weight"], strict=False))
- net_g.eval().to(config.device)
- if config.is_half:
- net_g = net_g.half()
- else:
- net_g = net_g.float()
- vc = VC(tgt_sr, config)
- print(f"Model loaded: {character_name} / {info['feature_retrieval_library']} | ({model_version})")
- models.append((character_name, model_title, model_author, model_cover, model_version, create_vc_fn(model_name, tgt_sr, net_g, vc, if_f0, version, model_index)))
- categories.append([category_title, category_folder, description, models])
- else:
- categories = []
- return categories
-
-def download_audio(url, audio_provider):
- logs = []
- if url == "":
- raise gr.Error("URL Required!")
- return "URL Required"
- if not os.path.exists("dl_audio"):
- os.mkdir("dl_audio")
- if audio_provider == "Youtube":
- logs.append("Downloading the audio...")
- yield None, "\n".join(logs)
- ydl_opts = {
- 'noplaylist': True,
- 'format': 'bestaudio/best',
- 'postprocessors': [{
- 'key': 'FFmpegExtractAudio',
- 'preferredcodec': 'wav',
- }],
- "outtmpl": 'dl_audio/audio',
- }
- audio_path = "dl_audio/audio.wav"
- with yt_dlp.YoutubeDL(ydl_opts) as ydl:
- ydl.download([url])
- logs.append("Download Complete.")
- yield audio_path, "\n".join(logs)
-
-def cut_vocal_and_inst(split_model):
- logs = []
- logs.append("Starting the audio splitting process...")
- yield "\n".join(logs), None, None, None, None
- command = f"demucs --two-stems=vocals -n {split_model} dl_audio/audio.wav -o output"
- result = subprocess.Popen(command.split(), stdout=subprocess.PIPE, text=True)
- for line in result.stdout:
- logs.append(line)
- yield "\n".join(logs), None, None, None, None
- print(result.stdout)
- vocal = f"output/{split_model}/audio/vocals.wav"
- inst = f"output/{split_model}/audio/no_vocals.wav"
- logs.append("Audio splitting complete.")
- yield "\n".join(logs), vocal, inst, vocal
-
-def combine_vocal_and_inst(audio_data, vocal_volume, inst_volume, split_model):
- if not os.path.exists("output/result"):
- os.mkdir("output/result")
- vocal_path = "output/result/output.wav"
- output_path = "output/result/combine.mp3"
- inst_path = f"output/{split_model}/audio/no_vocals.wav"
- with wave.open(vocal_path, "w") as wave_file:
- wave_file.setnchannels(1)
- wave_file.setsampwidth(2)
- wave_file.setframerate(audio_data[0])
- wave_file.writeframes(audio_data[1].tobytes())
- command = f'ffmpeg -y -i {inst_path} -i {vocal_path} -filter_complex [0:a]volume={inst_volume}[i];[1:a]volume={vocal_volume}[v];[i][v]amix=inputs=2:duration=longest[a] -map [a] -b:a 320k -c:a libmp3lame {output_path}'
- result = subprocess.run(command.split(), stdout=subprocess.PIPE)
- print(result.stdout.decode())
- return output_path
-
-def load_hubert():
- global hubert_model
- models, _, _ = checkpoint_utils.load_model_ensemble_and_task(
- ["hubert_base.pt"],
- suffix="",
- )
- hubert_model = models[0]
- hubert_model = hubert_model.to(config.device)
- if config.is_half:
- hubert_model = hubert_model.half()
- else:
- hubert_model = hubert_model.float()
- hubert_model.eval()
-
-def change_audio_mode(vc_audio_mode):
- if vc_audio_mode == "Input path":
- return (
- # Input & Upload
- gr.Textbox.update(visible=True),
- gr.Checkbox.update(visible=False),
- gr.Audio.update(visible=False),
- # Youtube
- gr.Dropdown.update(visible=False),
- gr.Textbox.update(visible=False),
- gr.Textbox.update(visible=False),
- gr.Button.update(visible=False),
- # Splitter
- gr.Dropdown.update(visible=False),
- gr.Textbox.update(visible=False),
- gr.Button.update(visible=False),
- gr.Audio.update(visible=False),
- gr.Audio.update(visible=False),
- gr.Audio.update(visible=False),
- gr.Slider.update(visible=False),
- gr.Slider.update(visible=False),
- gr.Audio.update(visible=False),
- gr.Button.update(visible=False),
- # TTS
- gr.Textbox.update(visible=False),
- gr.Dropdown.update(visible=False)
- )
- elif vc_audio_mode == "Upload audio":
- return (
- # Input & Upload
- gr.Textbox.update(visible=False),
- gr.Checkbox.update(visible=True),
- gr.Audio.update(visible=True),
- # Youtube
- gr.Dropdown.update(visible=False),
- gr.Textbox.update(visible=False),
- gr.Textbox.update(visible=False),
- gr.Button.update(visible=False),
- # Splitter
- gr.Dropdown.update(visible=False),
- gr.Textbox.update(visible=False),
- gr.Button.update(visible=False),
- gr.Audio.update(visible=False),
- gr.Audio.update(visible=False),
- gr.Audio.update(visible=False),
- gr.Slider.update(visible=False),
- gr.Slider.update(visible=False),
- gr.Audio.update(visible=False),
- gr.Button.update(visible=False),
- # TTS
- gr.Textbox.update(visible=False),
- gr.Dropdown.update(visible=False)
- )
- elif vc_audio_mode == "Youtube":
- return (
- # Input & Upload
- gr.Textbox.update(visible=False),
- gr.Checkbox.update(visible=False),
- gr.Audio.update(visible=False),
- # Youtube
- gr.Dropdown.update(visible=True),
- gr.Textbox.update(visible=True),
- gr.Textbox.update(visible=True),
- gr.Button.update(visible=True),
- # Splitter
- gr.Dropdown.update(visible=True),
- gr.Textbox.update(visible=True),
- gr.Button.update(visible=True),
- gr.Audio.update(visible=True),
- gr.Audio.update(visible=True),
- gr.Audio.update(visible=True),
- gr.Slider.update(visible=True),
- gr.Slider.update(visible=True),
- gr.Audio.update(visible=True),
- gr.Button.update(visible=True),
- # TTS
- gr.Textbox.update(visible=False),
- gr.Dropdown.update(visible=False)
- )
- elif vc_audio_mode == "TTS Audio":
- return (
- # Input & Upload
- gr.Textbox.update(visible=False),
- gr.Checkbox.update(visible=False),
- gr.Audio.update(visible=False),
- # Youtube
- gr.Dropdown.update(visible=False),
- gr.Textbox.update(visible=False),
- gr.Textbox.update(visible=False),
- gr.Button.update(visible=False),
- # Splitter
- gr.Dropdown.update(visible=False),
- gr.Textbox.update(visible=False),
- gr.Button.update(visible=False),
- gr.Audio.update(visible=False),
- gr.Audio.update(visible=False),
- gr.Audio.update(visible=False),
- gr.Slider.update(visible=False),
- gr.Slider.update(visible=False),
- gr.Audio.update(visible=False),
- gr.Button.update(visible=False),
- # TTS
- gr.Textbox.update(visible=True),
- gr.Dropdown.update(visible=True)
- )
-
-def use_microphone(microphone):
- if microphone == True:
- return gr.Audio.update(source="microphone")
- else:
- return gr.Audio.update(source="upload")
-
-if __name__ == '__main__':
- load_hubert()
- categories = load_model()
- tts_voice_list = asyncio.new_event_loop().run_until_complete(edge_tts.list_voices())
- voices = [f"{v['ShortName']}-{v['Gender']}" for v in tts_voice_list]
- with gr.Blocks() as app:
- gr.Markdown(
- "
\n\n"+
- "# RVC V2 MODELS GENSHIN IMPACT\n\n"+
- "### Recommended to use Google Colab to use other character and feature.\n\n"+
- "#### All of this voice samples are taken from the game Genshin Impact, and all voice credits belong to hoyoverse.\n\n"+
- "##### NO COLAB! IM DONE WITH THAT SH*T!. \n\n"+
- "[![Google collab]](https://colab.research.google.com/drive/1KcR2BO1VGdZR7ZF2luvH7lo1QWujHi-Q?usp=sharing)
- "[](https://github.com/ArkanDash/Multi-Model-RVC-Inference)\n\n"+
- "
"
- )
- if categories == []:
- gr.Markdown(
- "<div align='center'>\n\n"+
- "## No model found, please add the model into the weights folder\n\n"+
- "</div>"
- )
- for (folder_title, folder, description, models) in categories:
- with gr.TabItem(folder_title):
- if description:
- gr.Markdown(f"### <center> {description}")
- with gr.Tabs():
- if not models:
- gr.Markdown("# <center> No Model Loaded.")
- gr.Markdown("## <center> Please add the model or fix your model path.")
- continue
- for (name, title, author, cover, model_version, vc_fn) in models:
- with gr.TabItem(name):
- with gr.Row():
- gr.Markdown(
- '<div align="center">'
- f'<div>{title}</div>\n'+
- f'<div>RVC {model_version} Model</div>\n'+
- (f'<div>Model author: {author}</div>' if author else "")+
- (f'<img src="file/{cover}">' if cover else "")+
- '</div>'
- )
- with gr.Row():
- if spaces is False:
- with gr.TabItem("Input"):
- with gr.Row():
- with gr.Column():
- vc_audio_mode = gr.Dropdown(label="Input voice", choices=audio_mode, allow_custom_value=False, value="Upload audio")
- # Input
- vc_input = gr.Textbox(label="Input audio path", visible=False)
- # Upload
- vc_microphone_mode = gr.Checkbox(label="Use Microphone", value=False, visible=True, interactive=True)
- vc_upload = gr.Audio(label="Upload audio file", source="upload", visible=True, interactive=True)
- # Youtube
- vc_download_audio = gr.Dropdown(label="Provider", choices=["Youtube"], allow_custom_value=False, visible=False, value="Youtube", info="Select provider (Default: Youtube)")
- vc_link = gr.Textbox(label="Youtube URL", visible=False, info="Example: https://www.youtube.com/watch?v=Nc0sB1Bmf-A", placeholder="https://www.youtube.com/watch?v=...")
- vc_log_yt = gr.Textbox(label="Output Information", visible=False, interactive=False)
- vc_download_button = gr.Button("Download Audio", variant="primary", visible=False)
- vc_audio_preview = gr.Audio(label="Audio Preview", visible=False)
- # TTS
- tts_text = gr.Textbox(label="TTS text", info="Text to speech input", visible=False)
- tts_voice = gr.Dropdown(label="Edge-tts speaker", choices=voices, visible=False, allow_custom_value=False, value="en-US-AnaNeural-Female")
- with gr.Column():
- vc_split_model = gr.Dropdown(label="Splitter Model", choices=["hdemucs_mmi", "htdemucs", "htdemucs_ft", "mdx", "mdx_q", "mdx_extra_q"], allow_custom_value=False, visible=False, value="htdemucs", info="Select the splitter model (Default: htdemucs)")
- vc_split_log = gr.Textbox(label="Output Information", visible=False, interactive=False)
- vc_split = gr.Button("Split Audio", variant="primary", visible=False)
- vc_vocal_preview = gr.Audio(label="Vocal Preview", visible=False)
- vc_inst_preview = gr.Audio(label="Instrumental Preview", visible=False)
- with gr.TabItem("Convert"):
- with gr.Row():
- with gr.Column():
- vc_transform0 = gr.Number(label="Transpose", value=0, info='Type "12" to change from male to female voice. Type "-12" to change female to male voice')
- f0method0 = gr.Radio(
- label="Pitch extraction algorithm",
- info=f0method_info,
- choices=f0method_mode,
- value="pm",
- interactive=True
- )
- index_rate1 = gr.Slider(
- minimum=0,
- maximum=1,
- label="Retrieval feature ratio",
- info="(Default: 0.7)",
- value=0.7,
- interactive=True,
- )
- filter_radius0 = gr.Slider(
- minimum=0,
- maximum=7,
- label="Apply Median Filtering",
- info="The value represents the filter radius and can reduce breathiness.",
- value=3,
- step=1,
- interactive=True,
- )
- resample_sr0 = gr.Slider(
- minimum=0,
- maximum=48000,
- label="Resample the output audio",
- info="Resample the output audio in post-processing to the final sample rate. Set to 0 for no resampling",
- value=0,
- step=1,
- interactive=True,
- )
- rms_mix_rate0 = gr.Slider(
- minimum=0,
- maximum=1,
- label="Volume Envelope",
- info="Use the volume envelope of the input to replace or mix with the volume envelope of the output. The closer the ratio is to 1, the more the output envelope is used",
- value=1,
- interactive=True,
- )
- protect0 = gr.Slider(
- minimum=0,
- maximum=0.5,
- label="Voice Protection",
- info="Protect voiceless consonants and breath sounds to prevent artifacts such as tearing in electronic music. Set to 0.5 to disable. Decrease the value to increase protection, but it may reduce indexing accuracy",
- value=0.5,
- step=0.01,
- interactive=True,
- )
- with gr.Column():
- vc_log = gr.Textbox(label="Output Information", interactive=False)
- vc_output = gr.Audio(label="Output Audio", interactive=False)
- vc_convert = gr.Button("Convert", variant="primary")
- vc_vocal_volume = gr.Slider(
- minimum=0,
- maximum=10,
- label="Vocal volume",
- value=1,
- interactive=True,
- step=1,
-                                info="Adjust vocal volume (Default: 1)",
- visible=False
- )
- vc_inst_volume = gr.Slider(
- minimum=0,
- maximum=10,
- label="Instrument volume",
- value=1,
- interactive=True,
- step=1,
-                                info="Adjust instrument volume (Default: 1)",
- visible=False
- )
- vc_combined_output = gr.Audio(label="Output Combined Audio", visible=False)
- vc_combine = gr.Button("Combine",variant="primary", visible=False)
- else:
- with gr.Column():
- vc_audio_mode = gr.Dropdown(label="Input voice", choices=audio_mode, allow_custom_value=False, value="Upload audio")
- # Input
- vc_input = gr.Textbox(label="Input audio path", visible=False)
- # Upload
- vc_microphone_mode = gr.Checkbox(label="Use Microphone", value=False, visible=True, interactive=True)
- vc_upload = gr.Audio(label="Upload audio file", source="upload", visible=True, interactive=True)
- # Youtube
- vc_download_audio = gr.Dropdown(label="Provider", choices=["Youtube"], allow_custom_value=False, visible=False, value="Youtube", info="Select provider (Default: Youtube)")
- vc_link = gr.Textbox(label="Youtube URL", visible=False, info="Example: https://www.youtube.com/watch?v=Nc0sB1Bmf-A", placeholder="https://www.youtube.com/watch?v=...")
- vc_log_yt = gr.Textbox(label="Output Information", visible=False, interactive=False)
- vc_download_button = gr.Button("Download Audio", variant="primary", visible=False)
- vc_audio_preview = gr.Audio(label="Audio Preview", visible=False)
- # Splitter
- vc_split_model = gr.Dropdown(label="Splitter Model", choices=["hdemucs_mmi", "htdemucs", "htdemucs_ft", "mdx", "mdx_q", "mdx_extra_q"], allow_custom_value=False, visible=False, value="htdemucs", info="Select the splitter model (Default: htdemucs)")
- vc_split_log = gr.Textbox(label="Output Information", visible=False, interactive=False)
- vc_split = gr.Button("Split Audio", variant="primary", visible=False)
- vc_vocal_preview = gr.Audio(label="Vocal Preview", visible=False)
- vc_inst_preview = gr.Audio(label="Instrumental Preview", visible=False)
- # TTS
- tts_text = gr.Textbox(label="TTS text", info="Text to speech input", visible=False)
- tts_voice = gr.Dropdown(label="Edge-tts speaker", choices=voices, visible=False, allow_custom_value=False, value="en-US-AnaNeural-Female")
- with gr.Column():
-                        vc_transform0 = gr.Number(label="Transpose", value=0, info='Type "12" to change from male to female voice. Type "-12" to change from female to male voice')
- f0method0 = gr.Radio(
- label="Pitch extraction algorithm",
- info=f0method_info,
- choices=f0method_mode,
- value="pm",
- interactive=True
- )
- index_rate1 = gr.Slider(
- minimum=0,
- maximum=1,
- label="Retrieval feature ratio",
- info="(Default: 0.7)",
- value=0.7,
- interactive=True,
- )
- filter_radius0 = gr.Slider(
- minimum=0,
- maximum=7,
- label="Apply Median Filtering",
- info="The value represents the filter radius and can reduce breathiness.",
- value=3,
- step=1,
- interactive=True,
- )
- resample_sr0 = gr.Slider(
- minimum=0,
- maximum=48000,
- label="Resample the output audio",
- info="Resample the output audio in post-processing to the final sample rate. Set to 0 for no resampling",
- value=0,
- step=1,
- interactive=True,
- )
- rms_mix_rate0 = gr.Slider(
- minimum=0,
- maximum=1,
- label="Volume Envelope",
- info="Use the volume envelope of the input to replace or mix with the volume envelope of the output. The closer the ratio is to 1, the more the output envelope is used",
- value=1,
- interactive=True,
- )
- protect0 = gr.Slider(
- minimum=0,
- maximum=0.5,
- label="Voice Protection",
- info="Protect voiceless consonants and breath sounds to prevent artifacts such as tearing in electronic music. Set to 0.5 to disable. Decrease the value to increase protection, but it may reduce indexing accuracy",
- value=0.5,
- step=0.01,
- interactive=True,
- )
- with gr.Column():
- vc_log = gr.Textbox(label="Output Information", interactive=False)
- vc_output = gr.Audio(label="Output Audio", interactive=False)
- vc_convert = gr.Button("Convert", variant="primary")
- vc_vocal_volume = gr.Slider(
- minimum=0,
- maximum=10,
- label="Vocal volume",
- value=1,
- interactive=True,
- step=1,
-                            info="Adjust vocal volume (Default: 1)",
- visible=False
- )
- vc_inst_volume = gr.Slider(
- minimum=0,
- maximum=10,
- label="Instrument volume",
- value=1,
- interactive=True,
- step=1,
-                            info="Adjust instrument volume (Default: 1)",
- visible=False
- )
- vc_combined_output = gr.Audio(label="Output Combined Audio", visible=False)
- vc_combine = gr.Button("Combine",variant="primary", visible=False)
- vc_convert.click(
- fn=vc_fn,
- inputs=[
- vc_audio_mode,
- vc_input,
- vc_upload,
- tts_text,
- tts_voice,
- vc_transform0,
- f0method0,
- index_rate1,
- filter_radius0,
- resample_sr0,
- rms_mix_rate0,
- protect0,
- ],
- outputs=[vc_log ,vc_output]
- )
- vc_download_button.click(
- fn=download_audio,
- inputs=[vc_link, vc_download_audio],
- outputs=[vc_audio_preview, vc_log_yt]
- )
- vc_split.click(
- fn=cut_vocal_and_inst,
- inputs=[vc_split_model],
- outputs=[vc_split_log, vc_vocal_preview, vc_inst_preview, vc_input]
- )
- vc_combine.click(
- fn=combine_vocal_and_inst,
- inputs=[vc_output, vc_vocal_volume, vc_inst_volume, vc_split_model],
- outputs=[vc_combined_output]
- )
- vc_microphone_mode.change(
- fn=use_microphone,
- inputs=vc_microphone_mode,
- outputs=vc_upload
- )
- vc_audio_mode.change(
- fn=change_audio_mode,
- inputs=[vc_audio_mode],
- outputs=[
- vc_input,
- vc_microphone_mode,
- vc_upload,
- vc_download_audio,
- vc_link,
- vc_log_yt,
- vc_download_button,
- vc_split_model,
- vc_split_log,
- vc_split,
- vc_audio_preview,
- vc_vocal_preview,
- vc_inst_preview,
- vc_vocal_volume,
- vc_inst_volume,
- vc_combined_output,
- vc_combine,
- tts_text,
- tts_voice
- ]
- )
- app.queue(concurrency_count=1, max_size=20, api_open=config.api).launch(share=config.colab)
\ No newline at end of file
diff --git a/spaces/Felix123456/bingo/src/components/chat.tsx b/spaces/Felix123456/bingo/src/components/chat.tsx
deleted file mode 100644
index a37ab1cc96ca2e6bfd9acbe313a8d946bfd5c3d4..0000000000000000000000000000000000000000
--- a/spaces/Felix123456/bingo/src/components/chat.tsx
+++ /dev/null
@@ -1,93 +0,0 @@
-'use client'
-
-import { useCallback, useEffect, useMemo, useState } from 'react'
-import { useAtom } from 'jotai'
-import Image from 'next/image'
-import { cn } from '@/lib/utils'
-import { ChatList } from '@/components/chat-list'
-import { ChatPanel } from '@/components/chat-panel'
-import { WelcomeScreen } from '@/components/welcome-screen'
-import { ChatScrollAnchor } from '@/components/chat-scroll-anchor'
-import { ToneSelector } from './tone-selector'
-import { ChatHeader } from './chat-header'
-import { ChatSuggestions } from './chat-suggestions'
-import { bingConversationStyleAtom } from '@/state'
-import { ButtonScrollToBottom } from '@/components/button-scroll-to-bottom'
-import StopIcon from '@/assets/images/stop.svg'
-import { useBing } from '@/lib/hooks/use-bing'
-import { ChatMessageModel } from '@/lib/bots/bing/types'
-import { ChatNotification } from './chat-notification'
-import { Settings } from './settings'
-import { ChatHistory } from './chat-history'
-
-export type ChatProps = React.ComponentProps<'div'> & { initialMessages?: ChatMessageModel[] }
-
-export default function Chat({ className }: ChatProps) {
-
- const [bingStyle, setBingStyle] = useAtom(bingConversationStyleAtom)
- const {
- messages,
- sendMessage,
- resetConversation,
- stopGenerating,
- setInput,
- bot,
- input,
- generating,
- isSpeaking,
- uploadImage,
- attachmentList,
- setAttachmentList,
- } = useBing()
-
- useEffect(() => {
- window.scrollTo({
- top: document.body.offsetHeight,
- behavior: 'smooth'
- })
- }, [])
-
- return (
-
- )
-
- with gr.Tabs():
- with gr.TabItem("vits"):
- with gr.Row():
- with gr.Column():
- input_text = gr.Textbox(label="Text (100 words limitation)", lines=5, value="今天晚上吃啥好呢。", elem_id=f"input-text")
- lang = gr.Dropdown(label="Language", choices=["中文", "日语", "中日混合(中文用[ZH][ZH]包裹起来,日文用[JA][JA]包裹起来)"],
- type="index", value="中文")
- btn = gr.Button(value="Submit")
- with gr.Row():
- search = gr.Textbox(label="Search Speaker", lines=1)
- btn2 = gr.Button(value="Search")
- sid = gr.Dropdown(label="Speaker", choices=speakers, type="index", value=speakers[228])
- with gr.Row():
- ns = gr.Slider(label="noise_scale(控制感情变化程度)", minimum=0.1, maximum=1.0, step=0.1, value=0.6, interactive=True)
- nsw = gr.Slider(label="noise_scale_w(控制音素发音长度)", minimum=0.1, maximum=1.0, step=0.1, value=0.668, interactive=True)
- ls = gr.Slider(label="length_scale(控制整体语速)", minimum=0.1, maximum=2.0, step=0.1, value=1.2, interactive=True)
- with gr.Column():
- o1 = gr.Textbox(label="Output Message")
- o2 = gr.Audio(label="Output Audio", elem_id=f"tts-audio")
- o3 = gr.Textbox(label="Extra Info")
- download = gr.Button("Download Audio")
- btn.click(vits, inputs=[input_text, lang, sid, ns, nsw, ls], outputs=[o1, o2, o3], api_name="generate")
- download.click(None, [], [], _js=download_audio_js.format())
- btn2.click(search_speaker, inputs=[search], outputs=[sid])
- lang.change(change_lang, inputs=[lang], outputs=[ns, nsw, ls])
- with gr.TabItem("可用人物一览"):
- gr.Radio(label="Speaker", choices=speakers, interactive=False, type="index")
- app.queue(concurrency_count=1).launch()
\ No newline at end of file
diff --git a/spaces/Froleptan/stablediffusion-infinity/postprocess.py b/spaces/Froleptan/stablediffusion-infinity/postprocess.py
deleted file mode 100644
index 90c7f535c568fa46b6433390459d82e7967bb1fd..0000000000000000000000000000000000000000
--- a/spaces/Froleptan/stablediffusion-infinity/postprocess.py
+++ /dev/null
@@ -1,249 +0,0 @@
-"""
-https://github.com/Trinkle23897/Fast-Poisson-Image-Editing
-MIT License
-
-Copyright (c) 2022 Jiayi Weng
-
-Permission is hereby granted, free of charge, to any person obtaining a copy
-of this software and associated documentation files (the "Software"), to deal
-in the Software without restriction, including without limitation the rights
-to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
-copies of the Software, and to permit persons to whom the Software is
-furnished to do so, subject to the following conditions:
-
-The above copyright notice and this permission notice shall be included in all
-copies or substantial portions of the Software.
-
-THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
-IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
-FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
-AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
-LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
-OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
-SOFTWARE.
-"""
-
-import time
-import argparse
-import os
-import fpie
-from process import ALL_BACKEND, CPU_COUNT, DEFAULT_BACKEND
-from fpie.io import read_images, write_image
-from process import BaseProcessor, EquProcessor, GridProcessor
-
-from PIL import Image
-import numpy as np
-import skimage
-import skimage.measure
-import scipy
-import scipy.signal
-
-
-class PhotometricCorrection:
- def __init__(self,quite=False):
- self.get_parser("cli")
- args=self.parser.parse_args(["--method","grid","-g","src","-s","a","-t","a","-o","a"])
- args.mpi_sync_interval = getattr(args, "mpi_sync_interval", 0)
- self.backend=args.backend
- self.args=args
- self.quite=quite
- proc: BaseProcessor
- proc = GridProcessor(
- args.gradient,
- args.backend,
- args.cpu,
- args.mpi_sync_interval,
- args.block_size,
- args.grid_x,
- args.grid_y,
- )
- print(
- f"[PIE]Successfully initialize PIE {args.method} solver "
- f"with {args.backend} backend"
- )
- self.proc=proc
-
- def run(self, original_image, inpainted_image, mode="mask_mode"):
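-    """Blend the inpainted result back into the original using Poisson image editing.
-
-    The last (alpha) channel of ``original_image`` marks where content already exists;
-    its inverse (optionally dilated to 8-pixel blocks) selects the region handed to the
-    Poisson solver, so the inpainted area is photometrically matched to the original.
-    """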
- print(f"[PIE] start")
- if mode=="disabled":
- return inpainted_image
- input_arr=np.array(original_image)
- if input_arr[:,:,-1].sum()<1:
- return inpainted_image
- output_arr=np.array(inpainted_image)
- mask=input_arr[:,:,-1]
- mask=255-mask
- if mask.sum()<1 and mode=="mask_mode":
- mode=""
- if mode=="mask_mode":
- mask = skimage.measure.block_reduce(mask, (8, 8), np.max)
- mask = mask.repeat(8, axis=0).repeat(8, axis=1)
- else:
- mask[8:-9,8:-9]=255
- mask = mask[:,:,np.newaxis].repeat(3,axis=2)
- nmask=mask.copy()
- output_arr2=output_arr[:,:,0:3].copy()
- input_arr2=input_arr[:,:,0:3].copy()
- output_arr2[nmask<128]=0
- input_arr2[nmask>=128]=0
- output_arr2+=input_arr2
- src = output_arr2[:,:,0:3]
- tgt = src.copy()
- proc=self.proc
- args=self.args
- if proc.root:
- n = proc.reset(src, mask, tgt, (args.h0, args.w0), (args.h1, args.w1))
- proc.sync()
- if proc.root:
- result = tgt
- t = time.time()
- if args.p == 0:
- args.p = args.n
-
- for i in range(0, args.n, args.p):
- if proc.root:
- result, err = proc.step(args.p) # type: ignore
- print(f"[PIE] Iter {i + args.p}, abs_err {err}")
- else:
- proc.step(args.p)
-
- if proc.root:
- dt = time.time() - t
- print(f"[PIE] Time elapsed: {dt:.4f}s")
- # make sure consistent with dummy process
- return Image.fromarray(result)
-
-
- def get_parser(self,gen_type: str) -> argparse.Namespace:
- parser = argparse.ArgumentParser()
- parser.add_argument(
- "-v", "--version", action="store_true", help="show the version and exit"
- )
- parser.add_argument(
- "--check-backend", action="store_true", help="print all available backends"
- )
- if gen_type == "gui" and "mpi" in ALL_BACKEND:
- # gui doesn't support MPI backend
- ALL_BACKEND.remove("mpi")
- parser.add_argument(
- "-b",
- "--backend",
- type=str,
- choices=ALL_BACKEND,
- default=DEFAULT_BACKEND,
- help="backend choice",
- )
- parser.add_argument(
- "-c",
- "--cpu",
- type=int,
- default=CPU_COUNT,
- help="number of CPU used",
- )
- parser.add_argument(
- "-z",
- "--block-size",
- type=int,
- default=1024,
- help="cuda block size (only for equ solver)",
- )
- parser.add_argument(
- "--method",
- type=str,
- choices=["equ", "grid"],
- default="equ",
- help="how to parallelize computation",
- )
- parser.add_argument("-s", "--source", type=str, help="source image filename")
- if gen_type == "cli":
- parser.add_argument(
- "-m",
- "--mask",
- type=str,
- help="mask image filename (default is to use the whole source image)",
- default="",
- )
- parser.add_argument("-t", "--target", type=str, help="target image filename")
- parser.add_argument("-o", "--output", type=str, help="output image filename")
- if gen_type == "cli":
- parser.add_argument(
- "-h0", type=int, help="mask position (height) on source image", default=0
- )
- parser.add_argument(
- "-w0", type=int, help="mask position (width) on source image", default=0
- )
- parser.add_argument(
- "-h1", type=int, help="mask position (height) on target image", default=0
- )
- parser.add_argument(
- "-w1", type=int, help="mask position (width) on target image", default=0
- )
- parser.add_argument(
- "-g",
- "--gradient",
- type=str,
- choices=["max", "src", "avg"],
- default="max",
- help="how to calculate gradient for PIE",
- )
- parser.add_argument(
- "-n",
- type=int,
-      help="how many iterations would you prefer, the more the better",
- default=5000,
- )
- if gen_type == "cli":
- parser.add_argument(
- "-p", type=int, help="output result every P iteration", default=0
- )
- if "mpi" in ALL_BACKEND:
- parser.add_argument(
- "--mpi-sync-interval",
- type=int,
- help="MPI sync iteration interval",
- default=100,
- )
- parser.add_argument(
- "--grid-x", type=int, help="x axis stride for grid solver", default=8
- )
- parser.add_argument(
- "--grid-y", type=int, help="y axis stride for grid solver", default=8
- )
- self.parser=parser
-
-if __name__ =="__main__":
- import sys
- import io
- import base64
- from PIL import Image
- def base64_to_pil(base64_str):
- data = base64.b64decode(str(base64_str))
- pil = Image.open(io.BytesIO(data))
- return pil
-
- def pil_to_base64(out_pil):
- out_buffer = io.BytesIO()
- out_pil.save(out_buffer, format="PNG")
- out_buffer.seek(0)
- base64_bytes = base64.b64encode(out_buffer.read())
- base64_str = base64_bytes.decode("ascii")
- return base64_str
- correction_func=PhotometricCorrection(quite=True)
- while True:
- buffer = sys.stdin.readline()
-    print(f"[PIE] subprocess {len(buffer)} {type(buffer)} ")
- if len(buffer)==0:
- break
- if isinstance(buffer,str):
- lst=buffer.strip().split(",")
- else:
- lst=buffer.decode("ascii").strip().split(",")
- img0=base64_to_pil(lst[0])
- img1=base64_to_pil(lst[1])
- ret=correction_func.run(img0,img1,mode=lst[2])
- ret_base64=pil_to_base64(ret)
- if isinstance(buffer,str):
- sys.stdout.write(f"{ret_base64}\n")
- else:
- sys.stdout.write(f"{ret_base64}\n".encode())
- sys.stdout.flush()
\ No newline at end of file
diff --git a/spaces/GAITOR/MLMondayDemo-Week1/app.py b/spaces/GAITOR/MLMondayDemo-Week1/app.py
deleted file mode 100644
index 17190745e693d470632da94d5fcdfc21c65ede33..0000000000000000000000000000000000000000
--- a/spaces/GAITOR/MLMondayDemo-Week1/app.py
+++ /dev/null
@@ -1,68 +0,0 @@
-import tensorflow as tf
-import matplotlib.pyplot as plt
-from PIL import Image, ImageOps
-from tensorflow.keras.utils import img_to_array
-
-from streamlit_drawable_canvas import st_canvas
-import streamlit as st
-
-# st.set_page_config(layout="wide")
-
-st.write('# MNIST Digit Recognition')
-st.write('## Using trained CNN `Keras` model')
-st.write('To view how this model was trained go to the `Files and Versions` tab and download the `Week1.ipynb` notebook')
-
-# Import Pre-trained Model
-model = tf.keras.models.load_model('mnist.h5')
-tf.device('/cpu:0')
-plt.rcParams.update({'font.size': 18})
-
-# Create a sidebar to hold the settings
-stroke_width = st.sidebar.slider("Stroke width: ", 1, 25, 9)
-realtime_update = st.sidebar.checkbox("Update in realtime", True)
-
-
-canvas_result = st_canvas(
- fill_color="rgba(255, 165, 0, 0.3)", # Fixed fill color with some opacity
- stroke_width=stroke_width,
- stroke_color='#FFFFFF',
- background_color='#000000',
- #background_image=Image.open(bg_image) if bg_image else None,
- update_streamlit=realtime_update,
- height=28*9,
- width=28*9,
- drawing_mode='freedraw',
- key="canvas",
-)
-
-if canvas_result.image_data is not None:
-
- # Get image data from canvas
- im = ImageOps.grayscale(Image.fromarray(canvas_result.image_data.astype(
- 'uint8'), mode="RGBA")).resize((28, 28))
-
- # Convert image to array and reshape
- data = img_to_array(im)
- data = data / 255
- data = data.reshape(1, 28, 28, 1)
- data = data.astype('float32')
-
- # Predict digit
- st.write('### Predicted Digit')
- prediction = model.predict(data)
-
- # Plot prediction
- result = plt.figure(figsize=(12, 3))
- plt.bar(range(10), prediction[0])
- plt.xticks(range(10))
- plt.xlabel('Digit')
- plt.ylabel('Probability')
- plt.title('Drawing Prediction')
- plt.ylim(0, 1)
- st.write(result)
-
- # Show resized image
- with st.expander('Show Resized Image'):
- st.write(
-            "The drawing needs to be resized, because the model only accepts 28x28 inputs")
- st.image(im, caption='Resized Image', width=28*9)
diff --git a/spaces/GastonMazzei/escher-inpaint-project/glide_text2im/tokenizer/simple_tokenizer.py b/spaces/GastonMazzei/escher-inpaint-project/glide_text2im/tokenizer/simple_tokenizer.py
deleted file mode 100644
index c84cc8fb3adff99225d3e3a75b2a3d81564adcef..0000000000000000000000000000000000000000
--- a/spaces/GastonMazzei/escher-inpaint-project/glide_text2im/tokenizer/simple_tokenizer.py
+++ /dev/null
@@ -1,163 +0,0 @@
-"""
-Copied from: https://github.com/openai/CLIP/blob/573315e83f07b53a61ff5098757e8fc885f1703e/clip/simple_tokenizer.py
-"""
-
-import gzip
-import html
-import os
-from functools import lru_cache
-from typing import List, Tuple
-
-import ftfy
-import regex as re
-
-
-@lru_cache()
-def default_bpe():
- return os.path.join(os.path.dirname(os.path.abspath(__file__)), "bpe_simple_vocab_16e6.txt.gz")
-
-
-@lru_cache()
-def bytes_to_unicode():
- """
- Returns list of utf-8 byte and a corresponding list of unicode strings.
- The reversible bpe codes work on unicode strings.
- This means you need a large # of unicode characters in your vocab if you want to avoid UNKs.
- When you're at something like a 10B token dataset you end up needing around 5K for decent coverage.
-    This is a significant percentage of your normal, say, 32K bpe vocab.
- To avoid that, we want lookup tables between utf-8 bytes and unicode strings.
- And avoids mapping to whitespace/control characters the bpe code barfs on.
- """
- bs = (
- list(range(ord("!"), ord("~") + 1))
- + list(range(ord("¡"), ord("¬") + 1))
- + list(range(ord("®"), ord("ÿ") + 1))
- )
- cs = bs[:]
- n = 0
- for b in range(2 ** 8):
- if b not in bs:
- bs.append(b)
- cs.append(2 ** 8 + n)
- n += 1
- cs = [chr(n) for n in cs]
- return dict(zip(bs, cs))
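-# For example, bytes_to_unicode()[32] == 'Ġ' (U+0120): the raw space byte is shifted into the
-# printable range, which is why GPT-2-style BPE vocab entries show 'Ġ' for a leading space.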
-
-
-def get_pairs(word):
- """Return set of symbol pairs in a word.
- Word is represented as tuple of symbols (symbols being variable-length strings).
- """
- pairs = set()
- prev_char = word[0]
- for char in word[1:]:
- pairs.add((prev_char, char))
- prev_char = char
- return pairs
-
-
-def basic_clean(text):
- text = ftfy.fix_text(text)
- text = html.unescape(html.unescape(text))
- return text.strip()
-
-
-def whitespace_clean(text):
- text = re.sub(r"\s+", " ", text)
- text = text.strip()
- return text
-
-
-class SimpleTokenizer(object):
- def __init__(self, bpe_path: str = default_bpe()):
- self.byte_encoder = bytes_to_unicode()
- self.byte_decoder = {v: k for k, v in self.byte_encoder.items()}
- merges = gzip.open(bpe_path).read().decode("utf-8").split("\n")
- merges = merges[1 : 49152 - 256 - 2 + 1]
- merges = [tuple(merge.split()) for merge in merges]
- vocab = list(bytes_to_unicode().values())
-        vocab = vocab + [v + "</w>" for v in vocab]
- for merge in merges:
- vocab.append("".join(merge))
- vocab.extend(["<|startoftext|>", "<|endoftext|>"])
- self.encoder = dict(zip(vocab, range(len(vocab))))
- self.decoder = {v: k for k, v in self.encoder.items()}
- self.bpe_ranks = dict(zip(merges, range(len(merges))))
- self.cache = {"<|startoftext|>": "<|startoftext|>", "<|endoftext|>": "<|endoftext|>"}
- self.pat = re.compile(
- r"""<\|startoftext\|>|<\|endoftext\|>|'s|'t|'re|'ve|'m|'ll|'d|[\p{L}]+|[\p{N}]|[^\s\p{L}\p{N}]+""",
- re.IGNORECASE,
- )
-
- @property
- def start_token(self):
- return self.encoder["<|startoftext|>"]
-
- @property
- def end_token(self):
- return self.encoder["<|endoftext|>"]
-
- def padded_tokens_and_len(self, tokens: List[int], text_ctx: int) -> Tuple[List[int], int]:
- tokens = [self.start_token] + tokens[: text_ctx - 2] + [self.end_token]
- text_len = len(tokens)
- padding = text_ctx - len(tokens)
- padded_tokens = tokens + [0] * padding
- return padded_tokens, text_len
-
- def bpe(self, token):
- if token in self.cache:
- return self.cache[token]
-        word = tuple(token[:-1]) + (token[-1] + "</w>",)
- pairs = get_pairs(word)
-
- if not pairs:
-            return token + "</w>"
-
- while True:
- bigram = min(pairs, key=lambda pair: self.bpe_ranks.get(pair, float("inf")))
- if bigram not in self.bpe_ranks:
- break
- first, second = bigram
- new_word = []
- i = 0
- while i < len(word):
- try:
- j = word.index(first, i)
- new_word.extend(word[i:j])
- i = j
- except: # pylint: disable=bare-except
- new_word.extend(word[i:])
- break
-
- if word[i] == first and i < len(word) - 1 and word[i + 1] == second:
- new_word.append(first + second)
- i += 2
- else:
- new_word.append(word[i])
- i += 1
- new_word = tuple(new_word)
- word = new_word
- if len(word) == 1:
- break
- else:
- pairs = get_pairs(word)
- word = " ".join(word)
- self.cache[token] = word
- return word
-
- def encode(self, text):
- bpe_tokens = []
- text = whitespace_clean(basic_clean(text)).lower()
- for token in re.findall(self.pat, text):
- token = "".join(self.byte_encoder[b] for b in token.encode("utf-8"))
- bpe_tokens.extend(self.encoder[bpe_token] for bpe_token in self.bpe(token).split(" "))
- return bpe_tokens
-
- def decode(self, tokens):
- text = "".join([self.decoder[token] for token in tokens])
- text = (
- bytearray([self.byte_decoder[c] for c in text])
- .decode("utf-8", errors="replace")
-            .replace("</w>", " ")
- )
- return text
diff --git a/spaces/Godrose0728/sound-link/text/shanghainese.py b/spaces/Godrose0728/sound-link/text/shanghainese.py
deleted file mode 100644
index 1c28c17d0dc0d920fd222c909a53d703c95e043b..0000000000000000000000000000000000000000
--- a/spaces/Godrose0728/sound-link/text/shanghainese.py
+++ /dev/null
@@ -1,64 +0,0 @@
-import re
-import cn2an
-import opencc
-
-
-converter = opencc.OpenCC('chinese_dialect_lexicons/zaonhe')
-
-# List of (Latin alphabet, ipa) pairs:
-_latin_to_ipa = [(re.compile('%s' % x[0]), x[1]) for x in [
- ('A', 'ᴇ'),
- ('B', 'bi'),
- ('C', 'si'),
- ('D', 'di'),
- ('E', 'i'),
- ('F', 'ᴇf'),
- ('G', 'dʑi'),
- ('H', 'ᴇtɕʰ'),
- ('I', 'ᴀi'),
- ('J', 'dʑᴇ'),
- ('K', 'kʰᴇ'),
- ('L', 'ᴇl'),
- ('M', 'ᴇm'),
- ('N', 'ᴇn'),
- ('O', 'o'),
- ('P', 'pʰi'),
- ('Q', 'kʰiu'),
- ('R', 'ᴀl'),
- ('S', 'ᴇs'),
- ('T', 'tʰi'),
- ('U', 'ɦiu'),
- ('V', 'vi'),
- ('W', 'dᴀbɤliu'),
- ('X', 'ᴇks'),
- ('Y', 'uᴀi'),
- ('Z', 'zᴇ')
-]]
-
-
-def _number_to_shanghainese(num):
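-    # Convert Arabic numerals to Chinese characters, then apply Shanghainese reading habits:
-    # 一十 -> 十, 二十 -> 廿, and 二 -> 两, except that 二 is restored directly after 十/廿.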
- num = cn2an.an2cn(num).replace('一十','十').replace('二十', '廿').replace('二', '两')
- return re.sub(r'((?:^|[^三四五六七八九])十|廿)两', r'\1二', num)
-
-
-def number_to_shanghainese(text):
- return re.sub(r'\d+(?:\.?\d+)?', lambda x: _number_to_shanghainese(x.group()), text)
-
-
-def latin_to_ipa(text):
- for regex, replacement in _latin_to_ipa:
- text = re.sub(regex, replacement, text)
- return text
-
-
-def shanghainese_to_ipa(text):
- text = number_to_shanghainese(text.upper())
- text = converter.convert(text).replace('-','').replace('$',' ')
- text = re.sub(r'[A-Z]', lambda x: latin_to_ipa(x.group())+' ', text)
- text = re.sub(r'[、;:]', ',', text)
- text = re.sub(r'\s*,\s*', ', ', text)
- text = re.sub(r'\s*。\s*', '. ', text)
-    text = re.sub(r'\s*?\s*', '? ', text)
- text = re.sub(r'\s*!\s*', '! ', text)
- text = re.sub(r'\s*$', '', text)
- return text
diff --git a/spaces/Gradio-Blocks/protGPT2_gradioFold/alphafold/alphafold/data/tools/utils.py b/spaces/Gradio-Blocks/protGPT2_gradioFold/alphafold/alphafold/data/tools/utils.py
deleted file mode 100644
index e65b8824d3f240e869ca073a8264f32cb224813c..0000000000000000000000000000000000000000
--- a/spaces/Gradio-Blocks/protGPT2_gradioFold/alphafold/alphafold/data/tools/utils.py
+++ /dev/null
@@ -1,40 +0,0 @@
-# Copyright 2021 DeepMind Technologies Limited
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-"""Common utilities for data pipeline tools."""
-import contextlib
-import shutil
-import tempfile
-import time
-from typing import Optional
-
-from absl import logging
-
-
-@contextlib.contextmanager
-def tmpdir_manager(base_dir: Optional[str] = None):
- """Context manager that deletes a temporary directory on exit."""
- tmpdir = tempfile.mkdtemp(dir=base_dir)
- try:
- yield tmpdir
- finally:
- shutil.rmtree(tmpdir, ignore_errors=True)
-
-
-@contextlib.contextmanager
-def timing(msg: str):
- logging.info('Started %s', msg)
- tic = time.time()
- yield
- toc = time.time()
- logging.info('Finished %s in %.3f seconds', msg, toc - tic)
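-# Example usage:
-#   with timing('hhblits search'):
-#     ...  # run the tool; the elapsed time is logged when the block exits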
diff --git a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/gcnet/gcnet_r50-d8_512x512_20k_voc12aug.py b/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/gcnet/gcnet_r50-d8_512x512_20k_voc12aug.py
deleted file mode 100644
index a496204bdb061d975c40cb7ef2aaada40e020a13..0000000000000000000000000000000000000000
--- a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/gcnet/gcnet_r50-d8_512x512_20k_voc12aug.py
+++ /dev/null
@@ -1,7 +0,0 @@
-_base_ = [
- '../_base_/models/gcnet_r50-d8.py',
- '../_base_/datasets/pascal_voc12_aug.py', '../_base_/default_runtime.py',
- '../_base_/schedules/schedule_20k.py'
-]
-model = dict(
- decode_head=dict(num_classes=21), auxiliary_head=dict(num_classes=21))
diff --git a/spaces/H0n3y/Honeystesting/Dockerfile b/spaces/H0n3y/Honeystesting/Dockerfile
deleted file mode 100644
index 6c01c09373883afcb4ea34ae2d316cd596e1737b..0000000000000000000000000000000000000000
--- a/spaces/H0n3y/Honeystesting/Dockerfile
+++ /dev/null
@@ -1,21 +0,0 @@
-FROM node:18-bullseye-slim
-
-RUN apt-get update && \
-    apt-get install -y git
-
-RUN git clone https://gitgud.io/khanon/oai-reverse-proxy.git /app
-
-WORKDIR /app
-
-RUN npm install
-
-COPY Dockerfile greeting.md* .env* ./
-
-RUN npm run build
-
-EXPOSE 7860
-
-ENV NODE_ENV=production
-
-CMD [ "npm", "start" ]
\ No newline at end of file
diff --git a/spaces/Hallucinate/demo/midas/backbones/utils.py b/spaces/Hallucinate/demo/midas/backbones/utils.py
deleted file mode 100644
index 0558899dddcfccec5f01a764d4f21738eb612149..0000000000000000000000000000000000000000
--- a/spaces/Hallucinate/demo/midas/backbones/utils.py
+++ /dev/null
@@ -1,249 +0,0 @@
-import torch
-
-import torch.nn as nn
-
-
-class Slice(nn.Module):
- def __init__(self, start_index=1):
- super(Slice, self).__init__()
- self.start_index = start_index
-
- def forward(self, x):
- return x[:, self.start_index:]
-
-
-class AddReadout(nn.Module):
- def __init__(self, start_index=1):
- super(AddReadout, self).__init__()
- self.start_index = start_index
-
- def forward(self, x):
- if self.start_index == 2:
- readout = (x[:, 0] + x[:, 1]) / 2
- else:
- readout = x[:, 0]
- return x[:, self.start_index:] + readout.unsqueeze(1)
-
-
-class ProjectReadout(nn.Module):
- def __init__(self, in_features, start_index=1):
- super(ProjectReadout, self).__init__()
- self.start_index = start_index
-
- self.project = nn.Sequential(nn.Linear(2 * in_features, in_features), nn.GELU())
-
- def forward(self, x):
- readout = x[:, 0].unsqueeze(1).expand_as(x[:, self.start_index:])
- features = torch.cat((x[:, self.start_index:], readout), -1)
-
- return self.project(features)
-
-
-class Transpose(nn.Module):
- def __init__(self, dim0, dim1):
- super(Transpose, self).__init__()
- self.dim0 = dim0
- self.dim1 = dim1
-
- def forward(self, x):
- x = x.transpose(self.dim0, self.dim1)
- return x
-
-
-activations = {}
-
-
-def get_activation(name):
- def hook(model, input, output):
- activations[name] = output
-
- return hook
-
-
-def forward_default(pretrained, x, function_name="forward_features"):
- exec(f"pretrained.model.{function_name}(x)")
-
- layer_1 = pretrained.activations["1"]
- layer_2 = pretrained.activations["2"]
- layer_3 = pretrained.activations["3"]
- layer_4 = pretrained.activations["4"]
-
- if hasattr(pretrained, "act_postprocess1"):
- layer_1 = pretrained.act_postprocess1(layer_1)
- if hasattr(pretrained, "act_postprocess2"):
- layer_2 = pretrained.act_postprocess2(layer_2)
- if hasattr(pretrained, "act_postprocess3"):
- layer_3 = pretrained.act_postprocess3(layer_3)
- if hasattr(pretrained, "act_postprocess4"):
- layer_4 = pretrained.act_postprocess4(layer_4)
-
- return layer_1, layer_2, layer_3, layer_4
-
-
-def forward_adapted_unflatten(pretrained, x, function_name="forward_features"):
- b, c, h, w = x.shape
-
- exec(f"glob = pretrained.model.{function_name}(x)")
-
- layer_1 = pretrained.activations["1"]
- layer_2 = pretrained.activations["2"]
- layer_3 = pretrained.activations["3"]
- layer_4 = pretrained.activations["4"]
-
- layer_1 = pretrained.act_postprocess1[0:2](layer_1)
- layer_2 = pretrained.act_postprocess2[0:2](layer_2)
- layer_3 = pretrained.act_postprocess3[0:2](layer_3)
- layer_4 = pretrained.act_postprocess4[0:2](layer_4)
-
- unflatten = nn.Sequential(
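-    # After the readout/transpose steps above, the activations have shape (B, C, N); Unflatten
-    # restores the (h / patch, w / patch) spatial grid so they can be used as 2D feature maps.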
- nn.Unflatten(
- 2,
- torch.Size(
- [
- h // pretrained.model.patch_size[1],
- w // pretrained.model.patch_size[0],
- ]
- ),
- )
- )
-
- if layer_1.ndim == 3:
- layer_1 = unflatten(layer_1)
- if layer_2.ndim == 3:
- layer_2 = unflatten(layer_2)
- if layer_3.ndim == 3:
- layer_3 = unflatten(layer_3)
- if layer_4.ndim == 3:
- layer_4 = unflatten(layer_4)
-
- layer_1 = pretrained.act_postprocess1[3: len(pretrained.act_postprocess1)](layer_1)
- layer_2 = pretrained.act_postprocess2[3: len(pretrained.act_postprocess2)](layer_2)
- layer_3 = pretrained.act_postprocess3[3: len(pretrained.act_postprocess3)](layer_3)
- layer_4 = pretrained.act_postprocess4[3: len(pretrained.act_postprocess4)](layer_4)
-
- return layer_1, layer_2, layer_3, layer_4
-
-
-def get_readout_oper(vit_features, features, use_readout, start_index=1):
- if use_readout == "ignore":
- readout_oper = [Slice(start_index)] * len(features)
- elif use_readout == "add":
- readout_oper = [AddReadout(start_index)] * len(features)
- elif use_readout == "project":
- readout_oper = [
- ProjectReadout(vit_features, start_index) for out_feat in features
- ]
- else:
- assert (
- False
- ), "wrong operation for readout token, use_readout can be 'ignore', 'add', or 'project'"
-
- return readout_oper
-
-
-def make_backbone_default(
- model,
- features=[96, 192, 384, 768],
- size=[384, 384],
- hooks=[2, 5, 8, 11],
- vit_features=768,
- use_readout="ignore",
- start_index=1,
- start_index_readout=1,
-):
- pretrained = nn.Module()
-
- pretrained.model = model
- pretrained.model.blocks[hooks[0]].register_forward_hook(get_activation("1"))
- pretrained.model.blocks[hooks[1]].register_forward_hook(get_activation("2"))
- pretrained.model.blocks[hooks[2]].register_forward_hook(get_activation("3"))
- pretrained.model.blocks[hooks[3]].register_forward_hook(get_activation("4"))
-
- pretrained.activations = activations
-
- readout_oper = get_readout_oper(vit_features, features, use_readout, start_index_readout)
-
- # 32, 48, 136, 384
- pretrained.act_postprocess1 = nn.Sequential(
- readout_oper[0],
- Transpose(1, 2),
- nn.Unflatten(2, torch.Size([size[0] // 16, size[1] // 16])),
- nn.Conv2d(
- in_channels=vit_features,
- out_channels=features[0],
- kernel_size=1,
- stride=1,
- padding=0,
- ),
- nn.ConvTranspose2d(
- in_channels=features[0],
- out_channels=features[0],
- kernel_size=4,
- stride=4,
- padding=0,
- bias=True,
- dilation=1,
- groups=1,
- ),
- )
-
- pretrained.act_postprocess2 = nn.Sequential(
- readout_oper[1],
- Transpose(1, 2),
- nn.Unflatten(2, torch.Size([size[0] // 16, size[1] // 16])),
- nn.Conv2d(
- in_channels=vit_features,
- out_channels=features[1],
- kernel_size=1,
- stride=1,
- padding=0,
- ),
- nn.ConvTranspose2d(
- in_channels=features[1],
- out_channels=features[1],
- kernel_size=2,
- stride=2,
- padding=0,
- bias=True,
- dilation=1,
- groups=1,
- ),
- )
-
- pretrained.act_postprocess3 = nn.Sequential(
- readout_oper[2],
- Transpose(1, 2),
- nn.Unflatten(2, torch.Size([size[0] // 16, size[1] // 16])),
- nn.Conv2d(
- in_channels=vit_features,
- out_channels=features[2],
- kernel_size=1,
- stride=1,
- padding=0,
- ),
- )
-
- pretrained.act_postprocess4 = nn.Sequential(
- readout_oper[3],
- Transpose(1, 2),
- nn.Unflatten(2, torch.Size([size[0] // 16, size[1] // 16])),
- nn.Conv2d(
- in_channels=vit_features,
- out_channels=features[3],
- kernel_size=1,
- stride=1,
- padding=0,
- ),
- nn.Conv2d(
- in_channels=features[3],
- out_channels=features[3],
- kernel_size=3,
- stride=2,
- padding=1,
- ),
- )
-
- pretrained.model.start_index = start_index
- pretrained.model.patch_size = [16, 16]
-
- return pretrained
diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/data/prepend_token_dataset.py b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/data/prepend_token_dataset.py
deleted file mode 100644
index fd1331f4c44c1595eb9bb78baa0cf5cf3bcce9ad..0000000000000000000000000000000000000000
--- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/data/prepend_token_dataset.py
+++ /dev/null
@@ -1,41 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import numpy as np
-import torch
-
-from . import BaseWrapperDataset
-
-
-class PrependTokenDataset(BaseWrapperDataset):
- def __init__(self, dataset, token=None):
- super().__init__(dataset)
- self.token = token
- if token is not None:
- self._sizes = np.array(dataset.sizes) + 1
- else:
- self._sizes = dataset.sizes
-
- def __getitem__(self, idx):
- item = self.dataset[idx]
- if self.token is not None:
- item = torch.cat([item.new([self.token]), item])
- return item
-
- @property
- def sizes(self):
- return self._sizes
-
- def num_tokens(self, index):
- n = self.dataset.num_tokens(index)
- if self.token is not None:
- n += 1
- return n
-
- def size(self, index):
- n = self.dataset.size(index)
- if self.token is not None:
- n += 1
- return n
diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/optim/lr_scheduler/reduce_lr_on_plateau.py b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/optim/lr_scheduler/reduce_lr_on_plateau.py
deleted file mode 100644
index 5ee9c1be4a59ad3d072412827ab4e9b62dc7434e..0000000000000000000000000000000000000000
--- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/optim/lr_scheduler/reduce_lr_on_plateau.py
+++ /dev/null
@@ -1,143 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-from dataclasses import dataclass, field
-from typing import List
-
-import torch.optim.lr_scheduler
-from omegaconf import II
-
-from fairseq.dataclass import FairseqDataclass
-from fairseq.optim.lr_scheduler import FairseqLRScheduler, register_lr_scheduler
-
-
-@dataclass
-class ReduceLROnPlateauLRScheduleConfig(FairseqDataclass):
- lr_shrink: float = field(
- default=0.1, metadata={"help": "shrink factor for annealing"}
- )
- lr_threshold: float = field(
- default=1e-4,
- metadata={
- "help": (
- "threshold for measuring the new optimum, to only focus on "
- "significant changes"
- )
- },
- )
- lr_patience: int = field(
- default=0,
- metadata={
- "help": (
- "number of epochs with no improvement after which learning rate will "
- "be reduced"
- )
- },
- )
- warmup_updates: int = field(
- default=0,
- metadata={"help": "warmup the learning rate linearly for the first N updates"},
- )
- warmup_init_lr: float = field(
- default=-1,
- metadata={
- "help": "initial learning rate during warmup phase; default is cfg.lr"
- },
- )
- lr: List[float] = II("optimization.lr")
- maximize_best_checkpoint_metric: bool = II(
- "checkpoint.maximize_best_checkpoint_metric"
- )
-
-
-@register_lr_scheduler(
- "reduce_lr_on_plateau", dataclass=ReduceLROnPlateauLRScheduleConfig
-)
-class ReduceLROnPlateauLRSchedule(FairseqLRScheduler):
- """
- Decay the LR by a factor every time the validation loss plateaus.
- Also comes with optional warmup phase, where we linearly increase
- the learning rate from some initial learning rate
- (``--warmup-init-lr``) until the configured learning rate
-    (``--lr``). Thereafter the lr is adjusted according to the original
- reduce_on_plateau scheme.
-
- During warmup::
-
- lrs = torch.linspace(
- cfg.warmup_init_lr, cfg.lr, cfg.warmup_updates
- )
- lr = lrs[update_num]
- """
-
- def __init__(self, cfg: ReduceLROnPlateauLRScheduleConfig, optimizer):
- super().__init__(cfg, optimizer)
- if len(cfg.lr) > 1:
- raise ValueError(
- "Cannot use a fixed learning rate schedule with reduce_lr_on_plateau."
- " Consider --lr-scheduler=fixed instead."
- )
- self.lr_scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
- self.optimizer.optimizer,
- patience=cfg.lr_patience,
- factor=cfg.lr_shrink,
- mode="max" if cfg.maximize_best_checkpoint_metric else "min",
- threshold=cfg.lr_threshold,
- )
- warmup_end_lr = cfg.lr[0]
- # if no warm up, sets initial lr to be cfg.lr[0]
- if cfg.warmup_init_lr < 0:
- cfg.warmup_init_lr = 0 if cfg.warmup_updates > 0 else warmup_end_lr
-
- # linearly warmup for the first cfg.warmup_updates
- if cfg.warmup_updates > 0:
- self.lr_step = (warmup_end_lr - cfg.warmup_init_lr) / cfg.warmup_updates
-
- # this flag is either set from arg when no warm up, or set by
- # step_update() when warmup finishes
- self.warmup_end = True if cfg.warmup_updates <= 0 else False
-
- # initial learning rate
- # this self.lr is used only during init and/or warm up period
- self.lr = warmup_end_lr if self.warmup_end else cfg.warmup_init_lr
- self.optimizer.set_lr(self.lr)
-
- def state_dict(self):
- """Return the LR scheduler state dict."""
- return {
- "best": self.lr_scheduler.best,
- "last_epoch": self.lr_scheduler.last_epoch,
- }
-
- def load_state_dict(self, state_dict):
- """Load an LR scheduler state dict."""
- self.lr_scheduler.best = state_dict["best"]
- if "last_epoch" in state_dict:
- self.lr_scheduler.last_epoch = state_dict["last_epoch"]
-
- def step(self, epoch, val_loss=None):
- """
- Update the learning rate at the end of the given epoch if warmup
- finishes otherwise no update of lr on epoch boundaries
- """
- if val_loss is not None and self.warmup_end is True:
- self.lr_scheduler.step(val_loss)
- else:
- self.lr_scheduler.last_epoch = epoch
- return self.optimizer.get_lr()
-
- def step_update(self, num_updates):
- """
- Update the learning rate after each update."""
- # if there is warmup
- if self.cfg.warmup_updates > 0:
- if num_updates <= self.cfg.warmup_updates:
- self.lr = self.cfg.warmup_init_lr + num_updates * self.lr_step
- self.optimizer.set_lr(self.lr)
- else:
- if self.warmup_end is False:
- self.warmup_end = True
- # else do nothing
- return self.optimizer.get_lr()
diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/setup.py b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/setup.py
deleted file mode 100644
index 4379b2c31f593134fb027cf01da5fcd706a64e00..0000000000000000000000000000000000000000
--- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/setup.py
+++ /dev/null
@@ -1,284 +0,0 @@
-#!/usr/bin/env python3
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import os
-import subprocess
-import sys
-
-from setuptools import Extension, find_packages, setup
-
-if sys.version_info < (3, 6):
- sys.exit("Sorry, Python >= 3.6 is required for fairseq.")
-
-
-def write_version_py():
- with open(os.path.join("fairseq", "version.txt")) as f:
- version = f.read().strip()
-
- # append latest commit hash to version string
- try:
- sha = (
- subprocess.check_output(["git", "rev-parse", "HEAD"])
- .decode("ascii")
- .strip()
- )
- version += "+" + sha[:7]
- except Exception:
- pass
-
- # write version info to fairseq/version.py
- with open(os.path.join("fairseq", "version.py"), "w") as f:
- f.write('__version__ = "{}"\n'.format(version))
- return version
-
-
-version = write_version_py()
-
-
-with open("README.md") as f:
- readme = f.read()
-
-
-if sys.platform == "darwin":
- extra_compile_args = ["-stdlib=libc++", "-O3"]
-else:
- extra_compile_args = ["-std=c++11", "-O3"]
-
-
-class NumpyExtension(Extension):
- """Source: https://stackoverflow.com/a/54128391"""
-
- def __init__(self, *args, **kwargs):
- self.__include_dirs = []
- super().__init__(*args, **kwargs)
-
- @property
- def include_dirs(self):
- import numpy
-
- return self.__include_dirs + [numpy.get_include()]
-
- @include_dirs.setter
- def include_dirs(self, dirs):
- self.__include_dirs = dirs
-
-
-extensions = [
- Extension(
- "fairseq.libbleu",
- sources=[
- "fairseq/clib/libbleu/libbleu.cpp",
- "fairseq/clib/libbleu/module.cpp",
- ],
- extra_compile_args=extra_compile_args,
- ),
- NumpyExtension(
- "fairseq.data.data_utils_fast",
- sources=["fairseq/data/data_utils_fast.pyx"],
- language="c++",
- extra_compile_args=extra_compile_args,
- ),
- NumpyExtension(
- "fairseq.data.token_block_utils_fast",
- sources=["fairseq/data/token_block_utils_fast.pyx"],
- language="c++",
- extra_compile_args=extra_compile_args,
- ),
-]
-
-
-cmdclass = {}
-
-
-try:
- # torch is not available when generating docs
- from torch.utils import cpp_extension
-
- extensions.extend(
- [
- cpp_extension.CppExtension(
- "fairseq.libbase",
- sources=[
- "fairseq/clib/libbase/balanced_assignment.cpp",
- ],
- )
- ]
- )
-
- extensions.extend(
- [
- cpp_extension.CppExtension(
- "fairseq.libnat",
- sources=[
- "fairseq/clib/libnat/edit_dist.cpp",
- ],
- ),
- cpp_extension.CppExtension(
- "alignment_train_cpu_binding",
- sources=[
- "examples/operators/alignment_train_cpu.cpp",
- ],
- ),
- ]
- )
- if "CUDA_HOME" in os.environ:
- extensions.extend(
- [
- cpp_extension.CppExtension(
- "fairseq.libnat_cuda",
- sources=[
- "fairseq/clib/libnat_cuda/edit_dist.cu",
- "fairseq/clib/libnat_cuda/binding.cpp",
- ],
- ),
- cpp_extension.CppExtension(
- "fairseq.ngram_repeat_block_cuda",
- sources=[
- "fairseq/clib/cuda/ngram_repeat_block_cuda.cpp",
- "fairseq/clib/cuda/ngram_repeat_block_cuda_kernel.cu",
- ],
- ),
- cpp_extension.CppExtension(
- "alignment_train_cuda_binding",
- sources=[
- "examples/operators/alignment_train_kernel.cu",
- "examples/operators/alignment_train_cuda.cpp",
- ],
- ),
- ]
- )
- cmdclass["build_ext"] = cpp_extension.BuildExtension
-
-except ImportError:
- pass
-
-
-if "READTHEDOCS" in os.environ:
- # don't build extensions when generating docs
- extensions = []
- if "build_ext" in cmdclass:
- del cmdclass["build_ext"]
-
- # use CPU build of PyTorch
- dependency_links = [
- "https://download.pytorch.org/whl/cpu/torch-1.7.0%2Bcpu-cp36-cp36m-linux_x86_64.whl"
- ]
-else:
- dependency_links = []
-
-
-if "clean" in sys.argv[1:]:
- # Source: https://bit.ly/2NLVsgE
- print("deleting Cython files...")
- import subprocess
-
- subprocess.run(
- ["rm -f fairseq/*.so fairseq/**/*.so fairseq/*.pyd fairseq/**/*.pyd"],
- shell=True,
- )
-
-
-extra_packages = []
-if os.path.exists(os.path.join("fairseq", "model_parallel", "megatron", "mpu")):
- extra_packages.append("fairseq.model_parallel.megatron.mpu")
-
-
-def do_setup(package_data):
- setup(
- name="fairseq",
- version=version,
- description="Facebook AI Research Sequence-to-Sequence Toolkit",
- url="https://github.com/pytorch/fairseq",
- classifiers=[
- "Intended Audience :: Science/Research",
- "License :: OSI Approved :: MIT License",
- "Programming Language :: Python :: 3.6",
- "Programming Language :: Python :: 3.7",
- "Programming Language :: Python :: 3.8",
- "Topic :: Scientific/Engineering :: Artificial Intelligence",
- ],
- long_description=readme,
- long_description_content_type="text/markdown",
- setup_requires=[
- "cython",
- 'numpy<1.20.0; python_version<"3.7"',
- 'numpy; python_version>="3.7"',
- "setuptools>=18.0",
- ],
- install_requires=[
- "cffi",
- "cython",
- 'dataclasses; python_version<"3.7"',
- "hydra-core>=1.0.7,<1.1",
- "omegaconf<2.1",
- 'numpy<1.20.0; python_version<"3.7"',
- 'numpy; python_version>="3.7"',
- "regex",
- "sacrebleu>=1.4.12",
- # "torch",
- "tqdm",
- "bitarray",
- # "torchaudio>=0.8.0",
- ],
- dependency_links=dependency_links,
- packages=find_packages(
- exclude=[
- "examples",
- "examples.*",
- "scripts",
- "scripts.*",
- "tests",
- "tests.*",
- ]
- )
- + extra_packages,
- package_data=package_data,
- ext_modules=extensions,
- test_suite="tests",
- entry_points={
- "console_scripts": [
- "fairseq-eval-lm = fairseq_cli.eval_lm:cli_main",
- "fairseq-generate = fairseq_cli.generate:cli_main",
- "fairseq-hydra-train = fairseq_cli.hydra_train:cli_main",
- "fairseq-interactive = fairseq_cli.interactive:cli_main",
- "fairseq-preprocess = fairseq_cli.preprocess:cli_main",
- "fairseq-score = fairseq_cli.score:cli_main",
- "fairseq-train = fairseq_cli.train:cli_main",
- "fairseq-validate = fairseq_cli.validate:cli_main",
- ],
- },
- cmdclass=cmdclass,
- zip_safe=False,
- )
-
-
-def get_files(path, relative_to="fairseq"):
- all_files = []
- for root, _dirs, files in os.walk(path, followlinks=True):
- root = os.path.relpath(root, relative_to)
- for file in files:
- if file.endswith(".pyc"):
- continue
- all_files.append(os.path.join(root, file))
- return all_files
-
-
-if __name__ == "__main__":
- try:
- # symlink examples into fairseq package so package_data accepts them
- fairseq_examples = os.path.join("fairseq", "examples")
- if "build_ext" not in sys.argv[1:] and not os.path.exists(fairseq_examples):
- os.symlink(os.path.join("..", "examples"), fairseq_examples)
-
- package_data = {
- "fairseq": (
- get_files(fairseq_examples) + get_files(os.path.join("fairseq", "config"))
- )
- }
- do_setup(package_data)
- finally:
- if "build_ext" not in sys.argv[1:] and os.path.islink(fairseq_examples):
- os.unlink(fairseq_examples)
diff --git a/spaces/ICML2022/OFA/fairseq/examples/roberta/README.pretraining.md b/spaces/ICML2022/OFA/fairseq/examples/roberta/README.pretraining.md
deleted file mode 100644
index a4e7453529111fdd198be637d911d1764cb96c0e..0000000000000000000000000000000000000000
--- a/spaces/ICML2022/OFA/fairseq/examples/roberta/README.pretraining.md
+++ /dev/null
@@ -1,84 +0,0 @@
-# Pretraining RoBERTa using your own data
-
-This tutorial will walk you through pretraining RoBERTa over your own data.
-
-### 1) Preprocess the data
-
-Data should be preprocessed following the [language modeling format](/examples/language_model), i.e. each document should be separated by an empty line (only useful with `--sample-break-mode complete_doc`). Lines will be concatenated as a 1D text stream during training.
-
-We'll use the [WikiText-103 dataset](https://www.salesforce.com/products/einstein/ai-research/the-wikitext-dependency-language-modeling-dataset/)
-to demonstrate how to preprocess raw text data with the GPT-2 BPE. Of course
-this dataset is quite small, so the resulting pretrained model will perform
-poorly, but it gives the general idea.
-
-First download the dataset:
-```bash
-wget https://s3.amazonaws.com/research.metamind.io/wikitext/wikitext-103-raw-v1.zip
-unzip wikitext-103-raw-v1.zip
-```
-
-Next encode it with the GPT-2 BPE:
-```bash
-mkdir -p gpt2_bpe
-wget -O gpt2_bpe/encoder.json https://dl.fbaipublicfiles.com/fairseq/gpt2_bpe/encoder.json
-wget -O gpt2_bpe/vocab.bpe https://dl.fbaipublicfiles.com/fairseq/gpt2_bpe/vocab.bpe
-for SPLIT in train valid test; do \
- python -m examples.roberta.multiprocessing_bpe_encoder \
- --encoder-json gpt2_bpe/encoder.json \
- --vocab-bpe gpt2_bpe/vocab.bpe \
- --inputs wikitext-103-raw/wiki.${SPLIT}.raw \
- --outputs wikitext-103-raw/wiki.${SPLIT}.bpe \
- --keep-empty \
- --workers 60; \
-done
-```
-
-Finally preprocess/binarize the data using the GPT-2 fairseq dictionary:
-```bash
-wget -O gpt2_bpe/dict.txt https://dl.fbaipublicfiles.com/fairseq/gpt2_bpe/dict.txt
-fairseq-preprocess \
- --only-source \
- --srcdict gpt2_bpe/dict.txt \
- --trainpref wikitext-103-raw/wiki.train.bpe \
- --validpref wikitext-103-raw/wiki.valid.bpe \
- --testpref wikitext-103-raw/wiki.test.bpe \
- --destdir data-bin/wikitext-103 \
- --workers 60
-```
-
-### 2) Train RoBERTa base
-```bash
-DATA_DIR=data-bin/wikitext-103
-
-fairseq-hydra-train -m --config-dir examples/roberta/config/pretraining \
---config-name base task.data=$DATA_DIR
-```
-
-**Note:** You can optionally resume training the released RoBERTa base model by
-adding `checkpoint.restore_file=/path/to/roberta.base/model.pt`.
-
-**Note:** The above command assumes training on 8x32GB V100 GPUs. Each GPU uses
-a batch size of 16 sequences (`dataset.batch_size`) and accumulates gradients to
-further increase the batch size by 16x (`optimization.update_freq`), for a total batch size
-of 2048 sequences. If you have fewer GPUs or GPUs with less memory you may need
-to reduce `dataset.batch_size` and increase `optimization.update_freq` to compensate.
-Alternatively if you have more GPUs you can decrease `optimization.update_freq` accordingly
-to increase training speed.
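-For example, the default config works out to 8 GPUs x 16 sequences x 16 update_freq = 2048
-sequences per effective batch; on 4 GPUs you would roughly double `optimization.update_freq`
-(to 32) to keep the same effective batch size.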
-
-**Note:** The learning rate and batch size are tightly connected and need to be
-adjusted together. We generally recommend increasing the learning rate as you
-increase the batch size according to the following table (although it's also
-dataset dependent, so don't rely on the following values too closely):
-
-batch size | peak learning rate
----|---
-256 | 0.0001
-2048 | 0.0005
-8192 | 0.0007
-
-### 3) Load your pretrained model
-```python
-from fairseq.models.roberta import RobertaModel
-roberta = RobertaModel.from_pretrained('checkpoints', 'checkpoint_best.pt', 'path/to/data')
-assert isinstance(roberta.model, torch.nn.Module)
-```
diff --git a/spaces/ICML2022/OFA/fairseq/fairseq/modules/cross_entropy.py b/spaces/ICML2022/OFA/fairseq/fairseq/modules/cross_entropy.py
deleted file mode 100644
index 6f33c24cb56e25f91595009af38e63784c2263a0..0000000000000000000000000000000000000000
--- a/spaces/ICML2022/OFA/fairseq/fairseq/modules/cross_entropy.py
+++ /dev/null
@@ -1,61 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import logging
-
-import torch
-import torch.nn.functional as F
-
-
-logger = logging.getLogger(__name__)
-
-
-def _cross_entropy_pytorch(logits, target, ignore_index=None, reduction="mean"):
- lprobs = F.log_softmax(logits, dim=-1, dtype=torch.float32)
- return F.nll_loss(
- lprobs,
- target,
- ignore_index=ignore_index,
- reduction=reduction,
- )
-
-
-try:
- import xentropy_cuda
- from apex.contrib import xentropy
-
- def cross_entropy(logits, target, ignore_index=-100, reduction="mean"):
- if logits.device == torch.device("cpu"):
- return _cross_entropy_pytorch(logits, target, ignore_index, reduction)
- else:
- if not getattr(cross_entropy, "_has_logged_once", False):
- logger.info("using fused cross entropy")
- cross_entropy._has_logged_once = True
-
- half_to_float = logits.dtype == torch.half
- losses = xentropy.SoftmaxCrossEntropyLoss.apply(
- logits,
- target,
- 0.0,
- ignore_index,
- half_to_float,
- )
- if reduction == "sum":
- return losses.sum()
- elif reduction == "mean":
- if ignore_index >= 0:
- return losses.sum() / target.ne(ignore_index).sum()
- else:
- return losses.mean()
- elif reduction == "none":
- return losses
- else:
- raise NotImplementedError
-
-
-except ImportError:
-
- def cross_entropy(logits, target, ignore_index=-100, reduction="mean"):
- return _cross_entropy_pytorch(logits, target, ignore_index, reduction)
diff --git a/spaces/ICML2022/PointCloudC/util/__init__.py b/spaces/ICML2022/PointCloudC/util/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/IPN/FirstSpaceTEST_Gradio/app.py b/spaces/IPN/FirstSpaceTEST_Gradio/app.py
deleted file mode 100644
index 5b2c473a206ff74e146b3bfd14775b79d23b011e..0000000000000000000000000000000000000000
--- a/spaces/IPN/FirstSpaceTEST_Gradio/app.py
+++ /dev/null
@@ -1,4 +0,0 @@
-import gradio as gr
-
-gr.Interface.load("huggingface/mrm8488/bert-mini-finetuned-age_news-classification").launch()
-print("I have imported a model")
\ No newline at end of file
diff --git a/spaces/Ibtehaj10/cheating-detection/person_detection_video.py b/spaces/Ibtehaj10/cheating-detection/person_detection_video.py
deleted file mode 100644
index fbd6f742afca23acd2debe99679f8c70f9153adb..0000000000000000000000000000000000000000
--- a/spaces/Ibtehaj10/cheating-detection/person_detection_video.py
+++ /dev/null
@@ -1,71 +0,0 @@
-import cv2
-import datetime
-import imutils
-import numpy as np
-
-protopath = "MobileNetSSD_deploy.prototxt"
-modelpath = "MobileNetSSD_deploy.caffemodel"
-detector = cv2.dnn.readNetFromCaffe(prototxt=protopath, caffeModel=modelpath)
-# Only enable it if you are using OpenVino environment
-# detector.setPreferableBackend(cv2.dnn.DNN_BACKEND_INFERENCE_ENGINE)
-# detector.setPreferableTarget(cv2.dnn.DNN_TARGET_CPU)
-
-
-CLASSES = ["background", "aeroplane", "bicycle", "bird", "boat",
- "bottle", "bus", "car", "cat", "chair", "cow", "diningtable",
- "dog", "horse", "motorbike", "person", "pottedplant", "sheep",
- "sofa", "train", "tvmonitor"]
-
-
-def main():
- cap = cv2.VideoCapture('test_video.mp4')
-
- fps_start_time = datetime.datetime.now()
- fps = 0
- total_frames = 0
-
- while True:
-        ret, frame = cap.read()
-        if not ret:
-            # stop cleanly when the video ends or a frame cannot be read
-            break
-        frame = imutils.resize(frame, width=600)
- total_frames = total_frames + 1
-
- (H, W) = frame.shape[:2]
-
- blob = cv2.dnn.blobFromImage(frame, 0.007843, (W, H), 127.5)
-
- detector.setInput(blob)
- person_detections = detector.forward()
-
- for i in np.arange(0, person_detections.shape[2]):
- confidence = person_detections[0, 0, i, 2]
- if confidence > 0.5:
- idx = int(person_detections[0, 0, i, 1])
-
- if CLASSES[idx] != "person":
- continue
-
- person_box = person_detections[0, 0, i, 3:7] * np.array([W, H, W, H])
- (startX, startY, endX, endY) = person_box.astype("int")
-
- cv2.rectangle(frame, (startX, startY), (endX, endY), (0, 0, 255), 2)
-
- fps_end_time = datetime.datetime.now()
- time_diff = fps_end_time - fps_start_time
- if time_diff.seconds == 0:
- fps = 0.0
- else:
- fps = (total_frames / time_diff.seconds)
-
- fps_text = "FPS: {:.2f}".format(fps)
-
- cv2.putText(frame, fps_text, (5, 30), cv2.FONT_HERSHEY_COMPLEX_SMALL, 1, (0, 0, 255), 1)
-
- cv2.imshow("Application", frame)
- key = cv2.waitKey(1)
- if key == ord('q'):
- break
-
-    cap.release()
-    cv2.destroyAllWindows()
-
-
-if __name__ == '__main__':
-    main()
diff --git a/spaces/Ifeanyi/tellme.ai/app.py b/spaces/Ifeanyi/tellme.ai/app.py
deleted file mode 100644
index d72aa32d6adc2ff2b97fbea6bc79a11fc7648f08..0000000000000000000000000000000000000000
--- a/spaces/Ifeanyi/tellme.ai/app.py
+++ /dev/null
@@ -1,13 +0,0 @@
-# import required libraries
-from transformers import pipeline
-import gradio as gr
-import timm
-
-# build gradio interface
-model = pipeline("image-classification")
-examples = ["birdA.jpg", "birdB.jpg", "birdC.jpg"]
-gr.Interface.from_pipeline(model,
- title = "tellme.ai",
- examples = examples,
- theme = gr.themes.Soft(),
- css=".gradio-container {background: url('file=blue.jpg')}").launch()
\ No newline at end of file
diff --git a/spaces/Ikaros521/VITS-fast-fine-tuning_nymph/text/mandarin.py b/spaces/Ikaros521/VITS-fast-fine-tuning_nymph/text/mandarin.py
deleted file mode 100644
index 162e1b912dabec4b448ccd3d00d56306f82ce076..0000000000000000000000000000000000000000
--- a/spaces/Ikaros521/VITS-fast-fine-tuning_nymph/text/mandarin.py
+++ /dev/null
@@ -1,326 +0,0 @@
-import os
-import sys
-import re
-from pypinyin import lazy_pinyin, BOPOMOFO
-import jieba
-import cn2an
-import logging
-
-
-# List of (Latin alphabet, bopomofo) pairs:
-_latin_to_bopomofo = [(re.compile('%s' % x[0], re.IGNORECASE), x[1]) for x in [
- ('a', 'ㄟˉ'),
- ('b', 'ㄅㄧˋ'),
- ('c', 'ㄙㄧˉ'),
- ('d', 'ㄉㄧˋ'),
- ('e', 'ㄧˋ'),
- ('f', 'ㄝˊㄈㄨˋ'),
- ('g', 'ㄐㄧˋ'),
- ('h', 'ㄝˇㄑㄩˋ'),
- ('i', 'ㄞˋ'),
- ('j', 'ㄐㄟˋ'),
- ('k', 'ㄎㄟˋ'),
- ('l', 'ㄝˊㄛˋ'),
- ('m', 'ㄝˊㄇㄨˋ'),
- ('n', 'ㄣˉ'),
- ('o', 'ㄡˉ'),
- ('p', 'ㄆㄧˉ'),
- ('q', 'ㄎㄧㄡˉ'),
- ('r', 'ㄚˋ'),
- ('s', 'ㄝˊㄙˋ'),
- ('t', 'ㄊㄧˋ'),
- ('u', 'ㄧㄡˉ'),
- ('v', 'ㄨㄧˉ'),
- ('w', 'ㄉㄚˋㄅㄨˋㄌㄧㄡˋ'),
- ('x', 'ㄝˉㄎㄨˋㄙˋ'),
- ('y', 'ㄨㄞˋ'),
- ('z', 'ㄗㄟˋ')
-]]
-
-# List of (bopomofo, romaji) pairs:
-_bopomofo_to_romaji = [(re.compile('%s' % x[0]), x[1]) for x in [
- ('ㄅㄛ', 'p⁼wo'),
- ('ㄆㄛ', 'pʰwo'),
- ('ㄇㄛ', 'mwo'),
- ('ㄈㄛ', 'fwo'),
- ('ㄅ', 'p⁼'),
- ('ㄆ', 'pʰ'),
- ('ㄇ', 'm'),
- ('ㄈ', 'f'),
- ('ㄉ', 't⁼'),
- ('ㄊ', 'tʰ'),
- ('ㄋ', 'n'),
- ('ㄌ', 'l'),
- ('ㄍ', 'k⁼'),
- ('ㄎ', 'kʰ'),
- ('ㄏ', 'h'),
- ('ㄐ', 'ʧ⁼'),
- ('ㄑ', 'ʧʰ'),
- ('ㄒ', 'ʃ'),
- ('ㄓ', 'ʦ`⁼'),
- ('ㄔ', 'ʦ`ʰ'),
- ('ㄕ', 's`'),
- ('ㄖ', 'ɹ`'),
- ('ㄗ', 'ʦ⁼'),
- ('ㄘ', 'ʦʰ'),
- ('ㄙ', 's'),
- ('ㄚ', 'a'),
- ('ㄛ', 'o'),
- ('ㄜ', 'ə'),
- ('ㄝ', 'e'),
- ('ㄞ', 'ai'),
- ('ㄟ', 'ei'),
- ('ㄠ', 'au'),
- ('ㄡ', 'ou'),
- ('ㄧㄢ', 'yeNN'),
- ('ㄢ', 'aNN'),
- ('ㄧㄣ', 'iNN'),
- ('ㄣ', 'əNN'),
- ('ㄤ', 'aNg'),
- ('ㄧㄥ', 'iNg'),
- ('ㄨㄥ', 'uNg'),
- ('ㄩㄥ', 'yuNg'),
- ('ㄥ', 'əNg'),
- ('ㄦ', 'əɻ'),
- ('ㄧ', 'i'),
- ('ㄨ', 'u'),
- ('ㄩ', 'ɥ'),
- ('ˉ', '→'),
- ('ˊ', '↑'),
- ('ˇ', '↓↑'),
- ('ˋ', '↓'),
- ('˙', ''),
- (',', ','),
- ('。', '.'),
- ('!', '!'),
- ('?', '?'),
- ('—', '-')
-]]
-
-# List of (romaji, ipa) pairs:
-_romaji_to_ipa = [(re.compile('%s' % x[0], re.IGNORECASE), x[1]) for x in [
- ('ʃy', 'ʃ'),
- ('ʧʰy', 'ʧʰ'),
- ('ʧ⁼y', 'ʧ⁼'),
- ('NN', 'n'),
- ('Ng', 'ŋ'),
- ('y', 'j'),
- ('h', 'x')
-]]
-
-# List of (bopomofo, ipa) pairs:
-_bopomofo_to_ipa = [(re.compile('%s' % x[0]), x[1]) for x in [
- ('ㄅㄛ', 'p⁼wo'),
- ('ㄆㄛ', 'pʰwo'),
- ('ㄇㄛ', 'mwo'),
- ('ㄈㄛ', 'fwo'),
- ('ㄅ', 'p⁼'),
- ('ㄆ', 'pʰ'),
- ('ㄇ', 'm'),
- ('ㄈ', 'f'),
- ('ㄉ', 't⁼'),
- ('ㄊ', 'tʰ'),
- ('ㄋ', 'n'),
- ('ㄌ', 'l'),
- ('ㄍ', 'k⁼'),
- ('ㄎ', 'kʰ'),
- ('ㄏ', 'x'),
- ('ㄐ', 'tʃ⁼'),
- ('ㄑ', 'tʃʰ'),
- ('ㄒ', 'ʃ'),
- ('ㄓ', 'ts`⁼'),
- ('ㄔ', 'ts`ʰ'),
- ('ㄕ', 's`'),
- ('ㄖ', 'ɹ`'),
- ('ㄗ', 'ts⁼'),
- ('ㄘ', 'tsʰ'),
- ('ㄙ', 's'),
- ('ㄚ', 'a'),
- ('ㄛ', 'o'),
- ('ㄜ', 'ə'),
- ('ㄝ', 'ɛ'),
- ('ㄞ', 'aɪ'),
- ('ㄟ', 'eɪ'),
- ('ㄠ', 'ɑʊ'),
- ('ㄡ', 'oʊ'),
- ('ㄧㄢ', 'jɛn'),
- ('ㄩㄢ', 'ɥæn'),
- ('ㄢ', 'an'),
- ('ㄧㄣ', 'in'),
- ('ㄩㄣ', 'ɥn'),
- ('ㄣ', 'ən'),
- ('ㄤ', 'ɑŋ'),
- ('ㄧㄥ', 'iŋ'),
- ('ㄨㄥ', 'ʊŋ'),
- ('ㄩㄥ', 'jʊŋ'),
- ('ㄥ', 'əŋ'),
- ('ㄦ', 'əɻ'),
- ('ㄧ', 'i'),
- ('ㄨ', 'u'),
- ('ㄩ', 'ɥ'),
- ('ˉ', '→'),
- ('ˊ', '↑'),
- ('ˇ', '↓↑'),
- ('ˋ', '↓'),
- ('˙', ''),
- (',', ','),
- ('。', '.'),
- ('!', '!'),
- ('?', '?'),
- ('—', '-')
-]]
-
-# List of (bopomofo, ipa2) pairs:
-_bopomofo_to_ipa2 = [(re.compile('%s' % x[0]), x[1]) for x in [
- ('ㄅㄛ', 'pwo'),
- ('ㄆㄛ', 'pʰwo'),
- ('ㄇㄛ', 'mwo'),
- ('ㄈㄛ', 'fwo'),
- ('ㄅ', 'p'),
- ('ㄆ', 'pʰ'),
- ('ㄇ', 'm'),
- ('ㄈ', 'f'),
- ('ㄉ', 't'),
- ('ㄊ', 'tʰ'),
- ('ㄋ', 'n'),
- ('ㄌ', 'l'),
- ('ㄍ', 'k'),
- ('ㄎ', 'kʰ'),
- ('ㄏ', 'h'),
- ('ㄐ', 'tɕ'),
- ('ㄑ', 'tɕʰ'),
- ('ㄒ', 'ɕ'),
- ('ㄓ', 'tʂ'),
- ('ㄔ', 'tʂʰ'),
- ('ㄕ', 'ʂ'),
- ('ㄖ', 'ɻ'),
- ('ㄗ', 'ts'),
- ('ㄘ', 'tsʰ'),
- ('ㄙ', 's'),
- ('ㄚ', 'a'),
- ('ㄛ', 'o'),
- ('ㄜ', 'ɤ'),
- ('ㄝ', 'ɛ'),
- ('ㄞ', 'aɪ'),
- ('ㄟ', 'eɪ'),
- ('ㄠ', 'ɑʊ'),
- ('ㄡ', 'oʊ'),
- ('ㄧㄢ', 'jɛn'),
- ('ㄩㄢ', 'yæn'),
- ('ㄢ', 'an'),
- ('ㄧㄣ', 'in'),
- ('ㄩㄣ', 'yn'),
- ('ㄣ', 'ən'),
- ('ㄤ', 'ɑŋ'),
- ('ㄧㄥ', 'iŋ'),
- ('ㄨㄥ', 'ʊŋ'),
- ('ㄩㄥ', 'jʊŋ'),
- ('ㄥ', 'ɤŋ'),
- ('ㄦ', 'əɻ'),
- ('ㄧ', 'i'),
- ('ㄨ', 'u'),
- ('ㄩ', 'y'),
- ('ˉ', '˥'),
- ('ˊ', '˧˥'),
- ('ˇ', '˨˩˦'),
- ('ˋ', '˥˩'),
- ('˙', ''),
- (',', ','),
- ('。', '.'),
- ('!', '!'),
- ('?', '?'),
- ('—', '-')
-]]
-
-
-def number_to_chinese(text):
- numbers = re.findall(r'\d+(?:\.?\d+)?', text)
- for number in numbers:
- text = text.replace(number, cn2an.an2cn(number), 1)
- return text
-
-
-def chinese_to_bopomofo(text):
- text = text.replace('、', ',').replace(';', ',').replace(':', ',')
- words = jieba.lcut(text, cut_all=False)
- text = ''
- for word in words:
- bopomofos = lazy_pinyin(word, BOPOMOFO)
- if not re.search('[\u4e00-\u9fff]', word):
- text += word
- continue
- for i in range(len(bopomofos)):
- bopomofos[i] = re.sub(r'([\u3105-\u3129])$', r'\1ˉ', bopomofos[i])
- if text != '':
- text += ' '
- text += ''.join(bopomofos)
- return text
-
-
-def latin_to_bopomofo(text):
- for regex, replacement in _latin_to_bopomofo:
- text = re.sub(regex, replacement, text)
- return text
-
-
-def bopomofo_to_romaji(text):
- for regex, replacement in _bopomofo_to_romaji:
- text = re.sub(regex, replacement, text)
- return text
-
-
-def bopomofo_to_ipa(text):
- for regex, replacement in _bopomofo_to_ipa:
- text = re.sub(regex, replacement, text)
- return text
-
-
-def bopomofo_to_ipa2(text):
- for regex, replacement in _bopomofo_to_ipa2:
- text = re.sub(regex, replacement, text)
- return text
-
-
-def chinese_to_romaji(text):
- text = number_to_chinese(text)
- text = chinese_to_bopomofo(text)
- text = latin_to_bopomofo(text)
- text = bopomofo_to_romaji(text)
- text = re.sub('i([aoe])', r'y\1', text)
- text = re.sub('u([aoəe])', r'w\1', text)
- text = re.sub('([ʦsɹ]`[⁼ʰ]?)([→↓↑ ]+|$)',
- r'\1ɹ`\2', text).replace('ɻ', 'ɹ`')
- text = re.sub('([ʦs][⁼ʰ]?)([→↓↑ ]+|$)', r'\1ɹ\2', text)
- return text
-
-
-def chinese_to_lazy_ipa(text):
- text = chinese_to_romaji(text)
- for regex, replacement in _romaji_to_ipa:
- text = re.sub(regex, replacement, text)
- return text
-
-
-def chinese_to_ipa(text):
- text = number_to_chinese(text)
- text = chinese_to_bopomofo(text)
- text = latin_to_bopomofo(text)
- text = bopomofo_to_ipa(text)
- text = re.sub('i([aoe])', r'j\1', text)
- text = re.sub('u([aoəe])', r'w\1', text)
- text = re.sub('([sɹ]`[⁼ʰ]?)([→↓↑ ]+|$)',
- r'\1ɹ`\2', text).replace('ɻ', 'ɹ`')
- text = re.sub('([s][⁼ʰ]?)([→↓↑ ]+|$)', r'\1ɹ\2', text)
- return text
-
-
-def chinese_to_ipa2(text):
- text = number_to_chinese(text)
- text = chinese_to_bopomofo(text)
- text = latin_to_bopomofo(text)
- text = bopomofo_to_ipa2(text)
- text = re.sub(r'i([aoe])', r'j\1', text)
- text = re.sub(r'u([aoəe])', r'w\1', text)
- text = re.sub(r'([ʂɹ]ʰ?)([˩˨˧˦˥ ]+|$)', r'\1ʅ\2', text)
- text = re.sub(r'(sʰ?)([˩˨˧˦˥ ]+|$)', r'\1ɿ\2', text)
- return text
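A hedged usage sketch of the conversion pipeline above; the import path (text.mandarin) mirrors the file location and assumes pypinyin, jieba and cn2an are installed:

    from text.mandarin import chinese_to_bopomofo, chinese_to_ipa

    sentence = '今天天气真好'
    print(chinese_to_bopomofo(sentence))   # intermediate bopomofo string with tone marks
    print(chinese_to_ipa(sentence))        # IPA-style transcription with tone arrows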
diff --git a/spaces/Jackflack09/diffuse-custom/diffusers/pipelines/score_sde_ve/__init__.py b/spaces/Jackflack09/diffuse-custom/diffusers/pipelines/score_sde_ve/__init__.py
deleted file mode 100644
index 000d61f6e9b183728cb6fc137e7180cac3a616df..0000000000000000000000000000000000000000
--- a/spaces/Jackflack09/diffuse-custom/diffusers/pipelines/score_sde_ve/__init__.py
+++ /dev/null
@@ -1,2 +0,0 @@
-# flake8: noqa
-from .pipeline_score_sde_ve import ScoreSdeVePipeline
diff --git a/spaces/Jacks2003/3D_Photo_Inpainting/README.md b/spaces/Jacks2003/3D_Photo_Inpainting/README.md
deleted file mode 100644
index be64a526ee278adf28a4bd0a1fa61c84b2f0d87a..0000000000000000000000000000000000000000
--- a/spaces/Jacks2003/3D_Photo_Inpainting/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: 3D_Photo_Inpainting
-emoji: 👁
-colorFrom: purple
-colorTo: red
-sdk: gradio
-sdk_version: 3.1.4
-app_file: app.py
-pinned: false
-duplicated_from: doevent/3D_Photo_Inpainting
----
-
-# Configuration
diff --git a/spaces/JosephusCheung/ACertainsStrategyTalk/11.html b/spaces/JosephusCheung/ACertainsStrategyTalk/11.html
deleted file mode 100644
index 03671b57fec06b1f3ab389e0025abf4ccd625f85..0000000000000000000000000000000000000000
--- a/spaces/JosephusCheung/ACertainsStrategyTalk/11.html
+++ /dev/null
@@ -1,123 +0,0 @@
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-Problems and Proposed Solutions
-Some merged models have good performance, such as AnythingV3.
-Should I continue to merge?
-This is not scientifically sound and will ultimately result in a model
-that is overfitted in some cases and normal in others. Such a model
-looks good at first, but you will find that it is not faithful to the
-input prompt and suffers from the problem of language drift
-mentioned above.
-3. Merged Models?
-Solution: We can use the method mentioned above to train two models together using a
-word frequency list with Dreambooth. We can add or replace the training data with the
-images generated by the model we want to merge, according to the calculated ratio, and
-maintain a dynamic training dataset during training to prevent overfitting, as mentioned
-above. We get a balanced model that does not overfit in certain directions. Then choose a
-checkpoint that is about to overfit but has not yet done so as the final version. This type of
-model is popular in the community because it gives good output even under poorly written
-prompts, such as the CertainThing. A hedged sketch of the dynamic-dataset idea appears
-after this file's diff.
-For faster inference without waiting in queue, you may duplicate the space and upgrade to GPU in settings.
-
-
-
"""
-with gr.Blocks(css='style.css') as demo:
-
- def reset_do_inversion():
- do_inversion = True
- return do_inversion
-
-
- def edit(input_image,
- do_inversion,
- wts, zs,
- src_prompt ="",
- tar_prompt="",
- steps=100,
- cfg_scale_src = 3.5,
- cfg_scale_tar = 15,
- skip=36,
- seed = 0,
- randomize_seed = True):
-
- x0 = load_512(input_image, device=device)
-
- if do_inversion or randomize_seed:
- zs_tensor, wts_tensor = invert(x0 =x0 , prompt_src=src_prompt, num_diffusion_steps=steps, cfg_scale_src=cfg_scale_src)
- wts = gr.State(value=wts_tensor)
- zs = gr.State(value=zs_tensor)
- do_inversion = False
-
- output = sample(zs.value, wts.value, prompt_tar=tar_prompt, skip=skip, cfg_scale_tar=cfg_scale_tar)
- return output, wts, zs, do_inversion
-
- gr.HTML(intro)
- wts = gr.State()
- zs = gr.State()
- do_inversion = gr.State(value=True)
- with gr.Row():
- input_image = gr.Image(label="Input Image", interactive=True)
- input_image.style(height=365, width=365)
- output_image = gr.Image(label=f"Edited Image", interactive=False)
- output_image.style(height=365, width=365)
-
- with gr.Row():
- tar_prompt = gr.Textbox(lines=1, label="Describe your desired edited output", interactive=True)
-
- with gr.Row():
- with gr.Column(scale=1, min_width=100):
- edit_button = gr.Button("Run")
-
-
-
- with gr.Accordion("Advanced Options", open=False):
- with gr.Row():
- with gr.Column():
- #inversion
- src_prompt = gr.Textbox(lines=1, label="Source Prompt", interactive=True, placeholder="describe the original image")
- steps = gr.Number(value=100, precision=0, label="Num Diffusion Steps", interactive=True)
- cfg_scale_src = gr.Slider(minimum=1, maximum=15, value=3.5, label=f"Source Guidance Scale", interactive=True)
- with gr.Column():
- # reconstruction
- skip = gr.Slider(minimum=0, maximum=60, value=36, step = 1, label="Skip Steps", interactive=True)
- cfg_scale_tar = gr.Slider(minimum=7, maximum=18,value=15, label=f"Target Guidance Scale", interactive=True)
- seed = gr.Number(value=0, precision=0, label="Seed", interactive=True)
- randomize_seed = gr.Checkbox(label='Randomize seed', value=False)
-
-
- edit_button.click(
- fn = randomize_seed_fn,
- inputs = [seed, randomize_seed],
- outputs = [seed], queue = False).then(
- fn=edit,
- inputs=[input_image,
- do_inversion, wts, zs,
- src_prompt,
- tar_prompt,
- steps,
- cfg_scale_src,
- cfg_scale_tar,
- skip,
- seed,randomize_seed
- ],
- outputs=[output_image, wts, zs, do_inversion],
- )
-
- input_image.change(
- fn = reset_do_inversion,
- outputs = [do_inversion]
- )
-
- src_prompt.change(
- fn = reset_do_inversion,
- outputs = [do_inversion]
- )
-
-
- gr.Examples(
- label='Examples',
- examples=get_example(),
- inputs=[input_image, tar_prompt,output_image, src_prompt,steps,
- cfg_scale_tar,
- skip,
- cfg_scale_tar
-
- ],
- outputs=[output_image ],
- )
-
-
-
-demo.queue()
-demo.launch(share=False)
\ No newline at end of file
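The slide text earlier in this hunk proposes keeping the Dreambooth training set dynamic by swapping in images generated by the model being merged, at a calculated ratio. A minimal sketch of that idea; all names here (refresh_dataset, generated_pool, ratio) are illustrative assumptions, not the authors' code:

    import random

    def refresh_dataset(base_images, generated_pool, ratio=0.3):
        # Replace `ratio` of the Dreambooth images with samples generated by
        # the model being merged in; the pool must hold at least that many.
        n_generated = int(len(base_images) * ratio)
        kept = random.sample(base_images, len(base_images) - n_generated)
        injected = random.sample(generated_pool, n_generated)
        return kept + injected

    # Rebuild the training list every epoch so no fixed image set is memorised:
    # train_images = refresh_dataset(dreambooth_images, merge_model_samples)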
diff --git "a/spaces/LuxOAI/ChatGpt-Web/.github/ISSUE_TEMPLATE/\345\212\237\350\203\275\345\273\272\350\256\256.md" "b/spaces/LuxOAI/ChatGpt-Web/.github/ISSUE_TEMPLATE/\345\212\237\350\203\275\345\273\272\350\256\256.md"
deleted file mode 100644
index 9ed1c845d53f067265724359c8149284a22deddf..0000000000000000000000000000000000000000
--- "a/spaces/LuxOAI/ChatGpt-Web/.github/ISSUE_TEMPLATE/\345\212\237\350\203\275\345\273\272\350\256\256.md"
+++ /dev/null
@@ -1,20 +0,0 @@
----
-name: Feature Request
-about: Tell us about your flash of inspiration
-title: "[Feature] "
-labels: ''
-assignees: ''
-
----
-
-**Is this feature related to an existing issue?**
-If so, please link to or describe the issue here.
-
-**What feature do you want, or what do you suggest?**
-Feel free to tell us.
-
-**Are there comparable products to reference?**
-Links or screenshots of reference products are welcome.
-
-**Additional context**
-Share any other considerations here.
diff --git a/spaces/MRiwu/Collection/attentions.py b/spaces/MRiwu/Collection/attentions.py
deleted file mode 100644
index 86bc73b5fe98cc7b443e9078553920346c996707..0000000000000000000000000000000000000000
--- a/spaces/MRiwu/Collection/attentions.py
+++ /dev/null
@@ -1,300 +0,0 @@
-import math
-import torch
-from torch import nn
-from torch.nn import functional as F
-
-import commons
-from modules import LayerNorm
-
-
-class Encoder(nn.Module):
- def __init__(self, hidden_channels, filter_channels, n_heads, n_layers, kernel_size=1, p_dropout=0., window_size=4, **kwargs):
- super().__init__()
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.window_size = window_size
-
- self.drop = nn.Dropout(p_dropout)
- self.attn_layers = nn.ModuleList()
- self.norm_layers_1 = nn.ModuleList()
- self.ffn_layers = nn.ModuleList()
- self.norm_layers_2 = nn.ModuleList()
- for i in range(self.n_layers):
- self.attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout, window_size=window_size))
- self.norm_layers_1.append(LayerNorm(hidden_channels))
- self.ffn_layers.append(FFN(hidden_channels, hidden_channels, filter_channels, kernel_size, p_dropout=p_dropout))
- self.norm_layers_2.append(LayerNorm(hidden_channels))
-
- def forward(self, x, x_mask):
- attn_mask = x_mask.unsqueeze(2) * x_mask.unsqueeze(-1)
- x = x * x_mask
- for i in range(self.n_layers):
- y = self.attn_layers[i](x, x, attn_mask)
- y = self.drop(y)
- x = self.norm_layers_1[i](x + y)
-
- y = self.ffn_layers[i](x, x_mask)
- y = self.drop(y)
- x = self.norm_layers_2[i](x + y)
- x = x * x_mask
- return x
-
-
-class Decoder(nn.Module):
- def __init__(self, hidden_channels, filter_channels, n_heads, n_layers, kernel_size=1, p_dropout=0., proximal_bias=False, proximal_init=True, **kwargs):
- super().__init__()
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.proximal_bias = proximal_bias
- self.proximal_init = proximal_init
-
- self.drop = nn.Dropout(p_dropout)
- self.self_attn_layers = nn.ModuleList()
- self.norm_layers_0 = nn.ModuleList()
- self.encdec_attn_layers = nn.ModuleList()
- self.norm_layers_1 = nn.ModuleList()
- self.ffn_layers = nn.ModuleList()
- self.norm_layers_2 = nn.ModuleList()
- for i in range(self.n_layers):
- self.self_attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout, proximal_bias=proximal_bias, proximal_init=proximal_init))
- self.norm_layers_0.append(LayerNorm(hidden_channels))
- self.encdec_attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout))
- self.norm_layers_1.append(LayerNorm(hidden_channels))
- self.ffn_layers.append(FFN(hidden_channels, hidden_channels, filter_channels, kernel_size, p_dropout=p_dropout, causal=True))
- self.norm_layers_2.append(LayerNorm(hidden_channels))
-
- def forward(self, x, x_mask, h, h_mask):
- """
- x: decoder input
- h: encoder output
- """
- self_attn_mask = commons.subsequent_mask(x_mask.size(2)).to(device=x.device, dtype=x.dtype)
- encdec_attn_mask = h_mask.unsqueeze(2) * x_mask.unsqueeze(-1)
- x = x * x_mask
- for i in range(self.n_layers):
- y = self.self_attn_layers[i](x, x, self_attn_mask)
- y = self.drop(y)
- x = self.norm_layers_0[i](x + y)
-
- y = self.encdec_attn_layers[i](x, h, encdec_attn_mask)
- y = self.drop(y)
- x = self.norm_layers_1[i](x + y)
-
- y = self.ffn_layers[i](x, x_mask)
- y = self.drop(y)
- x = self.norm_layers_2[i](x + y)
- x = x * x_mask
- return x
-
-
-class MultiHeadAttention(nn.Module):
- def __init__(self, channels, out_channels, n_heads, p_dropout=0., window_size=None, heads_share=True, block_length=None, proximal_bias=False, proximal_init=False):
- super().__init__()
- assert channels % n_heads == 0
-
- self.channels = channels
- self.out_channels = out_channels
- self.n_heads = n_heads
- self.p_dropout = p_dropout
- self.window_size = window_size
- self.heads_share = heads_share
- self.block_length = block_length
- self.proximal_bias = proximal_bias
- self.proximal_init = proximal_init
- self.attn = None
-
- self.k_channels = channels // n_heads
- self.conv_q = nn.Conv1d(channels, channels, 1)
- self.conv_k = nn.Conv1d(channels, channels, 1)
- self.conv_v = nn.Conv1d(channels, channels, 1)
- self.conv_o = nn.Conv1d(channels, out_channels, 1)
- self.drop = nn.Dropout(p_dropout)
-
- if window_size is not None:
- n_heads_rel = 1 if heads_share else n_heads
- rel_stddev = self.k_channels**-0.5
- self.emb_rel_k = nn.Parameter(torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) * rel_stddev)
- self.emb_rel_v = nn.Parameter(torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) * rel_stddev)
-
- nn.init.xavier_uniform_(self.conv_q.weight)
- nn.init.xavier_uniform_(self.conv_k.weight)
- nn.init.xavier_uniform_(self.conv_v.weight)
- if proximal_init:
- with torch.no_grad():
- self.conv_k.weight.copy_(self.conv_q.weight)
- self.conv_k.bias.copy_(self.conv_q.bias)
-
- def forward(self, x, c, attn_mask=None):
- q = self.conv_q(x)
- k = self.conv_k(c)
- v = self.conv_v(c)
-
- x, self.attn = self.attention(q, k, v, mask=attn_mask)
-
- x = self.conv_o(x)
- return x
-
- def attention(self, query, key, value, mask=None):
- # reshape [b, d, t] -> [b, n_h, t, d_k]
- b, d, t_s, t_t = (*key.size(), query.size(2))
- query = query.view(b, self.n_heads, self.k_channels, t_t).transpose(2, 3)
- key = key.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3)
- value = value.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3)
-
- scores = torch.matmul(query / math.sqrt(self.k_channels), key.transpose(-2, -1))
- if self.window_size is not None:
- assert t_s == t_t, "Relative attention is only available for self-attention."
- key_relative_embeddings = self._get_relative_embeddings(self.emb_rel_k, t_s)
- rel_logits = self._matmul_with_relative_keys(query /math.sqrt(self.k_channels), key_relative_embeddings)
- scores_local = self._relative_position_to_absolute_position(rel_logits)
- scores = scores + scores_local
- if self.proximal_bias:
- assert t_s == t_t, "Proximal bias is only available for self-attention."
- scores = scores + self._attention_bias_proximal(t_s).to(device=scores.device, dtype=scores.dtype)
- if mask is not None:
- scores = scores.masked_fill(mask == 0, -1e4)
- if self.block_length is not None:
- assert t_s == t_t, "Local attention is only available for self-attention."
- block_mask = torch.ones_like(scores).triu(-self.block_length).tril(self.block_length)
- scores = scores.masked_fill(block_mask == 0, -1e4)
- p_attn = F.softmax(scores, dim=-1) # [b, n_h, t_t, t_s]
- p_attn = self.drop(p_attn)
- output = torch.matmul(p_attn, value)
- if self.window_size is not None:
- relative_weights = self._absolute_position_to_relative_position(p_attn)
- value_relative_embeddings = self._get_relative_embeddings(self.emb_rel_v, t_s)
- output = output + self._matmul_with_relative_values(relative_weights, value_relative_embeddings)
- output = output.transpose(2, 3).contiguous().view(b, d, t_t) # [b, n_h, t_t, d_k] -> [b, d, t_t]
- return output, p_attn
-
- def _matmul_with_relative_values(self, x, y):
- """
- x: [b, h, l, m]
- y: [h or 1, m, d]
- ret: [b, h, l, d]
- """
- ret = torch.matmul(x, y.unsqueeze(0))
- return ret
-
- def _matmul_with_relative_keys(self, x, y):
- """
- x: [b, h, l, d]
- y: [h or 1, m, d]
- ret: [b, h, l, m]
- """
- ret = torch.matmul(x, y.unsqueeze(0).transpose(-2, -1))
- return ret
-
- def _get_relative_embeddings(self, relative_embeddings, length):
- max_relative_position = 2 * self.window_size + 1
- # Pad first before slice to avoid using cond ops.
- pad_length = max(length - (self.window_size + 1), 0)
- slice_start_position = max((self.window_size + 1) - length, 0)
- slice_end_position = slice_start_position + 2 * length - 1
- if pad_length > 0:
- padded_relative_embeddings = F.pad(
- relative_embeddings,
- commons.convert_pad_shape([[0, 0], [pad_length, pad_length], [0, 0]]))
- else:
- padded_relative_embeddings = relative_embeddings
- used_relative_embeddings = padded_relative_embeddings[:,slice_start_position:slice_end_position]
- return used_relative_embeddings
-
- def _relative_position_to_absolute_position(self, x):
- """
- x: [b, h, l, 2*l-1]
- ret: [b, h, l, l]
- """
- batch, heads, length, _ = x.size()
- # Concat columns of pad to shift from relative to absolute indexing.
- x = F.pad(x, commons.convert_pad_shape([[0,0],[0,0],[0,0],[0,1]]))
-
- # Concat extra elements so to add up to shape (len+1, 2*len-1).
- x_flat = x.view([batch, heads, length * 2 * length])
- x_flat = F.pad(x_flat, commons.convert_pad_shape([[0,0],[0,0],[0,length-1]]))
-
- # Reshape and slice out the padded elements.
- x_final = x_flat.view([batch, heads, length+1, 2*length-1])[:, :, :length, length-1:]
- return x_final
-
- def _absolute_position_to_relative_position(self, x):
- """
- x: [b, h, l, l]
- ret: [b, h, l, 2*l-1]
- """
- batch, heads, length, _ = x.size()
-        # pad along the column dimension
- x = F.pad(x, commons.convert_pad_shape([[0, 0], [0, 0], [0, 0], [0, length-1]]))
- x_flat = x.view([batch, heads, length**2 + length*(length -1)])
- # add 0's in the beginning that will skew the elements after reshape
- x_flat = F.pad(x_flat, commons.convert_pad_shape([[0, 0], [0, 0], [length, 0]]))
- x_final = x_flat.view([batch, heads, length, 2*length])[:,:,:,1:]
- return x_final
-
- def _attention_bias_proximal(self, length):
- """Bias for self-attention to encourage attention to close positions.
- Args:
- length: an integer scalar.
- Returns:
- a Tensor with shape [1, 1, length, length]
- """
- r = torch.arange(length, dtype=torch.float32)
- diff = torch.unsqueeze(r, 0) - torch.unsqueeze(r, 1)
- return torch.unsqueeze(torch.unsqueeze(-torch.log1p(torch.abs(diff)), 0), 0)
-
-
-class FFN(nn.Module):
- def __init__(self, in_channels, out_channels, filter_channels, kernel_size, p_dropout=0., activation=None, causal=False):
- super().__init__()
- self.in_channels = in_channels
- self.out_channels = out_channels
- self.filter_channels = filter_channels
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.activation = activation
- self.causal = causal
-
- if causal:
- self.padding = self._causal_padding
- else:
- self.padding = self._same_padding
-
- self.conv_1 = nn.Conv1d(in_channels, filter_channels, kernel_size)
- self.conv_2 = nn.Conv1d(filter_channels, out_channels, kernel_size)
- self.drop = nn.Dropout(p_dropout)
-
- def forward(self, x, x_mask):
- x = self.conv_1(self.padding(x * x_mask))
- if self.activation == "gelu":
- x = x * torch.sigmoid(1.702 * x)
- else:
- x = torch.relu(x)
- x = self.drop(x)
- x = self.conv_2(self.padding(x * x_mask))
- return x * x_mask
-
- def _causal_padding(self, x):
- if self.kernel_size == 1:
- return x
- pad_l = self.kernel_size - 1
- pad_r = 0
- padding = [[0, 0], [0, 0], [pad_l, pad_r]]
- x = F.pad(x, commons.convert_pad_shape(padding))
- return x
-
- def _same_padding(self, x):
- if self.kernel_size == 1:
- return x
- pad_l = (self.kernel_size - 1) // 2
- pad_r = self.kernel_size // 2
- padding = [[0, 0], [0, 0], [pad_l, pad_r]]
- x = F.pad(x, commons.convert_pad_shape(padding))
- return x
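A hedged usage sketch of the Encoder above (a VITS-style self-attention text encoder); it assumes the repo-local commons and modules helpers are importable, and the hyperparameters are typical VITS values rather than anything prescribed here:

    import torch
    from attentions import Encoder

    enc = Encoder(hidden_channels=192, filter_channels=768, n_heads=2,
                  n_layers=6, kernel_size=3, p_dropout=0.1)
    x = torch.randn(1, 192, 50)        # [batch, channels, time]
    x_mask = torch.ones(1, 1, 50)      # 1 where time steps are valid
    out = enc(x, x_mask)               # same shape as x: [1, 192, 50]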
diff --git a/spaces/Marne/MockingBird/app.py b/spaces/Marne/MockingBird/app.py
deleted file mode 100644
index 224162a824377e751663b85ec7e415f05998253b..0000000000000000000000000000000000000000
--- a/spaces/Marne/MockingBird/app.py
+++ /dev/null
@@ -1,86 +0,0 @@
-import os
-import httpx
-import torch
-import gradio as gr
-from tempfile import NamedTemporaryFile
-from pathlib import Path
-
-from mockingbirdforuse import MockingBird
-
-
-mockingbird = MockingBird()
-mockingbird_path = Path(os.path.dirname(__file__)) / "data"
-base_url = "https://al.smoe.top/d/Home/source/mockingbird/"
-
-for sy in ["encoder.pt", "g_hifigan.pt", "wavernn.pt"]:
- if not os.path.exists(os.path.join(mockingbird_path, sy)):
- torch.hub.download_url_to_file(f"{base_url}/{sy}", mockingbird_path / sy)
-
-for model in ["azusa", "nanmei", "ltyai", "tianyi"]:
- model_path = mockingbird_path / model
- model_path.mkdir(parents=True, exist_ok=True)
- for file_name in ["record.wav", f"{model}.pt"]:
- if not os.path.exists(os.path.join(model_path, file_name)):
- torch.hub.download_url_to_file(
- f"{base_url}/{model}/{file_name}", model_path / file_name
- )
-
-mockingbird.load_model(
- Path(os.path.join(mockingbird_path, "encoder.pt")),
- Path(os.path.join(mockingbird_path, "g_hifigan.pt")),
- Path(os.path.join(mockingbird_path, "wavernn.pt")),
-)
-
-
-def inference(
- text: str,
- model_name: str,
- vocoder_type: str = "HifiGan",
- style_idx: int = 0,
- min_stop_token: int = 9,
- steps: int = 2000,
-):
- model_path = mockingbird_path / model_name
- mockingbird.set_synthesizer(Path(os.path.join(model_path, f"{model_name}.pt")))
- fd = NamedTemporaryFile(suffix=".wav", delete=False)
- record = mockingbird.synthesize(
- text=str(text),
- input_wav=model_path / "record.wav",
- vocoder_type=vocoder_type,
- style_idx=style_idx,
- min_stop_token=min_stop_token,
- steps=steps,
- )
- with open(fd.name, "wb") as file:
- file.write(record.getvalue())
- return fd.name
-
-
-title = "MockingBird"
-description = "🚀AI拟声: 5秒内克隆您的声音并生成任意语音内容 Clone a voice in 5 seconds to generate arbitrary speech in real-time"
-article = "Github Repo"
-
-gr.Interface(
- inference,
- [
- gr.Textbox(label="Input"),
- gr.Radio(
- ["azusa", "nanmei", "ltyai", "tianyi"],
- label="model type",
- value="azusa",
- ),
- gr.Radio(
- ["HifiGan", "WaveRNN"],
- label="Vocoder type",
- value="HifiGan",
- ),
- gr.Slider(minimum=-1, maximum=9, step=1, label="style idx", value=0),
- gr.Slider(minimum=3, maximum=9, label="min stop token", value=9),
- gr.Slider(minimum=200, maximum=2000, label="steps", value=2000),
- ],
- gr.Audio(type="filepath", label="Output"),
- title=title,
- description=description,
- article=article,
- examples=[["阿梓不是你的电子播放器", "azusa", "HifiGan", 0, 9, 2000], ["不是", "nanmei", "HifiGan", 0, 9, 2000]],
-).launch()
diff --git a/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/mmcv/video/__init__.py b/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/mmcv/video/__init__.py
deleted file mode 100644
index 73199b01dec52820dc6ca0139903536344d5a1eb..0000000000000000000000000000000000000000
--- a/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/mmcv/video/__init__.py
+++ /dev/null
@@ -1,11 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-from .io import Cache, VideoReader, frames2video
-from .optflow import (dequantize_flow, flow_from_bytes, flow_warp, flowread,
- flowwrite, quantize_flow, sparse_flow_from_bytes)
-from .processing import concat_video, convert_video, cut_video, resize_video
-
-__all__ = [
- 'Cache', 'VideoReader', 'frames2video', 'convert_video', 'resize_video',
- 'cut_video', 'concat_video', 'flowread', 'flowwrite', 'quantize_flow',
- 'dequantize_flow', 'flow_warp', 'flow_from_bytes', 'sparse_flow_from_bytes'
-]
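A short usage sketch of the re-exported helpers above, following the standard mmcv API (the video path is a placeholder):

    from annotator.uniformer.mmcv.video import VideoReader

    video = VideoReader('demo.mp4')     # placeholder path
    print(len(video), video.fps)        # frame count and frame rate
    first_frame = video[0]              # frames are decoded lazily on access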
diff --git a/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/mmcv/visualization/color.py b/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/mmcv/visualization/color.py
deleted file mode 100644
index 9041e0e6b7581c3356795d6a3c5e84667c88f025..0000000000000000000000000000000000000000
--- a/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/mmcv/visualization/color.py
+++ /dev/null
@@ -1,51 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-from enum import Enum
-
-import numpy as np
-
-from annotator.uniformer.mmcv.utils import is_str
-
-
-class Color(Enum):
- """An enum that defines common colors.
-
- Contains red, green, blue, cyan, yellow, magenta, white and black.
- """
- red = (0, 0, 255)
- green = (0, 255, 0)
- blue = (255, 0, 0)
- cyan = (255, 255, 0)
- yellow = (0, 255, 255)
- magenta = (255, 0, 255)
- white = (255, 255, 255)
- black = (0, 0, 0)
-
-
-def color_val(color):
- """Convert various input to color tuples.
-
- Args:
- color (:obj:`Color`/str/tuple/int/ndarray): Color inputs
-
- Returns:
- tuple[int]: A tuple of 3 integers indicating BGR channels.
- """
- if is_str(color):
- return Color[color].value
- elif isinstance(color, Color):
- return color.value
- elif isinstance(color, tuple):
- assert len(color) == 3
- for channel in color:
- assert 0 <= channel <= 255
- return color
- elif isinstance(color, int):
- assert 0 <= color <= 255
- return color, color, color
- elif isinstance(color, np.ndarray):
- assert color.ndim == 1 and color.size == 3
- assert np.all((color >= 0) & (color <= 255))
- color = color.astype(np.uint8)
- return tuple(color)
- else:
- raise TypeError(f'Invalid type for color: {type(color)}')
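A hedged usage sketch of color_val above; the import path mirrors the vendored module location:

    import numpy as np
    from annotator.uniformer.mmcv.visualization.color import Color, color_val

    print(color_val('green'))                 # (0, 255, 0)  -- BGR tuple
    print(color_val(Color.red))               # (0, 0, 255)
    print(color_val(128))                     # (128, 128, 128)
    print(color_val(np.array([10, 20, 30])))  # (10, 20, 30)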
diff --git a/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/mmseg/datasets/dataset_wrappers.py b/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/mmseg/datasets/dataset_wrappers.py
deleted file mode 100644
index d6a5e957ec3b44465432617cf6e8f0b86a8a5efa..0000000000000000000000000000000000000000
--- a/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/mmseg/datasets/dataset_wrappers.py
+++ /dev/null
@@ -1,50 +0,0 @@
-from torch.utils.data.dataset import ConcatDataset as _ConcatDataset
-
-from .builder import DATASETS
-
-
-@DATASETS.register_module()
-class ConcatDataset(_ConcatDataset):
- """A wrapper of concatenated dataset.
-
- Same as :obj:`torch.utils.data.dataset.ConcatDataset`, but
- concat the group flag for image aspect ratio.
-
- Args:
- datasets (list[:obj:`Dataset`]): A list of datasets.
- """
-
- def __init__(self, datasets):
- super(ConcatDataset, self).__init__(datasets)
- self.CLASSES = datasets[0].CLASSES
- self.PALETTE = datasets[0].PALETTE
-
-
-@DATASETS.register_module()
-class RepeatDataset(object):
- """A wrapper of repeated dataset.
-
- The length of repeated dataset will be `times` larger than the original
- dataset. This is useful when the data loading time is long but the dataset
- is small. Using RepeatDataset can reduce the data loading time between
- epochs.
-
- Args:
- dataset (:obj:`Dataset`): The dataset to be repeated.
- times (int): Repeat times.
- """
-
- def __init__(self, dataset, times):
- self.dataset = dataset
- self.times = times
- self.CLASSES = dataset.CLASSES
- self.PALETTE = dataset.PALETTE
- self._ori_len = len(self.dataset)
-
- def __getitem__(self, idx):
- """Get item from original dataset."""
- return self.dataset[idx % self._ori_len]
-
- def __len__(self):
- """The length is multiplied by ``times``"""
- return self.times * self._ori_len
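A hedged config sketch showing how RepeatDataset is typically wired into an mmseg-style training config so a small dataset is iterated `times` times per epoch; the dataset type and paths are placeholders:

    data = dict(
        train=dict(
            type='RepeatDataset',
            times=40,
            dataset=dict(
                type='CustomDataset',
                data_root='data/my_dataset',
                img_dir='images/train',
                ann_dir='annotations/train')))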
diff --git a/spaces/MirageML/sjc/adapt_gddpm.py b/spaces/MirageML/sjc/adapt_gddpm.py
deleted file mode 100644
index f71db9e6f8e3dff6906f690046dec4e33a2e5ea2..0000000000000000000000000000000000000000
--- a/spaces/MirageML/sjc/adapt_gddpm.py
+++ /dev/null
@@ -1,562 +0,0 @@
-from pathlib import Path
-from math import sin, pi, sqrt
-from functools import partial
-
-import numpy as np
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-
-from easydict import EasyDict
-from guided_diffusion.script_util import (
- create_model_and_diffusion,
- model_and_diffusion_defaults,
-
- NUM_CLASSES,
- create_classifier,
- classifier_defaults,
-
- sr_create_model_and_diffusion,
- sr_model_and_diffusion_defaults,
-)
-
-from adapt import ScoreAdapter
-
-from my.registry import Registry
-
-PRETRAINED_REGISTRY = Registry("pretrained")
-
-
-device = torch.device("cuda")
-
-
-def load_ckpt(path, **kwargs):
- # with bf.BlobFile(path, "rb") as f:
- # data = f.read()
- return torch.load(path, **kwargs)
-
-
-def pick_out_cfgs(src, target_ks):
- return {k: src[k] for k in target_ks}
-
-
-@PRETRAINED_REGISTRY.register()
-def m_imgnet_64():
- return dict(
- attention_resolutions="32,16,8",
- class_cond=True,
- diffusion_steps=1000,
- dropout=0.1,
- image_size=64,
- learn_sigma=True,
- noise_schedule="cosine",
- num_channels=192,
- num_head_channels=64,
- num_res_blocks=3,
- resblock_updown=True,
- use_new_attention_order=True,
- use_fp16=True,
- use_scale_shift_norm=True,
-
- classifier_depth=4,
-
- classifier_scale=1.0,
- model_path="models/64x64_diffusion.pt",
- classifier_path="models/64x64_classifier.pt",
- )
-
-
-@PRETRAINED_REGISTRY.register()
-def m_imgnet_128():
- return dict(
- attention_resolutions="32,16,8",
- class_cond=True,
- diffusion_steps=1000,
- image_size=128,
- learn_sigma=True,
- noise_schedule="linear",
- num_channels=256,
- num_heads=4,
- num_res_blocks=2,
- resblock_updown=True,
- use_fp16=True,
- use_scale_shift_norm=True,
-
- classifier_scale=0.5,
- model_path="models/128x128_diffusion.pt",
- classifier_path="models/128x128_classifier.pt",
- )
-
-
-@PRETRAINED_REGISTRY.register()
-def m_imgnet_256():
- return dict(
- attention_resolutions="32,16,8",
- class_cond=True,
- diffusion_steps=1000,
- image_size=256,
- learn_sigma=True,
- noise_schedule="linear",
- num_channels=256,
- num_head_channels=64,
- num_res_blocks=2,
- resblock_updown=True,
- use_fp16=True,
- use_scale_shift_norm=True,
-
- classifier_scale=1.0,
- model_path="models/256x256_diffusion.pt",
- classifier_path="models/256x256_classifier.pt"
- )
-
-
-@PRETRAINED_REGISTRY.register()
-def m_imgnet_256_uncond():
- return dict(
- attention_resolutions="32,16,8",
- class_cond=False,
- diffusion_steps=1000,
- image_size=256,
- learn_sigma=True,
- noise_schedule="linear",
- num_channels=256,
- num_head_channels=64,
- num_res_blocks=2,
- resblock_updown=True,
- use_fp16=True,
- use_scale_shift_norm=True,
-
- classifier_scale=10.0,
- model_path="models/256x256_diffusion_uncond.pt",
- classifier_path="models/256x256_classifier.pt",
- )
-
-
-@PRETRAINED_REGISTRY.register()
-def m_imgnet_512():
- return dict(
- attention_resolutions="32,16,8",
- class_cond=True,
- diffusion_steps=1000,
- image_size=512,
- learn_sigma=True,
- noise_schedule="linear",
- num_channels=256,
- num_head_channels=64,
- num_res_blocks=2,
- resblock_updown=True,
- use_fp16=False,
- use_scale_shift_norm=True,
-
- classifier_scale=4.0,
- model_path="models/512x512_diffusion.pt",
- classifier_path="models/512x512_classifier.pt"
- )
-
-
-@PRETRAINED_REGISTRY.register()
-def m_imgnet_64_256(base_samples="64_samples.npz"):
- return dict(
- attention_resolutions="32,16,8",
- class_cond=True,
- diffusion_steps=1000,
- large_size=256,
- small_size=64,
- learn_sigma=True,
- noise_schedule="linear",
- num_channels=192,
- num_heads=4,
- num_res_blocks=2,
- resblock_updown=True,
- use_fp16=True,
- use_scale_shift_norm=True,
-
- model_path="models/64_256_upsampler.pt",
-
- base_samples=base_samples,
- )
-
-
-@PRETRAINED_REGISTRY.register()
-def m_imgnet_128_512(base_samples="128_samples.npz",):
- return dict(
- attention_resolutions="32,16",
- class_cond=True,
- diffusion_steps=1000,
- large_size=512,
- small_size=128,
- learn_sigma=True,
- noise_schedule="linear",
- num_channels=192,
- num_head_channels=64,
- num_res_blocks=2,
- resblock_updown=True,
- use_fp16=True,
- use_scale_shift_norm=True,
-
- model_path="models/128_512_upsampler.pt",
-
- base_samples=base_samples,
- )
-
-
-@PRETRAINED_REGISTRY.register()
-def m_lsun_256(category="bedroom"):
- return dict(
- attention_resolutions="32,16,8",
- class_cond=False,
- diffusion_steps=1000,
- dropout=0.1,
- image_size=256,
- learn_sigma=True,
- noise_schedule="linear",
- num_channels=256,
- num_head_channels=64,
- num_res_blocks=2,
- resblock_updown=True,
- use_fp16=True,
- use_scale_shift_norm=True,
-
- model_path=f"models/lsun_{category}.pt"
- )
-
-
-def img_gen(specific_cfgs, num_samples=16, batch_size=16, load_only=False, ckpt_root=Path("")):
- cfgs = EasyDict(
- clip_denoised=True,
- num_samples=num_samples,
- batch_size=batch_size,
- use_ddim=False,
- model_path="",
- classifier_path="",
- classifier_scale=1.0,
- )
- cfgs.update(model_and_diffusion_defaults())
- cfgs.update(classifier_defaults())
- cfgs.update(specific_cfgs)
-
- use_classifier_guidance = bool(cfgs.classifier_path)
- class_aware = cfgs.class_cond or use_classifier_guidance
-
- model, diffusion = create_model_and_diffusion(
- **pick_out_cfgs(cfgs, model_and_diffusion_defaults().keys())
- )
- model.load_state_dict(
- load_ckpt(str(ckpt_root / cfgs.model_path), map_location="cpu")
- )
- model.to(device)
- if cfgs.use_fp16:
- model.convert_to_fp16()
- model.eval()
-
- def model_fn(x, t, y=None):
- return model(x, t, y if cfgs.class_cond else None)
-
- classifier = None
- cond_fn = None
- if use_classifier_guidance:
- classifier = create_classifier(
- **pick_out_cfgs(cfgs, classifier_defaults().keys())
- )
- classifier.load_state_dict(
- load_ckpt(str(ckpt_root / cfgs.classifier_path), map_location="cpu")
- )
- classifier.to(device)
- if cfgs.classifier_use_fp16:
- classifier.convert_to_fp16()
- classifier.eval()
-
- def cond_fn(x, t, y=None):
- assert y is not None
- with torch.enable_grad():
- x_in = x.detach().requires_grad_(True)
- logits = classifier(x_in, t)
- log_probs = F.log_softmax(logits, dim=-1)
- selected = log_probs[range(len(logits)), y.view(-1)]
- return torch.autograd.grad(selected.sum(), x_in)[0] * cfgs.classifier_scale
-
- if load_only:
- return model, classifier
-
- all_images = []
- all_labels = []
-
- while len(all_images) * cfgs.batch_size < cfgs.num_samples:
- model_kwargs = {}
-
- if class_aware:
- classes = torch.randint(
- low=0, high=NUM_CLASSES, size=(cfgs.batch_size,), device=device
- )
- model_kwargs["y"] = classes
-
- sample_fn = (
- diffusion.p_sample_loop if not cfgs.use_ddim else diffusion.ddim_sample_loop
- )
- sample = sample_fn(
- model_fn,
- (cfgs.batch_size, 3, cfgs.image_size, cfgs.image_size),
- clip_denoised=cfgs.clip_denoised,
- model_kwargs=model_kwargs,
- cond_fn=cond_fn,
- device=device,
- progress=True
- )
- sample = ((sample + 1) * 127.5).clamp(0, 255).to(torch.uint8)
- sample = sample.permute(0, 2, 3, 1)
- sample = sample.contiguous()
-
- all_images.append(sample.cpu().numpy())
- if class_aware:
- all_labels.append(classes.cpu().numpy())
-
- arr = np.concatenate(all_images, axis=0)
- arr = arr[:cfgs.num_samples]
-
- if class_aware:
- all_labels = np.concatenate(all_labels, axis=0)
- all_labels = all_labels[:cfgs.num_samples]
-
- shape_str = "x".join([str(x) for x in arr.shape])
- out_path = Path("./out") / f"samples_{shape_str}.npz"
- np.savez(out_path, arr, all_labels)
-
-
-def img_upsamp(specific_cfgs, num_samples=16, batch_size=16, load_only=False):
- """note that here the ckpt root is not configured properly; will break but easy fix"""
- cfgs = EasyDict(
- clip_denoised=True,
- num_samples=num_samples,
- batch_size=batch_size,
- use_ddim=False,
- base_samples="",
- model_path="",
- )
- cfgs.update(sr_model_and_diffusion_defaults())
- cfgs.update(specific_cfgs)
-
- model, diffusion = sr_create_model_and_diffusion(
- **pick_out_cfgs(cfgs, sr_model_and_diffusion_defaults().keys())
- )
- model.load_state_dict(load_ckpt(cfgs.model_path, map_location="cpu"))
- model.to(device)
- if cfgs.use_fp16:
- model.convert_to_fp16()
- model.eval()
-
- if load_only:
- return model
-
- data = load_low_res_samples(
- cfgs.base_samples, cfgs.batch_size, cfgs.class_cond
- )
-
- all_images = []
- while len(all_images) * cfgs.batch_size < cfgs.num_samples:
- model_kwargs = next(data)
- model_kwargs = {k: v.to(device) for k, v in model_kwargs.items()}
- samples = diffusion.p_sample_loop(
- model,
- (cfgs.batch_size, 3, cfgs.large_size, cfgs.large_size),
- clip_denoised=cfgs.clip_denoised,
- model_kwargs=model_kwargs,
- progress=True
- )
- samples = ((samples + 1) * 127.5).clamp(0, 255).to(torch.uint8)
- samples = samples.permute(0, 2, 3, 1)
- samples = samples.contiguous()
-
- all_images.append(samples.cpu().numpy())
-
- arr = np.concatenate(all_images, axis=0)
- arr = arr[: cfgs.num_samples]
-
- shape_str = "x".join([str(x) for x in arr.shape])
- out_path = Path("./out") / f"samples_{shape_str}.npz"
- np.savez(out_path, arr)
-
-
-def load_low_res_samples(base_samples, batch_size, class_cond):
- obj = np.load(base_samples)
- image_arr = obj["arr_0"]
- if class_cond:
- label_arr = obj["arr_1"]
-
- buffer = []
- label_buffer = []
- while True:
- for i in range(len(image_arr)):
- buffer.append(image_arr[i])
- if class_cond:
- label_buffer.append(label_arr[i])
-
- if len(buffer) == batch_size:
- batch = torch.from_numpy(np.stack(buffer)).float()
- batch = batch / 127.5 - 1.0
- batch = batch.permute(0, 3, 1, 2)
- res = {}
- res["low_res"] = batch
- if class_cond:
- res["y"] = torch.from_numpy(np.stack(label_buffer))
- yield res
- buffer, label_buffer = [], []
-
-
-def class_cond_info(imgnet_cat):
-
- def rand_cond_fn(batch_size):
- cats = torch.randint(
- low=0, high=NUM_CLASSES, size=(batch_size,), device=device
- )
- return {"y": cats}
-
- def class_specific_cond(batch_size):
- cats = torch.tensor([imgnet_cat, ] * batch_size, device=device)
- return {"y": cats}
-
- if imgnet_cat == -1:
- return rand_cond_fn
- else:
- return class_specific_cond
-
-
-def _sqrt(x):
- if isinstance(x, float):
- return sqrt(x)
- else:
- assert isinstance(x, torch.Tensor)
- return torch.sqrt(x)
-
-
-class GuidedDDPM(ScoreAdapter):
- def __init__(self, model, lsun_cat, imgnet_cat):
- print(PRETRAINED_REGISTRY)
- cfgs = PRETRAINED_REGISTRY.get(model)(
- **({"category": lsun_cat} if model.startswith("m_lsun") else {})
- )
-
- self.unet, self.classifier = img_gen(
- cfgs, load_only=True, ckpt_root=self.checkpoint_root() / "guided_ddpm"
- )
-
- H, W = cfgs['image_size'], cfgs['image_size']
- self._data_shape = (3, H, W)
-
- if cfgs['class_cond'] or (self.classifier is not None):
- cond_func = class_cond_info(imgnet_cat)
- else:
- cond_func = lambda *args, **kwargs: {}
- self.cond_func = cond_func
-
- self._unet_is_cond = bool(cfgs['class_cond'])
-
- noise_schedule = cfgs['noise_schedule']
- assert noise_schedule in ("linear", "cosine")
- self.M = 1000
- if noise_schedule == "linear":
- self.us = self.linear_us(self.M)
- self._σ_min = 0.01
- else:
- self.us = self.cosine_us(self.M)
- self._σ_min = 0.0064
- self.noise_schedule = noise_schedule
-
- self._device = next(self.unet.parameters()).device
-
- def data_shape(self):
- return self._data_shape
-
- @property
- def σ_max(self):
- return self.us[0]
-
- @property
- def σ_min(self):
- return self.us[-1]
-
- @torch.no_grad()
- def denoise(self, xs, σ, **model_kwargs):
- N = xs.shape[0]
- cond_t, σ = self.time_cond_vec(N, σ)
- output = self.unet(
- xs / _sqrt(1 + σ**2), cond_t, **model_kwargs
- )
- # not using the var pred
- n_hat = torch.split(output, xs.shape[1], dim=1)[0]
- Ds = xs - σ * n_hat
- return Ds
-
- def cond_info(self, batch_size):
- return self.cond_func(batch_size)
-
- def unet_is_cond(self):
- return self._unet_is_cond
-
- def use_cls_guidance(self):
- return (self.classifier is not None)
-
- @torch.no_grad()
- def classifier_grad(self, xs, σ, ys):
- N = xs.shape[0]
- cond_t, σ = self.time_cond_vec(N, σ)
- with torch.enable_grad():
- x_in = xs.detach().requires_grad_(True)
- logits = self.classifier(x_in, cond_t)
- log_probs = F.log_softmax(logits, dim=-1)
- selected = log_probs[range(len(logits)), ys.view(-1)]
- grad = torch.autograd.grad(selected.sum(), x_in)[0]
-
- grad = grad * (1 / sqrt(1 + σ**2))
- return grad
-
- def snap_t_to_nearest_tick(self, t):
- j = np.abs(t - self.us).argmin()
- return self.us[j], j
-
- def time_cond_vec(self, N, σ):
- if isinstance(σ, float):
- σ, j = self.snap_t_to_nearest_tick(σ) # σ might change due to snapping
- cond_t = (self.M - 1) - j
- cond_t = torch.tensor([cond_t] * N, device=self.device)
- return cond_t, σ
- else:
- assert isinstance(σ, torch.Tensor)
- σ = σ.reshape(-1).cpu().numpy()
- σs = []
- js = []
- for elem in σ:
- _σ, _j = self.snap_t_to_nearest_tick(elem)
- σs.append(_σ)
- js.append((self.M - 1) - _j)
-
- cond_t = torch.tensor(js, device=self.device)
- σs = torch.tensor(σs, device=self.device, dtype=torch.float32).reshape(-1, 1, 1, 1)
- return cond_t, σs
-
- @staticmethod
- def cosine_us(M=1000):
- assert M == 1000
-
- def α_bar(j):
- return sin(pi / 2 * j / (M * (0.008 + 1))) ** 2
-
- us = [0, ]
- for j in reversed(range(0, M)): # [M-1, 0], inclusive
- u_j = sqrt(((us[-1] ** 2) + 1) / (max(α_bar(j) / α_bar(j+1), 0.001)) - 1)
- us.append(u_j)
-
- us = np.array(us)
- us = us[1:]
- us = us[::-1]
- return us
-
- @staticmethod
- def linear_us(M=1000):
- assert M == 1000
- β_start = 0.0001
- β_end = 0.02
- βs = np.linspace(β_start, β_end, M, dtype=np.float64)
- αs = np.cumprod(1 - βs)
- us = np.sqrt((1 - αs) / αs)
- us = us[::-1]
- return us
diff --git a/spaces/Mountchicken/MAERec-Gradio/mmocr/evaluation/evaluator/multi_datasets_evaluator.py b/spaces/Mountchicken/MAERec-Gradio/mmocr/evaluation/evaluator/multi_datasets_evaluator.py
deleted file mode 100644
index f01aa70f645d5a9f61fe02386ff214dc72bcffb4..0000000000000000000000000000000000000000
--- a/spaces/Mountchicken/MAERec-Gradio/mmocr/evaluation/evaluator/multi_datasets_evaluator.py
+++ /dev/null
@@ -1,100 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-import warnings
-from collections import OrderedDict
-from typing import Sequence, Union
-
-from mmengine.dist import (broadcast_object_list, collect_results,
- is_main_process)
-from mmengine.evaluator import BaseMetric, Evaluator
-from mmengine.evaluator.metric import _to_cpu
-
-from mmocr.registry import EVALUATOR
-from mmocr.utils.typing_utils import ConfigType
-
-
-@EVALUATOR.register_module()
-class MultiDatasetsEvaluator(Evaluator):
- """Wrapper class to compose class: `ConcatDataset` and multiple
- :class:`BaseMetric` instances.
- The metrics will be evaluated on each dataset slice separately. The name of
- the each metric is the concatenation of the dataset prefix, the metric
- prefix and the key of metric - e.g.
- `dataset_prefix/metric_prefix/accuracy`.
-
- Args:
- metrics (dict or BaseMetric or Sequence): The config of metrics.
- dataset_prefixes (Sequence[str]): The prefix of each dataset. The
- length of this sequence should be the same as the length of the
- datasets.
- """
-
- def __init__(self, metrics: Union[ConfigType, BaseMetric, Sequence],
- dataset_prefixes: Sequence[str]) -> None:
- super().__init__(metrics)
- self.dataset_prefixes = dataset_prefixes
-
- def evaluate(self, size: int) -> dict:
- """Invoke ``evaluate`` method of each metric and collect the metrics
- dictionary.
-
- Args:
- size (int): Length of the entire validation dataset. When batch
- size > 1, the dataloader may pad some data samples to make
- sure all ranks have the same length of dataset slice. The
- ``collect_results`` function will drop the padded data based on
- this size.
-
- Returns:
- dict: Evaluation results of all metrics. The keys are the names
- of the metrics, and the values are corresponding results.
- """
- metrics_results = OrderedDict()
- dataset_slices = self.dataset_meta.get('cumulative_sizes', [size])
- assert len(dataset_slices) == len(self.dataset_prefixes)
- for metric in self.metrics:
- if len(metric.results) == 0:
- warnings.warn(
- f'{metric.__class__.__name__} got empty `self.results`.'
- 'Please ensure that the processed results are properly '
- 'added into `self.results` in `process` method.')
-
- results = collect_results(metric.results, size,
- metric.collect_device)
-
- if is_main_process():
- # cast all tensors in results list to cpu
- results = _to_cpu(results)
- for start, end, dataset_prefix in zip([0] +
- dataset_slices[:-1],
- dataset_slices,
- self.dataset_prefixes):
- metric_results = metric.compute_metrics(
- results[start:end]) # type: ignore
- # Add prefix to metric names
-
- if metric.prefix:
- final_prefix = '/'.join(
- (dataset_prefix, metric.prefix))
- else:
- final_prefix = dataset_prefix
- metric_results = {
- '/'.join((final_prefix, k)): v
- for k, v in metric_results.items()
- }
-
- # Check metric name conflicts
- for name in metric_results.keys():
- if name in metrics_results:
- raise ValueError(
- 'There are multiple evaluation results with '
- f'the same metric name {name}. Please make '
- 'sure all metrics have different prefixes.')
- metrics_results.update(metric_results)
- metric.results.clear()
- if is_main_process():
- metrics_results = [metrics_results]
- else:
- metrics_results = [None] # type: ignore
- broadcast_object_list(metrics_results)
-
- return metrics_results[0]
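A hedged config sketch in MMOCR's config style showing the evaluator above applied to two concatenated test sets; the metric choice and dataset prefixes are placeholders:

    val_evaluator = dict(
        type='MultiDatasetsEvaluator',
        metrics=dict(type='WordMetric', mode=['exact', 'ignore_case']),
        dataset_prefixes=['IC13', 'IC15'])

Under these assumptions each reported key comes out as dataset_prefix/metric_prefix/metric_key, e.g. IC13/recog/word_acc.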
diff --git a/spaces/Mountchicken/MAERec-Gradio/mmocr/models/textrecog/recognizers/abinet.py b/spaces/Mountchicken/MAERec-Gradio/mmocr/models/textrecog/recognizers/abinet.py
deleted file mode 100644
index f8ee3a5cafd021d6072d33b1648a9722a91bcf10..0000000000000000000000000000000000000000
--- a/spaces/Mountchicken/MAERec-Gradio/mmocr/models/textrecog/recognizers/abinet.py
+++ /dev/null
@@ -1,12 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-from mmocr.registry import MODELS
-from .encoder_decoder_recognizer import EncoderDecoderRecognizer
-
-
-@MODELS.register_module()
-class ABINet(EncoderDecoderRecognizer):
- """Implementation of `Read Like Humans: Autonomous, Bidirectional and
- Iterative LanguageModeling for Scene Text Recognition.
-
- `_
- """
diff --git a/spaces/Mountchicken/MAERec-Gradio/tools/dataset_converters/kie/closeset_to_openset.py b/spaces/Mountchicken/MAERec-Gradio/tools/dataset_converters/kie/closeset_to_openset.py
deleted file mode 100644
index 2057e9797bd0586fd8820ef3ae161486bea22d32..0000000000000000000000000000000000000000
--- a/spaces/Mountchicken/MAERec-Gradio/tools/dataset_converters/kie/closeset_to_openset.py
+++ /dev/null
@@ -1,122 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-import argparse
-import json
-from functools import partial
-
-import mmengine
-
-from mmocr.utils import list_from_file, list_to_file
-
-
-def convert(closeset_line, merge_bg_others=False, ignore_idx=0, others_idx=25):
- """Convert line-json str of closeset to line-json str of openset. Note that
- this function is designed for closeset-wildreceipt to openset-wildreceipt.
-    It may not be suitable for your own dataset.
-
- Args:
- closeset_line (str): The string to be deserialized to
- the closeset dictionary object.
- merge_bg_others (bool): If True, give the same label to "background"
- class and "others" class.
- ignore_idx (int): Index for ``ignore`` class.
- others_idx (int): Index for ``others`` class.
- """
- # Two labels at the same index of the following two lists
- # make up a key-value pair. For example, in wildreceipt,
- # closeset_key_inds[0] maps to "Store_name_key"
- # and closeset_value_inds[0] maps to "Store_addr_value".
- closeset_key_inds = list(range(2, others_idx, 2))
- closeset_value_inds = list(range(1, others_idx, 2))
-
- openset_node_label_mapping = {'bg': 0, 'key': 1, 'value': 2, 'others': 3}
- if merge_bg_others:
- openset_node_label_mapping['others'] = openset_node_label_mapping['bg']
-
- closeset_obj = json.loads(closeset_line)
- openset_obj = {
- 'file_name': closeset_obj['file_name'],
- 'height': closeset_obj['height'],
- 'width': closeset_obj['width'],
- 'annotations': []
- }
-
- edge_idx = 1
- label_to_edge = {}
- for anno in closeset_obj['annotations']:
- label = anno['label']
- if label == ignore_idx:
- anno['label'] = openset_node_label_mapping['bg']
- anno['edge'] = edge_idx
- edge_idx += 1
- elif label == others_idx:
- anno['label'] = openset_node_label_mapping['others']
- anno['edge'] = edge_idx
- edge_idx += 1
- else:
- edge = label_to_edge.get(label, None)
- if edge is not None:
- anno['edge'] = edge
- if label in closeset_key_inds:
- anno['label'] = openset_node_label_mapping['key']
- elif label in closeset_value_inds:
- anno['label'] = openset_node_label_mapping['value']
- else:
- tmp_key = 'key'
- if label in closeset_key_inds:
- label_with_same_edge = closeset_value_inds[
- closeset_key_inds.index(label)]
- elif label in closeset_value_inds:
- label_with_same_edge = closeset_key_inds[
- closeset_value_inds.index(label)]
- tmp_key = 'value'
- edge_counterpart = label_to_edge.get(label_with_same_edge,
- None)
- if edge_counterpart is not None:
- anno['edge'] = edge_counterpart
- else:
- anno['edge'] = edge_idx
- edge_idx += 1
- anno['label'] = openset_node_label_mapping[tmp_key]
- label_to_edge[label] = anno['edge']
-
- openset_obj['annotations'] = closeset_obj['annotations']
-
- return json.dumps(openset_obj, ensure_ascii=False)
-
-
-def process(closeset_file, openset_file, merge_bg_others=False, n_proc=10):
- closeset_lines = list_from_file(closeset_file)
-
- convert_func = partial(convert, merge_bg_others=merge_bg_others)
-
- openset_lines = mmengine.track_parallel_progress(
- convert_func, closeset_lines, nproc=n_proc)
-
- list_to_file(openset_file, openset_lines)
-
-
-def parse_args():
- parser = argparse.ArgumentParser()
- parser.add_argument('in_file', help='Annotation file for closeset.')
- parser.add_argument('out_file', help='Annotation file for openset.')
- parser.add_argument(
- '--merge',
- action='store_true',
- help='Merge two classes: "background" and "others" in closeset '
- 'to one class in openset.')
- parser.add_argument(
- '--n_proc', type=int, default=10, help='Number of process.')
- args = parser.parse_args()
- return args
-
-
-def main():
- args = parse_args()
-
- process(args.in_file, args.out_file, args.merge, args.n_proc)
-
- print('finish')
-
-
-if __name__ == '__main__':
- main()
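A hedged command-line sketch of the converter above (annotation paths are placeholders; --merge collapses the "background" and "others" classes into one openset class):

    python tools/dataset_converters/kie/closeset_to_openset.py \
        data/wildreceipt/train.txt data/wildreceipt/openset_train.txt --n_proc 8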
diff --git a/spaces/MuGeminorum/insecta/khandy/boxes/boxes_convert.py b/spaces/MuGeminorum/insecta/khandy/boxes/boxes_convert.py
deleted file mode 100644
index 6d1f6a955cf1aeadbe1c829220e8bb887ae850a1..0000000000000000000000000000000000000000
--- a/spaces/MuGeminorum/insecta/khandy/boxes/boxes_convert.py
+++ /dev/null
@@ -1,101 +0,0 @@
-import numpy as np
-
-
-def convert_xyxy_to_xywh(boxes, copy=True):
- """Convert [x_min, y_min, x_max, y_max] format to [x_min, y_min, width, height] format.
- """
- if copy:
- boxes = boxes.copy()
- boxes[..., 2:4] -= boxes[..., 0:2]
- return boxes
-
-
-def convert_xywh_to_xyxy(boxes, copy=True):
- """Convert [x_min, y_min, width, height] format to [x_min, y_min, x_max, y_max] format.
- """
- if copy:
- boxes = boxes.copy()
- boxes[..., 2:4] += boxes[..., 0:2]
- return boxes
-
-
-def convert_xywh_to_cxcywh(boxes, copy=True):
- """Convert [x_min, y_min, width, height] format to [cx, cy, width, height] format.
- """
- if copy:
- boxes = boxes.copy()
- boxes[..., 0:2] += boxes[..., 2:4] * 0.5
- return boxes
-
-
-def convert_cxcywh_to_xywh(boxes, copy=True):
- """Convert [cx, cy, width, height] format to [x_min, y_min, width, height] format.
- """
- if copy:
- boxes = boxes.copy()
- boxes[..., 0:2] -= boxes[..., 2:4] * 0.5
- return boxes
-
-
-def convert_xyxy_to_cxcywh(boxes, copy=True):
- """Convert [x_min, y_min, x_max, y_max] format to [cx, cy, width, height] format.
- """
- if copy:
- boxes = boxes.copy()
- boxes[..., 2:4] -= boxes[..., 0:2]
- boxes[..., 0:2] += boxes[..., 2:4] * 0.5
- return boxes
-
-
-def convert_cxcywh_to_xyxy(boxes, copy=True):
- """Convert [cx, cy, width, height] format to [x_min, y_min, x_max, y_max] format.
- """
- if copy:
- boxes = boxes.copy()
- boxes[..., 0:2] -= boxes[..., 2:4] * 0.5
- boxes[..., 2:4] += boxes[..., 0:2]
- return boxes
-
-
-def convert_boxes_format(boxes, in_fmt, out_fmt, copy=True):
- """Converts boxes from given in_fmt to out_fmt.
-
- Supported in_fmt and out_fmt are:
- 'xyxy': boxes are represented via corners, x1, y1 being top left and x2, y2 being bottom right.
-        'xywh' : boxes are represented via corner, width and height, x1, y1 being top left, w, h being width and height.
- 'cxcywh' : boxes are represented via centre, width and height, cx, cy being center of box, w, h
- being width and height.
-
- Args:
- boxes: boxes which will be converted.
- in_fmt (str): Input format of given boxes. Supported formats are ['xyxy', 'xywh', 'cxcywh'].
- out_fmt (str): Output format of given boxes. Supported formats are ['xyxy', 'xywh', 'cxcywh']
-
- Returns:
- boxes: Boxes into converted format.
-
- References:
- torchvision.ops.box_convert
- """
- allowed_fmts = ("xyxy", "xywh", "cxcywh")
- if in_fmt not in allowed_fmts or out_fmt not in allowed_fmts:
- raise ValueError("Unsupported Bounding Box Conversions for given in_fmt and out_fmt")
- if copy:
- boxes = boxes.copy()
- if in_fmt == out_fmt:
- return boxes
-
- if (in_fmt, out_fmt) == ("xyxy", "xywh"):
- boxes = convert_xyxy_to_xywh(boxes, copy=False)
- elif (in_fmt, out_fmt) == ("xywh", "xyxy"):
- boxes = convert_xywh_to_xyxy(boxes, copy=False)
- elif (in_fmt, out_fmt) == ("xywh", "cxcywh"):
- boxes = convert_xywh_to_cxcywh(boxes, copy=False)
- elif (in_fmt, out_fmt) == ("cxcywh", "xywh"):
- boxes = convert_cxcywh_to_xywh(boxes, copy=False)
- elif (in_fmt, out_fmt) == ("xyxy", "cxcywh"):
- boxes = convert_xyxy_to_cxcywh(boxes, copy=False)
- elif (in_fmt, out_fmt) == ("cxcywh", "xyxy"):
- boxes = convert_cxcywh_to_xyxy(boxes, copy=False)
- return boxes
-
\ No newline at end of file
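
For reference, here is a minimal standalone sketch (not part of the deleted file; the coordinate values are invented) of the three box conventions described in the `convert_boxes_format` docstring above. The arithmetic mirrors the helper functions in this module.

```python
import numpy as np

# One box in [x_min, y_min, x_max, y_max] ("xyxy") form.
box_xyxy = np.array([10.0, 20.0, 50.0, 80.0])

# xyxy -> xywh: width and height are the corner differences.
box_xywh = box_xyxy.copy()
box_xywh[2:4] -= box_xywh[0:2]            # -> [10, 20, 40, 60]

# xywh -> cxcywh: the centre is the top-left corner plus half the size.
box_cxcywh = box_xywh.copy()
box_cxcywh[0:2] += box_cxcywh[2:4] * 0.5  # -> [30, 50, 40, 60]
```
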
diff --git a/spaces/MuGeminorum/insecta/khandy/image/misc.py b/spaces/MuGeminorum/insecta/khandy/image/misc.py
deleted file mode 100644
index 8d6cc6e17cdf1ed4856368a6d588da498e758ea9..0000000000000000000000000000000000000000
--- a/spaces/MuGeminorum/insecta/khandy/image/misc.py
+++ /dev/null
@@ -1,329 +0,0 @@
-import os
-import imghdr
-import numbers
-import warnings
-from io import BytesIO
-
-import cv2
-import khandy
-import numpy as np
-from PIL import Image
-
-
-def imread(file_or_buffer, flags=-1):
- """Improvement on cv2.imread, make it support filename including chinese character.
- """
- try:
- if isinstance(file_or_buffer, bytes):
- return cv2.imdecode(np.frombuffer(file_or_buffer, dtype=np.uint8), flags)
- else:
- # support type: file or str or Path
- return cv2.imdecode(np.fromfile(file_or_buffer, dtype=np.uint8), flags)
- except Exception as e:
- print(e)
- return None
-
-
-def imread_cv(file_or_buffer, flags=-1):
- warnings.warn('khandy.imread_cv will be deprecated, use khandy.imread instead!')
- return imread(file_or_buffer, flags)
-
-
-def imwrite(filename, image, params=None):
- """Improvement on cv2.imwrite, make it support filename including chinese character.
- """
- cv2.imencode(os.path.splitext(filename)[-1], image, params)[1].tofile(filename)
-
-
-def imwrite_cv(filename, image, params=None):
- warnings.warn('khandy.imwrite_cv will be deprecated, use khandy.imwrite instead!')
- return imwrite(filename, image, params)
-
-
-def imread_pil(file_or_buffer, to_mode=None):
- """Improvement on Image.open to avoid ResourceWarning.
- """
- try:
- if isinstance(file_or_buffer, bytes):
- buffer = BytesIO()
- buffer.write(file_or_buffer)
- buffer.seek(0)
- file_or_buffer = buffer
-
- if hasattr(file_or_buffer, 'read'):
- image = Image.open(file_or_buffer)
- if to_mode is not None:
- image = image.convert(to_mode)
- else:
- # open path as file to avoid ResourceWarning (https://github.com/python-pillow/Pillow/issues/835)
- with open(file_or_buffer, 'rb') as f:
- image = Image.open(f)
- # If convert outside with statement, will raise "seek of closed file" as
- # https://github.com/microsoft/Swin-Transformer/issues/66
- if to_mode is not None:
- image = image.convert(to_mode)
- return image
- except Exception as e:
- print(e)
- return None
-
-
-def imwrite_bytes(filename, image_bytes: bytes, update_extension: bool = True):
- """Write image bytes to file.
-
- Args:
- filename: str
- filename which image_bytes is written into.
- image_bytes: bytes
- image content to be written.
- update_extension: bool
-            whether to update the file extension according to image_bytes.
-            updating the extension is cheaper than converting the image format.
- """
- extension = imghdr.what('', image_bytes)
- file_extension = khandy.get_path_extension(filename)
- # imghdr.what fails to determine image format sometimes!
- # so when its return value is None, never update extension.
- if extension is None:
- image = cv2.imdecode(np.frombuffer(image_bytes, np.uint8), -1)
- image_bytes = cv2.imencode(file_extension, image)[1]
- elif (extension.lower() != file_extension.lower()[1:]):
- if update_extension:
- filename = khandy.replace_path_extension(filename, extension)
- else:
- image = cv2.imdecode(np.frombuffer(image_bytes, np.uint8), -1)
- image_bytes = cv2.imencode(file_extension, image)[1]
-
- with open(filename, "wb") as f:
- f.write(image_bytes)
- return filename
-
-
-def rescale_image(image: np.ndarray, rescale_factor='auto', dst_dtype=np.float32):
- """Rescale image by rescale_factor.
-
- Args:
-        image (ndarray): Image to be rescaled.
-        rescale_factor (str, int or float, *optional*, defaults to `'auto'`):
-            rescale the image by the specified scale factor. When it is `'auto'`,
-            rescale the image to [0, 1).
-        dst_dtype (np.dtype, *optional*, defaults to `np.float32`):
-            The dtype of the output image. Defaults to `np.float32`.
-
- Returns:
- ndarray: The rescaled image.
- """
- if rescale_factor == 'auto':
- if np.issubdtype(image.dtype, np.unsignedinteger):
- rescale_factor = 1. / np.iinfo(image.dtype).max
- else:
- raise TypeError(f'Only support uint dtype ndarray when `rescale_factor` is `auto`, got {image.dtype}')
-    elif isinstance(rescale_factor, (int, float)):
- pass
- else:
- raise TypeError('rescale_factor must be "auto", int or float')
- image = image.astype(dst_dtype, copy=True)
- image *= rescale_factor
- image = image.astype(dst_dtype)
- return image
-
-
-def normalize_image_value(image: np.ndarray, mean, std, rescale_factor=None):
- """Normalize an image with mean and std, rescale optionally.
-
- Args:
- image (ndarray): Image to be normalized.
- mean (int, float, Sequence[int], Sequence[float], ndarray): The mean to be used for normalize.
- std (int, float, Sequence[int], Sequence[float], ndarray): The std to be used for normalize.
- rescale_factor (None, 'auto', int or float, *optional*, defaults to `None`):
-            rescale the image by the specified scale factor. When it is `'auto'`,
-            rescale the image to [0, 1); when it is `None`, do not rescale.
-
- Returns:
-        ndarray: The normalized image, whose dtype is np.float32.
- """
- dst_dtype = np.float32
- mean = np.array(mean, dtype=dst_dtype).flatten()
- std = np.array(std, dtype=dst_dtype).flatten()
- if rescale_factor == 'auto':
- if np.issubdtype(image.dtype, np.unsignedinteger):
- mean *= np.iinfo(image.dtype).max
- std *= np.iinfo(image.dtype).max
- else:
- raise TypeError(f'Only support uint dtype ndarray when `rescale_factor` is `auto`, got {image.dtype}')
- elif isinstance(rescale_factor, (int, float)):
- mean *= rescale_factor
- std *= rescale_factor
- image = image.astype(dst_dtype, copy=True)
- image -= mean
- image /= std
- return image
-
-
-def normalize_image_dtype(image, keep_num_channels=False):
- """Normalize image dtype to uint8 (usually for visualization).
-
- Args:
- image : ndarray
- Input image.
- keep_num_channels : bool, optional
-            If this is set to True, the result has the same shape as the
-            input image; otherwise the result is an array with 3 channels.
-
- Returns:
- out: ndarray
- Image whose dtype is np.uint8.
- """
- assert (image.ndim == 3 and image.shape[-1] in [1, 3]) or (image.ndim == 2)
-
- image = image.astype(np.float32)
- image = khandy.minmax_normalize(image, axis=None, copy=False)
- image = np.array(image * 255, dtype=np.uint8)
-
- if not keep_num_channels:
- if image.ndim == 2:
- image = np.expand_dims(image, -1)
- if image.shape[-1] == 1:
- image = np.tile(image, (1,1,3))
- return image
-
-
-def normalize_image_channel(image, swap_rb=False):
- """Normalize image channel number and order to RGB or BGR.
-
- Args:
- image : ndarray
- Input image.
- swap_rb : bool, optional
-            whether to swap the red and blue channels or not
-
- Returns:
- out: ndarray
- Image whose shape is (..., 3).
- """
- if image.ndim == 2:
- image = cv2.cvtColor(image, cv2.COLOR_GRAY2BGR)
- elif image.ndim == 3:
- num_channels = image.shape[-1]
- if num_channels == 1:
- image = cv2.cvtColor(image, cv2.COLOR_GRAY2BGR)
- elif num_channels == 3:
- if swap_rb:
- image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
- elif num_channels == 4:
- if swap_rb:
- image = cv2.cvtColor(image, cv2.COLOR_BGRA2RGB)
- else:
- image = cv2.cvtColor(image, cv2.COLOR_BGRA2BGR)
- else:
- raise ValueError(f'Unsupported image channel number, only support 1, 3 and 4, got {num_channels}!')
- else:
- raise ValueError(f'Unsupported image ndarray ndim, only support 2 and 3, got {image.ndim}!')
- return image
-
-
-def normalize_image_shape(image, swap_rb=False):
- warnings.warn('khandy.normalize_image_shape will be deprecated, use khandy.normalize_image_channel instead!')
- return normalize_image_channel(image, swap_rb)
-
-
-def stack_image_list(image_list, dtype=np.float32):
- """Join a sequence of image along a new axis before first axis.
-
- References:
- `im_list_to_blob` in `py-faster-rcnn-master/lib/utils/blob.py`
- """
- assert isinstance(image_list, (tuple, list))
-
- max_dimension = np.array([image.ndim for image in image_list]).max()
- assert max_dimension in [2, 3]
- max_shape = np.array([image.shape[:2] for image in image_list]).max(axis=0)
-
- num_channels = []
- for image in image_list:
- if image.ndim == 2:
- num_channels.append(1)
- else:
- num_channels.append(image.shape[-1])
- assert len(set(num_channels) - set([1])) in [0, 1]
- max_num_channels = np.max(num_channels)
-
- blob = np.empty((len(image_list), max_shape[0], max_shape[1], max_num_channels), dtype=dtype)
- for k, image in enumerate(image_list):
- blob[k, :image.shape[0], :image.shape[1], :] = np.atleast_3d(image).astype(dtype, copy=False)
- if max_dimension == 2:
- blob = np.squeeze(blob, axis=-1)
- return blob
-
-
-def is_numpy_image(image):
- return isinstance(image, np.ndarray) and image.ndim in {2, 3}
-
-
-def is_gray_image(image, tol=3):
- assert is_numpy_image(image)
- if image.ndim == 2:
- return True
- elif image.ndim == 3:
- num_channels = image.shape[-1]
- if num_channels == 1:
- return True
- elif num_channels == 3:
- gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
- gray3 = cv2.cvtColor(gray, cv2.COLOR_GRAY2BGR)
- mae = np.mean(cv2.absdiff(image, gray3))
- return mae <= tol
- elif num_channels == 4:
- rgb = cv2.cvtColor(image, cv2.COLOR_BGRA2BGR)
- gray = cv2.cvtColor(rgb, cv2.COLOR_BGR2GRAY)
- gray3 = cv2.cvtColor(gray, cv2.COLOR_GRAY2BGR)
- mae = np.mean(cv2.absdiff(rgb, gray3))
- return mae <= tol
- else:
- return False
- else:
- return False
-
-
-def is_solid_color_image(image, tol=4):
- assert is_numpy_image(image)
- mean = np.array(cv2.mean(image)[:-1], dtype=np.float32)
-
- if image.ndim == 2:
- mae = np.mean(np.abs(image - mean[0]))
- return mae <= tol
- elif image.ndim == 3:
- num_channels = image.shape[-1]
- if num_channels == 1:
- mae = np.mean(np.abs(image - mean[0]))
- return mae <= tol
- elif num_channels == 3:
- mae = np.mean(np.abs(image - mean))
- return mae <= tol
- elif num_channels == 4:
- mae = np.mean(np.abs(image[:,:,:-1] - mean))
- return mae <= tol
- else:
- return False
- else:
- return False
-
-
-def create_solid_color_image(image_width, image_height, color, dtype=None):
- if isinstance(color, numbers.Real):
- image = np.full((image_height, image_width), color, dtype=dtype)
- elif isinstance(color, (tuple, list)):
- if len(color) == 1:
- image = np.full((image_height, image_width), color[0], dtype=dtype)
- elif len(color) in (3, 4):
- image = np.full((1, 1, len(color)), color, dtype=dtype)
- image = cv2.copyMakeBorder(image, 0, image_height-1, 0, image_width-1,
- cv2.BORDER_CONSTANT, value=color)
- else:
- color = np.asarray(color, dtype=dtype)
- image = np.empty((image_height, image_width, len(color)), dtype=dtype)
- image[:] = color
- else:
- raise TypeError(f'Invalid type {type(color)} for `color`.')
- return image
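
A small usage sketch of two helpers defined above, for illustration only; the `import khandy` path (and top-level exposure of these functions) is an assumption, and the image size and colour are made up.

```python
import numpy as np
import khandy  # assumed import path; the module above ships as part of the khandy package

# A 64x48 solid red image in BGR channel order (illustrative values).
img = khandy.create_solid_color_image(64, 48, (0, 0, 255), dtype=np.uint8)
assert img.shape == (48, 64, 3)
assert khandy.is_solid_color_image(img)
```
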
diff --git a/spaces/NAACL2022/CLIP-Caption-Reward/captioning/models/CaptionModel.py b/spaces/NAACL2022/CLIP-Caption-Reward/captioning/models/CaptionModel.py
deleted file mode 100644
index 221ecd1e173d2e20e0103d4cde328d82bfd6b66c..0000000000000000000000000000000000000000
--- a/spaces/NAACL2022/CLIP-Caption-Reward/captioning/models/CaptionModel.py
+++ /dev/null
@@ -1,407 +0,0 @@
-# This file contains ShowAttendTell and AllImg model
-
-# ShowAttendTell is from Show, Attend and Tell: Neural Image Caption Generation with Visual Attention
-# https://arxiv.org/abs/1502.03044
-
-# AllImg is a model where
-# the image feature is concatenated with the word embedding at every time step as the input of the LSTM
-from __future__ import absolute_import
-from __future__ import division
-from __future__ import print_function
-
-import numpy as np
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-from torch.autograd import *
-from ..utils import misc as utils
-from . import utils as model_utils
-
-
-class CaptionModel(nn.Module):
- def __init__(self):
- super(CaptionModel, self).__init__()
-
- # implements beam search
- # calls beam_step and returns the final set of beams
- # augments log-probabilities with diversity terms when number of groups > 1
-
- def forward(self, *args, **kwargs):
- mode = kwargs.get('mode', 'forward')
- if 'mode' in kwargs:
- del kwargs['mode']
- return getattr(self, '_'+mode)(*args, **kwargs)
-
- def beam_search(self, init_state, init_logprobs, *args, **kwargs):
-
- # function computes the similarity score to be augmented
- def add_diversity(beam_seq_table, logprobs, t, divm, diversity_lambda, bdash):
- local_time = t - divm
- unaug_logprobs = logprobs.clone()
- batch_size = beam_seq_table[0].shape[0]
-
- if divm > 0:
- change = logprobs.new_zeros(batch_size, logprobs.shape[-1])
- for prev_choice in range(divm):
- prev_decisions = beam_seq_table[prev_choice][:, :, local_time] # Nxb
- for prev_labels in range(bdash):
- change.scatter_add_(1, prev_decisions[:, prev_labels].unsqueeze(-1), change.new_ones(batch_size, 1))
-
- if local_time == 0:
- logprobs = logprobs - change * diversity_lambda
- else:
- logprobs = logprobs - self.repeat_tensor(bdash, change) * diversity_lambda
-
- return logprobs, unaug_logprobs
-
-
- # does one step of classical beam search
-
- def beam_step(logprobs, unaug_logprobs, beam_size, t, beam_seq, beam_seq_logprobs, beam_logprobs_sum, state):
- #INPUTS:
- #logprobs: probabilities augmented after diversity N*bxV
- #beam_size: obvious
- #t : time instant
-            #beam_seq : tensor containing the beams
-            #beam_seq_logprobs: tensor containing the beam logprobs
-            #beam_logprobs_sum: tensor containing joint logprobs
-            #OUTPUTS:
- #beam_seq : tensor containing the word indices of the decoded captions Nxbxl
- #beam_seq_logprobs : log-probability of each decision made, NxbxlxV
- #beam_logprobs_sum : joint log-probability of each beam Nxb
-
- batch_size = beam_logprobs_sum.shape[0]
- vocab_size = logprobs.shape[-1]
- logprobs = logprobs.reshape(batch_size, -1, vocab_size) # NxbxV
- if t == 0:
- assert logprobs.shape[1] == 1
- beam_logprobs_sum = beam_logprobs_sum[:, :1]
- candidate_logprobs = beam_logprobs_sum.unsqueeze(-1) + logprobs # beam_logprobs_sum Nxb logprobs is NxbxV
- ys, ix = torch.sort(candidate_logprobs.reshape(candidate_logprobs.shape[0], -1), -1, True)
- ys, ix = ys[:,:beam_size], ix[:,:beam_size]
- beam_ix = ix // vocab_size # Nxb which beam
-            selected_ix = ix % vocab_size # Nxb # which word
- state_ix = (beam_ix + torch.arange(batch_size).type_as(beam_ix).unsqueeze(-1) * logprobs.shape[1]).reshape(-1) # N*b which in Nxb beams
-
-
- if t > 0:
- # gather according to beam_ix
- assert (beam_seq.gather(1, beam_ix.unsqueeze(-1).expand_as(beam_seq)) == beam_seq.reshape(-1, beam_seq.shape[-1])[state_ix].view_as(beam_seq)).all()
- beam_seq = beam_seq.gather(1, beam_ix.unsqueeze(-1).expand_as(beam_seq))
-
- beam_seq_logprobs = beam_seq_logprobs.gather(1, beam_ix.unsqueeze(-1).unsqueeze(-1).expand_as(beam_seq_logprobs))
-
- beam_seq = torch.cat([beam_seq, selected_ix.unsqueeze(-1)], -1) # beam_seq Nxbxl
- beam_logprobs_sum = beam_logprobs_sum.gather(1, beam_ix) + \
- logprobs.reshape(batch_size, -1).gather(1, ix)
- assert (beam_logprobs_sum == ys).all()
- _tmp_beam_logprobs = unaug_logprobs[state_ix].reshape(batch_size, -1, vocab_size)
- beam_logprobs = unaug_logprobs.reshape(batch_size, -1, vocab_size).gather(1, beam_ix.unsqueeze(-1).expand(-1, -1, vocab_size)) # NxbxV
- assert (_tmp_beam_logprobs == beam_logprobs).all()
- beam_seq_logprobs = torch.cat([
- beam_seq_logprobs,
- beam_logprobs.reshape(batch_size, -1, 1, vocab_size)], 2)
-
- new_state = [None for _ in state]
- for _ix in range(len(new_state)):
- # copy over state in previous beam q to new beam at vix
- new_state[_ix] = state[_ix][:, state_ix]
- state = new_state
- return beam_seq,beam_seq_logprobs,beam_logprobs_sum,state
-
- # Start diverse_beam_search
- opt = kwargs['opt']
- temperature = opt.get('temperature', 1) # This should not affect beam search, but will affect dbs
- beam_size = opt.get('beam_size', 10)
- group_size = opt.get('group_size', 1)
- diversity_lambda = opt.get('diversity_lambda', 0.5)
- decoding_constraint = opt.get('decoding_constraint', 0)
- remove_bad_endings = opt.get('remove_bad_endings', 0)
- suppress_UNK = opt.get('suppress_UNK', 0)
- length_penalty = utils.penalty_builder(opt.get('length_penalty', ''))
- bdash = beam_size // group_size # beam per group
-
- batch_size = init_logprobs.shape[0]
- device = init_logprobs.device
- # INITIALIZATIONS
- beam_seq_table = [torch.LongTensor(batch_size, bdash, 0).to(device) for _ in range(group_size)]
- beam_seq_logprobs_table = [torch.FloatTensor(batch_size, bdash, 0, self.vocab_size + 1).to(device) for _ in range(group_size)]
- beam_logprobs_sum_table = [torch.zeros(batch_size, bdash).to(device) for _ in range(group_size)]
-
- # logprobs # logprobs predicted in last time step, shape (beam_size, vocab_size+1)
- done_beams_table = [[[] for __ in range(group_size)] for _ in range(batch_size)]
- # state_table = [list(torch.unbind(_)) for _ in torch.stack(init_state).chunk(group_size, 2)]
- # state_table = list(zip(*[_.reshape(-1, batch_size * bdash, group_size, *_.shape[2:]).chunk(group_size, 2) for _ in init_state]))
- state_table = [[_.clone() for _ in init_state] for _ in range(group_size)]
- # logprobs_table = list(init_logprobs.reshape(batch_size * bdash, group_size, -1).chunk(group_size, 0))
- logprobs_table = [init_logprobs.clone() for _ in range(group_size)]
- # END INIT
-
- # Chunk elements in the args
- args = list(args)
- args = model_utils.split_tensors(group_size, args) # For each arg, turn (Bbg)x... to (Bb)x(g)x...
- if self.__class__.__name__ == 'AttEnsemble':
- args = [[[args[j][i][k] for i in range(len(self.models))] for j in range(len(args))] for k in range(group_size)] # group_name, arg_name, model_name
- else:
- args = [[args[i][j] for i in range(len(args))] for j in range(group_size)]
-
- for t in range(self.seq_length + group_size - 1):
- for divm in range(group_size):
- if t >= divm and t <= self.seq_length + divm - 1:
- # add diversity
- logprobs = logprobs_table[divm]
- # suppress previous word
- if decoding_constraint and t-divm > 0:
- logprobs.scatter_(1, beam_seq_table[divm][:, :, t-divm-1].reshape(-1, 1).to(device), float('-inf'))
- if remove_bad_endings and t-divm > 0:
- logprobs[torch.from_numpy(np.isin(beam_seq_table[divm][:, :, t-divm-1].cpu().numpy(), self.bad_endings_ix)).reshape(-1), 0] = float('-inf')
- # suppress UNK tokens in the decoding
- if suppress_UNK and hasattr(self, 'vocab') and self.vocab[str(logprobs.size(1)-1)] == 'UNK':
- logprobs[:,logprobs.size(1)-1] = logprobs[:, logprobs.size(1)-1] - 1000
- # diversity is added here
- # the function directly modifies the logprobs values and hence, we need to return
- # the unaugmented ones for sorting the candidates in the end. # for historical
- # reasons :-)
- logprobs, unaug_logprobs = add_diversity(beam_seq_table,logprobs,t,divm,diversity_lambda,bdash)
-
- # infer new beams
- beam_seq_table[divm],\
- beam_seq_logprobs_table[divm],\
- beam_logprobs_sum_table[divm],\
- state_table[divm] = beam_step(logprobs,
- unaug_logprobs,
- bdash,
- t-divm,
- beam_seq_table[divm],
- beam_seq_logprobs_table[divm],
- beam_logprobs_sum_table[divm],
- state_table[divm])
-
- # if time's up... or if end token is reached then copy beams
- for b in range(batch_size):
- is_end = beam_seq_table[divm][b, :, t-divm] == self.eos_idx
- assert beam_seq_table[divm].shape[-1] == t-divm+1
- if t == self.seq_length + divm - 1:
- is_end.fill_(1)
- for vix in range(bdash):
- if is_end[vix]:
- final_beam = {
- 'seq': beam_seq_table[divm][b, vix].clone(),
- 'logps': beam_seq_logprobs_table[divm][b, vix].clone(),
- 'unaug_p': beam_seq_logprobs_table[divm][b, vix].sum().item(),
- 'p': beam_logprobs_sum_table[divm][b, vix].item()
- }
- final_beam['p'] = length_penalty(t-divm+1, final_beam['p'])
- done_beams_table[b][divm].append(final_beam)
- beam_logprobs_sum_table[divm][b, is_end] -= 1000
-
- # move the current group one step forward in time
-
- it = beam_seq_table[divm][:, :, t-divm].reshape(-1).to(logprobs.device)
- logprobs_table[divm], state_table[divm] = self.get_logprobs_state(it, *(args[divm] + [state_table[divm]]))
- logprobs_table[divm] = F.log_softmax(logprobs_table[divm] / temperature, dim=-1)
-
- # all beams are sorted by their log-probabilities
- done_beams_table = [[sorted(done_beams_table[b][i], key=lambda x: -x['p'])[:bdash] for i in range(group_size)] for b in range(batch_size)]
- done_beams = [sum(_, []) for _ in done_beams_table]
- return done_beams
-
- def old_beam_search(self, init_state, init_logprobs, *args, **kwargs):
-
- # function computes the similarity score to be augmented
- def add_diversity(beam_seq_table, logprobsf, t, divm, diversity_lambda, bdash):
- local_time = t - divm
- unaug_logprobsf = logprobsf.clone()
- for prev_choice in range(divm):
- prev_decisions = beam_seq_table[prev_choice][local_time]
- for sub_beam in range(bdash):
- for prev_labels in range(bdash):
- logprobsf[sub_beam][prev_decisions[prev_labels]] = logprobsf[sub_beam][prev_decisions[prev_labels]] - diversity_lambda
- return unaug_logprobsf
-
- # does one step of classical beam search
-
- def beam_step(logprobsf, unaug_logprobsf, beam_size, t, beam_seq, beam_seq_logprobs, beam_logprobs_sum, state):
- #INPUTS:
- #logprobsf: probabilities augmented after diversity
- #beam_size: obvious
- #t : time instant
-            #beam_seq : tensor containing the beams
-            #beam_seq_logprobs: tensor containing the beam logprobs
-            #beam_logprobs_sum: tensor containing joint logprobs
-            #OUTPUTS:
- #beam_seq : tensor containing the word indices of the decoded captions
- #beam_seq_logprobs : log-probability of each decision made, same size as beam_seq
- #beam_logprobs_sum : joint log-probability of each beam
-
- ys,ix = torch.sort(logprobsf,1,True)
- candidates = []
- cols = min(beam_size, ys.size(1))
- rows = beam_size
- if t == 0:
- rows = 1
- for c in range(cols): # for each column (word, essentially)
- for q in range(rows): # for each beam expansion
- #compute logprob of expanding beam q with word in (sorted) position c
- local_logprob = ys[q,c].item()
- candidate_logprob = beam_logprobs_sum[q] + local_logprob
- # local_unaug_logprob = unaug_logprobsf[q,ix[q,c]]
- candidates.append({'c':ix[q,c], 'q':q, 'p':candidate_logprob, 'r':unaug_logprobsf[q]})
- candidates = sorted(candidates, key=lambda x: -x['p'])
-
- new_state = [_.clone() for _ in state]
- #beam_seq_prev, beam_seq_logprobs_prev
- if t >= 1:
-                #we'll need these as reference when we fork beams around
- beam_seq_prev = beam_seq[:t].clone()
- beam_seq_logprobs_prev = beam_seq_logprobs[:t].clone()
- for vix in range(beam_size):
- v = candidates[vix]
- #fork beam index q into index vix
- if t >= 1:
- beam_seq[:t, vix] = beam_seq_prev[:, v['q']]
- beam_seq_logprobs[:t, vix] = beam_seq_logprobs_prev[:, v['q']]
- #rearrange recurrent states
- for state_ix in range(len(new_state)):
- # copy over state in previous beam q to new beam at vix
- new_state[state_ix][:, vix] = state[state_ix][:, v['q']] # dimension one is time step
- #append new end terminal at the end of this beam
- beam_seq[t, vix] = v['c'] # c'th word is the continuation
- beam_seq_logprobs[t, vix] = v['r'] # the raw logprob here
- beam_logprobs_sum[vix] = v['p'] # the new (sum) logprob along this beam
- state = new_state
- return beam_seq,beam_seq_logprobs,beam_logprobs_sum,state,candidates
-
- # Start diverse_beam_search
- opt = kwargs['opt']
- temperature = opt.get('temperature', 1) # This should not affect beam search, but will affect dbs
- beam_size = opt.get('beam_size', 10)
- group_size = opt.get('group_size', 1)
- diversity_lambda = opt.get('diversity_lambda', 0.5)
- decoding_constraint = opt.get('decoding_constraint', 0)
- remove_bad_endings = opt.get('remove_bad_endings', 0)
- suppress_UNK = opt.get('suppress_UNK', 0)
- length_penalty = utils.penalty_builder(opt.get('length_penalty', ''))
- bdash = beam_size // group_size # beam per group
-
- # INITIALIZATIONS
- beam_seq_table = [torch.LongTensor(self.seq_length, bdash).zero_() for _ in range(group_size)]
- beam_seq_logprobs_table = [torch.FloatTensor(self.seq_length, bdash, self.vocab_size + 1).zero_() for _ in range(group_size)]
- beam_logprobs_sum_table = [torch.zeros(bdash) for _ in range(group_size)]
-
- # logprobs # logprobs predicted in last time step, shape (beam_size, vocab_size+1)
- done_beams_table = [[] for _ in range(group_size)]
- # state_table = [list(torch.unbind(_)) for _ in torch.stack(init_state).chunk(group_size, 2)]
- state_table = list(zip(*[_.chunk(group_size, 1) for _ in init_state]))
- logprobs_table = list(init_logprobs.chunk(group_size, 0))
- # END INIT
-
- # Chunk elements in the args
- args = list(args)
- if self.__class__.__name__ == 'AttEnsemble':
- args = [[_.chunk(group_size) if _ is not None else [None]*group_size for _ in args_] for args_ in args] # arg_name, model_name, group_name
- args = [[[args[j][i][k] for i in range(len(self.models))] for j in range(len(args))] for k in range(group_size)] # group_name, arg_name, model_name
- else:
- args = [_.chunk(group_size) if _ is not None else [None]*group_size for _ in args]
- args = [[args[i][j] for i in range(len(args))] for j in range(group_size)]
-
- for t in range(self.seq_length + group_size - 1):
- for divm in range(group_size):
- if t >= divm and t <= self.seq_length + divm - 1:
- # add diversity
- logprobsf = logprobs_table[divm]
- # suppress previous word
- if decoding_constraint and t-divm > 0:
- logprobsf.scatter_(1, beam_seq_table[divm][t-divm-1].unsqueeze(1).to(logprobsf.device), float('-inf'))
- if remove_bad_endings and t-divm > 0:
- logprobsf[torch.from_numpy(np.isin(beam_seq_table[divm][t-divm-1].cpu().numpy(), self.bad_endings_ix)), 0] = float('-inf')
- # suppress UNK tokens in the decoding
- if suppress_UNK and hasattr(self, 'vocab') and self.vocab[str(logprobsf.size(1)-1)] == 'UNK':
- logprobsf[:,logprobsf.size(1)-1] = logprobsf[:, logprobsf.size(1)-1] - 1000
- # diversity is added here
- # the function directly modifies the logprobsf values and hence, we need to return
- # the unaugmented ones for sorting the candidates in the end. # for historical
- # reasons :-)
- unaug_logprobsf = add_diversity(beam_seq_table,logprobsf,t,divm,diversity_lambda,bdash)
-
- # infer new beams
- beam_seq_table[divm],\
- beam_seq_logprobs_table[divm],\
- beam_logprobs_sum_table[divm],\
- state_table[divm],\
- candidates_divm = beam_step(logprobsf,
- unaug_logprobsf,
- bdash,
- t-divm,
- beam_seq_table[divm],
- beam_seq_logprobs_table[divm],
- beam_logprobs_sum_table[divm],
- state_table[divm])
-
- # if time's up... or if end token is reached then copy beams
- for vix in range(bdash):
- if beam_seq_table[divm][t-divm,vix] == self.eos_idx or t == self.seq_length + divm - 1:
- final_beam = {
- 'seq': beam_seq_table[divm][:, vix].clone(),
- 'logps': beam_seq_logprobs_table[divm][:, vix].clone(),
- 'unaug_p': beam_seq_logprobs_table[divm][:, vix].sum().item(),
- 'p': beam_logprobs_sum_table[divm][vix].item()
- }
- final_beam['p'] = length_penalty(t-divm+1, final_beam['p'])
- done_beams_table[divm].append(final_beam)
- # don't continue beams from finished sequences
- beam_logprobs_sum_table[divm][vix] = -1000
-
- # move the current group one step forward in time
-
- it = beam_seq_table[divm][t-divm].to(logprobsf.device)
- logprobs_table[divm], state_table[divm] = self.get_logprobs_state(it, *(args[divm] + [state_table[divm]]))
- logprobs_table[divm] = F.log_softmax(logprobs_table[divm] / temperature, dim=-1)
-
- # all beams are sorted by their log-probabilities
- done_beams_table = [sorted(done_beams_table[i], key=lambda x: -x['p'])[:bdash] for i in range(group_size)]
- done_beams = sum(done_beams_table, [])
- return done_beams
-
- def sample_next_word(self, logprobs, sample_method, temperature):
- if sample_method == 'greedy':
- sampleLogprobs, it = torch.max(logprobs.data, 1)
- it = it.view(-1).long()
- elif sample_method == 'gumbel': # gumbel softmax
- # ref: https://gist.github.com/yzh119/fd2146d2aeb329d067568a493b20172f
- def sample_gumbel(shape, eps=1e-20):
- U = torch.rand(shape).to(logprobs.device)
- return -torch.log(-torch.log(U + eps) + eps)
- def gumbel_softmax_sample(logits, temperature):
- y = logits + sample_gumbel(logits.size())
- return F.log_softmax(y / temperature, dim=-1)
- _logprobs = gumbel_softmax_sample(logprobs, temperature)
- _, it = torch.max(_logprobs.data, 1)
- sampleLogprobs = logprobs.gather(1, it.unsqueeze(1)) # gather the logprobs at sampled positions
- else:
- logprobs = logprobs / temperature
- if sample_method.startswith('top'): # topk sampling
- top_num = float(sample_method[3:])
- if 0 < top_num < 1:
-                    # nucleus sampling, from "The Curious Case of Neural Text Degeneration"
- probs = F.softmax(logprobs, dim=1)
- sorted_probs, sorted_indices = torch.sort(probs, descending=True, dim=1)
- _cumsum = sorted_probs.cumsum(1)
- mask = _cumsum < top_num
- mask = torch.cat([torch.ones_like(mask[:,:1]), mask[:,:-1]], 1)
- sorted_probs = sorted_probs * mask.to(sorted_probs)
- sorted_probs = sorted_probs / sorted_probs.sum(1, keepdim=True)
- logprobs.scatter_(1, sorted_indices, sorted_probs.log())
- else:
- the_k = int(top_num)
- tmp = torch.empty_like(logprobs).fill_(float('-inf'))
- topk, indices = torch.topk(logprobs, the_k, dim=1)
- tmp = tmp.scatter(1, indices, topk)
- logprobs = tmp
- it = torch.distributions.Categorical(logits=logprobs.detach()).sample()
- sampleLogprobs = logprobs.gather(1, it.unsqueeze(1)) # gather the logprobs at sampled positions
- return it, sampleLogprobs
-
-
- def decode_sequence(self, seq):
- return utils.decode_sequence(self.vocab, seq)
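
As an aside, here is a self-contained sketch of the top-p ("nucleus") filtering step that `sample_next_word` applies for fractional `top_num` values; the toy distribution and the 0.9 threshold are invented for illustration.

```python
import torch
import torch.nn.functional as F

top_p = 0.9
logprobs = torch.log(torch.tensor([[0.5, 0.3, 0.15, 0.05]]))  # toy next-token log-probs

probs = F.softmax(logprobs, dim=1)                       # back to probabilities
sorted_probs, sorted_indices = torch.sort(probs, descending=True, dim=1)
cumsum = sorted_probs.cumsum(1)
mask = cumsum < top_p                                    # tokens inside the nucleus
mask = torch.cat([torch.ones_like(mask[:, :1]), mask[:, :-1]], 1)  # always keep the best token
kept = sorted_probs * mask.to(sorted_probs)              # -> [0.5, 0.3, 0.15, 0.0]
kept = kept / kept.sum(1, keepdim=True)                  # renormalise before sampling
```
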
diff --git a/spaces/NAACL2022/CLIP-Caption-Reward/save/README.md b/spaces/NAACL2022/CLIP-Caption-Reward/save/README.md
deleted file mode 100644
index 91547b46ffedc91d209fec4c7ac0b8cfb9e447de..0000000000000000000000000000000000000000
--- a/spaces/NAACL2022/CLIP-Caption-Reward/save/README.md
+++ /dev/null
@@ -1 +0,0 @@
-Directory for checkpoints
\ No newline at end of file
diff --git a/spaces/NCTCMumbai/NCTC/models/official/nlp/tasks/sentence_prediction.py b/spaces/NCTCMumbai/NCTC/models/official/nlp/tasks/sentence_prediction.py
deleted file mode 100644
index b2eb0bf47de273408459e35cf45ff01ac69a9d2c..0000000000000000000000000000000000000000
--- a/spaces/NCTCMumbai/NCTC/models/official/nlp/tasks/sentence_prediction.py
+++ /dev/null
@@ -1,190 +0,0 @@
-# Lint as: python3
-# Copyright 2020 The TensorFlow Authors. All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-# ==============================================================================
-"""Sentence prediction (classification) task."""
-from absl import logging
-import dataclasses
-import numpy as np
-from scipy import stats
-from sklearn import metrics as sklearn_metrics
-import tensorflow as tf
-import tensorflow_hub as hub
-
-from official.core import base_task
-from official.modeling.hyperparams import config_definitions as cfg
-from official.nlp.configs import bert
-from official.nlp.data import sentence_prediction_dataloader
-from official.nlp.modeling import losses as loss_lib
-from official.nlp.tasks import utils
-
-
-@dataclasses.dataclass
-class SentencePredictionConfig(cfg.TaskConfig):
- """The model config."""
- # At most one of `init_checkpoint` and `hub_module_url` can
- # be specified.
- init_checkpoint: str = ''
- hub_module_url: str = ''
- metric_type: str = 'accuracy'
- network: bert.BertPretrainerConfig = bert.BertPretrainerConfig(
- num_masked_tokens=0, # No masked language modeling head.
- cls_heads=[
- bert.ClsHeadConfig(
- inner_dim=768,
- num_classes=3,
- dropout_rate=0.1,
- name='sentence_prediction')
- ])
- train_data: cfg.DataConfig = cfg.DataConfig()
- validation_data: cfg.DataConfig = cfg.DataConfig()
-
-
-@base_task.register_task_cls(SentencePredictionConfig)
-class SentencePredictionTask(base_task.Task):
- """Task object for sentence_prediction."""
-
- def __init__(self, params=cfg.TaskConfig):
- super(SentencePredictionTask, self).__init__(params)
- if params.hub_module_url and params.init_checkpoint:
-      raise ValueError('At most one of `hub_module_url` and '
-                       '`init_checkpoint` can be specified.')
- if params.hub_module_url:
- self._hub_module = hub.load(params.hub_module_url)
- else:
- self._hub_module = None
- self.metric_type = params.metric_type
-
- def build_model(self):
- if self._hub_module:
- encoder_from_hub = utils.get_encoder_from_hub(self._hub_module)
- return bert.instantiate_bertpretrainer_from_cfg(
- self.task_config.network, encoder_network=encoder_from_hub)
- else:
- return bert.instantiate_bertpretrainer_from_cfg(self.task_config.network)
-
- def build_losses(self, labels, model_outputs, aux_losses=None) -> tf.Tensor:
- loss = loss_lib.weighted_sparse_categorical_crossentropy_loss(
- labels=labels,
- predictions=tf.nn.log_softmax(
- tf.cast(model_outputs['sentence_prediction'], tf.float32), axis=-1))
-
- if aux_losses:
- loss += tf.add_n(aux_losses)
- return loss
-
- def build_inputs(self, params, input_context=None):
- """Returns tf.data.Dataset for sentence_prediction task."""
- if params.input_path == 'dummy':
-
- def dummy_data(_):
- dummy_ids = tf.zeros((1, params.seq_length), dtype=tf.int32)
- x = dict(
- input_word_ids=dummy_ids,
- input_mask=dummy_ids,
- input_type_ids=dummy_ids)
- y = tf.ones((1, 1), dtype=tf.int32)
- return (x, y)
-
- dataset = tf.data.Dataset.range(1)
- dataset = dataset.repeat()
- dataset = dataset.map(
- dummy_data, num_parallel_calls=tf.data.experimental.AUTOTUNE)
- return dataset
-
- return sentence_prediction_dataloader.SentencePredictionDataLoader(
- params).load(input_context)
-
- def build_metrics(self, training=None):
- del training
- metrics = [tf.keras.metrics.SparseCategoricalAccuracy(name='cls_accuracy')]
- return metrics
-
- def process_metrics(self, metrics, labels, model_outputs):
- for metric in metrics:
- metric.update_state(labels, model_outputs['sentence_prediction'])
-
- def process_compiled_metrics(self, compiled_metrics, labels, model_outputs):
- compiled_metrics.update_state(labels, model_outputs['sentence_prediction'])
-
- def validation_step(self, inputs, model: tf.keras.Model, metrics=None):
- if self.metric_type == 'accuracy':
- return super(SentencePredictionTask,
- self).validation_step(inputs, model, metrics)
- features, labels = inputs
- outputs = self.inference_step(features, model)
- loss = self.build_losses(
- labels=labels, model_outputs=outputs, aux_losses=model.losses)
- if self.metric_type == 'matthews_corrcoef':
- return {
- self.loss:
- loss,
- 'sentence_prediction':
- tf.expand_dims(
- tf.math.argmax(outputs['sentence_prediction'], axis=1),
- axis=0),
- 'labels':
- labels,
- }
- if self.metric_type == 'pearson_spearman_corr':
- return {
- self.loss: loss,
- 'sentence_prediction': outputs['sentence_prediction'],
- 'labels': labels,
- }
-
- def aggregate_logs(self, state=None, step_outputs=None):
- if state is None:
- state = {'sentence_prediction': [], 'labels': []}
- state['sentence_prediction'].append(
- np.concatenate([v.numpy() for v in step_outputs['sentence_prediction']],
- axis=0))
- state['labels'].append(
- np.concatenate([v.numpy() for v in step_outputs['labels']], axis=0))
- return state
-
- def reduce_aggregated_logs(self, aggregated_logs):
- if self.metric_type == 'matthews_corrcoef':
- preds = np.concatenate(aggregated_logs['sentence_prediction'], axis=0)
- labels = np.concatenate(aggregated_logs['labels'], axis=0)
- return {
- self.metric_type: sklearn_metrics.matthews_corrcoef(preds, labels)
- }
- if self.metric_type == 'pearson_spearman_corr':
- preds = np.concatenate(aggregated_logs['sentence_prediction'], axis=0)
- labels = np.concatenate(aggregated_logs['labels'], axis=0)
- pearson_corr = stats.pearsonr(preds, labels)[0]
- spearman_corr = stats.spearmanr(preds, labels)[0]
- corr_metric = (pearson_corr + spearman_corr) / 2
- return {self.metric_type: corr_metric}
-
- def initialize(self, model):
- """Load a pretrained checkpoint (if exists) and then train from iter 0."""
- ckpt_dir_or_file = self.task_config.init_checkpoint
- if tf.io.gfile.isdir(ckpt_dir_or_file):
- ckpt_dir_or_file = tf.train.latest_checkpoint(ckpt_dir_or_file)
- if not ckpt_dir_or_file:
- return
-
- pretrain2finetune_mapping = {
- 'encoder':
- model.checkpoint_items['encoder'],
- 'next_sentence.pooler_dense':
- model.checkpoint_items['sentence_prediction.pooler_dense'],
- }
- ckpt = tf.train.Checkpoint(**pretrain2finetune_mapping)
- status = ckpt.restore(ckpt_dir_or_file)
- status.expect_partial().assert_existing_objects_matched()
- logging.info('finished loading pretrained checkpoint from %s',
- ckpt_dir_or_file)
diff --git a/spaces/NSect/VALL-E-X/utils/prompt_making.py b/spaces/NSect/VALL-E-X/utils/prompt_making.py
deleted file mode 100644
index 93e4a3d647052df4899253fea41be22f09e006b8..0000000000000000000000000000000000000000
--- a/spaces/NSect/VALL-E-X/utils/prompt_making.py
+++ /dev/null
@@ -1,115 +0,0 @@
-import os
-import torch
-import torchaudio
-import logging
-import langid
-import whisper
-langid.set_languages(['en', 'zh', 'ja'])
-
-import numpy as np
-from data.tokenizer import (
- AudioTokenizer,
- tokenize_audio,
-)
-from data.collation import get_text_token_collater
-from utils.g2p import PhonemeBpeTokenizer
-
-from macros import *
-
-text_tokenizer = PhonemeBpeTokenizer(tokenizer_path="./utils/g2p/bpe_69.json")
-text_collater = get_text_token_collater()
-
-device = torch.device("cpu")
-if torch.cuda.is_available():
- device = torch.device("cuda", 0)
-
-codec = AudioTokenizer(device)
-
-whisper_model = None
-
-@torch.no_grad()
-def transcribe_one(model, audio_path):
- # load audio and pad/trim it to fit 30 seconds
- audio = whisper.load_audio(audio_path)
- audio = whisper.pad_or_trim(audio)
-
- # make log-Mel spectrogram and move to the same device as the model
- mel = whisper.log_mel_spectrogram(audio).to(model.device)
-
- # detect the spoken language
- _, probs = model.detect_language(mel)
- print(f"Detected language: {max(probs, key=probs.get)}")
- lang = max(probs, key=probs.get)
- # decode the audio
- options = whisper.DecodingOptions(temperature=1.0, best_of=5, fp16=False if device == torch.device("cpu") else True, sample_len=150)
- result = whisper.decode(model, mel, options)
-
- # print the recognized text
- print(result.text)
-
- text_pr = result.text
- if text_pr.strip(" ")[-1] not in "?!.,。,?!。、":
- text_pr += "."
- return lang, text_pr
-
-def make_prompt(name, audio_prompt_path, transcript=None):
- global model, text_collater, text_tokenizer, codec
- wav_pr, sr = torchaudio.load(audio_prompt_path)
- # check length
- if wav_pr.size(-1) / sr > 15:
- raise ValueError(f"Prompt too long, expect length below 15 seconds, got {wav_pr / sr} seconds.")
- if wav_pr.size(0) == 2:
- wav_pr = wav_pr.mean(0, keepdim=True)
- text_pr, lang_pr = make_transcript(name, wav_pr, sr, transcript)
-
- # tokenize audio
- encoded_frames = tokenize_audio(codec, (wav_pr, sr))
- audio_tokens = encoded_frames[0][0].transpose(2, 1).cpu().numpy()
-
- # tokenize text
- phonemes, langs = text_tokenizer.tokenize(text=f"{text_pr}".strip())
- text_tokens, enroll_x_lens = text_collater(
- [
- phonemes
- ]
- )
-
- message = f"Detected language: {lang_pr}\n Detected text {text_pr}\n"
-
- # save as npz file
- save_path = os.path.join("./customs/", f"{name}.npz")
- np.savez(save_path, audio_tokens=audio_tokens, text_tokens=text_tokens, lang_code=lang2code[lang_pr])
- logging.info(f"Successful. Prompt saved to {save_path}")
-
-
-def make_transcript(name, wav, sr, transcript=None):
-
- if not isinstance(wav, torch.FloatTensor):
- wav = torch.tensor(wav)
- if wav.abs().max() > 1:
- wav /= wav.abs().max()
- if wav.size(-1) == 2:
- wav = wav.mean(-1, keepdim=False)
- if wav.ndim == 1:
- wav = wav.unsqueeze(0)
- assert wav.ndim and wav.size(0) == 1
- if transcript is None or transcript == "":
- logging.info("Transcript not given, using Whisper...")
- global whisper_model
- if whisper_model is None:
- whisper_model = whisper.load_model("medium")
- whisper_model.to(device)
- torchaudio.save(f"./prompts/{name}.wav", wav, sr)
- lang, text = transcribe_one(whisper_model, f"./prompts/{name}.wav")
- lang_token = lang2token[lang]
- text = lang_token + text + lang_token
- os.remove(f"./prompts/{name}.wav")
- whisper_model.cpu()
- else:
- text = transcript
- lang, _ = langid.classify(text)
- lang_token = lang2token[lang]
- text = lang_token + text + lang_token
-
- torch.cuda.empty_cache()
- return text, lang
\ No newline at end of file
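
For context, a hedged usage sketch of the prompt-making entry point defined above; the speaker names and audio paths are invented, and the `utils.prompt_making` import path is assumed from the file location. The function writes the resulting tokens to `./customs/<name>.npz`, as shown in `make_prompt`.

```python
# Illustrative only -- assumes the module above is importable as utils.prompt_making.
from utils.prompt_making import make_prompt

# With an explicit transcript, Whisper is skipped and langid picks the language.
make_prompt(name="alice", audio_prompt_path="./alice_sample.wav",
            transcript="Welcome to the demo.")

# With transcript=None (the default), Whisper transcribes the clip first.
make_prompt(name="bob", audio_prompt_path="./bob_sample.wav")
```
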
diff --git a/spaces/NiuTaipu/moe-tts-test01/text/shanghainese.py b/spaces/NiuTaipu/moe-tts-test01/text/shanghainese.py
deleted file mode 100644
index 1c28c17d0dc0d920fd222c909a53d703c95e043b..0000000000000000000000000000000000000000
--- a/spaces/NiuTaipu/moe-tts-test01/text/shanghainese.py
+++ /dev/null
@@ -1,64 +0,0 @@
-import re
-import cn2an
-import opencc
-
-
-converter = opencc.OpenCC('chinese_dialect_lexicons/zaonhe')
-
-# List of (Latin alphabet, ipa) pairs:
-_latin_to_ipa = [(re.compile('%s' % x[0]), x[1]) for x in [
- ('A', 'ᴇ'),
- ('B', 'bi'),
- ('C', 'si'),
- ('D', 'di'),
- ('E', 'i'),
- ('F', 'ᴇf'),
- ('G', 'dʑi'),
- ('H', 'ᴇtɕʰ'),
- ('I', 'ᴀi'),
- ('J', 'dʑᴇ'),
- ('K', 'kʰᴇ'),
- ('L', 'ᴇl'),
- ('M', 'ᴇm'),
- ('N', 'ᴇn'),
- ('O', 'o'),
- ('P', 'pʰi'),
- ('Q', 'kʰiu'),
- ('R', 'ᴀl'),
- ('S', 'ᴇs'),
- ('T', 'tʰi'),
- ('U', 'ɦiu'),
- ('V', 'vi'),
- ('W', 'dᴀbɤliu'),
- ('X', 'ᴇks'),
- ('Y', 'uᴀi'),
- ('Z', 'zᴇ')
-]]
-
-
-def _number_to_shanghainese(num):
- num = cn2an.an2cn(num).replace('一十','十').replace('二十', '廿').replace('二', '两')
- return re.sub(r'((?:^|[^三四五六七八九])十|廿)两', r'\1二', num)
-
-
-def number_to_shanghainese(text):
- return re.sub(r'\d+(?:\.?\d+)?', lambda x: _number_to_shanghainese(x.group()), text)
-
-
-def latin_to_ipa(text):
- for regex, replacement in _latin_to_ipa:
- text = re.sub(regex, replacement, text)
- return text
-
-
-def shanghainese_to_ipa(text):
- text = number_to_shanghainese(text.upper())
- text = converter.convert(text).replace('-','').replace('$',' ')
- text = re.sub(r'[A-Z]', lambda x: latin_to_ipa(x.group())+' ', text)
- text = re.sub(r'[、;:]', ',', text)
- text = re.sub(r'\s*,\s*', ', ', text)
- text = re.sub(r'\s*。\s*', '. ', text)
- text = re.sub(r'\s*?\s*', '? ', text)
- text = re.sub(r'\s*!\s*', '! ', text)
- text = re.sub(r'\s*$', '', text)
- return text
diff --git a/spaces/OAOA/DifFace/facelib/utils/misc.py b/spaces/OAOA/DifFace/facelib/utils/misc.py
deleted file mode 100644
index 4e8c7c0a2bd261135ae8c52c20c1ab2072d1049f..0000000000000000000000000000000000000000
--- a/spaces/OAOA/DifFace/facelib/utils/misc.py
+++ /dev/null
@@ -1,138 +0,0 @@
-import cv2
-import os
-import os.path as osp
-import torch
-from torch.hub import download_url_to_file, get_dir
-from urllib.parse import urlparse
-import gdown
-
-
-ROOT_DIR = os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
-
-
-def download_pretrained_models(file_ids, save_path_root):
- os.makedirs(save_path_root, exist_ok=True)
-
- for file_name, file_id in file_ids.items():
- file_url = 'https://drive.google.com/uc?id='+file_id
- save_path = osp.abspath(osp.join(save_path_root, file_name))
- if osp.exists(save_path):
-            user_response = input(f'{file_name} already exists. Do you want to overwrite it? Y/N\n')
-            if user_response.lower() == 'y':
-                print(f'Overwriting {file_name} at {save_path}')
- gdown.download(file_url, save_path, quiet=False)
- elif user_response.lower() == 'n':
- print(f'Skipping {file_name}')
- else:
- raise ValueError('Wrong input. Only accepts Y/N.')
- else:
- print(f'Downloading {file_name} to {save_path}')
- gdown.download(file_url, save_path, quiet=False)
-
-
-def imwrite(img, file_path, params=None, auto_mkdir=True):
- """Write image to file.
-
- Args:
- img (ndarray): Image array to be written.
- file_path (str): Image file path.
- params (None or list): Same as opencv's :func:`imwrite` interface.
- auto_mkdir (bool): If the parent folder of `file_path` does not exist,
- whether to create it automatically.
-
- Returns:
- bool: Successful or not.
- """
- if auto_mkdir:
- dir_name = os.path.abspath(os.path.dirname(file_path))
- os.makedirs(dir_name, exist_ok=True)
- return cv2.imwrite(file_path, img, params)
-
-
-def img2tensor(imgs, bgr2rgb=True, float32=True):
- """Numpy array to tensor.
-
- Args:
- imgs (list[ndarray] | ndarray): Input images.
- bgr2rgb (bool): Whether to change bgr to rgb.
- float32 (bool): Whether to change to float32.
-
- Returns:
- list[tensor] | tensor: Tensor images. If returned results only have
- one element, just return tensor.
- """
-
- def _totensor(img, bgr2rgb, float32):
- if img.shape[2] == 3 and bgr2rgb:
- if img.dtype == 'float64':
- img = img.astype('float32')
- img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
- img = torch.from_numpy(img.transpose(2, 0, 1))
- if float32:
- img = img.float()
- return img
-
- if isinstance(imgs, list):
- return [_totensor(img, bgr2rgb, float32) for img in imgs]
- else:
- return _totensor(imgs, bgr2rgb, float32)
-
-
-def load_file_from_url(url, model_dir=None, progress=True, file_name=None):
- """Ref:https://github.com/1adrianb/face-alignment/blob/master/face_alignment/utils.py
- """
- if model_dir is None:
- hub_dir = get_dir()
- model_dir = os.path.join(hub_dir, 'checkpoints')
-
- os.makedirs(os.path.join(ROOT_DIR, model_dir), exist_ok=True)
-
- parts = urlparse(url)
- filename = os.path.basename(parts.path)
- if file_name is not None:
- filename = file_name
- cached_file = os.path.abspath(os.path.join(ROOT_DIR, model_dir, filename))
- if not os.path.exists(cached_file):
- print(f'Downloading: "{url}" to {cached_file}\n')
- download_url_to_file(url, cached_file, hash_prefix=None, progress=progress)
- return cached_file
-
-
-def scandir(dir_path, suffix=None, recursive=False, full_path=False):
- """Scan a directory to find the interested files.
- Args:
- dir_path (str): Path of the directory.
- suffix (str | tuple(str), optional): File suffix that we are
- interested in. Default: None.
- recursive (bool, optional): If set to True, recursively scan the
- directory. Default: False.
- full_path (bool, optional): If set to True, include the dir_path.
- Default: False.
- Returns:
- A generator for all the interested files with relative paths.
- """
-
- if (suffix is not None) and not isinstance(suffix, (str, tuple)):
- raise TypeError('"suffix" must be a string or tuple of strings')
-
- root = dir_path
-
- def _scandir(dir_path, suffix, recursive):
- for entry in os.scandir(dir_path):
- if not entry.name.startswith('.') and entry.is_file():
- if full_path:
- return_path = entry.path
- else:
- return_path = osp.relpath(entry.path, root)
-
- if suffix is None:
- yield return_path
- elif return_path.endswith(suffix):
- yield return_path
- else:
- if recursive:
- yield from _scandir(entry.path, suffix=suffix, recursive=recursive)
- else:
- continue
-
- return _scandir(dir_path, suffix=suffix, recursive=recursive)
diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/.github/ISSUE_TEMPLATE/feature_request.md b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/.github/ISSUE_TEMPLATE/feature_request.md
deleted file mode 100644
index 93c8668041f8a7af29e4c11e905d8b56b946dd51..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/.github/ISSUE_TEMPLATE/feature_request.md
+++ /dev/null
@@ -1,24 +0,0 @@
----
-name: 🚀 Feature Request
-about: Submit a proposal/request for a new feature
-labels: 'enhancement, help wanted, needs triage'
----
-
-## 🚀 Feature Request
-
-
-### Motivation
-
-
-
-### Pitch
-
-
-
-### Alternatives
-
-
-
-### Additional context
-
-
diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/fully_sharded_data_parallel/README.md b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/fully_sharded_data_parallel/README.md
deleted file mode 100644
index b9e44fef48bee5faeee27b3d1d1b1eb96b6a477f..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/fully_sharded_data_parallel/README.md
+++ /dev/null
@@ -1,177 +0,0 @@
-# Fully Sharded Data Parallel (FSDP)
-
-## Overview
-Recent work by [Microsoft](https://arxiv.org/abs/1910.02054) and
-[Google](https://arxiv.org/abs/2004.13336) has shown that data parallel
-training can be made significantly more efficient by sharding the model
-parameters and optimizer state across data parallel workers. These ideas are
-encapsulated in the new **`FullyShardedDataParallel` (FSDP)** wrapper provided
-by [fairscale](https://github.com/facebookresearch/fairscale/).
-
-Compared to PyTorch DDP:
-* FSDP produces the same results as PyTorch DDP (it's still synchronous data parallel training)
-* FSDP shards parameters (FP16 + FP32) and optimizer state across data parallel GPUs
-* FSDP is faster than PyTorch DDP because the optimizer step is sharded, and the communication can be overlapped with the forward pass
-* FSDP enables training 13B parameter models on 8 GPUs and 175B parameter models on 128 GPUs
-
-FSDP is fully supported in fairseq via the following new arguments:
-* `--ddp-backend=fully_sharded`: enables full sharding via FSDP
-* `--cpu-offload`: offloads the optimizer state and FP32 model copy to CPU (combine with `--optimizer=cpu_adam`)
-* `--no-reshard-after-forward`: increases training speed for large models (1B+ params) and is similar to ZeRO stage 2
-* other popular options (`--fp16`, `--update-freq`, `--checkpoint-activations`, `--offload-activations`, etc.) continue to work as normal
-
-Limitations
-
-FSDP currently has several limitations compared to fairseq's default DDP backend (PyTorch DDP):
-* while FSDP is fully compatible with pointwise Optimizers (e.g., Adam, AdamW, Adadelta, Adamax, SGD, etc.), it is not currently compatible with non-pointwise Optimizers (e.g., Adagrad, Adafactor, LAMB, etc.)
-* FSDP depends on flattening the parameters, so models that currently require `--fp16-no-flatten-grads` may not be supported
-
-See the [fairscale docs](https://fairscale.readthedocs.io/en/latest/api/nn/fsdp_tips.html) for a more detailed
-explanation of these and other limitations.
-
-
-
-How it works
-
-
-
-See the [fairscale docs](https://fairscale.readthedocs.io/en/latest/api/nn/fsdp_tips.html) for a more detailed
-explanation of how FSDP works.
-
-
-
-## Example usage
-
-The following examples illustrate how to train a very large language model with
-13 billion parameters on 1 GPU by offloading parameters and optimizer states to
-CPU, or on 8 GPUs by fully sharding the params and optimizer states across GPUs.
-
-These examples use the WikiText-103 dataset for demonstration purposes, but
-in practice a much larger dataset will be needed to achieve good results.
-Follow the [instructions here](https://github.com/pytorch/fairseq/blob/main/examples/roberta/README.pretraining.md#1-preprocess-the-data)
-to preprocess the WikiText-103 dataset using the GPT-2/RoBERTa vocabulary.
-
-### 13B params on 1 V100 GPU (with CPU offloading)
-
-The following command trains a 13B parameter GPT-3 model on a single V100 GPU
-using the `--cpu-offload` feature to offload parameters and optimizer states to
-CPU. In this setting, the optimizer step (Adam) happens on CPU. We also use the
-`--checkpoint-activations` feature (sometimes called [gradient checkpointing](https://pytorch.org/docs/stable/checkpoint.html)),
-which further saves memory in exchange for a small increase in computation.
-
-**Requirements:**
-- Install the latest master version of fairscale: `pip install git+https://github.com/facebookresearch/fairscale.git@master`
-- You'll need 32GB of GPU memory and ~256GB of system memory to train the 13B param model.
-- If you have less system memory, the 6.7B param model can be trained with ~128GB of system memory, just set `--arch transformer_lm_gpt3_6_7`
-- We use the CPU Adam optimizer from [DeepSpeed](https://github.com/microsoft/DeepSpeed), so you'll need to `pip install deepspeed` before running the command.
-
-**Notes:**
-- The command will take ~5 minutes to start training, during which time it will appear to be hung, since randomly initializing 13B weights can be slow.
-- The `--cpu-offload` feature requires training in mixed precision (`--fp16`).
-- Tune the `OMP_NUM_THREADS` env variable for best performance with CPU offloading.
-- The example command below stops training after 10 steps (`--max-update 10`) and does not save checkpoints (`--no-save`).
-
-```bash
-OMP_NUM_THREADS=20 CUDA_VISIBLE_DEVICES=0 \
- fairseq-train data-bin/wikitext-103-roberta-bpe-bin \
- --ddp-backend fully_sharded --fp16 --fp16-init-scale 4 \
- --cpu-offload --checkpoint-activations \
- --task language_modeling --tokens-per-sample 2048 --batch-size 8 \
- --arch transformer_lm_gpt3_13 \
- --optimizer cpu_adam --adam-betas "(0.9,0.98)" \
- --lr 0.0001 --lr-scheduler polynomial_decay --warmup-updates 5 --total-num-update 10 \
- --max-update 10 --no-save --log-format json --log-interval 1
-```
-
-Example output
-
-### 13B params on 8 V100 GPUs (with full parameter + optimizer state sharding)
-
-FSDP can also shard the parameters and optimizer states across multiple GPUs,
-reducing memory requirements significantly. On 8 x 32GB GPUs, sharding enables
-training the same 13B parameter model *without offloading the parameters to
-CPU*. However, without CPU offloading we'd only be able to fit a batch size of
-1 per GPU, which would cause training speed to suffer.
-
-We obtain the best performance on 8 GPUs by combining full sharding and CPU
-offloading. The following command trains the same 13B parameter GPT-3 model as
-before on 8 x 32GB V100 GPUs; training speed increases superlinearly from ~310
-words per second to ~3200 words per second.
-
-```bash
-OMP_NUM_THREADS=20 CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 \
- fairseq-train data-bin/wikitext-103-roberta-bpe-bin \
- --ddp-backend fully_sharded --fp16 --fp16-init-scale 4 \
- --cpu-offload --checkpoint-activations \
- --task language_modeling --tokens-per-sample 2048 --batch-size 8 \
- --arch transformer_lm_gpt3_13 \
- --optimizer cpu_adam --adam-betas "(0.9,0.98)" \
- --lr 0.0001 --lr-scheduler polynomial_decay --warmup-updates 5 --total-num-update 10 \
- --max-update 10 --no-save --log-format json --log-interval 1
-```
-
-Example output